IA307 - GPU for Deep Learning

By Elisabeth Brunet.


Practical Assignments:

Minimal kernel with Google Colab : An addition

All the exercises in this module are to be done in the Google Colab environment, so start by logging in on the Google Colab webpage.
Google Colab allows you to write and execute code in an interactive environment called a Colab notebook. You can write different kinds of code, including CUDA code. To tell Google Colab that you want to use a GPU, change the default runtime in the menu Runtime > Change Runtime Type and set Runtime type to Python 3 and Hardware accelerator to GPU.
You will find here a notebook in which the given program performs an addition on the GPU, thanks to a kernel executed by a single thread.

  1. Read the program.
  2. Load, compile and then launch the program using the "play" buttons on the left of the code blocks.
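As a point of reference, such a single-thread addition program typically looks like the sketch below (variable and kernel names here are illustrative, not necessarily those of the notebook):

```cuda
#include <stdio.h>

// Kernel executed by a single GPU thread: adds two integers.
__global__ void add(int *a, int *b, int *c) {
  *c = *a + *b;
}

int main(void) {
  int a = 2, b = 2, c = 0;
  int *d_a, *d_b, *d_c;              // device copies

  cudaMalloc(&d_a, sizeof(int));
  cudaMalloc(&d_b, sizeof(int));
  cudaMalloc(&d_c, sizeof(int));

  cudaMemcpy(d_a, &a, sizeof(int), cudaMemcpyHostToDevice);
  cudaMemcpy(d_b, &b, sizeof(int), cudaMemcpyHostToDevice);

  add<<<1, 1>>>(d_a, d_b, d_c);      // 1 block of 1 thread

  cudaMemcpy(&c, d_c, sizeof(int), cudaMemcpyDeviceToHost);
  printf("%d + %d = %d\n", a, b, c);

  cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
  return 0;
}
```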



Minimal kernel with Error management

CUDA calls on the GPU fail silently. In this notebook, we highlight the value of protecting your CUDA calls to detect errors:

  1. In the first section, named "Raw code", you can read CUDA code written without any precaution. Run the code and observe the result. You will probably not agree that 2 + 2 = 0.
  2. In the second section, we show how to debug this code with the cuda-gdb debugger.
    For this purpose, you need to :
    • compile with the options "-g -G" so that debugging symbols are included.
    • write to a file the sequence of instructions to be followed by the debugger. Indeed, cuda-gdb is interactive (you are expected to type commands as you go along), but running programs in the Colab environment is not. Typical commands would go like this:
      1. set the debugger up to check lots of possible errors:
        1. memory checks : memcheck on,
        2. stop in case of API failures : api_failures stop,
        3. stop on exceptions : catch throw,
      2. run the program (possibly with command line options) : r option1 option2 ,
      3. show the kernel call stack (GPU) : bt,
      4. print all local variables : info locals,
      5. switch to the host thread : thread 1
      6. and show the host program call stack (CPU) : bt.
    • call the debugger with your program and execute the commands from debug_instructions.txt. If your program terminates fine, cuda-gdb will complain that there is no stack (since the program has finished).
    After running all the cells of the "Debugging" notebook section, you should get an exception and lots of information: an illegal address is detected at line 5 of add.cu, in the kernel add. You may identify and fix the problem by hand, but it should have been caught by the CUDA error management that is the object of the next section.

    Note: If you use printf to debug, be sure to flush the buffer by adding a line break at the end; this applies to any C program. Example: printf("Works up to here\n");. Nevertheless, the interface between the Jupyter notebook and the executed program is a little fragile, so if your program crashes, there might not be ANY output at all, even if you have printf everywhere.
  3. In the third section, "Code with error management", we instrument the code to check the error codes returned by the CUDA calls. The program should now fail cleanly (and no longer crash and give a wrong result).
    Since CUDA calls on the GPU fail silently, it is necessary to retrieve and check the error code of every one of them. Kernel calls do not have a return value, so first check for invalid launch arguments with the error code of cudaPeekAtLastError(), and then check whether errors occurred during the kernel execution with the error code of cudaDeviceSynchronize(), which forces kernel completion. Note that most of the time a developer will use cudaMemcpy as the synchronization primitive (a cudaDeviceSynchronize would then be redundant). In that case, the cudaMemcpy call can return either errors that occurred during the kernel execution or those from the memory copy itself.
  4. In the last section, we have outsourced the error management code so that you can use it more easily in the rest of your exercises.
    Notice that the first line of the cell has changed. Now, each cell is saved as a file and the compilation and execution are launched explicitly in two additional cells with a shell command. Note that in a notebook, shell commands start with ! .
  5. Last but not least, it remains for you to fix the problem.
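The debugger workflow of step 2 can be sketched as a pair of shell commands (file names are those used above; the `set cuda` prefix is the full cuda-gdb syntax for the memcheck and api_failures settings listed):

```shell
# Write the command sequence for the (non-interactive) debugger run.
cat > debug_instructions.txt <<'EOF'
set cuda memcheck on
set cuda api_failures stop
catch throw
r
bt
info locals
thread 1
bt
EOF

# Compile with host (-g) and device (-G) debug symbols, then run in batch mode.
nvcc -g -G add.cu -o add
cuda-gdb -batch -x debug_instructions.txt ./add
```

In a notebook cell, each of these shell commands would be prefixed with ! .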
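The error-management instrumentation of steps 3 and 4 is commonly written as a macro wrapping each CUDA runtime call; a minimal sketch (the notebook's own helper may differ in name and detail):

```cuda
#include <cstdio>
#include <cstdlib>

// Wrap every CUDA runtime call; abort with a readable message on failure.
#define CUDA_CHECK(call)                                                  \
  do {                                                                    \
    cudaError_t err = (call);                                             \
    if (err != cudaSuccess) {                                             \
      fprintf(stderr, "CUDA error %s:%d: %s\n",                          \
              __FILE__, __LINE__, cudaGetErrorString(err));               \
      exit(EXIT_FAILURE);                                                 \
    }                                                                     \
  } while (0)

// Usage after a kernel launch (kernel name illustrative):
//   add<<<1, 1>>>(d_a, d_b, d_c);
//   CUDA_CHECK(cudaPeekAtLastError());     // invalid launch arguments
//   CUDA_CHECK(cudaDeviceSynchronize());   // errors during kernel execution
```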

First parallel computation kernel: SAXPY

We implement here the linear operation y = ax + y on a large vector, whose CPU code is:


  void saxpy(float *x, float *y, int len, float a){
    for (int i = 0; i < len; ++i)
      y[i] = a*x[i] + y[i];
  }
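A GPU version might be sketched as follows, with one thread per vector element (the launch configuration below is an illustrative choice, not a prescription):

```cuda
// One GPU thread per element; the bounds check matters because the
// grid may be slightly larger than len.
__global__ void saxpy_kernel(float *x, float *y, int len, float a) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < len)
    y[i] = a * x[i] + y[i];
}

// Launch with enough blocks to cover the vector, e.g.:
//   int threads = 256;
//   int blocks  = (len + threads - 1) / threads;
//   saxpy_kernel<<<blocks, threads>>>(d_x, d_y, len, a);
```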

Warning! Launch your experiments within the same program so that the different tests are performed on the same hardware and are therefore comparable.

Reduction

This exercise consists of implementing the sum of the elements of an array of integers, following the different parallel versions presented in these slides.

Compare the execution times.
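As a reference point for the parallel versions, a per-block tree reduction in shared memory is typically sketched like this (the versions in the slides may differ in detail):

```cuda
// Each block reduces blockDim.x elements in shared memory, then writes
// one partial sum per block.
__global__ void reduce_sum(const int *in, int *partial, int n) {
  extern __shared__ int sdata[];
  unsigned int tid = threadIdx.x;
  unsigned int i = blockIdx.x * blockDim.x + threadIdx.x;

  sdata[tid] = (i < n) ? in[i] : 0;
  __syncthreads();

  // Tree reduction: halve the number of active threads at each step.
  for (unsigned int s = blockDim.x / 2; s > 0; s >>= 1) {
    if (tid < s)
      sdata[tid] += sdata[tid + s];
    __syncthreads();
  }

  if (tid == 0)
    partial[blockIdx.x] = sdata[0];
}

// Launch with the shared-memory size as the third configuration argument:
//   reduce_sum<<<blocks, threads, threads * sizeof(int)>>>(d_in, d_partial, n);
// The per-block partial sums are then reduced again on the GPU,
// or summed on the CPU.
```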

Square Matrix Multiplication

Here you have to implement the multiplication of square matrices, C = α × A × B + β × C. In this notebook, you will find a code skeleton that will allow you to implement and compare the different matrix multiplication strategies you have to provide.
Matrices are square and aligned on the block dimension in order to avoid dealing with borders and to ease your implementation within the constrained lab time. In the first two algorithms, since CUDA blocks are limited to 1024 threads, you should use square blocks of 32x32 threads so that blocks project easily onto the matrix.
Take time to familiarize yourself with the data structures and the organization of the modules. To let you focus on coding the algorithms, we encapsulated each matrix in a structure called f_matrix that holds the data both on the CPU and on the GPU; a set of helper functions comes with it. Note that matrices are initialized with predefined values that allow us to perform a basic check of the correctness of the result. So do not change the matrix values unless you modify the error-checking code, or even better, unless you compare your result with the one obtained with the cuBLAS library. The maximum error must be equal to 0.
Begin your experimentation with matrices of modest dimensions, 10x10 blocks. If you have time, go further with larger matrices to perform timing evaluation, e.g. 40x40 blocks, by modifying the constant SIZE.
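For the first (naive) strategy, one thread per element of C with 32x32 blocks can be sketched as follows (raw pointers and the size parameter are assumptions for illustration; the f_matrix structure's actual fields are not shown):

```cuda
#define BLOCK_DIM 32  // 32x32 = 1024 threads, the per-block limit

// Naive version: each thread computes one element of C. The matrix side n
// is assumed to be a multiple of BLOCK_DIM, as stated above, so no
// bounds check is needed.
__global__ void matmul_naive(const float *A, const float *B, float *C,
                             int n, float alpha, float beta) {
  int row = blockIdx.y * blockDim.y + threadIdx.y;
  int col = blockIdx.x * blockDim.x + threadIdx.x;

  float acc = 0.0f;
  for (int k = 0; k < n; ++k)
    acc += A[row * n + k] * B[k * n + col];

  C[row * n + col] = alpha * acc + beta * C[row * n + col];
}

// Launch, e.g.:
//   dim3 threads(BLOCK_DIM, BLOCK_DIM);
//   dim3 blocks(n / BLOCK_DIM, n / BLOCK_DIM);
//   matmul_naive<<<blocks, threads>>>(d_A, d_B, d_C, n, alpha, beta);
```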


  • Download your notebook as an .ipynb file and send it, before 11pm, to elisabeth.brunet@telecom-sudparis.eu with the following subject: [5AI07] Matrix multiplication.