IA307 - GPU for Deep Learning

By Elisabeth Brunet.


Practical Assignments:

Minimal kernel with Google Colab: An addition

All the exercises of the module are to be done in the Google Colab environment, so start by logging in on the Google Colab webpage.
Google Colab allows you to write and execute code in an interactive environment called a Colab notebook. You can write different kinds of code, including CUDA code. In order to tell Google Colab that you want to use a GPU, you have to change the default runtime in the menu Runtime > Change Runtime Type and set Runtime type to Python 3 and Hardware accelerator to GPU.
You will find here a notebook in which the given program performs an addition on the GPU, thanks to a kernel executed by a single thread (a sketch of such a kernel follows the steps below).

  1. Read the program.
  2. Load, compile, and then launch the program using the "play" buttons on the left of the code blocks.
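
For reference, a minimal single-thread addition program of this kind might look like the sketch below (an illustrative version; the notebook's actual code may differ slightly):

  #include <stdio.h>

  // Kernel executed on the GPU by a single thread: adds a and b into *c.
  __global__ void add(int a, int b, int *c) {
    *c = a + b;
  }

  int main(void) {
    int result;
    int *d_c;                                  // device pointer for the result
    cudaMalloc(&d_c, sizeof(int));             // allocate GPU memory
    add<<<1, 1>>>(2, 2, d_c);                  // launch 1 block of 1 thread
    cudaMemcpy(&result, d_c, sizeof(int), cudaMemcpyDeviceToHost);
    printf("2 + 2 = %d\n", result);
    cudaFree(d_c);
    return 0;
  }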



Minimal kernel with Error management

CUDA calls on the GPU fail silently. In this notebook, we highlight the value of protecting your CUDA calls in order to detect errors:

  1. In the first section, named "Raw code", you can read CUDA code written without any precautions. Run the code and observe the result. You may not agree with the result 2 + 2 = 0.
  2. In the second section, we show how to debug this code with the cuda-gdb debugger.
    For this purpose, you need to :
    • compile with the options "-g -G" so that debugging symbols are included. For this purpose, you need to save your code in a file by beginning the cell with "%%writefile file_name.cu" (instead of "%%cu") and compile it explicitly in a separate cell. Note that in a notebook, shell commands start with "!".
    • write in a file the sequence of instructions to be followed by the debugger. Indeed, cuda-gdb is interactive (you are expected to type commands as you go along), but running programs in the Colab environment is not. Typical commands would go like this:
      1. set the debugger up to check lots of possible errors:
        1. memory checks: memcheck on,
        2. stop in case of API failures: api_failures stop,
        3. stop on exceptions: catch throw,
      2. run the program (possibly with command-line options): r option1 option2,
      3. show the kernel call stack (GPU): bt,
      4. print all local variables: info locals,
      5. switch to the host thread: thread 1,
      6. and show the host program call stack (CPU): bt.
    • call the debugger with your program and execute the commands from debug_instructions.txt (a sketch of such a command file and its invocation is given after this list). If your program terminates fine, cuda-gdb will complain that there is no stack (since the program has finished).
    After running all cells of the "Debugging" notebook section, you should get an exception and lots of information: an illegal address is detected at line 5 of add.cu, in the kernel add. You could identify and fix the problem by hand, but it should have been caught by the CUDA error management, which is the object of the next section.

    Note: if you do use printf to debug, be sure to flush the buffer by adding a line break at the end. This applies to any C program. Example: printf("Works up to here\n");. Nevertheless, the interface between the Jupyter notebook and the executed program is a little fragile, so if your program crashes, there might not be ANY output at all, even if you have printf everywhere.
  3. In the third section, "Code with error management", we instrument the code to check the error code returned by each CUDA call. The program should now fail cleanly (and no longer crash and give a wrong result).
    As CUDA calls on the GPU fail silently, you have to retrieve and check the error code of all of them. Since kernel calls do not have a return value, you can first check for invalid launch arguments with the error code of cudaPeekAtLastError(), and then check whether errors occurred during the kernel execution thanks to the error code of cudaDeviceSynchronize(), which forces kernel completion. Note that most of the time, a developer will use cudaMemcpy as the synchronization primitive (the cudaDeviceSynchronize call would then be redundant). In this case, the cudaMemcpy call can return either errors that occurred during the kernel execution or errors from the memory copy itself. A sketch of a typical error-checking wrapper is given after this list.
  4. In the last section, we have outsourced the error management code so that you can use it more easily in the rest of your exercises.
    Notice that the first line of the cell has changed: each cell is now saved as a file, and the compilation and execution are launched explicitly in two additional cells with shell commands (which, in a notebook, start with "!").
  5. Last but not least, it remains for you to fix the problem.
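
To make the debugging workflow concrete, here is a sketch of what the debug_instructions.txt cell and the debugger invocation could look like (the exact set of commands depends on your cuda-gdb version; adapt the file and program names to your own cells):

  %%writefile debug_instructions.txt
  set cuda memcheck on
  set cuda api_failures stop
  catch throw
  r
  bt
  info locals
  thread 1
  bt

Then, in two separate cells, compile with debugging symbols and run cuda-gdb non-interactively on the resulting binary:

  !nvcc -g -G add.cu -o add
  !cuda-gdb -batch -x debug_instructions.txt ./add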
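
For the error management described in points 3 and 4, a typical wrapper looks like the following sketch (a common idiom; the macro name CUDA_CHECK is an assumption, and the outsourced version in the notebook may differ):

  #include <stdio.h>
  #include <stdlib.h>

  // Abort with a readable message if a CUDA runtime call returns an error.
  #define CUDA_CHECK(call)                                            \
    do {                                                              \
      cudaError_t err = (call);                                       \
      if (err != cudaSuccess) {                                       \
        fprintf(stderr, "CUDA error at %s:%d: %s\n",                  \
                __FILE__, __LINE__, cudaGetErrorString(err));         \
        exit(EXIT_FAILURE);                                           \
      }                                                               \
    } while (0)

  // Usage around a kernel launch (kernels have no return value):
  //   kernel<<<blocks, threads>>>(...);
  //   CUDA_CHECK(cudaPeekAtLastError());     // invalid launch arguments
  //   CUDA_CHECK(cudaDeviceSynchronize());   // errors during kernel execution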

First parallel computation kernel: SAXPY

We implement here the linear operation y = ax + y on a large vector, whose CPU code is:


  void saxpy(float *x, float *y, int len, float a){
    for (int i = 0; i < len; ++i)
      y[i] = a*x[i] + y[i];
  }
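
As a starting point, a direct CUDA port assigns one vector element per thread. The sketch below is one possible version (not necessarily the expected final one), where d_x and d_y are assumed to be device pointers:

  // Each thread handles one element of the vectors.
  __global__ void saxpy(float *x, float *y, int len, float a) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < len)                    // guard against the last, partial block
      y[i] = a * x[i] + y[i];
  }

  // Launch with enough blocks to cover all len elements, e.g.:
  //   int threads = 256;
  //   int blocks  = (len + threads - 1) / threads;
  //   saxpy<<<blocks, threads>>>(d_x, d_y, len, a);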

Warning! Launch all your experiments within the same program, so that the different tests are performed on the same hardware and are therefore comparable. A sketch of the usual timing idiom follows.
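
To time the different versions, CUDA events are the usual tool; here is a sketch of the idiom, assuming a kernel launched as above:

  cudaEvent_t start, stop;
  float ms;
  cudaEventCreate(&start);
  cudaEventCreate(&stop);

  cudaEventRecord(start);
  saxpy<<<blocks, threads>>>(d_x, d_y, len, a);
  cudaEventRecord(stop);

  cudaEventSynchronize(stop);       // wait until the kernel has finished
  cudaEventElapsedTime(&ms, start, stop);
  printf("kernel time: %f ms\n", ms);

  cudaEventDestroy(start);
  cudaEventDestroy(stop);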

Convolution

The convolution application computes a 2D diffusion (a discretized form of the heat equation). A matrix contains values (e.g. the temperature of a point in space), and at each iteration a 5-point stencil is applied: for each point (i,j), one computes

  V_{k+1}(i,j) = ( W(0,1)*V_k(i-1,j) + W(1,0)*V_k(i,j-1) + W(1,1)*V_k(i,j) + W(2,1)*V_k(i+1,j) + W(1,2)*V_k(i,j+1) ) / 5.
The program in this notebook generates a random number of "hot spots", computes several iterations, and writes the result to the file result.dat. This result can be visualized with the plot.gp script (which requires the GNUplot software). A sketch of a possible GPU kernel for one iteration is given below.
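
A GPU version of one iteration typically maps one thread to each point (i,j), reading from the grid at step k and writing to a separate grid for step k+1. The sketch below is one possibility; the names V_in, V_out and W are assumptions, W is assumed to be a 3x3 row-major array, and boundary handling is omitted:

  // One thread per interior grid point; V_in holds step k, V_out step k+1.
  __global__ void diffusion_step(const float *V_in, float *V_out,
                                 const float *W, int n) {
    int i = blockIdx.y * blockDim.y + threadIdx.y;
    int j = blockIdx.x * blockDim.x + threadIdx.x;
    if (i > 0 && i < n - 1 && j > 0 && j < n - 1) {
      V_out[i * n + j] = (W[0 * 3 + 1] * V_in[(i - 1) * n + j]
                        + W[1 * 3 + 0] * V_in[i * n + (j - 1)]
                        + W[1 * 3 + 1] * V_in[i * n + j]
                        + W[2 * 3 + 1] * V_in[(i + 1) * n + j]
                        + W[1 * 3 + 2] * V_in[i * n + (j + 1)]) / 5.0f;
    }
  }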



Congrats! You have reached the end of the session! See you next week to tackle the optimization of the basic operation of deep learning: matrix multiplication!

Square Matrix Multiplication

Here you have to implement the multiplication of square matrices C = A x B. A naive sketch to start from is given below; the goal of the session will be to improve on it.
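
A naive version assigns one thread to each element C(i,j); the sketch below (assuming row-major n x n matrices already on the device) is only a starting point, not an optimized solution:

  // Naive version: each thread computes one element of C = A x B.
  __global__ void matmul(const float *A, const float *B, float *C, int n) {
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < n && col < n) {
      float sum = 0.0f;
      for (int k = 0; k < n; ++k)
        sum += A[row * n + k] * B[k * n + col];
      C[row * n + col] = sum;
    }
  }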