How to log GPU load?

You can use (tested with nvidia-smi 352.63):

while true; 
do nvidia-smi --query-gpu=utilization.gpu --format=csv >> gpu_utillization.log; sleep 1; 
done

The output will be (if 3 GPUs are attached to the machine):

utilization.gpu [%]
96 %
97 %
92 %
utilization.gpu [%]
97 %
98 %
93 %
utilization.gpu [%]
87 %
96 %
89 %
utilization.gpu [%]
93 %
91 %
93 %
utilization.gpu [%]
95 %
95 %
93 %

Theoretically, you could simply use nvidia-smi --query-gpu=utilization.gpu --format=csv --loop=1 --filename=gpu_utillization.csv, but it doesn't seem to work for me (the -f or --filename flag is supposed to log the output to the specified file).
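
Note that the loop above does not record when each sample was taken. If you want a timestamp on every row, a variant like the following should work (a sketch: timestamp is a standard query field and csv,noheader suppresses the repeated header line, but check nvidia-smi --help-query-gpu on your driver version):

while true; 
do nvidia-smi --query-gpu=timestamp,utilization.gpu --format=csv,noheader >> gpu_utillization.log; sleep 1; 
done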

To log more information:

while true; 
do nvidia-smi --query-gpu=utilization.gpu,utilization.memory,memory.total,memory.free,memory.used --format=csv >> gpu_utillization.log; sleep 1; 
done

outputs:

utilization.gpu [%], utilization.memory [%], memory.total [MiB], memory.free [MiB], memory.used [MiB]
98 %, 15 %, 12287 MiB, 10840 MiB, 1447 MiB
98 %, 16 %, 12287 MiB, 10872 MiB, 1415 MiB
92 %, 5 %, 12287 MiB, 11919 MiB, 368 MiB
utilization.gpu [%], utilization.memory [%], memory.total [MiB], memory.free [MiB], memory.used [MiB]
90 %, 2 %, 12287 MiB, 11502 MiB, 785 MiB
92 %, 4 %, 12287 MiB, 11180 MiB, 1107 MiB
92 %, 6 %, 12287 MiB, 11919 MiB, 368 MiB
utilization.gpu [%], utilization.memory [%], memory.total [MiB], memory.free [MiB], memory.used [MiB]
97 %, 15 %, 12287 MiB, 11705 MiB, 582 MiB
94 %, 7 %, 12287 MiB, 11540 MiB, 747 MiB
93 %, 5 %, 12287 MiB, 11920 MiB, 367 MiB
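
If you intend to plot or post-process the log, the repeated header lines and the units get in the way. A variant that writes plain CSV rows with a single header (a sketch, assuming the csv,noheader,nounits format option accepted by recent nvidia-smi versions):

echo "timestamp, utilization.gpu [%], utilization.memory [%], memory.total [MiB], memory.free [MiB], memory.used [MiB]" > gpu_utillization.csv
while true; 
do nvidia-smi --query-gpu=timestamp,utilization.gpu,utilization.memory,memory.total,memory.free,memory.used --format=csv,noheader,nounits >> gpu_utillization.csv; sleep 1; 
done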

Use:

nvidia-smi dmon -i 0 -s mu -d 5 -o TD

Then you can easily dump this into a log file. This is the GPU usage for device 0, sampled at an interval of 5 seconds:

#Date       Time        gpu    fb  bar1    sm   mem   enc   dec   pwr  temp
#YYYYMMDD   HH:MM:SS    Idx    MB    MB     %     %     %     %     W     C
 20170212   14:23:15      0   144     4     0     0     0     0    62    36
 20170212   14:23:20      0   144     4     0     0     0     0    62    36
 20170212   14:23:25      0   144     4     0     0     0     0    62    36

See the man page for details on the flags.
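
For example, to keep it running in the background and append to a file (plain shell redirection; the log file name is just a placeholder):

# sample device 0 every 5 seconds, append to a log, keep running after logout
nohup nvidia-smi dmon -i 0 -s mu -d 5 -o TD >> gpu_dmon.log 2>&1 &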


It's all there. You just didn't read carefully :) Use the following Python script, which takes an optional delay and repeat count like iostat and vmstat:

https://gist.github.com/matpalm/9c0c7c6a6f3681a0d39d
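
If you prefer plain shell, a rough equivalent of that delay/repeat pattern could look like this (a sketch, not the gist's implementation; the delay and count arguments are hypothetical):

#!/bin/bash
# usage: ./gpu_stat.sh [delay_seconds] [count]   -- iostat-style wrapper around nvidia-smi
delay=${1:-1}
count=${2:-0}   # 0 means sample forever
i=0
while [ "$count" -eq 0 ] || [ "$i" -lt "$count" ]; do
  nvidia-smi --query-gpu=timestamp,utilization.gpu,memory.used --format=csv,noheader
  i=$((i+1))
  sleep "$delay"
done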

You can also use nvidia-settings:

nvidia-settings -q GPUUtilization -q useddedicatedgpumemory

...and wrap it in a simple bash loop, set up a cron job, or just use watch:

watch -n0.1 "nvidia-settings -q GPUUtilization -q useddedicatedgpumemory"
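
To log instead of just watching, a loop like this should do (a sketch; -t/--terse, which prints only the attribute values, is worth verifying against your nvidia-settings version, and the tool needs access to a running X server):

# append one timestamped sample per second (log file name is arbitrary)
while true; do
  echo "$(date '+%Y-%m-%d %H:%M:%S') $(nvidia-settings -q GPUUtilization -q useddedicatedgpumemory -t | tr '\n' ' ')" >> gpu_settings.log
  sleep 1
done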