TensorFlow will attempt to use (an equal fraction of the memory of) all GPU devices that are visible to it.
If you want to run different sessions on different GPUs, you should do the following:
- Run each session in a different Python process.
- Start each process with a different value for the CUDA_VISIBLE_DEVICES environment variable.
For example, if your script is called my_script.py and you have 4 GPUs, you could run the following:
$ CUDA_VISIBLE_DEVICES=0 python my_script.py # Uses GPU 0
$ CUDA_VISIBLE_DEVICES=1 python my_script.py # Uses GPU 1
$ CUDA_VISIBLE_DEVICES=2,3 python my_script.py # Uses GPUs 2 and 3.
Note that the GPU devices in TensorFlow will still be numbered from zero (i.e. "/gpu:0", etc.), but they will correspond to the devices that you have made visible with CUDA_VISIBLE_DEVICES.
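For example, in a process started with CUDA_VISIBLE_DEVICES=2,3, "/gpu:0" refers to physical GPU 2 and "/gpu:1" to physical GPU 3. A minimal sketch using the TensorFlow 1.x session API (the constant op is just for illustration):
import tensorflow as tf
with tf.device("/gpu:0"):  # the first *visible* GPU, i.e. physical GPU 2 in this example
    c = tf.constant([1.0, 2.0]) + tf.constant([3.0, 4.0])
with tf.Session() as sess:
    print(sess.run(c))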
You can set environment variables in the notebook using os.environ.
To limit TensorFlow to the first GPU, do the following before initializing it:
import os
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"  # enumerate GPUs in PCI bus order, matching nvidia-smi (see issue #152)
os.environ["CUDA_VISIBLE_DEVICES"] = "0"        # expose only the first GPU to TensorFlow