Using Marqo with a GPU
This section outlines how to use Marqo with GPUs, as well as some troubleshooting steps. Note that the following configurations are for development only. If you would like to run Marqo in production, we recommend you use Marqo Cloud or follow our Marqo on Kubernetes guide.
Deploying Marqo on a single GPU instance within AWS
- Navigate to the EC2 console, select Instances in the left panel, and then select Launch instances.
- Select the correct AMI. The recommended AMI is "Deep Learning OSS Nvidia Driver AMI GPU PyTorch 2.3". Select the Ubuntu version of this AMI. The AMI ID in us-east-1 is `ami-03db1a48758a57ae6`.
- Configure the access key as needed, then select an instance type with an NVIDIA GPU. We recommend `g4dn.xlarge` (due to its price performance) for development. Ensure you configure the instance with sufficient storage for your dataset (100-200GB of disk space should give you a decent margin). If you prefer to script the launch, see the AWS CLI sketch after these steps.
- Connect to the instance, and in the terminal, run the following command to start Marqo:
docker run --name marqo --gpus all -p 8882:8882 marqoai/marqo:latest
Note that `--gpus all` has been added.
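If you would rather script the launch than use the console, a minimal AWS CLI sketch is below. The key pair name is a placeholder, and the block device mapping (which sets a 200GB root volume) assumes the Ubuntu AMI's default root device:

aws ec2 run-instances \
    --region us-east-1 \
    --image-id ami-03db1a48758a57ae6 \
    --instance-type g4dn.xlarge \
    --key-name my-key-pair \
    --block-device-mappings '[{"DeviceName":"/dev/sda1","Ebs":{"VolumeSize":200}}]'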
In another window connected to the instance, you can run the following command to check that Marqo is running:
curl -XGET 'http://localhost:8882/'
You should see the following response:
{"message":"Welcome to Marqo","version":"2.10.0"}
Deploying single instance Marqo with GPU on other machines and providers
Currently, only CUDA-based (NVIDIA) GPUs are supported. If you have a GPU on the host machine and want to use it with Marqo, there are two things to do:
- Install nvidia-docker2.
- Add a `--gpus all` flag to the Docker run command. Note that this flag should appear after the `run` command but before the image name. See the full Docker command in step 2 below.
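Before starting Marqo, you can confirm that Docker can see the GPU with a quick smoke test (a sketch; the CUDA image tag is an assumption, and any recent nvidia/cuda tag will do):

# Runs nvidia-smi inside a throwaway CUDA container; it should print the GPU table
docker run --rm --gpus all nvidia/cuda:11.4.2-base-ubuntu20.04 nvidia-smi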
Detailed instructions
1. Install nvidia-docker2, which is required for the GPU to work with Docker. The three commands below will install it on an Ubuntu-based machine (refer to the original instructions for more details):

distribution=$(. /etc/os-release;echo $ID$VERSION_ID) \
    && curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
    && curl -s -L https://nvidia.github.io/libnvidia-container/$distribution/libnvidia-container.list | \
        sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
        sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
sudo apt-get update
sudo apt-get install -y nvidia-docker2
2. Once nvidia-docker2 is installed, a simple modification to the Docker command is all that is needed: add a `--gpus all` flag to the `docker run` command. For example, the Docker command would become:

docker run --name marqo --gpus all -p 8882:8882 marqoai/marqo:latest

Note that `--gpus all` has been added.
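After starting the container, you can check whether Marqo itself detects the GPU. One way (assuming your Marqo version exposes the CUDA device endpoint) is:

# Returns details of the CUDA devices Marqo can see, if the endpoint is available
curl -XGET 'http://localhost:8882/device/cuda'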
Using Marqo outside of Docker
Marqo outside Docker will rely on the system setup to use the GPU. If you can use a GPU normally with PyTorch, then it should be good to go. The usual caveats apply, though: the CUDA version of PyTorch will need to match that of the GPU drivers (see below on how to check).
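As a quick sketch of that check (assuming PyTorch is installed on the host), compare the CUDA version the driver supports against the one PyTorch was built with:

nvidia-smi                                             # the banner reports the maximum CUDA version the driver supports
python3 -c "import torch; print(torch.version.cuda)"   # the CUDA version PyTorch was built against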
Troubleshooting
Drivers
In order for the GPU to be used within Marqo, the underlying host needs to have NVIDIA drivers installed. The current driver can be easily checked by typing `nvidia-smi` in a terminal. If there is no output, then there may be something wrong with the GPU setup, and installing or updating drivers may be necessary.
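On Ubuntu, one common way to install or update the drivers is with the ubuntu-drivers tool (a sketch; tool availability and the recommended driver depend on your distribution and GPU):

sudo apt-get update
sudo ubuntu-drivers autoinstall   # installs the recommended NVIDIA driver for the detected GPU
sudo reboot                       # a reboot is typically required before nvidia-smi works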
CUDA
Aside from having the correct drivers installed, a matching version of CUDA is required. The Marqo Dockerfile comes set up to use CUDA 11.4.2 by default. The Dockerfile can be easily modified to support different versions of CUDA.
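As an illustration only (this is not Marqo's actual Dockerfile, and the image tags are assumptions), changing the CUDA version typically means swapping the base image tag and rebuilding:

# Hypothetical Dockerfile edit: swap the CUDA base image tag
# before: FROM nvidia/cuda:11.4.2-base-ubuntu20.04
# after:  FROM nvidia/cuda:11.8.0-base-ubuntu20.04
docker build -t marqo-cuda118 .   # then rebuild the image from the modified Dockerfile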
Checking the status of your GPU and CUDA
To see if a GPU is available when using PyTorch, the following can be used to check (from Python):
import torch
torch.cuda.is_available() # is a GPU available
torch.version.cuda # get the CUDA version
torch.cuda.device_count() # get the number of devices
You can also run `nvidia-smi` from a terminal to inspect the GPU and driver directly.
Marqo will use CUDA if it is available. You can test that CUDA is working by forcing it with the `device="cuda"` argument:
mq.index("my-first-index").add_documents(
[
{
"Title": "The Travels of Marco Polo",
"Description": "A 13th-century travelogue describing Polo's travels",
},
{
"Title": "Extravehicular Mobility Unit (EMU)",
"Description": "The EMU is a spacesuit that provides environmental protection, "
"mobility, life support, and communications for astronauts",
"_id": "article_591",
},
],
tensor_fields=["Title", "Description"],
device="cuda",
)
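The same applies at query time. As a sketch (assuming your client version accepts the device parameter on search), you can force CUDA when searching:

# Hypothetical query against the index populated above, forcing CUDA
results = mq.index("my-first-index").search("spacesuit", device="cuda")
print(results["hits"])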