
Build a RAG App With Nvidia NIM and Milvus Running Locally

How to build a RAG-based LLM application with Nvidia NIM and Milvus running locally on a GPU-boosted machine.

In the previous post, we built an application that consumes Nvidia NIM APIs and a hosted Zilliz vector database. In this tutorial, we will switch to self-hosted local deployments of the same components while maintaining the same codebase.

Nvidia NIM is available as both APIs hosted within Nvidia’s infrastructure and as containers that can be deployed in an on-premises environment. Similarly, we can deploy Milvus as a stand-alone vector database running in containers. Since Milvus is one of the first open source vector databases to take advantage of GPU acceleration, we can leverage the available GPUs to run the entire stack on an accelerated computing infrastructure.

Let’s start by exploring the environment where we deploy this stack. For my generative AI testbed, I installed two Nvidia GeForce RTX 4090 GPUs. Having two GPUs lets us dedicate one to the LLM while scheduling the embeddings model and the vector database on the other.

I also installed Docker and the Nvidia Container Toolkit to enable the containers to access the underlying GPUs. The Nvidia container runtime is configured as the default runtime environment for Docker.

Let us begin deploying the building blocks — the LLM, embeddings model and vector database — on the GPU machine.

Step 1: Deploy the Llama3 8B Parameter LLM

Let’s start by setting the environment variables.


You can use the same key generated in the last tutorial or generate a new one by logging into NGC.
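A sketch of the setup, assuming variable names of my own choosing (CONTAINER_NAME, IMG_NAME and LOCAL_NIM_CACHE are illustrative; only the NGC API key is required by the container itself):

# Illustrative variable names; replace the key with your own NGC API key.
export NGC_API_KEY="nvapi-..."
export CONTAINER_NAME=llama3-8b-instruct
export IMG_NAME="nvcr.io/nim/meta/llama3-8b-instruct:latest"
export LOCAL_NIM_CACHE=~/.cache/nim
mkdir -p "$LOCAL_NIM_CACHE"

# Authenticate the Docker client against Nvidia's registry.
echo "$NGC_API_KEY" | docker login nvcr.io --username '$oauthtoken' --password-stdin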

Run the command below to deploy meta/llama3-8b-instruct as a container.
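A sketch of the deployment command, based on Nvidia's published NIM quickstart and the variables exported above; the image tag and exact flags are assumptions, so check the NGC catalog for the current values.

# --gpus '"device=0"' pins the container to the first GPU.
# The volume mount caches the model weights under /opt/nim/.cache across restarts.
docker run -d --name=$CONTAINER_NAME \
  --gpus '"device=0"' \
  --shm-size=16GB \
  -e NGC_API_KEY=$NGC_API_KEY \
  -v "$LOCAL_NIM_CACHE:/opt/nim/.cache" \
  -u $(id -u) \
  -p 8000:8000 \
  $IMG_NAME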


The model will be downloaded to the /opt/nim/.cache directory. Caching the weights there means they are not downloaded again each time the container starts. Notice that we set the --gpus parameter to device 0, which dedicates the first GPU to the LLM.

Wait for the model weights to be downloaded. In a few minutes, the container will be ready for inference. You can check the status by running the docker logs -f $CONTAINER_NAME command.

If we run the nvidia-smi command, we will see that the first GPU (device 0) is 100% utilized.

Let’s perform a test to check if the model is ready for inference.
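A minimal request against the OpenAI-compatible endpoint the NIM container exposes; the prompt itself is just an illustration.

curl -X POST http://0.0.0.0:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "meta/llama3-8b-instruct",
        "messages": [{"role": "user", "content": "What is Retrieval Augmented Generation?"}],
        "max_tokens": 64
      }'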

A JSON completion in the response confirms that the LLM is ready to serve.

Step 2: Deploy the Text Embeddings Model

In this step, we will deploy the same model that we used in the last tutorial: nvidia/nv-embedqa-e5-v5.

Similar to the steps mentioned above, we will run the model within the container.
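A sketch of the variables for this container, reusing the NGC key from Step 1; the names are again illustrative, and the image path should be verified against the NGC catalog.

export CONTAINER_NAME=nv-embedqa-e5-v5
export IMG_NAME="nvcr.io/nim/nvidia/nv-embedqa-e5-v5:latest"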


Let’s deploy the container.
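A sketch of the command; publishing the API on host port 8001 is my assumption, made to avoid a clash with the LLM container already listening on 8000.

# --gpus '"device=1"' schedules the container on the second GPU.
docker run -d --name=$CONTAINER_NAME \
  --gpus '"device=1"' \
  --shm-size=16GB \
  -e NGC_API_KEY=$NGC_API_KEY \
  -v "$LOCAL_NIM_CACHE:/opt/nim/.cache" \
  -u $(id -u) \
  -p 8001:8000 \
  $IMG_NAME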


We use the second GPU for the embeddings model by setting the --gpus parameter to device 1. Wait for the container to initialize.

Running nvidia-smi confirms that the embeddings model is loaded on the second GPU.

Since the model is not as heavy as the LLM, it only takes 1.4 GiB of the GPU VRAM.

Let’s test by generating the embeddings for a phrase through curl.
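A sketch of the request, assuming the port 8001 mapping used above; retrieval embedding NIMs expect an input_type of query or passage.

curl -X POST http://0.0.0.0:8001/v1/embeddings \
  -H "Content-Type: application/json" \
  -d '{
        "model": "nvidia/nv-embedqa-e5-v5",
        "input": ["What is the capital of France?"],
        "input_type": "query"
      }'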

We now have both the LLM and the embeddings model running locally. In the next step, we will deploy Milvus, the vector database.

Step 3: Deploy the GPU-Accelerated Vector Database

Milvus is an open source vector database created and primarily developed by Zilliz, the company behind the project; Zilliz’s founders and engineers remain its main contributors.

Follow the steps below to deploy Milvus.
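As a sketch, the GPU-enabled Compose file can be fetched from the Milvus releases page; the exact version in the URL will vary, so check the Milvus documentation for the current file.

wget https://github.com/milvus-io/milvus/releases/download/v2.4.9/milvus-standalone-docker-compose-gpu.yml -O docker-compose.yml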


This downloads the Docker Compose file, which contains the definitions and dependencies for running Milvus as a stand-alone, GPU-accelerated vector database.


I changed CUDA_VISIBLE_DEVICES and device_ids to 1. This colocates the container on the second GPU, which still has headroom after loading the embeddings model. A sketch of the edited fragment follows.
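The excerpt below shows roughly what the standalone service definition looks like after the edit; surrounding keys are elided, and the exact structure may differ between Milvus releases.

  standalone:
    # ... other keys unchanged ...
    environment:
      CUDA_VISIBLE_DEVICES: "1"
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              capabilities: ["gpu"]
              device_ids: ["1"]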

Launch Milvus by running the docker compose up -d command.

At this point, we have the following API endpoints, one for each of the required building blocks:

LLM – http://0.0.0.0:8000
Embeddings Model – http://0.0.0.0:8001
Vector Database – http://0.0.0.0:19530

Step 4: Run the RAG Notebook With Updated API Endpoints

We will use exactly the same Jupyter Notebook that we used in the last tutorial. The only change we need to make is to the .env file. Update it to reflect the new endpoints.
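A sketch of the updated .env file; the variable names below are placeholders rather than the exact keys from the previous tutorial, so match them to the names your Notebook actually reads.

# Placeholder variable names; adjust to the names used in your Notebook.
LLM_URI=http://0.0.0.0:8000/v1
EMBEDDING_URI=http://0.0.0.0:8001/v1
MILVUS_URI=http://0.0.0.0:19530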


The complete code is available in the Notebook, which you can download from GitHub.
