
Python API

The Python API is the best place to get started with Ragna and understand its key components. It's also the best way to continue experimenting with components and configurations for your particular use case.

This tutorial walks you through the basic steps of using Ragna's Python API.

Before we start this tutorial, we import some helpers.

import sys
from pathlib import Path

sys.path.insert(0, str(Path.cwd().parent))

import documentation_helpers

Step 1: Select relevant documents

Ragna uses the RAG technique to answer questions. The context in which the questions will be answered comes from documents that you provide. For this tutorial, let's use a sample document that includes some information about Ragna.

document_path = documentation_helpers.assets / "ragna.txt"

with open(document_path) as file:
    print(file.read())

Out:

Ragna is an open source project built by Quansight. It is designed to allow
organizations to explore the power of Retrieval-augmented generation (RAG) based
AI tools. Ragna provides an intuitive API for quick experimentation and built-in
tools for creating production-ready applications allowing you to quickly leverage
Large Language Models (LLMs) for your work.

The Ragna website is https://ragna.chat/. The source code is available at
https://github.com/Quansight/ragna under the BSD 3-Clause license.

Tip

Ragna supports the following document types:

Step 2: Select a source storage

To retrieve the relevant content of the documents effectively, they need to be stored in a SourceStorage. In a typical use case this is a vector database, but any database with text search capabilities can be used. For this tutorial, we are going to use a demo source storage for simplicity.

from ragna.source_storages import RagnaDemoSourceStorage

Tip

Ragna has builtin support for the following source storages:

Step 3: Select an assistant

Now that we have a way to retrieve relevant sources for a given user prompt, we need something to actually provide an answer. This is the job of an Assistant, which is Ragna's abstraction around Large Language Models (LLMs). For this tutorial, we are going to use a demo assistant for simplicity.

from ragna.assistants import RagnaDemoAssistant

Step 4: Start chatting

We now have all the parts we need to start a chat.

from ragna import Rag

chat = Rag().chat(
    documents=[document_path],
    source_storage=RagnaDemoSourceStorage,
    assistant=RagnaDemoAssistant,
)

Before we can ask a question, we need to prepare the chat, which under the hood stores the documents we have selected in the source storage.

_ = await chat.prepare()

Note

Ragna chats are asynchronous for better performance in real-world scenarios. You can check out Python's asyncio documentation for more information. In practice, you don't need to understand all the details; just use the async keyword when defining a function and the await keyword when calling it.

Finally, we can get an answer to a question.

print(await chat.answer("What is Ragna?"))

Out:

I'm a demo assistant and can be used to try Ragnas workflow.
I will only mirror back my inputs. 

Your prompt was:

> What is Ragna?

These are the sources I was given:

- ragna.txt: Ragna is an open source project built by Quansight. It is designed to allow organizations to [...]

Total running time of the script: ( 0 minutes 0.008 seconds)
