# Data query example
This example uses Llama-Index to enable question answering on a set of documents. It loosely follows the quickstart.
Summary of the steps:

- prepare the dataset (and store it into `data`)
- prepare a vector index database to run queries on
- run queries
## Requirements
For this to work, you will need LocalAI and a model compatible with the llama.cpp backend. This will not work with gpt4all; however, you can mix models (use a llama.cpp model to build the index database, and a gpt4all model to query it).
The example uses `WizardLM` for both embeddings and Q&A. Edit the config files in `models/` to specify the model you use (change `HERE` in the configuration files).
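
As a rough sketch, one of these YAML files can look like the following. The key names follow LocalAI's model configuration format, but the file name and values here are assumptions, not the repository's actual files:

```yaml
# Hypothetical models/wizardlm.yaml; adjust to your setup
name: gpt-3.5-turbo
backend: llama
context_size: 1024
parameters:
  # Replace HERE with your model file name, e.g. a WizardLM ggml file
  model: HERE
```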
You will also need a training data set. Copy it into the `data` directory.
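
For example, assuming your documents live under `~/mydocs` (a hypothetical path):

```bash
cp -r ~/mydocs/* data/
```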
## Setup
Start the API:
```bash
# Clone LocalAI
git clone https://github.com/go-skynet/LocalAI

cd LocalAI/examples/query_data

# Copy your models, edit config files accordingly

# Start with docker-compose
docker-compose up -d --build
```
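
Once the containers are up, you can check that the API is reachable by listing the available models (LocalAI exposes an OpenAI-compatible endpoint), which returns the models defined in `models/`:

```bash
curl http://localhost:8080/v1/models
```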
## Create a storage
In this step we will create a local vector database from our document set, so that we can later ask questions about it with the LLM.
```bash
export OPENAI_API_BASE=http://localhost:8080/v1
export OPENAI_API_KEY=sk-

python store.py
```
After it finishes, a `storage` directory will be created containing the vector index database.
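
For reference, a minimal sketch of what an indexing script along these lines can look like, assuming the older `llama_index` API (`GPTVectorStoreIndex`) in use at the time; the repository's actual `store.py` may differ:

```python
import os

# Point the OpenAI-compatible client at LocalAI (same as the exports above),
# before llama_index/openai are imported.
os.environ.setdefault("OPENAI_API_BASE", "http://localhost:8080/v1")
os.environ.setdefault("OPENAI_API_KEY", "sk-")

from llama_index import GPTVectorStoreIndex, SimpleDirectoryReader

# Load every document under data/ and build a vector index over it.
documents = SimpleDirectoryReader("data").load_data()
index = GPTVectorStoreIndex.from_documents(documents)

# Persist the index to ./storage so query.py can reload it later.
index.storage_context.persist(persist_dir="./storage")
```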
## Query
We can now query the dataset.
```bash
export OPENAI_API_BASE=http://localhost:8080/v1
export OPENAI_API_KEY=sk-

python query.py
```
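
A minimal sketch of the query side under the same assumptions (older `llama_index` API; the question string is just an example):

```python
import os

os.environ.setdefault("OPENAI_API_BASE", "http://localhost:8080/v1")
os.environ.setdefault("OPENAI_API_KEY", "sk-")

from llama_index import StorageContext, load_index_from_storage

# Reload the index persisted by store.py.
storage_context = StorageContext.from_defaults(persist_dir="./storage")
index = load_index_from_storage(storage_context)

# Ask a question against the indexed documents.
query_engine = index.as_query_engine()
print(query_engine.query("What is this document set about?"))
```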
## Update
To update our vector database, run `update.py`:
```bash
export OPENAI_API_BASE=http://localhost:8080/v1
export OPENAI_API_KEY=sk-

python update.py
```
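
Under the same assumptions, an update script can reload the persisted index, insert documents from `data/`, and persist it again; the repository's actual `update.py` may differ:

```python
import os

os.environ.setdefault("OPENAI_API_BASE", "http://localhost:8080/v1")
os.environ.setdefault("OPENAI_API_KEY", "sk-")

from llama_index import SimpleDirectoryReader, StorageContext, load_index_from_storage

# Reload the existing index from ./storage.
storage_context = StorageContext.from_defaults(persist_dir="./storage")
index = load_index_from_storage(storage_context)

# Insert the documents from data/ and persist the refreshed index.
for doc in SimpleDirectoryReader("data").load_data():
    index.insert(doc)

index.storage_context.persist(persist_dir="./storage")
```

Note that naively re-inserting every document duplicates entries that are already indexed; deduplication is left out of this sketch.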