LocalAI
LocalAI is a straightforward, drop-in replacement API compatible with OpenAI for local CPU inferencing. It is based on llama.cpp, gpt4all and ggml, and includes support for GPT4ALL-J, which is licensed under Apache 2.0.
- OpenAI compatible API
- Supports multiple models
- Once loaded the first time, it keeps models loaded in memory for faster inference
- Support for prompt templates
- Doesn't shell-out, but uses C bindings for faster inference and better performance. Uses go-llama.cpp and go-gpt4all-j.cpp.
LocalAI is a community-driven project, focused on making AI accessible to anyone. Contributions, feedback and PRs are welcome! It was initially created by mudler at the SpectroCloud OSS Office.
Socials and community chatter
- Follow @LocalAI_API on Twitter.
- Reddit post about LocalAI.
- Hacker News post - help us out by voting if you like this project.
- Tutorial on using k8sgpt with LocalAI - an excellent use case for LocalAI, using AI to analyse Kubernetes clusters.
Model compatibility
It is compatible with the models supported by llama.cpp, and also supports GPT4ALL-J and cerebras-GPT ggml models.
Tested with:
- Vicuna
- Alpaca
- GPT4ALL
- GPT4ALL-J
- Koala
- cerebras-GPT with ggml
It should also be compatible with StableLM and GPTNeoX ggml models (untested).
Note: You might need to convert older models to the new format; see here, for instance, to run gpt4all.
Usage
LocalAI comes by default as a container image. You can check out all the available images with corresponding tags here.
The easiest way to run LocalAI is by using docker-compose:
git clone https://github.com/go-skynet/LocalAI
cd LocalAI
# (optional) Checkout a specific LocalAI tag
# git checkout -b build <TAG>
# copy your models to models/
cp your-model.bin models/
# (optional) Edit the .env file to set things like context size and threads
# vim .env
# start with docker-compose
docker-compose up -d --build
# Now API is accessible at localhost:8080
curl http://localhost:8080/v1/models
# {"object":"list","data":[{"id":"your-model.bin","object":"model"}]}
curl http://localhost:8080/v1/completions -H "Content-Type: application/json" -d '{
"model": "your-model.bin",
"prompt": "A long time ago in a galaxy far, far away",
"temperature": 0.7
}'
Example: Use GPT4ALL-J model
# Clone LocalAI
git clone https://github.com/go-skynet/LocalAI
cd LocalAI
# (optional) Checkout a specific LocalAI tag
# git checkout -b build <TAG>
# Download gpt4all-j to models/
wget https://gpt4all.io/models/ggml-gpt4all-j.bin -O models/ggml-gpt4all-j
# Use a template from the examples
cp -rf prompt-templates/ggml-gpt4all-j.tmpl models/
# (optional) Edit the .env file to set things like context size and threads
# vim .env
# start with docker-compose
docker-compose up -d --build
# Now API is accessible at localhost:8080
curl http://localhost:8080/v1/models
# {"object":"list","data":[{"id":"ggml-gpt4all-j","object":"model"}]}
curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{
"model": "ggml-gpt4all-j",
"messages": [{"role": "user", "content": "How are you?"}],
"temperature": 0.9
}'
# {"model":"ggml-gpt4all-j","choices":[{"message":{"role":"assistant","content":"I'm doing well, thanks. How about you?"}}]}
To build locally, run make build (see below).
Other examples
To see other examples of how to integrate with other projects, for instance chatbot-ui, see: examples.
Prompt templates
The API doesn't inject a default prompt for talking to the model. You have to use a prompt similar to what's described in the stanford-alpaca docs: https://github.com/tatsu-lab/stanford_alpaca#data-release.
The below instruction describes a task. Write a response that appropriately completes the request.
### Instruction:
{{.Input}}
### Response:
See the prompt-templates directory in this repository for templates for some of the most popular models.
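For example, in the GPT4ALL-J walkthrough above the template is copied next to the model and appears to be matched to it by file name; a sketch of the resulting layout (the name-based matching rule is an assumption based on that example):
models/
├── ggml-gpt4all-j        # model file served by the API
└── ggml-gpt4all-j.tmpl   # prompt template applied to requests for this model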
Installation
Currently LocalAI comes as container images and can be used with docker or a container engine of your choice.
Run LocalAI in Kubernetes
LocalAI can be installed inside Kubernetes with helm.
- Add the helm repo
helm repo add go-skynet https://go-skynet.github.io/helm-charts/
- Create a values file with your settings:
cat <<EOF > values.yaml
deployment:
image: quay.io/go-skynet/local-ai:latest
env:
threads: 4
contextSize: 1024
modelsPath: "/models"
# Optionally create a PVC, mount the PV to the LocalAI Deployment,
# and download a model to prepopulate the models directory
modelsVolume:
enabled: true
url: "https://gpt4all.io/models/ggml-gpt4all-j.bin"
pvc:
size: 6Gi
accessModes:
- ReadWriteOnce
auth:
# Optional value for HTTP basic access authentication header
basic: "" # 'username:password' base64 encoded
service:
type: ClusterIP
annotations: {}
# If using an AWS load balancer, you'll need to override the default 60s load balancer idle timeout
# service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "1200"
EOF
- Install the helm chart:
helm repo update
helm install local-ai go-skynet/local-ai -f values.yaml
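To verify the deployment, you can port-forward the service and query the models endpoint. The service name below is an assumption derived from the release name used above; check kubectl get svc for the actual name in your cluster:
kubectl port-forward svc/local-ai 8080:8080 &
curl http://localhost:8080/v1/models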
Also check out the helm chart repository on GitHub.
API
LocalAI provides an API for running text generation as a service that follows the OpenAI reference and can be used as a drop-in replacement. Once loaded the first time, models are kept in memory for faster inference.
docker run -p 8080:8080 -ti --rm quay.io/go-skynet/local-ai:latest --models-path /path/to/models --context-size 700 --threads 4
You should see:
┌───────────────────────────────────────────────────┐
│ Fiber v2.42.0 │
│ http://127.0.0.1:8080 │
│ (bound on host 0.0.0.0 and port 8080) │
│ │
│ Handlers ............. 1 Processes ........... 1 │
│ Prefork ....... Disabled PID ................. 1 │
└───────────────────────────────────────────────────┘
You can control the API server options with command line arguments:
local-api --models-path <model_path> [--address <address>] [--threads <num_threads>]
The API takes the following parameters:
Parameter | Environment Variable | Default Value | Description |
---|---|---|---|
models-path | MODELS_PATH | | The path where you have models (ending with .bin). |
threads | THREADS | Number of physical cores | The number of threads to use for text generation. |
address | ADDRESS | :8080 | The address and port to listen on. |
context-size | CONTEXT_SIZE | 512 | Default token context size. |
debug | DEBUG | false | Enable debug mode. |
config-file | CONFIG_FILE | empty | Path to a LocalAI config file. |
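When running the container image, the same options can also be set through the environment variables listed above. A minimal sketch, where the mounted path and values are examples only:
docker run -p 8080:8080 -ti --rm \
  -v $PWD/models:/models \
  -e MODELS_PATH=/models -e THREADS=4 -e CONTEXT_SIZE=700 \
  quay.io/go-skynet/local-ai:latest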
Once the server is running, you can start making requests to it over HTTP, using the OpenAI API.
Supported OpenAI API endpoints
You can check out the OpenAI API reference.
Below is the list of supported endpoints and parameters.
Note:
- You can also specify the model as part of the OpenAI token.
- If only one model is available, the API will use it for all the requests.
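Regarding the first note above: a client that only lets you configure an API key can still select a model, assuming the bearer token is interpreted as the model name (a hedged sketch; verify the behaviour against your LocalAI version):
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer ggml-gpt4all-j" \
  -d '{"messages": [{"role": "user", "content": "How are you?"}]}'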
Chat completions
curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{
"model": "ggml-koala-7b-model-q4_0-r2.bin",
"messages": [{"role": "user", "content": "Say this is a test!"}],
"temperature": 0.7
}'
Available additional parameters: top_p, top_k, max_tokens.
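For example, a request that also sets these additional parameters (the values are illustrative only):
curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{
     "model": "ggml-koala-7b-model-q4_0-r2.bin",
     "messages": [{"role": "user", "content": "Say this is a test!"}],
     "temperature": 0.7,
     "top_p": 0.9,
     "top_k": 40,
     "max_tokens": 100
}'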
Completions
To generate a completion, you can send a POST request to the /v1/completions endpoint with the instruction in the request body:
curl http://localhost:8080/v1/completions -H "Content-Type: application/json" -d '{
"model": "ggml-koala-7b-model-q4_0-r2.bin",
"prompt": "A long time ago in a galaxy far, far away",
"temperature": 0.7
}'
Available additional parameters: top_p, top_k, max_tokens.
List models
curl http://localhost:8080/v1/models
Advanced configuration
LocalAI can be configured to serve user-defined models with a set of default parameters and templates.
For instance, a configuration file (gpt-3.5-turbo.yaml) can declare the "gpt-3.5-turbo" model backed by the "testmodel" model file:
name: gpt-3.5-turbo
parameters:
model: testmodel
context_size: 512
threads: 10
stopwords:
- "HUMAN:"
- "### Response:"
roles:
user: "HUMAN:"
system: "GPT:"
template:
completion: completion
chat: ggml-gpt4all-j
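With this configuration loaded, requests can address the model by its declared name instead of the underlying model file, for example:
curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{
     "model": "gpt-3.5-turbo",
     "messages": [{"role": "user", "content": "How are you?"}],
     "temperature": 0.1
}'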
Specifying a config-file via the CLI allows you to declare models in a single file as a list, for instance:
- name: list1
parameters:
model: testmodel
context_size: 512
threads: 10
stopwords:
- "HUMAN:"
- "### Response:"
roles:
user: "HUMAN:"
system: "GPT:"
template:
completion: completion
chat: ggml-gpt4all-j
- name: list2
parameters:
model: testmodel
context_size: 512
threads: 10
stopwords:
- "HUMAN:"
- "### Response:"
roles:
user: "HUMAN:"
system: "GPT:"
template:
completion: completion
chat: ggml-gpt4all-j
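A sketch of pointing the server at such a file with the config-file flag from the parameters table above (the file name and paths are examples):
# assuming the list above is saved as models.yaml
local-api --models-path ./models --config-file ./models.yaml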
See also chatbot-ui as an example of how to use config files.
Windows compatibility
It should work; however, you need to make sure you give enough resources to the container. See https://github.com/go-skynet/LocalAI/issues/2
Build locally
Pre-built images should fit most modern hardware well; however, you can also build the images manually if needed.
To build the LocalAI container image locally, you can use docker:
# build the image
docker build -t local-ai .
docker run local-ai
Or build the binary with make:
make build
Frequently asked questions
Here are answers to some of the most common questions.
How do I get models?
Most ggml-based models should work, but newer models may require additions to the API. If a model doesn't work, please feel free to open an issue. However, be cautious about downloading models from the internet directly onto your machine, as there may be security vulnerabilities in llama.cpp or ggml that could be maliciously exploited. Some models can be found on Hugging Face: https://huggingface.co/models?search=ggml, and models from gpt4all should also work: https://github.com/nomic-ai/gpt4all.
What's the difference with Serge, or XXX?
LocalAI is a multi-model solution that doesn't focus on a specific model type (e.g., llama.cpp or alpaca.cpp); it handles all of them internally, which makes inference faster and makes it easy to set up locally and deploy to Kubernetes.
Can I use it with a Discord bot, or XXX?
Yes! If the client uses OpenAI and supports setting a different base URL for requests, you can point it at the LocalAI endpoint. This lets you use LocalAI with any application built to work with OpenAI, without changing the application!
Can this leverage GPUs?
Not currently, as ggml doesn't support GPUs yet: https://github.com/ggerganov/llama.cpp/discussions/915.
Where is the webUI?
Does it work with AutoGPT?
AutoGPT currently doesn't allow setting a different API URL, but there is a PR open for it, so this should be possible soon!
Projects already using LocalAI to run local models
Feel free to open up a PR to get your project listed!
Blog posts and other articles
- https://medium.com/@tyler_97636/k8sgpt-localai-unlock-kubernetes-superpowers-for-free-584790de9b65
- https://kairos.io/docs/examples/localai/
Short-term roadmap
- Mimic OpenAI API (https://github.com/go-skynet/LocalAI/issues/10)
- Binary releases (https://github.com/go-skynet/LocalAI/issues/6)
- Upstream our golang bindings to llama.cpp (https://github.com/ggerganov/llama.cpp/issues/351) and gpt4all
- Multi-model support
- Have a webUI!
- Allow configuration of defaults for models.
- Enable automatic downloading of models from a curated gallery, with only free-licensed models, directly from the webui.
Star history
License
LocalAI is a community-driven project. It was initially created by mudler at the SpectroCloud OSS Office.
MIT
Acknowledgements
- llama.cpp
- https://github.com/tatsu-lab/stanford_alpaca
- https://github.com/cornelk/llama-go for the initial ideas
- https://github.com/antimatter15/alpaca.cpp for the light model version (this is compatible and tested only with that checkpoint model!)