LocalAI


LocalAI is a drop-in replacement REST API compatible with OpenAI for local CPU inferencing. It allows you to run models locally or on-prem with consumer-grade hardware. It is based on llama.cpp, gpt4all, rwkv.cpp and ggml, and includes support for GPT4ALL-J, which is licensed under Apache 2.0.

  • OpenAI compatible API
  • Supports multiple models
  • Once loaded the first time, models are kept in memory for faster inference
  • Support for prompt templates
  • Doesn't shell out, but uses C bindings for faster inference and better performance.

LocalAI is a community-driven project focused on making AI accessible to anyone. Contributions, feedback and PRs are welcome! It was initially created by mudler at the SpectroCloud OSS Office.

News

Socials and community chatter

Model compatibility

It is compatible with the models supported by llama.cpp, and also supports GPT4ALL-J and cerebras-GPT with ggml.

Tested with:

Vicuna, Alpaca, LLaMa...

llama.cpp based models are compatible

GPT4ALL

Note: You might need to convert older models to the new format; see here, for instance, to run gpt4all.

GPT4ALL-J

No changes required to the model.

RWKV

A full example on how to run a rwkv model is in the examples.

Note: rwkv models have an associated tokenizer that needs to be provided alongside the model:

36464540 -rw-r--r--  1 mudler mudler 1.2G May  3 10:51 rwkv_small
36464543 -rw-r--r--  1 mudler mudler 2.4M May  3 10:51 rwkv_small.tokenizer.json
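
With the files laid out as above, the model can be queried like any other; a minimal sketch, using the rwkv_small file name from the listing:

curl http://localhost:8080/v1/completions -H "Content-Type: application/json" -d '{
     "model": "rwkv_small",
     "prompt": "A long time ago,",
     "temperature": 0.7
   }'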

Others

It should also be compatible with StableLM and GPTNeoX ggml models (untested).

Hardware requirements

Depending on the model you are attempting to run, you might need more RAM or CPU resources. Check out also here for ggml-based backends. rwkv is less expensive on resources.

Usage

LocalAI comes by default as a container image. You can check out all the available images with corresponding tags here.

The easiest way to run LocalAI is by using docker-compose:


git clone https://github.com/go-skynet/LocalAI

cd LocalAI

# (optional) Checkout a specific LocalAI tag
# git checkout -b build <TAG>

# copy your models to models/
cp your-model.bin models/

# (optional) Edit the .env file to set things like context size and threads
# vim .env

# start with docker-compose
docker-compose up -d --build

# Now API is accessible at localhost:8080
curl http://localhost:8080/v1/models
# {"object":"list","data":[{"id":"your-model.bin","object":"model"}]}

curl http://localhost:8080/v1/completions -H "Content-Type: application/json" -d '{
     "model": "your-model.bin",            
     "prompt": "A long time ago in a galaxy far, far away",
     "temperature": 0.7
   }'

Example: Use GPT4ALL-J model

# Clone LocalAI
git clone https://github.com/go-skynet/LocalAI

cd LocalAI

# (optional) Checkout a specific LocalAI tag
# git checkout -b build <TAG>

# Download gpt4all-j to models/
wget https://gpt4all.io/models/ggml-gpt4all-j.bin -O models/ggml-gpt4all-j

# Use a template from the examples
cp -rf prompt-templates/ggml-gpt4all-j.tmpl models/

# (optional) Edit the .env file to set things like context size and threads
# vim .env

# start with docker-compose
docker-compose up -d --build

# Now API is accessible at localhost:8080
curl http://localhost:8080/v1/models
# {"object":"list","data":[{"id":"ggml-gpt4all-j","object":"model"}]}

curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{
     "model": "ggml-gpt4all-j",
     "messages": [{"role": "user", "content": "How are you?"}],
     "temperature": 0.9 
   }'

# {"model":"ggml-gpt4all-j","choices":[{"message":{"role":"assistant","content":"I'm doing well, thanks. How about you?"}}]}

To build locally, run make build (see below).

Other examples

To see other examples of how to integrate with other projects, for instance chatbot-ui, see: examples.

Advanced configuration

LocalAI can be configured to serve user-defined models with a set of default parameters and templates.

You can create multiple YAML files in the models path, or specify a single YAML configuration file. Consider the following models folder in the example/chatbot-ui:

base ❯ ls -liah examples/chatbot-ui/models 
36487587 drwxr-xr-x 2 mudler mudler 4.0K May  3 12:27 .
36487586 drwxr-xr-x 3 mudler mudler 4.0K May  3 10:42 ..
36465214 -rw-r--r-- 1 mudler mudler   10 Apr 27 07:46 completion.tmpl
36464855 -rw-r--r-- 1 mudler mudler 3.6G Apr 27 00:08 ggml-gpt4all-j
36464537 -rw-r--r-- 1 mudler mudler  245 May  3 10:42 gpt-3.5-turbo.yaml
36467388 -rw-r--r-- 1 mudler mudler  180 Apr 27 07:46 gpt4all.tmpl

The gpt-3.5-turbo.yaml file defines the gpt-3.5-turbo model, which is an alias for gpt4all-j with pre-defined options.

For instance, consider the following, which declares gpt-3.5-turbo backed by the ggml-gpt4all-j model:

name: gpt-3.5-turbo
# Default model parameters
parameters:
  # Relative to the models path
  model: ggml-gpt4all-j
  # temperature
  temperature: 0.3
  # all the OpenAI request options here..

# Default context size
context_size: 512
threads: 10
# Define a backend (optional). By default it will try to guess the backend the first time the model is interacted with.
backend: gptj # available: llama, stablelm, gpt2, gptj, rwkv
# stopwords (if supported by the backend)
stopwords:
- "HUMAN:"
- "### Response:"
# define chat roles
roles:
  user: "HUMAN:"
  system: "GPT:"
template:
  # template file ".tmpl" with the prompt template to use by default on the endpoint call. Note there is no extension in the files
  completion: completion
  chat: ggml-gpt4all-j

Specifying a config file via the CLI allows you to declare models in a single file as a list, for instance:

- name: list1
  parameters:
    model: testmodel
  context_size: 512
  threads: 10
  stopwords:
  - "HUMAN:"
  - "### Response:"
  roles:
    user: "HUMAN:"
    system: "GPT:"
  template:
    completion: completion
    chat: ggml-gpt4all-j
- name: list2
  parameters:
    model: testmodel
  context_size: 512
  threads: 10
  stopwords:
  - "HUMAN:"
  - "### Response:"
  roles:
    user: "HUMAN:"
    system: "GPT:"
  template:
    completion: completion
    chat: ggml-gpt4all-j
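
A config file like the above can then be passed on startup; a minimal sketch, assuming the file is saved as models.yaml:

./local-ai --models-path ./models --config-file ./models.yaml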

See also chatbot-ui as an example on how to use config files.

Prompt templates

The API doesn't inject a default prompt for talking to the model. You have to use a prompt similar to what's described in the stanford-alpaca docs: https://github.com/tatsu-lab/stanford_alpaca#data-release.

You can use a default template for every model present in your model path by creating a corresponding file with the `.tmpl` suffix next to your model. For instance, if the model is called `foo.bin`, you can create a sibling file, `foo.bin.tmpl`, which will be used as a default prompt and can be used with alpaca:
The below instruction describes a task. Write a response that appropriately completes the request.

### Instruction:
{{.Input}}

### Response:

See the prompt-templates directory in this repository for templates for some of the most popular models.
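
For instance, to reuse one of the bundled templates for a hypothetical model file foo.bin (the template name is illustrative; check the prompt-templates directory for the available files):

# copy a bundled template next to the model, named after the model file plus the .tmpl suffix
cp prompt-templates/ggml-gpt4all-j.tmpl models/foo.bin.tmpl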

For the edit endpoint, an example template for alpaca-based models can be:

Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{{.Instruction}}

### Input:
{{.Input}}

### Response:

CLI

You can control LocalAI with command line arguments to specify a binding address or the number of threads.

Usage:

local-ai --models-path <model_path> [--address <address>] [--threads <num_threads>]
| Parameter | Environment Variable | Default Value | Description |
| --- | --- | --- | --- |
| models-path | MODELS_PATH | | The path where you have models (ending with .bin). |
| threads | THREADS | Number of physical cores | The number of threads to use for text generation. |
| address | ADDRESS | :8080 | The address and port to listen on. |
| context-size | CONTEXT_SIZE | 512 | Default token context size. |
| debug | DEBUG | false | Enable debug mode. |
| config-file | CONFIG_FILE | empty | Path to a LocalAI config file. |
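
For example, the same settings can be passed either as flags or as environment variables; a minimal sketch using the parameters above:

# via flags
./local-ai --models-path ./models --address ":8080" --threads 4 --context-size 512

# or equivalently, via environment variables
MODELS_PATH=./models ADDRESS=":8080" THREADS=4 CONTEXT_SIZE=512 ./local-ai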

Setup

Currently LocalAI comes as a container image and can be used with docker or a container engine of choice. You can check out all the available images with corresponding tags here.

Docker

Example of starting the API with `docker`:
docker run -p 8080:8080 -ti --rm quay.io/go-skynet/local-ai:latest --models-path /path/to/models --context-size 700 --threads 4

You should see:

┌───────────────────────────────────────────────────┐ 
│                   Fiber v2.42.0                   │ 
│               http://127.0.0.1:8080               │ 
│       (bound on host 0.0.0.0 and port 8080)       │ 
│                                                   │ 
│ Handlers ............. 1  Processes ........... 1 │ 
│ Prefork ....... Disabled  PID ................. 1 │ 
└───────────────────────────────────────────────────┘ 
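
Note that when running in a container the models directory must be reachable from inside the container; a sketch mounting a local models folder (paths are illustrative):

docker run -p 8080:8080 -v $PWD/models:/models -ti --rm quay.io/go-skynet/local-ai:latest --models-path /models --context-size 700 --threads 4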

Build locally

In order to build the LocalAI container image locally you can use docker:

# build the image
docker build -t localai .
docker run localai

Or you can build the binary with make:

make build

Build on mac

Building on Mac (M1 or M2) works, but you may need to install some prerequisites using brew.

The steps below have been tested by one Mac user and found to work. Note that this doesn't use docker to run the server:

# install build dependencies
brew install cmake
brew install go

# clone the repo
git clone https://github.com/go-skynet/LocalAI.git

cd LocalAI

# build the binary
make build

# Download gpt4all-j to models/
wget https://gpt4all.io/models/ggml-gpt4all-j.bin -O models/ggml-gpt4all-j

# Use a template from the examples
cp -rf prompt-templates/ggml-gpt4all-j.tmpl models/

# Run LocalAI
./local-ai --models-path ./models/ --debug

# Now API is accessible at localhost:8080
curl http://localhost:8080/v1/models

curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{
     "model": "ggml-gpt4all-j",
     "messages": [{"role": "user", "content": "How are you?"}],
     "temperature": 0.9 
   }'

Windows compatibility

It should work; however, you need to make sure you give enough resources to the container. See https://github.com/go-skynet/LocalAI/issues/2

Run LocalAI in Kubernetes

LocalAI can be installed inside Kubernetes with helm.

  1. Add the helm repo
    helm repo add go-skynet https://go-skynet.github.io/helm-charts/
    
  2. Create a values file with your settings:
cat <<EOF > values.yaml
deployment:
  image: quay.io/go-skynet/local-ai:latest
  env:
    threads: 4
    contextSize: 1024
    modelsPath: "/models"
# Optionally create a PVC, mount the PV to the LocalAI Deployment,
# and download a model to prepopulate the models directory
modelsVolume:
  enabled: true
  url: "https://gpt4all.io/models/ggml-gpt4all-j.bin"
  pvc:
    size: 6Gi
    accessModes:
    - ReadWriteOnce
  auth:
    # Optional value for HTTP basic access authentication header
    basic: "" # 'username:password' base64 encoded
service:
  type: ClusterIP
  annotations: {}
  # If using an AWS load balancer, you'll need to override the default 60s load balancer idle timeout
  # service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "1200"
EOF
  3. Install the helm chart:
helm repo update
helm install local-ai go-skynet/local-ai -f values.yaml

Check out also the helm chart repository on GitHub.

Supported OpenAI API endpoints

You can check out the OpenAI API reference.

Below is the list of supported endpoints and parameters.

Note:

  • You can also specify the model as part of the OpenAI token (see the example below).
  • If only one model is available, the API will use it for all the requests.
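
For instance, a client that only lets you set an API key can still select a model; a minimal sketch, assuming the bearer token is used as the model name when the model field is omitted:

curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -H "Authorization: Bearer ggml-gpt4all-j" -d '{
     "messages": [{"role": "user", "content": "How are you?"}],
     "temperature": 0.9
   }'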

Chat completions

For example, to generate a chat completion, you can send a POST request to the `/v1/chat/completions` endpoint with the messages as the request body:
curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{
     "model": "ggml-koala-7b-model-q4_0-r2.bin",
     "messages": [{"role": "user", "content": "Say this is a test!"}],
     "temperature": 0.7
   }'

Available additional parameters: top_p, top_k, max_tokens
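
A sketch with these additional parameters set (values are illustrative):

curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{
     "model": "ggml-koala-7b-model-q4_0-r2.bin",
     "messages": [{"role": "user", "content": "Say this is a test!"}],
     "temperature": 0.7,
     "top_p": 0.9,
     "top_k": 40,
     "max_tokens": 64
   }'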

Edit completions

To generate an edit completion you can send a POST request to the `/v1/edits` endpoint with the instruction as the request body:
curl http://localhost:8080/v1/edits -H "Content-Type: application/json" -d '{
     "model": "ggml-koala-7b-model-q4_0-r2.bin",
     "instruction": "rephrase",
     "input": "Black cat jumped out of the window",
     "temperature": 0.7
   }'

Available additional parameters: top_p, top_k, max_tokens.

Completions

To generate a completion, you can send a POST request to the /v1/completions endpoint with the prompt as the request body:

curl http://localhost:8080/v1/completions -H "Content-Type: application/json" -d '{
     "model": "ggml-koala-7b-model-q4_0-r2.bin",
     "prompt": "A long time ago in a galaxy far, far away",
     "temperature": 0.7
   }'

Available additional parameters: top_p, top_k, max_tokens

List models

You can list all the models available with:
curl http://localhost:8080/v1/models

Frequently asked questions

Here are answers to some of the most common questions.

How do I get models?

Most ggml-based models should work, but newer models may require additions to the API. If a model doesn't work, please feel free to open up issues. However, be cautious about downloading models from the internet directly onto your machine, as there may be security vulnerabilities in llama.cpp or ggml that could be maliciously exploited. Some models can be found on Hugging Face: https://huggingface.co/models?search=ggml, and models from gpt4all should also work: https://github.com/nomic-ai/gpt4all.

What's the difference with Serge, or XXX?

LocalAI is a multi-model solution that doesn't focus on a specific model type (e.g., llama.cpp or alpaca.cpp); it handles all of these internally for faster inference, making it easy to set up locally and deploy to Kubernetes.

Can I use it with a Discord bot, or XXX?

Yes! If the client uses the OpenAI API and supports setting a different base URL for requests, you can point it at the LocalAI endpoint. This lets you use LocalAI with every application that was built to work with OpenAI, without changing the application!
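
For example, many OpenAI client libraries read the base URL from an environment variable; a minimal sketch, assuming the client honours the common OPENAI_API_BASE convention:

# point an OpenAI-compatible client at LocalAI instead of api.openai.com
export OPENAI_API_BASE=http://localhost:8080/v1
# some clients insist on a key being set; any placeholder value will do
export OPENAI_API_KEY=sk-xxxx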

Can this leverage GPUs?

Not currently, as ggml doesn't support GPUs yet: https://github.com/ggerganov/llama.cpp/discussions/915.

Where is the webUI?

localai-webui and chatbot-ui are available in the examples section and can be set up as per the instructions. However, as LocalAI is an API, you can already plug it into existing projects that provide UI frontends for OpenAI's APIs. There are several on GitHub that should already be compatible with LocalAI (as it mimics the OpenAI API).

Does it work with AutoGPT?

AutoGPT currently doesn't allow setting a different API URL, but there is an open PR for it, so this should be possible soon!

Projects already using LocalAI to run local models

Feel free to open up a PR to get your project listed!

Short-term roadmap

Star history

LocalAI Star history Chart

License

LocalAI is a community-driven project. It was initially created by mudler at the SpectroCloud OSS Office.

MIT

Golang bindings used

Acknowledgements

Contributors