examples: use gallery in chatbot-ui, add flowise (#438)
Signed-off-by: mudler <mudler@mocaccino.org>
parent 577d36b596
commit 11af09faf3
```diff
@@ -1 +0,0 @@
-{{.Input}}
```
```diff
@@ -1,16 +0,0 @@
-name: gpt-3.5-turbo
-parameters:
-  model: ggml-gpt4all-j
-  top_k: 80
-  temperature: 0.2
-  top_p: 0.7
-context_size: 1024
-stopwords:
-- "HUMAN:"
-- "GPT:"
-roles:
-  user: " "
-  system: " "
-template:
-  completion: completion
-  chat: gpt4all
```
```diff
@@ -1,4 +0,0 @@
-The prompt below is a question to answer, a task to complete, or a conversation to respond to; decide which and write an appropriate response.
-### Prompt:
-{{.Input}}
-### Response:
```
````diff
@@ -0,0 +1,26 @@
+# flowise
+
+Example of integration with [FlowiseAI/Flowise](https://github.com/FlowiseAI/Flowise).
+
+![Screenshot from 2023-05-30 18-01-03](https://github.com/go-skynet/LocalAI/assets/2420543/02458782-0549-4131-971c-95ee56ec1af8)
+
+You can check a demo video in the Flowise PR: https://github.com/FlowiseAI/Flowise/pull/123
+
+## Run
+
+In this example LocalAI will download the gpt4all model and set it up as "gpt-3.5-turbo". See the `docker-compose.yaml`.
+
+```bash
+# Clone LocalAI
+git clone https://github.com/go-skynet/LocalAI
+
+cd LocalAI/examples/flowise
+
+# start with docker-compose
+docker-compose up --pull always
+```
+
+## Accessing flowise
+
+Open http://localhost:3000.
````
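Once the stack is up, Flowise talks to LocalAI through its OpenAI-compatible API. A minimal sketch of an equivalent client request, assuming the port mapping from the `docker-compose.yaml` above (`localhost:8080`) and the preloaded `gpt-3.5-turbo` alias:

```python
import json
import urllib.request

def build_chat_request(base_url, model, prompt):
    """Build a POST request for LocalAI's OpenAI-compatible
    /v1/chat/completions endpoint (the same API Flowise calls)."""
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("http://localhost:8080", "gpt-3.5-turbo", "Hello")
print(req.full_url)
# Sending it requires the running container from docker-compose:
#   with urllib.request.urlopen(req) as resp: print(resp.read())
```

The same endpoint and model name go into Flowise's ChatOpenAI node configuration, pointing its base URL at LocalAI instead of api.openai.com.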
```diff
@@ -0,0 +1,37 @@
+version: '3.6'
+
+services:
+  api:
+    image: quay.io/go-skynet/local-ai:latest
+    # As initially LocalAI will download the models defined in PRELOAD_MODELS
+    # you might need to tweak the healthcheck values here according to your network connection.
+    # Here we give a timespan of 20m to download all the required files.
+    healthcheck:
+      test: ["CMD", "curl", "-f", "http://localhost:8080/readyz"]
+      interval: 1m
+      timeout: 20m
+      retries: 20
+    build:
+      context: ../../
+      dockerfile: Dockerfile
+    ports:
+      - 8080:8080
+    environment:
+      - DEBUG=true
+      - MODELS_PATH=/models
+      # You can preload different models here as well.
+      # See: https://github.com/go-skynet/model-gallery
+      - 'PRELOAD_MODELS=[{"url": "github:go-skynet/model-gallery/gpt4all-j.yaml", "name": "gpt-3.5-turbo"}]'
+    volumes:
+      - ./models:/models:cached
+    command: ["/usr/bin/local-ai"]
+
+  flowise:
+    depends_on:
+      api:
+        condition: service_healthy
+    image: flowiseai/flowise
+    ports:
+      - 3000:3000
+    volumes:
+      - ~/.flowise:/root/.flowise
+    command: /bin/sh -c "sleep 3; flowise start"
```
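The `depends_on: condition: service_healthy` gate means Flowise only starts after LocalAI's `/readyz` healthcheck passes, i.e. after the preloaded models have finished downloading. The same wait can be sketched from client code; interval and retry count below are hypothetical defaults mirroring the compose healthcheck budget (1m × 20 retries ≈ 20m worst case):

```python
import time
import urllib.error
import urllib.request

def wait_until_ready(base_url, interval_s=60, retries=20, _sleep=time.sleep):
    """Poll LocalAI's /readyz endpoint (the one the compose healthcheck
    curls) until it answers 200, or give up after `retries` attempts."""
    url = f"{base_url}/readyz"
    for _ in range(retries):
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            pass  # not up yet (or still downloading models); retry
        _sleep(interval_s)
    return False
```

On a slow connection, raising `retries` here and in the compose healthcheck serves the same purpose: giving the initial model download enough time before anything declares the API dead.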