|
|
|
@@ -21,9 +21,9 @@ In a nutshell:
- NO GPU required. NO Internet access is required either.
- Optional GPU acceleration is available for `llama.cpp`-compatible LLMs. See also the [build section](https://localai.io/basics/build/index.html).
- Supports multiple models:
- Text generation with GPTs (`llama.cpp`, `gpt4all.cpp`, ... and more)
- Text to Audio
- Audio to Text (Audio transcription with `whisper.cpp`)
- Image generation with stable diffusion
- Once loaded the first time, it keeps models loaded in memory for faster inference
- Doesn't shell out, but uses C++ bindings for faster inference and better performance.
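
As a usage sketch of the text-generation feature above: since the API is OpenAI-compatible, it can be exercised with a plain `curl` request. The port (`8080`) and model name (`ggml-gpt4all-j`) below are assumptions — substitute the values of your own deployment.

```shell
# Send a chat completion request to a locally running LocalAI instance.
# Assumptions: the server listens on port 8080 and a model named
# "ggml-gpt4all-j" has been made available to it.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "ggml-gpt4all-j",
        "messages": [{"role": "user", "content": "How are you?"}],
        "temperature": 0.9
      }'
```

Because the API mirrors the OpenAI specification, no GPU and no Internet access are needed for this call — inference happens entirely on the local machine.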
|
|
|
|