From a06e467a1afcfc9d68ff282677ac0fbedbb9d825 Mon Sep 17 00:00:00 2001
From: Ettore Di Giacinto
Date: Wed, 28 Jun 2023 19:38:35 +0200
Subject: [PATCH] Update README.md (#694)

---
 README.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index d5cc93a..cc0599d 100644
--- a/README.md
+++ b/README.md
@@ -20,10 +20,10 @@ In a nutshell:
 - Local, OpenAI drop-in alternative REST API. You own your data.
 - NO GPU required. NO Internet access is required either
 - Optional, GPU Acceleration is available in `llama.cpp`-compatible LLMs. See also the [build section](https://localai.io/basics/build/index.html).
-- Supports multiple models:
+- Supports multiple models:
+- πŸ“– Text generation with GPTs (`llama.cpp`, `gpt4all.cpp`, ... and more)
 - πŸ—£ Text to Audio πŸŽΊπŸ†•
 - πŸ”ˆ Audio to Text (Audio transcription with `whisper.cpp`)
-- πŸ“– Text generation with GPTs (`llama.cpp`, `gpt4all.cpp`, ... and more)
 - 🎨 Image generation with stable diffusion
 - πŸƒ Once loaded the first time, it keep models loaded in memory for faster inference
 - ⚑ Doesn't shell-out, but uses C++ bindings for a faster inference and better performance.