diff --git a/README.md b/README.md
index f8a08e3..2a376dd 100644
--- a/README.md
+++ b/README.md
@@ -17,10 +17,11 @@
 - Support for prompt templates
 - Doesn't shell-out, but uses C bindings for a faster inference and better performance. Uses [go-llama.cpp](https://github.com/go-skynet/go-llama.cpp) and [go-gpt4all-j.cpp](https://github.com/go-skynet/go-gpt4all-j.cpp).
 
-Reddit post: https://www.reddit.com/r/selfhosted/comments/12w4p2f/localai_openai_compatible_api_to_run_llm_models/
-
 LocalAI is a community-driven project, focused on making the AI accessible to anyone. Any contribution, feedback and PR is welcome! It was initially created by [mudler](https://github.com/mudler/) at the [SpectroCloud OSS Office](https://github.com/spectrocloud).
 
+Twitter: https://twitter.com/LocalAI_API
+Reddit post: https://www.reddit.com/r/selfhosted/comments/12w4p2f/localai_openai_compatible_api_to_run_llm_models/
+
 ## Model compatibility
 
 It is compatible with the models supported by [llama.cpp](https://github.com/ggerganov/llama.cpp) supports also [GPT4ALL-J](https://github.com/nomic-ai/gpt4all) and [cerebras-GPT with ggml](https://huggingface.co/lxe/Cerebras-GPT-2.7B-Alpaca-SP-ggml).
@@ -430,7 +431,7 @@ Feel free to open up a PR to get your project listed!
 - [x] Multi-model support
 - [x] Have a webUI!
 - [x] Allow configuration of defaults for models.
-- [ ] Enable automatic downloading of models from a curated gallery, with only free-licensed models.
+- [ ] Enable automatic downloading of models from a curated gallery, with only free-licensed models, directly from the webui.
 
 ## Star history
 