Add support for Auto-GPT (keldenl/gpt-llama.cpp, issue #2)
Auto-GPT support is currently blocked on two items:
- Embeddings are not working correctly and are inconsistent between models (see the embeddings sketch after this list); a built-in local embedding option for Auto-GPT would help.
- There is no clean way to set the BASE_URL: today it requires modifying Auto-GPT's code, so exposing it as an option or environment variable would make getting started easier (a configuration sketch follows the guide note below).
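Since the issue asks for a local embedding option, here is a minimal sketch of what calling an OpenAI-compatible `/v1/embeddings` endpoint served locally might look like. The server address, port, and model name are assumptions for illustration, not confirmed values from the issue.

```python
# Hypothetical sketch: querying an OpenAI-compatible /v1/embeddings endpoint
# served locally (e.g. by gpt-llama.cpp). The URL and model name below are
# assumptions, not values confirmed in the issue.
import requests

BASE_URL = "http://localhost:8000"  # assumed local server address


def get_embedding(text: str, model: str = "llama-7b") -> list[float]:
    """Request an embedding vector for `text` from the local server."""
    resp = requests.post(
        f"{BASE_URL}/v1/embeddings",
        json={"input": text, "model": model},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["data"][0]["embedding"]


if __name__ == "__main__":
    vec = get_embedding("hello world")
    # Embedding length varies with the model's hidden size (e.g. 4096 for a
    # 7B LLaMA vs. 5120 for 13B), which is one source of the inconsistency
    # between models mentioned above.
    print(len(vec))
```

Note that because different models emit vectors of different dimensions, embeddings stored in Auto-GPT's memory are not interchangeable across models, which is one concrete way the "inconsistent between models" problem shows up.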
According to the conversation in the issue, users have had trouble getting any response from the server when running Auto-GPT. The problem may come down to model capability: users report better luck with Vicuna, particularly the 13B variant, though others suggest trying different prompting strategies before resorting to fine-tuning with a LoRA.
A guide for Auto-GPT + gpt-llama.cpp is now available on GitHub, along with a demo and examples of scripts and API configurations. Users have also submitted pull requests and suggestions to help resolve these issues.
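For the BASE_URL item, a minimal sketch of the requested env-variable approach, assuming the pre-1.0 `openai` Python SDK that Auto-GPT used at the time (which exposes `openai.api_base`). The variable name `OPENAI_API_BASE_URL` is an assumption for illustration.

```python
# Minimal sketch: read the API base URL from an environment variable instead
# of modifying the code, falling back to the official API when unset.
# OPENAI_API_BASE_URL is a hypothetical variable name, not one confirmed
# by the issue; openai.api_base exists in the pre-1.0 openai SDK.
import os

import openai

openai.api_base = os.getenv("OPENAI_API_BASE_URL", "https://api.openai.com/v1")
openai.api_key = os.getenv("OPENAI_API_KEY", "sk-placeholder")
```

With something like this in place, pointing Auto-GPT at a local gpt-llama.cpp server would reduce to setting one variable before launch, e.g. `OPENAI_API_BASE_URL=http://localhost:8000/v1` (address assumed), which is exactly the "option/env variable" the issue asks for.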