What is AutoGPT?
AutoGPT is a next-generation, open-source artificial intelligence application built on OpenAI's cutting-edge GPT-4 model. This powerful technology allows the application to generate its own ideas and follow-up actions from a single human prompt, without requiring constant human interaction. AutoGPT is capable of performing a wide range of tasks, including writing test cases, debugging code, and generating innovative business ideas.
How can you access AutoGPT?
While AutoGPT is still in its experimental phase and not yet polished for everyday public use, its introduction has generated significant buzz within the tech community. As an open-source application, it is available for the wider development community to use, modify, and improve upon as they see fit, making it a powerful tool for driving progress and innovation within the field of artificial intelligence.
In order to use AutoGPT, you will need to obtain your OpenAI API key from https://platform.openai.com/account/api-keys. To use your OpenAI API key for Auto-GPT, you will need to have billing set up (i.e., a paid account). You can set up a paid account at https://platform.openai.com/account/billing/overview.
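Once you have the key, Auto-GPT reads it from an environment variable. Assuming the standard OPENAI_API_KEY variable name used by the project (verify against your version), you can place it in your .env file or export it in your shell:

```shell
# Assumed .env entry; replace the angle-bracket placeholder with your real key.
OPENAI_API_KEY=<your_openai_api_key>
```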
Installing AutoGPT
To install AutoGPT, follow these steps:
- Open a CMD, Bash, or PowerShell window by navigating to a folder on your computer, typing CMD in the folder path at the top, and pressing Enter.
- Execute the following commands:
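The exact commands depend on your setup. Assuming the official Significant-Gravitas/Auto-GPT repository, with Git, Python, and pip already installed, a typical sequence is:

```shell
# Clone the official repository and enter it
git clone https://github.com/Significant-Gravitas/Auto-GPT.git
cd Auto-GPT

# Install the required Python packages
pip install -r requirements.txt

# Create your configuration file from the template, then add your API key to it
cp .env.template .env
```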
Activity and error logs are located in the ./output/logs directory. Here are some common arguments you can use when running Auto-GPT:
- --gpt3only: run Auto-GPT using only GPT-3.5, without calling GPT-4
- --continuous: run Auto-GPT in continuous mode, without asking for user authorization before each action (use with caution)
- Replace anything in angled brackets (<>) with a value you want to specify
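For example, the two flags can be combined in a single run (a sketch; verify that your version supports both):

```shell
# Run Auto-GPT on GPT-3.5 only and in continuous mode
python -m autogpt --gpt3only --continuous
```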
Use this to enable TTS (Text-to-Speech) for Auto-GPT:

```
python -m autogpt --tts "hello world"
```

Google API Keys Setup

This section is optional; use the official Google API if you are having issues with error 429 when running a Google search. To use the google_official_search command, you need to set up your Google API keys in your environment variables.
Remember that your free daily custom search quota allows only up to 100 searches. To increase this limit, you need to assign a billing account to the project, which allows up to 10,000 daily searches.
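Assuming the variable names used by classic Auto-GPT builds for the official search, GOOGLE_API_KEY and CUSTOM_SEARCH_ENGINE_ID (verify against your version), the corresponding .env entries would look like:

```shell
# Hypothetical .env entries for google_official_search;
# replace the angle-bracket placeholders with your real values.
GOOGLE_API_KEY=<your_google_api_key>
CUSTOM_SEARCH_ENGINE_ID=<your_custom_search_engine_id>
```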
For Windows Users:

```
set GOOGLE_CLOUD_PROJECT=<your_google_cloud_project_id>
```

For macOS and Linux users:

```
export GOOGLE_CLOUD_PROJECT="<your_google_cloud_project_id>"
```

Redis Setup

CAUTION: This setup is not intended to be publicly accessible and lacks security measures. Therefore, avoid exposing Redis to the internet without a password, or at all. You can specify the memory index for Redis using the following:
```
export REDIS_INDEX=<your_memory_index>
```

Pinecone API Key Setup

Pinecone enables the storage of vast amounts of vector-based memory, allowing only relevant memories to be loaded for the agent at any given time. In the .env file, set:

```
MEMORY_BACKEND=pinecone
PINECONE_API_KEY=<your_pinecone_api_key>
```

Alternatively, you can set them from the command line (advanced):
For Windows Users:
```
set MEMORY_BACKEND=pinecone
set PINECONE_API_KEY=<your_pinecone_api_key>
```

For macOS and Linux users:

```
export MEMORY_BACKEND=pinecone
export PINECONE_API_KEY=<your_pinecone_api_key>
```

Milvus Setup

Milvus is an open-source, highly scalable vector database that stores huge amounts of vector-based memory and provides fast relevant search.
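No Milvus configuration is shown here; assuming the MEMORY_BACKEND and MILVUS_ADDR variable names from classic Auto-GPT builds (verify against your version), a minimal .env sketch for a local instance might be:

```shell
# Hypothetical .env entries for Milvus; 19530 is Milvus's default gRPC port.
MEMORY_BACKEND=milvus
MILVUS_ADDR=localhost:19530
```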
Weaviate Setup

This section is optional. Weaviate is an open-source vector database that allows for the storage of data objects and vector embeddings from ML models, and it scales seamlessly to billions of data objects. An instance of Weaviate can be created locally (using Docker), on Kubernetes, or using Weaviate Cloud Services. Although still experimental, Embedded Weaviate is supported, which allows the Auto-GPT process itself to start a Weaviate instance; to enable it, set USE_WEAVIATE_EMBEDDED to True. In either case, install the Weaviate client before usage with pip install "weaviate-client>=3.15.4". In your .env file, set the following:
```
MEMORY_BACKEND=weaviate
WEAVIATE_URL=http://localhost:8080/
WEAVIATE_THING_CLASS_NAME=AutoGPTMemory
WEAVIATE_THING_PROPERTY_NAME=content
WEAVIATE_THING_PROPERTY_TYPE=string
WEAVIATE_THING_VECTOR_FIELD_NAME=vector
```

Memory Pre-Seeding

View memory usage by using the --debug flag. Memory pre-seeding allows you to ingest files into memory before running AutoGPT. The ingestion script initializes the memory and ingests all files within the Auto-Gpt/autogpt/auto_gpt_workspace/DataFolder directory, with an overlap between chunks of 100 and a maximum chunk length of 2000. Note that you can also use the --file argument to ingest a single file.
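A sketch of what that ingestion command might look like, assuming the data_ingestion.py script and the --dir, --init, --overlap, --max_length, and --file flag names from classic Auto-GPT (verify the exact names against your version):

```shell
# Hypothetical invocation: initialize memory and ingest a whole directory
python data_ingestion.py --dir DataFolder --init --overlap 100 --max_length 2000

# Hypothetical invocation: ingest a single file instead
python data_ingestion.py --file DataFolder/notes.txt --overlap 100 --max_length 2000
```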