AgentLLM is an AI automation platform built around large language models such as GPT. Designed for managing AI instructions at scale, it has the potential to streamline a wide range of natural language processing workflows.
AgentLLM is an AI automation platform that enables effective AI instruction management across numerous providers. Its agents have adaptive memory, and a robust plugin system supports a wide range of commands, including web browsing. AgentLLM is continually expanding to enable varied applications, with growing support for numerous AI providers and models.
Run this in Docker or a Virtual Machine
You are free to ignore this notice, but if you do and the AI decides that the best course of action for its mission is to run a command that formats your whole computer, that is on you. Understand that the agent is given complete, unrestricted terminal access by design, and there are no plans to add any protections. This project aims to stay lightweight and adaptable in order to produce the best possible research results.
Monitor Your Usage
Please keep in mind that using some AI providers (such as OpenAI’s GPT-4 API) can be costly! To avoid unexpected charges, always monitor your usage. This project is not responsible for your usage in any way.
This project is still in active development and may run into issues. If you have any trouble, first check the open issues. If your problem is not listed, please submit a new issue describing the error or difficulty you encountered.
Key Features of AgentLLM
- Adaptive long-term and short-term memory management.
- A versatile plugin system with extensible commands for many AI models.
- Compatibility with a wide range of AI providers, including OpenAI GPT-3.5 and GPT-4, Oobabooga Text Generation Web UI, Kobold, llama.cpp, FastChat, and Google Bard.
- Web browsing and command execution capabilities.
- Help with code evaluation.
- Seamless Docker deployment.
- Hugging Face integration for audio-to-text conversion.
- Interoperability with platforms such as Twitter, GitHub, Google, DALL-E, and more.
- Text-to-speech options including Brian TTS, macOS TTS, and ElevenLabs.
- Continually expanding support for new AI providers and services.
You can also read How to Use Auto GPT and Agent GPT
Web Application Features
- Manage agents: View the list of available agents, add new agents, delete agents, and switch between agents.
- Set objectives: Enter objectives for the chosen agent to achieve.
- Start tasks: Instruct the task manager to begin executing tasks based on the specified objective.
- Instruct agents: Interact with agents by sending instructions and receiving replies through a chat-like interface.
- Available commands: View the list of available commands and click one to insert it into the objective or instruction input fields.
- Dark mode: Switch between light and dark frontend themes.
- Built with NextJS and Material-UI.
- Communicates with the backend through API endpoints.
Get an OpenAI API key
1. Obtain an OpenAI API key from OpenAI and add it to your `.env` file, using the provided `.env.example` as a template.
2. Download the Docker Compose configuration and the environment template, then rename the template:

```
wget https://raw.githubusercontent.com/Josh-XT/Agent-LLM/main/docker-compose.yml
wget https://raw.githubusercontent.com/Josh-XT/Agent-LLM/main/.env.example
mv .env.example .env
```

3. Run the following Docker command in the folder with your `.env` file:

```
docker compose up -d
```

4. Access the web interface at http://localhost
Running a Mac?
If the command above does not work, you’ll need to build with the Mac-specific compose file:

```
docker compose -f docker-compose-mac.yml up -d
```
Not using OpenAI? Not a problem!
Look through the Jupyter Notebooks in the repository for quick starts with alternative providers.
Reminder: Run this in Docker or a Virtual Machine!
For more detailed setup and configuration instructions, refer to the sections below.
Configuration of AgentLLM
AgentLLM uses a `.env` configuration file to store AI language model settings, API keys, and other options. Use the supplied `.env.example` as a template to create your personalized `.env` file. Configuration settings include:
- INSTANCE CONFIG: Set the agent name, objective, and initial task.
- AI_PROVIDER: Choose between OpenAI, llama.cpp, or Oobabooga for your AI provider.
- AI_PROVIDER_URI: Set the URI for custom AI providers such as Oobabooga Text Generation Web UI (default is http://127.0.0.1:7860).
- MODEL_PATH: Set the path to the AI model if using llama.cpp or other custom providers.
- COMMANDS_ENABLED: Enable or disable command extensions.
- MEMORY SETTINGS: Configure short-term and long-term memory settings.
- AI_MODEL: Specify the AI model to be used (e.g., gpt-3.5-turbo, gpt-4, text-davinci-003, Vicuna, etc.).
- AI_TEMPERATURE: Set the AI temperature (leave default if unsure).
- MAX_TOKENS: Set the maximum number of tokens for AI responses (default is 2000).
- WORKING_DIRECTORY: Set the agent’s working directory.
- EXTENSIONS_SETTINGS: Configure settings for OpenAI, Hugging Face, Selenium, Twitter, and GitHub.
- VOICE_OPTIONS: Choose between Brian TTS, Mac OS TTS, or ElevenLabs for text-to-speech.
For a detailed explanation of each setting, refer to the `.env.example` file provided in the repository.
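As an illustration, a minimal `.env` for the default OpenAI setup might look like the sketch below. The values are illustrative, and `OPENAI_API_KEY` is an assumed variable name (the other names come from the settings listed above), so confirm everything against `.env.example`:

```shell
AI_PROVIDER=OpenAI
OPENAI_API_KEY=your-api-key-here
AI_MODEL=gpt-3.5-turbo
AI_TEMPERATURE=0.7
MAX_TOKENS=2000
COMMANDS_ENABLED=true
WORKING_DIRECTORY=WORKSPACE
```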
AgentLLM provides many API endpoints for controlling agents, prompts, and chains.
Visit the API documentation to learn more about the endpoints and how to use them.
Note that this documentation is hosted locally, so these links only work while the frontend is running.
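As a rough sketch, the backend can be driven from Python over HTTP. The route and payload below are placeholders, not the real endpoints; consult the locally hosted API documentation for the actual paths:

```python
# Sketch of calling the AgentLLM backend API. The /api/agent/... route and
# the payload shape are placeholders -- check the hosted API docs for the
# real endpoints before sending anything.
import json
import urllib.request

BASE_URL = "http://localhost"  # where the Docker setup serves the stack


def build_instruct_request(agent_name: str, prompt: str) -> urllib.request.Request:
    """Build (but do not send) a POST request instructing an agent."""
    url = f"{BASE_URL}/api/agent/{agent_name}/instruct"  # placeholder route
    body = json.dumps({"prompt": prompt}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
```

Sending the request (for example with `urllib.request.urlopen`) only works while the backend from the Docker setup is running.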
To add additional commands, create a new Python file in the `commands` folder and declare a class that extends the `Commands` class. Implement the needed functionality as class methods and register them with the class so the agent can use them.
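A new command module might look like the following minimal sketch. The `Commands` base class shown here is a stand-in stub for illustration; the real base class lives in the repository, and its exact registration mechanism may differ:

```python
# Minimal sketch of a command extension. The `Commands` class below is a
# stand-in stub; in the real project you would import it from the codebase.

class Commands:
    """Stub of the base class that AgentLLM command extensions derive from."""

    def __init__(self):
        # Maps human-readable command names to the methods implementing them.
        self.commands = {}


class TextTools(Commands):
    """Hypothetical extension adding simple text-handling commands."""

    def __init__(self):
        super().__init__()
        self.commands = {
            "Count Words": self.count_words,
            "Reverse Text": self.reverse_text,
        }

    def count_words(self, text: str) -> str:
        # Return a string so the result can be fed back to the model as text.
        return str(len(text.split()))

    def reverse_text(self, text: str) -> str:
        return text[::-1]
```

For example, `TextTools().commands["Count Words"]("hello agent world")` returns `"3"`.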
To change AI providers, update the `AI_PROVIDER` value in the `.env` file. The software works with OpenAI, Oobabooga Text Generation Web UI, and llama.cpp. To add support for another provider, create a new Python file in the `provider` folder and implement the appropriate functionality.
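A provider file might be sketched as follows. The `instruct` method name and the constructor settings are assumptions modeled on the `.env` options above; check an existing file in the `provider` folder for the exact interface:

```python
# Minimal sketch of a custom provider. The interface shown (constructor
# settings plus an `instruct` method) is an assumption, not the confirmed
# contract from the repository.

class EchoProvider:
    """Hypothetical provider that echoes the prompt back, usable as a
    template for wiring in a real model API."""

    def __init__(self, AI_MODEL: str = "echo", AI_TEMPERATURE: float = 0.7,
                 MAX_TOKENS: int = 2000, **kwargs):
        # Mirror the .env settings (AI_MODEL, AI_TEMPERATURE, MAX_TOKENS).
        self.AI_MODEL = AI_MODEL
        self.AI_TEMPERATURE = float(AI_TEMPERATURE)
        self.MAX_TOKENS = int(MAX_TOKENS)

    def instruct(self, prompt: str) -> str:
        # A real provider would call its model's API here and return the
        # completion; this stub just truncates the prompt to MAX_TOKENS chars.
        return prompt[: self.MAX_TOKENS]
```

Pointing `AI_PROVIDER` at such a file would let the rest of the platform stay unchanged while the provider handles the model-specific details.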
This article is intended to help you learn about AgentLLM, and we hope it has been helpful. Please feel free to share your thoughts and feedback in the comments section below.