# Set up My Independent AI locally
The easiest way to get My Independent AI running on your local machine is using Docker Compose. This will spin up the Dashboard, Ollama (the LLM engine), and Qdrant (the vector database) automatically.
## Quick Start
1. **Clone the repository.**

2. **Start the stack.**

3. **Access the Dashboard:** open your browser and go to http://localhost:8501.
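The Quick Start steps above can be sketched as shell commands. The repository URL and directory name are placeholders (the real ones are not given here), and `docker compose up -d` assumes the compose file sits at the repository root:

```shell
# 1. Clone the repository (URL is a placeholder -- substitute the real one)
git clone https://github.com/<your-org>/my-independent-ai.git
cd my-independent-ai

# 2. Start the stack in the background; images are built/pulled on first run
docker compose up -d --build

# 3. The Dashboard should now be reachable at http://localhost:8501
```

Running with `-d` keeps the services in the background; drop it if you want to watch the logs while the models download.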
> [!NOTE]
> On the first run, the stack automatically pulls `llama3.2` (~2 GB) and `nomic-embed-text` (~300 MB). Please be patient, as this might take a few minutes depending on your internet speed.
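To confirm the models finished downloading, you can list what the Ollama container has pulled; the service name `ollama` is an assumption about the compose file:

```shell
# List models pulled by the Ollama service (service name is an assumption)
docker compose exec ollama ollama list
```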
## Running without Docker (Development)
If you'd like to run components individually for development:
1. **Install dependencies.**

2. **Start backing services** (Ollama and Qdrant).

3. **Run the Dashboard.**
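Assuming the Dashboard is a Python Streamlit app (port 8501 is Streamlit's default) with a standard `requirements.txt`, the development steps might look like the sketch below; the dependency file, service names, and entry-point script are all guesses, not confirmed by this page:

```shell
# 1. Install Python dependencies into a virtual environment
python -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt            # requirements.txt is an assumption

# 2. Start only the backing services in Docker (service names are assumptions)
docker compose up -d ollama qdrant

# 3. Run the Dashboard locally (entry-point file name is a guess)
streamlit run dashboard.py
```

This keeps Ollama and Qdrant in containers while the Dashboard runs from your working tree, so code changes are picked up without rebuilding images.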