
Set up My Independent AI locally

The easiest way to get My Independent AI running on your local machine is using Docker Compose. This will spin up the Dashboard, Ollama (the LLM engine), and Qdrant (the vector database) automatically.

Quick Start

  1. Clone the Repository:

    git clone https://github.com/stefanbinder/myindependent-ai.git
    cd myindependent-ai
    
  2. Start the Stack:

    docker compose up
    
  3. Access the Dashboard: Open your browser and go to http://localhost:8501.

> [!NOTE]
> On the first run, the stack automatically pulls `llama3.2` (~2 GB) and `nomic-embed-text` (~300 MB). This can take a few minutes depending on your internet speed.
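
The `docker-compose.yml` in the repository is authoritative, but as a rough sketch of what the stack above looks like (service names, volumes, and the dashboard build context here are illustrative assumptions, not copied from the repo):

```yaml
# Illustrative sketch only - see the repository's docker-compose.yml
# for the real definitions. Service names and volumes are assumptions.
services:
  dashboard:
    build: .
    ports:
      - "8501:8501"         # Streamlit dashboard (step 3 above)
    depends_on:
      - ollama
      - qdrant
  ollama:
    image: ollama/ollama
    ports:
      - "11434:11434"       # Ollama's default API port
    volumes:
      - ollama_data:/root/.ollama   # persists pulled models across restarts
  qdrant:
    image: qdrant/qdrant
    ports:
      - "6333:6333"         # Qdrant HTTP API
    volumes:
      - qdrant_data:/qdrant/storage
volumes:
  ollama_data:
  qdrant_data:
```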

Running without Docker (Development)

If you'd like to run components individually for development:

  1. Install dependencies:

    uv sync --all-packages --group dev
    
  2. Start Backing Services:

    # Terminal 1: Ollama
    ollama serve
    
    # Terminal 2: Qdrant
    docker run -p 6333:6333 qdrant/qdrant
    
  3. Run the Dashboard:

    uv run streamlit run apps/admin-dashboard/app.py
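
When running components individually, it is easy to start the dashboard before the backing services are up. A small sketch like the following can confirm that Ollama and Qdrant are reachable on their default ports (the endpoint paths used here, Ollama's `/api/tags` and Qdrant's `/healthz`, are standard for those services but are assumptions about this project's configuration):

```python
import urllib.request
import urllib.error

# Default local endpoints for the backing services started above.
# 11434 is Ollama's default API port; 6333 is Qdrant's HTTP port.
SERVICES = {
    "ollama": "http://localhost:11434/api/tags",
    "qdrant": "http://localhost:6333/healthz",
}

def check_services(timeout: float = 2.0) -> dict[str, bool]:
    """Return a mapping of service name -> whether it answered HTTP 200."""
    status: dict[str, bool] = {}
    for name, url in SERVICES.items():
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                status[name] = resp.status == 200
        except (urllib.error.URLError, OSError):
            status[name] = False
    return status

if __name__ == "__main__":
    for name, ok in check_services().items():
        print(f"{name}: {'up' if ok else 'unreachable'}")
```

If either service reports `unreachable`, revisit step 2 before starting the dashboard.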