# Deployment Options
My Independent AI is designed to be flexible, supporting setups ranging from a single laptop to a multi-device distributed ecosystem.
## 1. Single Device Setup (Recommended for Beginners)
If you are only using one device (e.g., your laptop or a single desktop), using Docker Compose is the fastest way to get started.
- How it works: One `docker-compose.yml` file spins up the Dashboard, Ollama, and Qdrant.
- Benefits: Instant setup, no manual configuration of backing services, and all data stays in the local Docker volume.
- See Guide: Setup locally
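A minimal sketch of what such a Compose file might look like. The service names, ports, and the dashboard image name below are assumptions for illustration, not the project's actual file:

```yaml
# Hypothetical docker-compose.yml sketch — image names, ports, and
# volume paths are assumptions, not the project's actual configuration.
services:
  dashboard:
    image: my-independent-ai/dashboard:latest   # assumed image name
    ports:
      - "8080:8080"
    depends_on: [ollama, qdrant]
  ollama:
    image: ollama/ollama:latest
    volumes:
      - ollama-data:/root/.ollama               # model weights persist here
  qdrant:
    image: qdrant/qdrant:latest
    volumes:
      - qdrant-data:/qdrant/storage             # vector data persists here
volumes:
  ollama-data:
  qdrant-data:
```

Because all three services and their named volumes live in one file, `docker compose up -d` brings the whole stack up and `docker compose down` tears it down without losing data.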
## 2. Multi-Device Setup (Distributed)
Status: WIP
If you want to access your AI and sync data across multiple devices (e.g., MacBook Pro, Mac Studio, and a NAS), a distributed approach is required.
- How it works:
    - Use Google Cloud Storage (GCS) as a central sync hub for your data and importer states.
    - Central Resources: The following components must be accessed centrally (e.g., via GCS/Litestream sync) to ensure consistency:
        - Vector DB: A shared Qdrant instance (either on a dedicated VM or a primary device).
        - Privacy Mapping: The PII `mapping.db` SQLite database.
        - Importer State: The `importer.db`, which tracks the state of imported files.
        - System Settings: The `admin-dashboard.db` (WIP; will replace `config.yaml` to centrally manage all settings).
    - Manual installation (`uv sync`) on each node.
- Benefits: Seamless access from any device, centralized data sovereignty.
- Note: This setup currently requires more manual configuration (Work In Progress).
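To illustrate the Litestream side of the sync, here is a sketch of what a `litestream.yml` replicating the SQLite databases to GCS could look like. The bucket name and local paths are assumptions:

```yaml
# Hypothetical litestream.yml sketch — bucket name and database
# paths are assumptions chosen for illustration.
dbs:
  - path: /data/mapping.db            # PII privacy mapping database
    replicas:
      - url: gcs://my-ai-sync-bucket/mapping
  - path: /data/importer.db           # importer state database
    replicas:
      - url: gcs://my-ai-sync-bucket/importer
```

Each node would run Litestream against this config so that writes to the SQLite files are continuously replicated to the shared GCS bucket, which other devices then restore from.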
## 3. Offloading to the Cloud (For Low-End Hardware)
Status: WIP
If your local hardware is not powerful enough to run modern LLMs or embeddings efficiently, you can offload heavy compute tasks to Google Cloud Platform.
- How it works:
- Use Terraform to provision a GCP VM for the Vector DB (Qdrant).
- Optionally use Vertex AI for generating embeddings and inference.
- Benefits: High performance even on weak local hardware, "Scale to Zero" support to minimize costs.
- Note: This is an advanced setup (Work In Progress).
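As a sketch of the Terraform step above, a minimal GCP VM for Qdrant might be declared like this. The project ID, machine type, region, and instance name are all assumptions, not the project's actual Terraform module:

```hcl
# Hypothetical Terraform sketch — project ID, machine type, zone, and
# names are placeholder assumptions for illustration.
provider "google" {
  project = "my-independent-ai"      # assumed project ID
  region  = "us-central1"
}

resource "google_compute_instance" "qdrant" {
  name         = "qdrant-vm"
  machine_type = "e2-standard-4"
  zone         = "us-central1-a"

  boot_disk {
    initialize_params {
      # Container-Optimized OS, suitable for running the Qdrant container
      image = "projects/cos-cloud/global/images/family/cos-stable"
    }
  }

  network_interface {
    network = "default"
    access_config {}                 # ephemeral external IP
  }
}
```

A real setup would additionally attach a persistent disk for the vector data and restrict the firewall to your own devices; stopping the VM when idle is what enables the "Scale to Zero" cost savings mentioned above.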