Docker & Deployment
The system is designed with a container-first architecture to ensure consistency between development, backtesting, and production environments. Using Docker, you can orchestrate the multi-agent trading system, the FastAPI backend, and the React-based dashboard with a single command.
Infrastructure Overview
The deployment stack consists of three primary components:
- Trading Backend (FastAPI): Orchestrates the LangChain agents, manages the vector store (RAG), and handles data ingestion from NewsAPI and CoinGecko.
- Dashboard Frontend (React/Vite): Provides the real-time visualization of equity curves, trade logs, and agent reasoning.
- Persistent Storage: SQLite database for trade history and a local volume for the vector database (ChromaDB) indexing.
Prerequisites
Before deploying, ensure you have the following installed:
- Docker (20.10+)
- Docker Compose (v2.0+)
- An API key for your chosen LLM provider (Groq), or a local Ollama instance (no key required)
Configuration
Environment variables are managed through a .env file in the root directory. Copy the template and fill in your credentials:
# Core API Keys
GROQ_API_KEY=your_groq_key_here
NEWSAPI_KEY=your_newsapi_key_here
# Backend Configuration
DATABASE_URL=sqlite:///./trading_system.db
LOG_LEVEL=info
# LLM Selection (options: groq, ollama)
MODEL_BACKEND=groq
OLLAMA_BASE_URL=http://host.docker.internal:11434
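The MODEL_BACKEND switch above can be resolved at startup with a small amount of glue code. The sketch below is illustrative of that pattern, not the actual backend implementation; resolve_llm_endpoint and its defaults are assumptions:

```python
import os

def resolve_llm_endpoint() -> str:
    """Return the base URL implied by MODEL_BACKEND (hypothetical helper)."""
    backend = os.getenv("MODEL_BACKEND", "groq").lower()
    if backend == "ollama":
        # Inside Docker, host.docker.internal points back at the host machine.
        return os.getenv("OLLAMA_BASE_URL", "http://host.docker.internal:11434")
    if backend == "groq":
        if not os.getenv("GROQ_API_KEY"):
            raise RuntimeError("MODEL_BACKEND=groq requires GROQ_API_KEY to be set")
        return "https://api.groq.com"
    raise ValueError(f"Unknown MODEL_BACKEND: {backend!r}")
```

Failing fast here (rather than at the first agent call) makes misconfigured containers easier to diagnose from the startup logs.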
Deployment Steps
1. Build and Launch
To start the entire ecosystem, run:
docker-compose up --build
This command builds the frontend and backend images, links them via a virtual network, and exposes the following ports:
- Frontend Dashboard: http://localhost:5173
- Backend API (Swagger Docs): http://localhost:8000/docs
2. Initializing the Vector Store (RAG)
The system requires a knowledge base to inform agent decisions. Once the containers are running, you must index the trading documents:
docker-compose exec backend python -m agents.rag.indexer
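Before embedding, indexers of this kind typically split documents into overlapping chunks so that context at chunk boundaries is not lost. The helper below sketches that step only; chunk_text and its defaults are illustrative, not the actual agents.rag.indexer code:

```python
def chunk_text(text: str, size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into fixed-size character chunks that overlap by
    `overlap` characters, so sentences spanning a boundary appear whole
    in at least one chunk."""
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    chunks = []
    step = size - overlap
    for start in range(0, len(text), step):
        chunks.append(text[start:start + size])
        if start + size >= len(text):
            break  # final chunk already covers the tail of the text
    return chunks
```

Each chunk would then be embedded and written to the ChromaDB collection alongside its source metadata.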
3. Running a Backtest in Docker
You can trigger backtests within the containerized environment via the CLI. This will populate the dashboard with data:
docker-compose exec backend python run_backtest.py --days 30 --strategy multi-agent
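The flags in that command map onto a standard argparse interface. The parser below is a hypothetical sketch mirroring the --days and --strategy options shown above, not the committed run_backtest.py:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(
        description="Run a backtest inside the container")
    parser.add_argument("--days", type=int, default=30,
                        help="length of the historical window to replay")
    parser.add_argument("--strategy", default="multi-agent",
                        help="which strategy pipeline to evaluate")
    return parser

# Parsing the flags from the command shown above:
args = build_parser().parse_args(["--days", "30", "--strategy", "multi-agent"])
```

Keeping the CLI thin like this lets the same entry point run both inside the container and directly on the host during development.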
Docker Compose Structure
The system uses a standard docker-compose.yml to manage service dependencies. Below is the service definition for the production-ready stack:
services:
  backend:
    build:
      context: .
      dockerfile: backend.Dockerfile
    ports:
      - "8000:8000"
    volumes:
      - ./logs:/app/logs
      - ./knowledge:/app/knowledge
      - ./data:/app/data
    env_file: .env
  frontend:
    build:
      context: ./dashboard/frontend
      dockerfile: Dockerfile
    ports:
      - "5173:5173"
    environment:
      - VITE_API_BASE=http://localhost:8000
    depends_on:
      - backend
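Note that a plain depends_on only waits for the backend container to start, not for the API to be ready. One way to tighten this is a healthcheck with a readiness condition; the fragment below is a sketch and assumes curl is available inside the backend image:

```yaml
services:
  backend:
    healthcheck:
      test: ["CMD-SHELL", "curl -f http://localhost:8000/docs || exit 1"]
      interval: 10s
      timeout: 5s
      retries: 5
  frontend:
    depends_on:
      backend:
        condition: service_healthy
```

With this in place, the frontend will not start serving until the backend reports healthy.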
Production Considerations
Persistence
The system uses Docker volumes to ensure that trade history and backtest reports persist across container restarts. Ensure the ./logs and ./data folders on your host machine have appropriate write permissions.
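If host permissions are a recurring problem (common on shared servers), named volumes are an alternative to the bind mounts shown earlier, letting Docker manage ownership itself. The volume names below are illustrative, not part of the committed compose file:

```yaml
services:
  backend:
    volumes:
      - trading_logs:/app/logs
      - trading_data:/app/data

volumes:
  trading_logs:
  trading_data:
```

The trade-off is that the files are no longer directly visible in the project directory; use docker volume inspect to locate them on the host.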
Local LLM Connectivity
If using Ollama for local inference:
- On Linux: host.docker.internal is not defined by default, so either run the backend with network_mode: host or map the name to the host gateway via extra_hosts (Docker 20.10+), then point OLLAMA_BASE_URL at http://host.docker.internal:11434.
- On macOS/Windows: Use http://host.docker.internal:11434 in your .env to allow the container to communicate with the host machine's Ollama service.
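For Linux hosts, the host.docker.internal name can be declared explicitly in docker-compose.yml. This fragment is a sketch; the host-gateway mapping requires Docker 20.10 or later:

```yaml
services:
  backend:
    extra_hosts:
      - "host.docker.internal:host-gateway"
```

This keeps the same OLLAMA_BASE_URL value working across Linux, macOS, and Windows without per-platform .env changes.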
Resource Allocation
The Multi-Agent Orchestrator can be resource-intensive when running parallel "debate" phases between agents. It is recommended to allocate at least 4GB of RAM to the Docker engine for stable performance.
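Beyond sizing the Docker engine itself, you can cap the backend container so a runaway debate phase cannot starve the host. The values below are a sketch matching the 4GB recommendation above, not tuned limits:

```yaml
services:
  backend:
    mem_limit: 4g        # hard memory cap for the container
    memswap_limit: 4g    # equal to mem_limit, i.e. no extra swap
```

If the orchestrator is regularly killed at this cap, raise the limit rather than removing it, so memory pressure stays visible in docker stats.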