You can self-host CORE on your own infrastructure using Docker. The following instructions use Docker Compose to spin up a CORE instance. Make sure to read the self-hosting overview first.

Warning:
As self-hosted deployments tend to have unique requirements and configurations, we don’t provide specific advice for securing your deployment, scaling up, or improving reliability. This guide alone is unlikely to result in a production-ready deployment. Should the burden ever get too much, we’d be happy to see you on CORE Cloud, where we deal with these concerns for you.

Requirements

These are the minimum requirements for running the webapp and background job components. They can run on the same machine or on separate machines: running everything together is fine for testing, but to scale your workers you will want to run them separately.

Prerequisites

To run CORE, you will need:
  • Docker 20.10.0+
  • Docker Compose 2.20.0+
  • Node.js 18+
  • pnpm
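When checking installed versions against these minimums, `sort -V` can do version-aware comparison. A small sketch (the `version_ge` helper is illustrative, not part of CORE, and assumes GNU `sort`):

```shell
# version_ge A B — succeeds when version A is at least version B.
# Uses sort -V (version sort, GNU coreutils) to order the two strings.
version_ge() {
  [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n 1)" = "$2" ]
}

# Example checks against the minimums above (substitute the real output
# of `docker --version`, `docker compose version`, `node --version`):
version_ge 24.0.7 20.10.0 && echo "Docker OK"
version_ge 2.24.5 2.20.0  && echo "Compose OK"
version_ge 18.19.0 18.0.0 && echo "Node OK"
```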

System Requirements

Webapp & Database Machine:
  • 4+ vCPU
  • 8+ GB RAM
  • 20+ GB Storage
Background Jobs Machine (if running separately):
  • 2+ vCPU
  • 4+ GB RAM
  • 10+ GB Storage
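On Linux you can sanity-check a machine against these minimums before installing anything; a rough sketch (assumes `nproc` and `/proc/meminfo` are available, so it will not work as-is on macOS):

```shell
# Rough capacity check for a combined webapp + database machine
# (thresholds taken from the minimums listed above)
cpus=$(nproc)
mem_kb=$(grep MemTotal /proc/meminfo | awk '{print $2}')
mem_gb=$((mem_kb / 1024 / 1024))
echo "Detected: ${cpus} vCPU, ~${mem_gb} GB RAM"
[ "$cpus" -ge 4 ] || echo "Note: below the 4 vCPU recommendation"
[ "$mem_gb" -ge 8 ] || echo "Note: below the 8 GB RAM recommendation"
```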

Deployment Options

CORE offers two deployment approaches depending on your needs.

Option 1: Quick Setup

The fastest way to get CORE running locally with all components:
pnpm dlx @redplanethq/core init
This command will:
  1. Set up all required services (PostgreSQL, Neo4j, Redis, Trigger.dev)
  2. Configure environment variables automatically
  3. Start the CORE webapp
  4. Deploy background job tasks
  5. Provide you with access URLs and credentials
This approach is perfect for:
  • Development and testing
  • Small-scale deployments
  • Getting familiar with CORE

Option 2: Modular Deployment

For production environments or when you need more control, you can run CORE and Trigger.dev separately. This approach allows you to:
  • Use Trigger.dev Cloud instead of self-hosting
  • Scale background jobs independently
  • Connect multiple CORE instances to the same Trigger setup
  • Deploy on different infrastructure

Modular Deployment Setup

Step 1: Deploy CORE’s Core Services

First, deploy the main CORE application without background jobs:
# Clone the repository
git clone https://github.com/RedPlanetHQ/core.git
cd core

# Install dependencies
pnpm install

# Start core services (webapp, postgres, neo4j, redis)
docker-compose -f docker-compose.yml up -d
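Note that `docker-compose up -d` returns before the containers are actually ready to serve traffic. A small helper can poll until a service answers over HTTP; this is a sketch (the `wait_for_http` function, URL, and timeout are illustrative, and `curl` is assumed to be installed):

```shell
# wait_for_http URL TIMEOUT_SECONDS — poll until URL answers or time runs out
wait_for_http() {
  url=$1; timeout=${2:-60}; waited=0
  until curl -fsS -o /dev/null --max-time 5 "$url"; do
    waited=$((waited + 2))
    if [ "$waited" -ge "$timeout" ]; then return 1; fi
    sleep 2
  done
}

# Usage after `docker-compose up -d`, e.g. for the webapp on its default port:
# wait_for_http http://localhost:3000 120 && echo "webapp is up"
```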

Step 2: Configure Trigger.dev Connection

CORE uses Trigger.dev to handle background jobs like data ingestion and memory formation. You need to configure the connection between CORE and your Trigger.dev instance.

Environment Variables

Create or update your .env file with these critical Trigger.dev configuration variables:
## Trigger.dev Configuration ##
TRIGGER_PROJECT_ID=your_project_id_here
TRIGGER_SECRET_KEY=your_secret_key_here
TRIGGER_API_URL=http://localhost:8030  # For local Trigger instance
# TRIGGER_API_URL=https://api.trigger.dev  # For Trigger.dev Cloud
Variable Explanations:
  • TRIGGER_PROJECT_ID: Your unique project identifier from Trigger.dev
  • TRIGGER_SECRET_KEY: Authentication key for secure communication with Trigger.dev
  • TRIGGER_API_URL: The endpoint URL for your Trigger.dev instance
    • Use http://localhost:8030 for local self-hosted Trigger.dev
    • Use https://api.trigger.dev for Trigger.dev Cloud
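Missing or empty variables are a common source of silent failures later in the setup, so it can save time to fail fast here. A hedged sketch (the `check_env` helper is illustrative, not part of CORE):

```shell
# check_env VAR... — report unset or empty environment variables; fail if any
check_env() {
  missing=0
  for v in "$@"; do
    eval "val=\${$v:-}"
    if [ -z "$val" ]; then
      echo "Missing: $v" >&2
      missing=1
    fi
  done
  return "$missing"
}

# Usage with the variables described above (after sourcing your .env):
# check_env TRIGGER_PROJECT_ID TRIGGER_SECRET_KEY TRIGGER_API_URL
```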

Step 3A: Using Trigger.dev Cloud

If you want to use Trigger.dev Cloud (recommended for production):
  1. Sign up at trigger.dev
  2. Create a new project
  3. Copy your project ID and secret key to your .env file
  4. Set TRIGGER_API_URL=https://api.trigger.dev

Step 3B: Self-Hosting Trigger.dev

If you prefer to self-host Trigger.dev:
# Start local Trigger.dev instance
cd trigger && docker-compose -f docker-compose.yml up -d

# Set TRIGGER_API_URL=http://localhost:8030 in your .env

Step 4: Deploy Background Jobs

Once you have Trigger.dev configured, you need to deploy CORE’s background job tasks.
  1. Login to your Trigger.dev instance:
    # For local Trigger.dev instance
    npx trigger.dev@4.0.0-v4-beta.22 login -a http://localhost:8030
    
    # For Trigger.dev Cloud
    npx trigger.dev@4.0.0-v4-beta.22 login
    
    This will open a browser window for authentication.
  2. Build required dependencies:
    # From CORE root directory
    pnpm install && pnpm build --filter=@core/database --filter=@core/types
    
    This builds the database schema and TypeScript types that the background jobs depend on.
  3. Deploy the tasks:
    # Deploy all background job tasks to Trigger.dev
    pnpm trigger:deploy
    
    This command will:
    • Bundle your background job code
    • Upload it to your Trigger.dev instance
    • Register the job schedules and triggers
    • Verify the deployment was successful

Troubleshooting Modular Deployment

If deployment fails, check:
  1. Authentication: Ensure you’re logged in to the correct Trigger.dev instance
  2. Environment Variables: Verify TRIGGER_PROJECT_ID, TRIGGER_SECRET_KEY, and TRIGGER_API_URL are correct
  3. Network Connectivity: Ensure CORE can reach your Trigger.dev instance
  4. Dependencies: Make sure @core/database and @core/types built successfully
# Check if you're logged in
npx trigger.dev@4.0.0-v4-beta.22 whoami

# Verify environment variables
echo $TRIGGER_PROJECT_ID
echo $TRIGGER_API_URL

# Check build output
pnpm build --filter=@core/database --filter=@core/types

Step 5: Verify Deployment

After deployment, verify everything is working:
  1. Check CORE webapp: Visit http://localhost:3000
  2. Check Trigger.dev dashboard: Visit your Trigger.dev instance dashboard
  3. Verify job registration: Ensure CORE background jobs appear in your Trigger.dev project
  4. Test memory operations: Create some memory entries to trigger background processing
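The endpoint checks above can be scripted as a quick smoke test. A sketch assuming the default ports from this guide (the `probe` helper is illustrative, and `curl` is assumed to be installed):

```shell
# probe URL — report whether URL answers over HTTP; never aborts the script
probe() {
  if curl -fsS -o /dev/null --max-time 5 "$1"; then
    echo "OK   $1"
  else
    echo "DOWN $1"
  fi
}

probe http://localhost:3000   # CORE webapp
probe http://localhost:8030   # self-hosted Trigger.dev (skip if using Cloud)
```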

Next Steps

Once deployed, you can:
  • Configure your AI providers (OpenAI, Anthropic, etc.)
  • Set up integrations (Slack, GitHub, Gmail)
  • Start building your memory graph
  • Explore the CORE API and SDK