
Prerequisites

  • A CORE account at core.heysol.ai
  • Windsurf IDE installed

Step 1: Add CORE MCP in Windsurf

  1. Open Windsurf IDE
  2. Navigate to Windsurf Settings → Cascade section
  3. Open MCP Marketplace → Settings, or select View raw config to open the configuration file
  4. Add the following to your mcp_config.json (if other servers are already configured, see the merged example after these steps):
{
  "mcpServers": {
    "core-memory": {
      "serverUrl": "https://core.heysol.ai/api/v1/mcp?source=windsurf"
    }
  }
}
  5. Save the file and restart Windsurf IDE
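
If your mcp_config.json already defines other servers, keep them and add core-memory as a sibling entry inside the same mcpServers object. A sketch with a purely hypothetical existing entry:
{
  "mcpServers": {
    "some-existing-server": {
      "serverUrl": "https://example.com/mcp"
    },
    "core-memory": {
      "serverUrl": "https://core.heysol.ai/api/v1/mcp?source=windsurf"
    }
  }
}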

Step 2: Authenticate with CORE

  1. After saving the config, Windsurf will open a browser window for authentication
  2. Grant Windsurf permission to access your CORE memory

Step 3: Verify Connection

  1. Go to Cascade Editor → Plugin Icon → hit the Refresh icon
  2. Confirm core-memory shows as Active with a green indicator
Add your first memory:
“Summarise the whole project in detail and add it to CORE Memory”

Alternative: Using API Key Method

If the OAuth authentication doesn’t work, use the API key method instead:

Step 1: Get Your API Key

  1. Log into your CORE dashboard at core.heysol.ai
  2. Navigate to Settings (bottom left)
  3. Go to API Key → Generate new key → name it “windsurf”
  4. Copy the generated key

Step 2: Update MCP Configuration

Replace your mcp_config.json configuration with:
{
  "mcpServers": {
    "core-memory": {
      "serverUrl": "https://core.heysol.ai/api/v1/mcp/source=windsurf",
      "headers": {
        "Authorization": "Bearer <YOUR_TOKEN>"
      }
    }
  }
}
Replace <YOUR_TOKEN> with the API key you copied from Step 1.

Step 3: Restart and Verify

  1. Save the file and restart Windsurf IDE
  2. Go to Cascade Editor → Plugin Icon → hit Refresh
  3. Confirm core-memory shows as Active with a green indicator

Create AGENTS.md in your project root (if it already exists, append to it instead):
touch AGENTS.md
Add the following to AGENTS.md:
---
trigger: always_on
---

⚠️ **CRITICAL: READ THIS FIRST - MANDATORY MEMORY PROTOCOL** ⚠️

You are an AI coding assistant with access to CORE Memory - a persistent knowledge system that maintains project context, learnings, and continuity across all coding sessions.

## 🔴 MANDATORY STARTUP SEQUENCE - DO NOT SKIP 🔴

**BEFORE RESPONDING TO ANY USER MESSAGE, YOU MUST EXECUTE THESE TOOLS IN ORDER:**

### STEP 1 (REQUIRED): Search for Relevant Context

EXECUTE THIS TOOL FIRST:
`memory_search`

**Search for:**

- Previous discussions about the current topic
- Related project decisions and implementations
- User preferences and work patterns
- Similar problems and their solutions

**Additional search triggers:**

- User mentions "previously", "before", "last time", or "we discussed"
- User references past work or project history
- Working on the CORE project (this repository)
- User asks about preferences, patterns, or past decisions
- Starting work on any feature or bug that might have history

**How to search effectively:**

- Write complete semantic queries, NOT keyword fragments
- Good: `"Manoj's preferences for API design and error handling"`
- Bad: `"manoj api preferences"`
- Ask: "What context am I missing that would help?"
- Consider: "What has the user told me before that I should remember?"

### Query Patterns for Memory Search

**Entity-Centric Queries** (Best for graph search):

- ✅ GOOD: `"Manoj's preferences for product positioning and messaging"`
- ✅ GOOD: `"CORE project authentication implementation decisions"`
- ❌ BAD: `"manoj product positioning"`
- Format: `[Person/Project] + [relationship/attribute] + [context]`

**Multi-Entity Relationship Queries** (Excellent for episode graph):

- ✅ GOOD: `"Manoj and Harshith discussions about BFS search implementation"`
- ✅ GOOD: `"relationship between entity extraction and recall quality in CORE"`
- ❌ BAD: `"manoj harshith bfs"`
- Format: `[Entity1] + [relationship type] + [Entity2] + [context]`

**Semantic Question Queries** (Good for vector search):

- ✅ GOOD: `"What causes BFS search to return empty results? What are the requirements for BFS traversal?"`
- ✅ GOOD: `"How does episode graph search improve recall quality compared to traditional search?"`
- ❌ BAD: `"bfs empty results"`
- Format: Complete natural questions with full context

**Concept Exploration Queries** (Good for BFS traversal):

- ✅ GOOD: `"concepts and ideas related to semantic relevance in knowledge graph search"`
- ✅ GOOD: `"topics connected to hop distance weighting and graph topology in BFS"`
- ❌ BAD: `"semantic relevance concepts"`
- Format: `[concept] + related/connected + [domain/context]`

**Temporal Queries** (Good for recent work):

- ✅ GOOD: `"recent changes to search implementation and reranking logic"`
- ✅ GOOD: `"latest discussions about entity extraction and semantic relevance"`
- ❌ BAD: `"recent search changes"`
- Format: `[temporal marker] + [specific topic] + [additional context]`
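
**Example invocation (illustrative):** the tool name comes from this protocol, but the exact argument name depends on the tool schema exposed in Cascade, so treat the shape below as a sketch and adapt it:
{
  "query": "Manoj's preferences for API design and error handling in the CORE project"
}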

## 🔴 MANDATORY SHUTDOWN SEQUENCE - DO NOT SKIP 🔴

**AFTER FULLY RESPONDING TO THE USER, YOU MUST EXECUTE THIS TOOL:**

### FINAL STEP (REQUIRED): Store Conversation Memory

EXECUTE THIS TOOL LAST:
`memory_ingest`
Include the spaceId parameter using the ID from your initial memory_get_space call.

⚠️ **THIS IS NON-NEGOTIABLE** - You must ALWAYS store conversation context as your final action.

**What to capture in the message parameter:**

From User:

- Specific question, request, or problem statement
- Project context and situation provided
- What they're trying to accomplish
- Technical challenges or constraints mentioned

From Assistant:

- Detailed explanation of solution/approach taken
- Step-by-step processes and methodologies
- Technical concepts and principles explained
- Reasoning behind recommendations and decisions
- Alternative approaches discussed
- Problem-solving methodologies applied

**Include in storage:**

- All conceptual explanations and theory
- Technical discussions and analysis
- Problem-solving approaches and reasoning
- Decision rationale and trade-offs
- Implementation strategies (described conceptually)
- Learning insights and patterns

**Exclude from storage:**

- Code blocks and code snippets
- File contents or file listings
- Command examples or CLI commands
- Raw data or logs

**Quality check before storing:**

- Can someone quickly understand project context from memory alone?
- Would this information help provide better assistance in future sessions?
- Does stored context capture key decisions and reasoning?
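
**Example ingest payload (illustrative):** the message and spaceId parameter names come from this protocol; the exact shape follows the tool schema exposed in Cascade, so adapt as needed:
{
  "spaceId": "<ID from your initial memory_get_space call>",
  "message": "User asked how to structure retry logic for the payments service. Assistant recommended exponential backoff with jitter, explained the trade-off versus fixed intervals, and noted the user's preference for keeping configuration in environment variables."
}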

---

## Summary: Your Mandatory Protocol

1. **FIRST ACTION**: Execute `memory_search` with semantic query about the user's request
2. **RESPOND**: Help the user with their request
3. **FINAL ACTION**: Execute `memory_ingest` with conversation summary and spaceId

**If you skip any of these steps, you are not following the project requirements.**

How It Works

Once connected, CORE memory integrates with Windsurf’s Cascade:
  • Auto-recall: Cascade searches your memory at conversation start
  • Auto-store: Key insights saved automatically after conversations
  • Cross-platform: Memory shared across Windsurf, Cursor, Claude Code, ChatGPT
  • Project continuity: Context persists across all coding sessions

Troubleshooting

Connection Issues:
  • Ensure core-memory MCP is active (green indicator)
  • Try toggling the MCP off and on
  • Restart Windsurf IDE completely
Authentication Problems:
  • Make sure you completed the OAuth flow in browser
  • Check that your CORE account is active at core.heysol.ai
MCP Not Appearing:
  • Verify mcp_config.json syntax is valid JSON
  • Restart Windsurf after config changes

Need Help?

Join our Discord community and ask in the #core-support channel.