What is CORE

CORE is a portable memory graph built from your LLM interactions and personal data, making all your context and workflow history accessible to any AI tool, like a digital brain. This eliminates the need to share the same context repeatedly across platforms.

Key Benefits

  1. Unified, Portable Memory: Add and recall context seamlessly, and connect your memory across apps like Claude, Cursor, Windsurf, and more.
  2. Relational, Not Just Flat Facts: CORE organizes your knowledge by storing both facts and the relationships between them, for a deeper, richer memory, much like a real brain.
  3. User Owned: You decide what to keep, update, or delete, and which tools can access your memory. No vendor lock-in.
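To make the "relational, not flat" idea concrete, here is a minimal sketch of a memory graph that stores facts as subject-predicate-object triples, so relationships are first-class rather than loose key-value facts. This is an illustration of the concept only, not CORE's actual data model; all names in it are hypothetical.

```python
# Minimal sketch of a relational memory graph: every fact is a
# (subject, predicate, object) triple, so relations between entities
# are stored alongside the entities themselves.
from collections import defaultdict


class MemoryGraph:
    def __init__(self):
        self.triples = set()
        self.by_subject = defaultdict(set)

    def add(self, subject, predicate, obj):
        """Record a fact and index it by subject for fast recall."""
        triple = (subject, predicate, obj)
        self.triples.add(triple)
        self.by_subject[subject].add(triple)

    def recall(self, subject):
        """Return everything known about a subject, relations included."""
        return sorted(self.by_subject[subject])


# Hypothetical facts, purely for illustration.
graph = MemoryGraph()
graph.add("alice", "works_on", "core")
graph.add("core", "written_in", "typescript")
graph.add("alice", "prefers", "dark_mode")

print(graph.recall("alice"))
```

Because relations are data, a recall for "alice" surfaces not only her attributes but also what she is connected to, which is the property that distinguishes a graph memory from a flat list of facts.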

Why CORE Exists

In a world filled with AI agents, most still operate in isolation: they forget your context, are blind to what’s happening around you, and cannot share memory across tools or assistants. We built CORE because a true assistant needs more than just a powerful language model. It needs:
  1. Contextual Observation: The ability to know what’s happening around you (emails, code changes, Slack messages, etc.) and within you (conversations, thoughts, commands).
  2. Long-term Recall: A persistent, structured memory of what matters, not just chat history or ephemeral state.

CORE serves as your personal memory that any AI agent can tap into, making every interaction smarter and more contextual.

What CORE Observes

CORE observes everything that happens around you and through you, forming the raw stream of context that assistants can use to reason and act:
  1. Activity from connected apps (Gmail, GitHub, Slack, etc.): Ingested via integrations and exposed via outbound webhooks
  2. Conversations from multiple agents and interfaces: Captures what you say and see across ChatGPT, Claude, Cursor, SOL, and more
  3. Text Inputs: Notes, thoughts, and unstructured context, useful for journaling, reflection, or prototyping memory via plain text
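The three sources above all feed one context stream. As a hedged sketch of what that normalization step could look like, the code below folds events from different origins (an app webhook, a plain-text note) into a single timestamped record type. The field names and `ingest` helper are hypothetical illustrations, not CORE's API.

```python
# Hypothetical sketch: normalizing events from connected apps,
# agent conversations, and plain-text notes into one context stream.
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class ContextEvent:
    source: str     # e.g. "gmail", "github", "chatgpt", "text"
    kind: str       # e.g. "email", "commit", "message", "note"
    content: str
    timestamp: str  # ISO 8601, UTC


def ingest(source: str, kind: str, content: str) -> ContextEvent:
    # Stamp and clean every incoming item so the stream stays
    # uniform regardless of where the context came from.
    return ContextEvent(
        source=source,
        kind=kind,
        content=content.strip(),
        timestamp=datetime.now(timezone.utc).isoformat(),
    )


stream = [
    ingest("github", "commit", "fix: handle empty webhook payloads"),
    ingest("text", "note", "  remember to review the memory pruning PR  "),
]
print([(e.source, e.content) for e in stream])
```

A uniform record like this is what lets a downstream assistant reason over email, code activity, and loose notes as one body of context rather than three disconnected feeds.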