Apr 28, 2026
5 minute read

Memory as a Decorator

Veljko Kovac, Head of FDE

Adding memory to agentic workflows has never been straightforward. One of the main challenges is the lack of a standardized framework for building these workflows. We started with LangChain, then watched as a wave of new frameworks emerged, each slightly different, each fragmenting the ecosystem further. At one point, it was feasible for a memory provider to tightly integrate with a single framework. That's no longer the case.

So we decided to rethink the problem entirely.

A Simpler Approach to Agentic Memory

Instead of forcing users into rigid systems or complex integrations, we focused on what people actually wanted:

Seamless memory integration into any custom agentic workflow.

The solution we landed on is intentionally simple:
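A rough sketch of the idea follows. The `memoried` name and its toy record-keeping body are assumptions for illustration, not Cognee's documented API, which builds structured memory rather than a plain list of traces:

```python
import functools

def memoried(fn):
    """Toy stand-in for a memory decorator: record each call's
    input and output as a trace. Name and behavior are illustrative."""
    traces = []

    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        result = fn(*args, **kwargs)
        traces.append({"input": args, "output": result})
        return result

    wrapper.traces = traces
    return wrapper

@memoried  # <- the one line you add to an existing workflow
def ask_llm(prompt: str) -> str:
    # placeholder for your real LLM call
    return f"answer to: {prompt}"

ask_llm("What is agentic memory?")
```

Everything above the `@memoried` line lives inside the library; your workflow code changes by exactly one line.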

That's it. One line.

A decorator that automatically captures LLM interactions and turns them into structured, reusable memory.

No complex setup. No need to rethink your architecture. No requirement to adopt a specific framework.

Why a Decorator?

Our original vision for Cognee was to build a highly customizable, open-source system. Something modular enough to serve everyone β€” from solo developers experimenting with AI agents to large enterprises like pharmaceutical companies needing custom ontologies for research and discovery.

We built components like:

  • Ontology mapping
  • Memory systems
  • Retrieval pipelines
  • Feedback loops
  • Data access control

But we noticed a pattern: many users didn't want to think about infrastructure. They just wanted memory to work, immediately.

So we asked ourselves:

How do we make agentic memory accessible to everyone, regardless of expertise?

The answer was clear:

Make it one line.

With a decorator, you don't need to worry about where memory belongs in your workflow. You don't need to restructure your system or rely on specific agent frameworks. You just add it, and it works.

Putting It to the Test

To validate this approach, we designed an experiment.

Setup

We built a simulated sales environment:

[Pipeline diagram: Multi-Modal Ingestion → Knowledge Structuring → Access Control → Retrieval → Memory → Feedback → Smarter Agents]

  • A sales agent pitching six core features:
    • Multimodal ingestion
    • Knowledge structuring
    • Access control
    • Retrieval
    • Memory
    • Feedback loops
  • 198 simulated customer leads
  • 6 distinct buyer personas

Each lead had a hidden profile, including:

  • A must-have feature
  • Preferred messaging style
  • Objection behavior
  • A deal-breaker

The sales agent had two conversation rounds to identify the customer's needs and close the deal.

Strategies Compared

We tested three approaches:

  1. No Memory (Baseline): every interaction starts from scratch.
  2. Context Stuffing: past conversations are appended to the prompt and summarized as needed.
  3. Cognee Memory (Decorator-Based): structured knowledge-graph memory, automatically captured via the decorator.
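The three strategies differ only in what context they hand the sales agent before a pitch. A minimal sketch, with made-up data, of how each one builds that context (the real experiment used LLM agents, not these stub functions):

```python
# Hypothetical data for illustration only.
history = ["lead 1 transcript ...", "lead 2 transcript ...", "lead 3 transcript ..."]
structured_facts = {"startup_cto": "closed_with: feedback"}

def no_memory_context() -> str:
    return ""  # every interaction starts from scratch

def stuffed_context() -> str:
    return "\n".join(history)  # full transcripts; token cost grows with every lead

def structured_context(persona: str) -> str:
    return structured_facts.get(persona, "")  # only the facts that matter
```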

How Conversations Worked

Each interaction was a back-and-forth between two agents:

Sales Agent receives:

  • Current conversation
  • Customer message
  • Feature catalog
  • Optionally, memory from past interactions

Customer Agent has a hidden profile and evaluates each pitch as interested, skeptical, or ready to buy.

Outcome rules: deal closes → Win. No close after 2 rounds → Loss.

After each interaction, the decorator automatically stores a structured memory trace, and the agent queries past insights before the next conversation.
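That store-then-recall loop can be sketched with an in-memory stand-in. The class and field names here are assumptions; Cognee persists a knowledge graph rather than a Python list:

```python
class MemoryStore:
    """Toy trace store: one record per finished conversation."""

    def __init__(self):
        self.traces = []

    def add(self, persona: str, pitch: str, outcome: str) -> None:
        # what the decorator would capture after each interaction
        self.traces.append({"persona": persona, "pitch": pitch, "outcome": outcome})

    def recall(self, persona: str) -> list[str]:
        # what the agent queries before the next conversation
        return [t["pitch"] for t in self.traces
                if t["persona"] == persona and t["outcome"] == "win"]

store = MemoryStore()
store.add("startup_cto", "feedback framed as developer experience", "win")
store.add("startup_cto", "generic feature list", "loss")
store.recall("startup_cto")  # -> ["feedback framed as developer experience"]
```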

Results

Metric                  No Memory    Context Stuffing    Cognee Memory
First-pitch accuracy    49%          60%                 78%
Win rate                90%          91%                 97%
Tokens used             353K         928K                597K

Key Takeaways

  • Context stuffing improves performance, but at a massive token cost (2.6× higher).
  • Cognee memory significantly boosts accuracy while remaining efficient.
  • Structured memory outperforms raw text accumulation.

While context stuffing may be "good enough" in some cases, it becomes inefficient and costly at scale.

What Makes Cognee Different

Traditional approaches treat past conversations as plain text. Cognee treats them as knowledge.

Instead of storing:

"Sales conversation with startup CTO. Outcome: won. Winning pitch: feedback framed as developer experience."

Cognee extracts relationships like:

  • startup_cto → pitched_with → feedback
  • feedback → framed_as → developer_experience
  • startup_cto → closed_with → feedback

Now, when a similar customer appears, the agent doesn't search for similar text. It asks:

"What actually worked for this type of customer?"

And gets a precise, structured answer.
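A toy triple store makes the difference concrete. The triples mirror the example above; the storage and query code are illustrative assumptions, not Cognee internals:

```python
# Relationships extracted from past conversations, as (subject, predicate, object).
triples = [
    ("startup_cto", "pitched_with", "feedback"),
    ("feedback", "framed_as", "developer_experience"),
    ("startup_cto", "closed_with", "feedback"),
]

def what_worked(customer_type: str) -> list[str]:
    # "What actually worked for this type of customer?"
    return [obj for subj, pred, obj in triples
            if subj == customer_type and pred == "closed_with"]

what_worked("startup_cto")  # -> ["feedback"]
```

Instead of fuzzy-matching raw transcripts, the query walks explicit relationships, so the answer is precise by construction.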

From Memory to Learning

This is the core shift:

  • Not just storing context
  • Not just retrieving text
  • But learning from experience

Cognee transforms conversation logs into a queryable knowledge graph, enabling every new interaction to benefit from past ones.

The agent doesn't just remember. It learns.

Try It Yourself

Install Cognee, add the decorator, and see how it performs in your own workflows:
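Installation is a standard PyPI install; the decorator import itself varies by version, so check the Cognee docs for the exact name:

```shell
pip install cognee
```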


Have questions or ideas? Want to share what you're building? Join the community on Discord and connect with others exploring agentic memory.

Cognee is the fastest way to start building reliable AI agent memory.
