How I Evolved My Second Brain for Claude Code with Semantic Folders and Wikilinks


A few weeks ago I published a post about how I built a shared second brain for Claude Code. The idea was simple: Git + symlinks + Obsidian to sync memory across machines. It worked.

But after months of real-world use, the system started showing its limitations. This post is about how I identified the problems and what I did to fix them.

What happened after 3 months

The vault grew from 31 to over 40 files. Notes about projects, security incidents, competitive intelligence, architecture decisions, personal feedback — all mixed in a flat folder.

The MEMORY.md file, which served as Claude Code's index, became bloated. Every time the agent started a conversation, it loaded the complete list of memories into context — spending tokens on information irrelevant to the task at hand.

Asking "restart nginx" shouldn't require loading notes about Meta Ads.

The inspiration: OpenClaw Brain Guide

I found the OpenClaw Brain Guide — a project by Matheus Soier that uses OpenClaw + Obsidian + Syncthing to create a system similar to mine, but with important architectural differences.

I compared the two approaches:

| Aspect | Brain Vault (mine) | OpenClaw Brain Guide |
| --- | --- | --- |
| Sync | Git + GitHub | Syncthing (P2P) |
| Search | mcpvault MCP (semantic) | Wikilink navigation |
| Structure | Flat (everything at root) | Hierarchical (MOCs, claims, methods) |
| Maintenance | Manual | Automated nightly script |
| Entry point | MEMORY.md (full list) | index.md (progressive wikilinks) |

Each has strengths. Git provides full history and robust merge resolution. Syncthing is pure P2P with no intermediary. mcpvault enables semantic search. Wikilinks save tokens.

What I did: combined the best of both.

Improvement 1: Semantic folders

I reorganized the 40+ flat files into meaningful folders:

brain/
├── index.md          # entry point with wikilinks
├── MEMORY.md         # Claude Code index
├── projects/         # active projects (hubnews, papeou, reino...)
├── intel/            # competitive intelligence
├── deep-dives/       # detailed analyses
├── references/       # docs and technical references
├── incidents/        # incident records
├── feedback/         # preferences and feedback
├── daily/            # daily checkpoints (auto-generated)
├── templates/        # note templates
└── maintenance/      # maintenance scripts

I used git mv for all moves to preserve history. Each folder has a clear purpose. When Claude Code needs project context, it goes to projects/. Investigating an incident? incidents/. Need a technical reference? references/.

Improvement 2: index.md as the entry point

Before, MEMORY.md listed all memories linearly. Now, index.md works as a navigation map with Obsidian wikilinks:

# Brain Vault

## Active Projects
- [[projects/hubnews]]  HubNews.ai (Laravel + Next.js)
- [[projects/papeou]]  Papeou SaaS
- [[projects/sistema-reino-growth]]  Sistema Reino

## Incidents
- [[incidents/malware-20260218]]  Malware (Feb 2026)

## References
- [[references/brain-vault]]  How this system works

The agent enters through the index, follows links as needed, and loads only what's relevant. This is what OpenClaw calls progressive disclosure — and it's the change that had the biggest impact on token usage.

Improvement 3: Automated maintenance

Inspired by OpenClaw's nightly maintenance, I created a bash script that runs every night at 23:00 UTC:

# /root/brain/maintenance/nightly-brain.sh
# Cron: 0 23 * * *

What it does:

  1. Detects orphan notes — files that no other note references via wikilink
  2. Generates daily checkpoint — a daily/2026-03-23.md file summarizing what changed
  3. Validates sync — confirms git is synchronized with remote
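
In rough strokes, the first two steps can be sketched as follows. The script internals, folder names, and the demo notes below are my assumptions (the full nightly-brain.sh lives in the repository); the sketch builds a throwaway vault in a temp directory so it is safe to run anywhere:

```shell
#!/usr/bin/env bash
# Sketch of steps 1-2 of a nightly maintenance script (names/layout assumed).
set -euo pipefail

VAULT="$(mktemp -d)"   # demo vault; adapt the paths before pointing at ~/brain
cd "$VAULT"
mkdir -p projects incidents daily

# Demo notes: index links to projects/hubnews; nothing links to incidents/old.
printf '%s\n' '- [[projects/hubnews]]' > index.md
echo 'HubNews notes' > projects/hubnews.md
echo 'stale note'    > incidents/old.md

# 1. Orphan detection: flag notes that no other note wikilinks to.
find . -name '*.md' ! -name index.md | while read -r f; do
  stem="${f#./}"; stem="${stem%.md}"
  grep -rq "\[\[$stem\]\]" --include='*.md' --exclude="$(basename "$f")" . \
    || echo "orphan: $f"
done | tee orphans.txt
# → orphan: ./incidents/old.md

# 2. Daily checkpoint: one line per commit from the last 24 hours.
git init -q . && git add -A
git -c user.name=demo -c user.email=demo@example.com commit -qm 'reorganize vault'
git log --since='24 hours ago' --pretty='- %s' > "daily/$(date -u +%F).md"
```

Step 3 (sync validation) is omitted here since the demo vault has no remote; against a real vault it amounts to a `git fetch` plus a check that local and remote are not divergent.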

The daily checkpoint is useful. When Claude Code needs context about "what happened yesterday," it reads the checkpoint instead of digging through git history. Faster, fewer tokens.

Improvement 4: Progressive disclosure in practice

Before:

  • Agent starts → loads MEMORY.md (40+ references) → spends ~2k tokens just on initial context

After:

  • Agent starts → loads MEMORY.md (summary + vault structure) → follows wikilinks as needed

A deploy task only opens [[projects/hubnews]]. An incident investigation opens [[incidents/malware-20260218]]. Context is loaded on demand.
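
The wikilink-following itself is easy to mechanize. A minimal sketch with a made-up load_context helper and hypothetical note names and contents:

```shell
#!/usr/bin/env bash
# Toy sketch of progressive disclosure: resolve wikilinks from index.md and
# load only the notes matching the current task. All names are hypothetical.
set -euo pipefail

VAULT="$(mktemp -d)"; cd "$VAULT"
mkdir -p projects incidents
cat > index.md <<'EOF'
- [[projects/hubnews]]  HubNews.ai (Laravel + Next.js)
- [[incidents/malware-20260218]]  Malware (Feb 2026)
EOF
echo 'Laravel + Next.js stack' > projects/hubnews.md
echo 'postmortem notes'        > incidents/malware-20260218.md

# Extract wikilink targets from index.md and cat only the matching notes.
load_context() {                       # $1 = task keyword, e.g. "hubnews"
  grep -o '\[\[[^]]*\]\]' index.md | tr -d '[]' | grep -i "$1" \
    | while read -r note; do cat "$note.md"; done
}

load_context hubnews   # loads one note, not the whole vault
```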

Where I diverged from OpenClaw

Not everything from OpenClaw made sense for my setup:

  1. Git instead of Syncthing — I keep Git + GitHub. Version history and merge resolution are irreplaceable. If a conflict happens between server and laptop, git resolves it. Syncthing can have silent conflicts.

  2. mcpvault MCP — I keep semantic search via MCP. OpenClaw navigates only through wikilinks, which works but is limited when you don't know exactly where the information is. mcpvault lets you search "what was the security incident?" and find the right file.

  3. Symlinks — The symlink trick (Claude Code memory → git repository) remains the most elegant piece of the architecture. Zero agent configuration, zero middleware.
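
A safe-to-run sketch of the trick with temp stand-in paths — the agent's actual memory location depends on your Claude Code setup and is an assumption here:

```shell
#!/usr/bin/env bash
# The symlink trick with stand-in temp paths. In the real setup the agent's
# memory file (location assumed) points into the git-synced vault, so every
# memory write is versioned automatically, with no agent configuration.
set -euo pipefail

REPO="$(mktemp -d)"          # stands in for ~/brain (the git repo)
AGENT="$(mktemp -d)"         # stands in for the agent's config dir

echo '# Brain Vault' > "$REPO/MEMORY.md"
ln -sfn "$REPO/MEMORY.md" "$AGENT/CLAUDE.md"

cat "$AGENT/CLAUDE.md"       # reads through the symlink: "# Brain Vault"
```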

The result

The vault now has:

  • 9 semantic folders organized by function
  • index.md as a navigable entry point
  • Nightly maintenance with automatic orphan detection
  • Auto-generated daily checkpoints
  • Progressive disclosure that saves tokens

And keeps everything that already worked:

  • Git sync every 5 minutes
  • Obsidian on personal computers
  • mcpvault for semantic search
  • Total cost: zero

For those who already implemented the original setup

If you followed the previous post and already have a working vault:

# 1. Create folders
mkdir -p ~/brain/{projects,intel,deep-dives,references,incidents,feedback,daily,templates,maintenance}

# 2. Move files (use git mv to preserve history)
git mv file.md projects/

# 3. Create index.md with wikilinks

# 4. Create maintenance script
# (full code in the repository)

# 5. Install cron
(crontab -l; echo "0 23 * * * ~/brain/maintenance/nightly-brain.sh") | crontab -

The setup takes 15 minutes if you already have the vault running.

What's next

Two things I still want to implement:

  1. RAG (Retrieval-Augmented Generation) — search by meaning instead of keywords. The knowledge-rag MCP server already exists; I just need to integrate it.
  2. Automatic graph view — Generate a web-accessible visualization of the notes graph without needing Obsidian open.

But that's for the next post.
