31 files. Documented incidents, architecture decisions, deploy runbooks, personal preferences, bug history. All in Markdown, all built over months of real work. And all of it trapped on a single machine.
That was my problem. I use Claude Code in 4 different contexts — a Telegram bot running on my server, direct SSH sessions into that server's terminal, and two different computers. Each instance started from scratch. I'd teach it something on the server, then open a terminal on another computer and have to teach it all over again.
The problem isn't memory — it's isolation
Claude Code already has a decent memory system. It saves everything in ~/.claude/projects/<path>/memory/ as .md files with YAML frontmatter. It works fine. The problem is that each machine has its own directory, with its own files, with zero connection between them.
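For a concrete picture, here's roughly what one of those memory files looks like. The filename, frontmatter fields, and wiki-link below are invented for illustration; your files will have whatever structure Claude Code actually wrote:

```markdown
---
title: Redis configuration
tags: [infra, redis]
created: 2025-02-10
---

# Redis configuration

- Listens on port 6379 with auth enabled.
- Related: [[mysql-hardening]]
```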
On my production server, Claude Code knows that Redis listens on port 6379 with auth enabled, that MySQL has per-app users, that there were security incidents in February, that Sistema Reino deploys use PHP 8.4. That's months of accumulated context.
On my other computer? It knows nothing. It's an amnesiac Claude Code.
The epiphany: the files are already a knowledge base
One day, looking at the memory file structure, I noticed something obvious: .md files with YAML frontmatter, organized in folders, with cross-references between them. That is already a knowledge base ready for Obsidian — a free app for organizing Markdown notes, like a digital second brain.
I just had to open the folder in Obsidian and everything worked — graph view, backlinks, tags, all of it. The notes folder (which Obsidian calls a "vault") already existed. I just didn't know it.
From there, the solution assembled itself.
Turning local memory into a Git repository
First step: turn the memory files into a Git repository.
```shell
mkdir /root/brain
cp -r ~/.claude/projects/-root/memory/* /root/brain/
cd /root/brain
git init
git add -A
git commit -m "initial: import claude code memory files"
git remote add origin git@github.com:billy/brain.git
git push -u origin main
```
Then, the key move: replace the original directory with a symlink.
```shell
rm -rf ~/.claude/projects/-root/memory
ln -s /root/brain ~/.claude/projects/-root/memory
```
Claude Code keeps reading and writing to the same paths as always. It doesn't even notice the difference. But now everything lives inside a Git repository.
Auto-sync: cron every 5 minutes
A simple cronjob handles synchronization:
```shell
*/5 * * * * cd /root/brain && git add -A && git diff --cached --quiet || (git commit -m "auto-sync $(date +\%F-\%H\%M)" && git push) 2>/dev/null
```

(Note the backslashes: in a crontab, an unescaped `%` is treated as a newline, so the `date` format specifiers must be written as `\%`.)
Every 5 minutes, it checks for changes. If there are any, it commits and pushes. No noise, no notifications, no overhead. Git sync is free, robust, and every dev already knows how to use it.
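Unpacked into a function, the one-liner's logic is easier to audit. The function name and default path here are mine, not part of any tool — it's a sketch of the same flow, which you could call from cron instead of the one-liner (inside a script, `%` needs no escaping; that quirk is crontab-only):

```shell
#!/usr/bin/env bash
# Same flow as the cron one-liner: stage, commit only if something changed, push.
brain_sync() {
  local dir="${1:-/root/brain}"   # repo location — adjust to yours
  cd "$dir" || return 1
  git add -A
  # `git diff --cached --quiet` exits 0 when nothing is staged: stop early.
  git diff --cached --quiet && return 0
  git commit -m "auto-sync $(date +%F-%H%M)"
  git push
}
```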
MCP connector: the knowledge base works without a GUI
On the server, there's no Obsidian installed. It's a headless Ubuntu box. But I wanted Claude Code to be able to search, read, and write to the knowledge base with structured tools, not just reading raw files.
That's where MCP (Model Context Protocol) comes in — a protocol that lets AI use external tools, like accessing files, databases, or APIs. mcpvault (@bitbonsai/mcpvault) is an MCP server that exposes 14 tools: keyword search, read, write, patch, frontmatter manipulation, tags.
Claude Code can search "what was the security incident?" and find the right file without me pointing to a path.
```json
{
  "mcpServers": {
    "vault": {
      "command": "npx",
      "args": ["-y", "@bitbonsai/mcpvault", "/root/brain"]
    }
  }
}
```
On the server, that's all you need. No GUI, no Electron, no 500MB app. Just a Node process exposing the notes folder via MCP protocol.
On personal computers: Obsidian + Git plugin
On my personal computers, the story is different. I clone the same repository and open it in Obsidian.
```shell
git clone git@github.com:billy/brain.git ~/brain
```
The Obsidian Git plugin handles auto-pull and auto-push every 5 minutes. Same logic as the cron, but integrated into the interface.
The visual result is satisfying. Obsidian's graph view shows the connections between files — how a security incident relates to MySQL hardening, how the HubNews deploy references Supervisor configurations, how billing decisions connect to the payment gateway.
And Claude Code on the other computers? Same trick: symlink the memory directory to the cloned repository.
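Concretely, that's the same two commands as on the server, with the paths adjusted. The `-Users-billy` directory name below is an example — use whichever directory Claude Code actually created on your machine:

```shell
BRAIN=~/brain                                # local clone of the shared repo
MEM=~/.claude/projects/-Users-billy/memory   # this machine's memory directory
mkdir -p "$(dirname "$MEM")"   # ensure the parent exists (Claude Code normally creates it)
rm -rf "$MEM"                  # drop the machine-local copy
ln -s "$BRAIN" "$MEM"          # point Claude Code at the clone instead
```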
The absolute path problem
There's a detail that almost blocked me. Claude Code indexes memory by the absolute path of the project. On the server, a project lives at /home/deploy/api.hubnews.ai/. On the other computer, it's at /Users/billy/projects/api.hubnews.ai/.
Different paths, so Claude Code treats them as different projects and creates separate memory directories.
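The mapping itself is simple to sketch: slashes in the project's absolute path become dashes in the directory name. The helper below is my illustration of the pattern visible in the paths above, not Claude Code's actual implementation; its exact escaping of other characters may differ.

```shell
# Illustrative only: mimics the slash-to-dash pattern seen in
# ~/.claude/projects/-root and ~/.claude/projects/-Users-billy
path_to_memdir() {
  printf '%s' "$1" | tr '/' '-'
}
path_to_memdir /home/deploy/api.hubnews.ai   # prints -home-deploy-api.hubnews.ai
```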
The simplest solution: a post-sync script that copies files to the correct paths on each machine. Not elegant, but it works. Another option is keeping everything under a single generic project directory and using mcpvault as the primary access interface.
```shell
# post-sync.sh
rsync -a ~/brain/ ~/.claude/projects/-Users-billy/memory/
```
The result: 4 synchronized brains
Now, when Claude Code on the server documents an incident at 3 AM, that information is available on all the other instances within 5 minutes.
When I make a decision on my personal computer about a project's architecture, the Telegram bot already knows about it on the next interaction.
31 memory files, synchronized across 4 machines, accessible both via CLI and through Obsidian's visual interface. The total cost of this solution: zero. Git is free. Obsidian is free. mcpvault is open source.
Bonus: semantic search with RAG
For those who want to go further, there's the path of RAG (Retrieval-Augmented Generation) — a technique that combines intelligent search with AI text generation. Instead of keyword search, RAG understands the meaning of your question. You ask "how to handle high CPU load" and it finds the incident runbook even if the file doesn't contain those exact words.
The knowledge-rag MCP server implements exactly this over your notes folder files. I haven't deployed this in production yet, but it's on the list.
What matters here
Claude Code's memory is powerful, but only if it's portable. If knowledge dies when you switch machines, it has half the value.
The beauty of this solution is that it uses tools every dev already knows — Git, Markdown, cron, symlinks. No vendor lock-in, no intermediary SaaS, no magic. It's basic infrastructure applied to a new problem.
If you use Claude Code on more than one machine, set this up. It takes 20 minutes. And the first time you open a terminal on another machine and Claude Code already knows what you did yesterday, you'll understand why I wrote this post.