
Karpathy Tweeted About a Second Brain. I Already Had One — Now It Actually Works.

Karpathy posted about LLM Knowledge Bases. I built it that night. 629 notes became 73 wiki pages.


I'd been building a second brain for months. 629 notes in Obsidian, synced from Notion, organized by domain. I thought I was ahead. Then Karpathy posted about LLM Knowledge Bases and I realized my system was missing the one layer that actually matters — the part where AI turns your notes into something usable. One session later, 73 synthesized wiki pages. Now my Claude reads my own thinking every time I start a session.

Quick Check: Who Is This For?

If you've got hundreds of saved notes, bookmarks, or screenshots collecting dust. If you've tried a "second brain" system and it didn't stick. If you use Claude Code (or any AI tool) and wish it knew more about you and your work. If any of that sounds familiar, this is for you. I'm a financial planner in Hong Kong who builds with AI. No CS degree. I just talk to my computer until things work.

The Tweet That Changed My System

Karpathy — the guy who ran AI at Tesla and co-founded OpenAI — posted about what he called an "LLM Knowledge Base." The idea is simple: take all your messy notes, let an LLM synthesize them into clean wiki pages, and feed those pages back into your LLM as context.

Three layers:

  1. Raw notes — the mess. Your bookmarks, highlights, voice memos, screenshots. Untouched.
  2. Wiki pages — the LLM reads your raw notes and writes clean, cross-referenced summaries. One concept per page.
  3. Schema — the rules. Your AI reads the wiki layer and follows the patterns it finds.
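On disk, the three layers can be as simple as three folders plus one rules file. A minimal layout might look like this (the names are illustrative, not from Karpathy's post):

```text
vault/
├── raw/           # Layer 1 — synced from your notes app, never hand-edited
│   ├── ai/
│   ├── finance/
│   └── ...
├── wiki/          # Layer 2 — AI-written synthesis, one concept per page
│   ├── index.md
│   └── pricing-strategies.md
└── CLAUDE.md      # Layer 3 — schema: rules, plus a pointer to wiki/
```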

I already had Layer 1. 629 notes, synced from Notion every 12 hours, sorted into 7 domains. I thought that was the hard part. Turns out I'd been doing the easy part on repeat — saving, filing, organizing — and skipping the part that actually creates value.

The tweet didn't teach me something new. It showed me what was missing from the system I'd already built.

I opened Claude Code.

629 Notes. 0 Outputs.

Here's what I found when I actually looked at my vault.

629 notes across 7 domains — AI, creative, learning, marketing, finance, health, travel. Sounds impressive. But when I ran the numbers:

About 60% were dead links to Instagram and Facebook reels. Just a title and a URL. The actual knowledge was trapped inside a video I watched once, three months ago, at 2x speed, while eating lunch.

The YouTube notes were slightly better — at least transcripts exist. But even those were just raw dumps. No synthesis. No "what did I actually learn from this?"

I had 629 inputs and zero outputs. A bookmark graveyard pretending to be a knowledge system.

This is the problem Karpathy was talking about. And it's the problem most second brain systems don't solve. They make it easy to capture. Nobody makes it easy to use.

Why I Didn't Just Bookmark It

This is the part that matters to me personally.

I could have read Karpathy's tweet, nodded, and saved it. Note number 630. Another smart person's framework sitting in my vault, filed neatly, never touched again.

That's what I used to do. Save Alex Hormozi frameworks. Bookmark Naval tweets. Screenshot business models at 1am. All of it organized by domain, looking productive, doing absolutely nothing. I had the system. I had the notes. I had the infrastructure. What I didn't have was the habit of actually running things instead of filing them.

This time I didn't copy from his GitHub. I didn't clone a repo. I told Claude what Karpathy's architecture looked like, pointed it at my existing 629 notes, and said: enhance this. Make my system do what his system does.

Not building from scratch. Upgrading what I already had. That's the shift. When you run something through your own system — your own notes, your own domains, your own mess — it stops being someone else's idea. It becomes part of how you work. The implementation IS the learning.

What Claude Actually Built

Five AI agents ran in parallel, splitting the seven domains between them. Each agent read every raw note in its domains, identified clusters of related ideas, and synthesized them into wiki pages.

Two hours later:

  • 73 wiki pages across 7 domains
  • Every page has frontmatter: domain, cluster, source count, last updated
  • Every claim backlinks to the raw note it came from
  • A master index connecting everything
  • A changelog tracking what was created and when

The biggest domain was Learning — 252 notes condensed into 24 wiki pages. That's roughly 10 raw notes distilled into a single page of synthesized knowledge. Stuff I'd been saving from YouTube videos, podcast clips, and book notes for months — finally connected.

The AI also found 32 notes in the wrong folder. Travel content filed under Marketing. Food posts filed under Business. My brain's filing system was lying to me about what I actually pay attention to.

How It Actually Helps My Claude

Here's where the Karpathy pattern pays off for real.

Before this, Claude started every session with my CLAUDE.md file — workspace rules, project list, hard rules. Good. But it didn't know what I'd been learning. It didn't know what frameworks I'd been studying. It didn't know I'd watched 15 hours of Hormozi content on pricing and offers.

Now, the wiki layer is part of the system. When I ask Claude to help me price a consulting package, it can reference the wiki page that synthesizes everything I've saved about pricing — from Hormozi, from my own notes, from client conversations. Not because I told it in the chat. Because the knowledge is already in the system.

It's like the difference between having a new assistant every day versus having one who's been reading your journal for six months.

It Saves Tokens

This is the practical part nobody talks about. I talked about context windows and memory in an earlier post — your context window is your desk, and everything on it costs tokens. Before the wiki layer, if I wanted Claude to know about pricing strategies, I'd have to paste in raw notes. Fifteen Hormozi transcripts. Three podcast summaries. My own scattered thoughts. That's thousands of tokens just to set the scene.

Now? One wiki page. The synthesis already happened. Claude loads the pricing wiki — a few hundred tokens — and has everything it needs. The raw notes are still there if it needs to go deeper, but it doesn't start there. It starts with the summary. Faster loading, less burn, same knowledge.
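The savings are easy to ballpark with the common rule of thumb that one token is roughly four characters of English prose (a heuristic, not an exact tokenizer; the transcript and page sizes below are made-up round numbers for illustration):

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate: ~4 characters per token for English prose."""
    return len(text) // 4

# 15 raw transcripts at ~20,000 characters each, vs one synthesized wiki page.
raw_dump_chars = 15 * 20_000   # pasting every raw note into context
wiki_page_chars = 2_500        # loading the already-synthesized summary

print(estimate_tokens("x" * raw_dump_chars))   # ~75,000 tokens
print(estimate_tokens("x" * wiki_page_chars))  # ~625 tokens
```

The exact numbers don't matter; the two-orders-of-magnitude gap between a raw dump and a synthesized page does.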

It Creates Deep Links for Personal Discovery

Here's the part I didn't expect. When the AI synthesized 252 learning notes into 24 wiki pages, it didn't just summarize — it connected things I never connected myself. A pricing concept from Hormozi linked to a negotiation framework from a podcast I forgot I saved. A content strategy note from one domain showed up as relevant to a marketing problem in another.

The wiki layer is like having someone read everything you've ever saved and say: "Did you know these three things are actually about the same idea?"

That's personal discovery. Not the AI teaching you something new — the AI showing you what you already knew but hadn't put together. The raw notes sitting in separate folders would never have found each other. The synthesis layer forces the connections.

It Feeds Everything Else

Memory files tell Claude what to DO. Wiki pages tell Claude what you KNOW. In the hooks, crons, and skills post, I showed how Claude runs things without you. The wiki layer feeds those skills. When the news briefing skill runs, it checks against what I've already learned. When I brainstorm a new feature, Claude references what I've been studying. The more I feed in, the smarter the whole system gets.

The Part Where Everything Broke (A Little)

Not everything went smoothly. The YouTube transcript tool I was using (an MCP server) broke overnight — YouTube changed their API and the library behind it just stopped working. Every request returned a 401 error.

Instead of debugging someone else's broken library, Claude pivoted to yt-dlp — a tool I already had installed. It extracted 36 out of 44 video transcripts that way. The 8 that failed were Chinese-only videos with no auto-captions. Fair enough.

The lesson: when a tool breaks, don't fix it. Find the tool that already works. I talked about this idea in the first post about OpenClaw dying — different shell, same concept. The tool is replaceable. The goal isn't.

I also couldn't read Karpathy's original tweet directly — X.com returned a 402 paywall error. So Claude searched for the Reddit thread and GitHub gist that quoted it. Got the same information through a side door. That's what AI-assisted research feels like in practice. Not clean. Not linear. But it gets there.

Why This Matters If You're Not Technical

You don't need to be a developer to do this. You need:

  1. A pile of notes. Obsidian, Notion, Apple Notes, Google Keep — doesn't matter. If you've been saving things, you have raw material.
  2. An AI that can read them. Claude Code, ChatGPT with file uploads, Gemini — anything that can process text in bulk.
  3. The willingness to actually run it. Not bookmark it. Not "I'll try this weekend." Open the tool. Point it at your notes. Say "synthesize this."

The synthesis is the part everyone skips. That's why second brain systems fail. Not because the tools are bad. Because the bookkeeping is boring. And now AI does the boring part.

The Takeaways

  1. Run it, don't file it. The next time you see a smart person's framework, don't bookmark it. Run it on your own system. Not from scratch — enhance what you already have. The implementation is the learning. Saving is procrastination wearing a productivity costume.
  2. Your notes are probably a bookmark graveyard. If 60% of your "knowledge base" is dead links to videos you half-watched, you don't have a second brain. You have a to-do list you'll never finish. Let AI extract and synthesize before the links rot.
  3. Feed your AI what you know. The more your AI knows about your thinking, the less you have to explain every session. Wiki pages as context > raw notes as context. Synthesis > dump.
  4. The bookkeeping problem is solved. Karpathy's real insight isn't "use AI with notes." It's that the reason knowledge systems fail — cross-referencing, summarizing, indexing — is exactly what LLMs are best at. The bottleneck just disappeared.

Extra: The Architecture If You Want to Try It

The pattern is simple enough to copy:

  • Layer 1 (Raw): Your notes app syncs to a folder. Don't touch these files. They're your source of truth.
  • Layer 2 (Wiki): AI reads the raw folder, writes synthesized pages to a separate Wiki folder. One concept per page. Every claim links back to its source.
  • Layer 3 (Schema): Your CLAUDE.md or system prompt references the Wiki folder. Now your AI has your knowledge built in.
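For Layer 3, the schema can be a few lines of plain instructions in CLAUDE.md (the wording here is illustrative — adapt it to your own rules file):

```markdown
## Knowledge base

- Synthesized knowledge lives in `wiki/`. Check the relevant wiki page
  before asking me to re-explain something I've already saved.
- `wiki/index.md` is the master index. Start there.
- Raw notes live in `raw/`. Treat them as read-only source material and
  only follow backlinks into them when the wiki summary isn't enough.
```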

Ownership rule: humans curate sources (Layer 1). AI maintains synthesis (Layer 2). The schema (Layer 3) connects them to your workflow.

I ran 629 notes through this in one session. You can start with 50. The pattern scales.
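The synthesis pass itself is just a loop over domains. Here's a minimal sketch, with a stub `summarize()` standing in for whatever LLM you use — the folder names, frontmatter fields, and the stub are my assumptions; the point is the shape of the loop, not any particular API:

```python
from datetime import date
from pathlib import Path

RAW = Path("vault/raw")    # Layer 1: synced notes, read-only
WIKI = Path("vault/wiki")  # Layer 2: AI-maintained synthesis

def summarize(notes: list[str]) -> str:
    """Placeholder for an LLM call that merges raw notes into one page.
    Swap in Claude, ChatGPT, or any model that can read text in bulk.
    This stub just bullets the first line of each note."""
    return "\n".join(f"- {note.splitlines()[0]}" for note in notes)

def build_wiki() -> None:
    """Read every raw note per domain and write one synthesized wiki page."""
    WIKI.mkdir(parents=True, exist_ok=True)
    for domain in sorted(d for d in RAW.iterdir() if d.is_dir()):
        notes = [p.read_text() for p in sorted(domain.glob("*.md"))]
        if not notes:
            continue
        page = (
            f"---\ndomain: {domain.name}\nsources: {len(notes)}\n"
            f"last_updated: {date.today()}\n---\n\n{summarize(notes)}\n"
        )
        (WIKI / f"{domain.name}.md").write_text(page)
```

Replace the stub with a real model call and rerun it on a schedule, and Layer 2 maintains itself.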

What's the framework you already saved that you haven't run yet?
