
I Built a Financial Brain for My Insurance Team in 7 Days

From empty GitHub stubs to 162 funds tracked, two AI agents that argue, and a system that says no.


*Image: A young developer surrounded by holographic stock charts as a fiery bear and lightning bull clash above him — the AI debate engine in action*

I sat down with Claude Code for the first time wanting to build a simple chat tool for my team. Five days later I had two AI agents arguing about whether to move retirement funds. This is how a financial planner with no coding background turned an internal knowledge tool into something I didn't think was possible, and why the most important thing I learned is that AI still can't count, even with the right data.

Quick Check: Who Is This For?

If you lead a team and wish your tools were smarter. If you've ever started building one thing and ended up with something completely different. If you think AI in finance means chatbots that say "I can't give financial advice." This post is for you.

I'm a financial planner in Hong Kong running an insurance team at AIA. I don't write code; I describe what I want and AI builds it. The difference between a casual builder and what we're doing is simple: we try to build a business case every time we sit down to code. That's where we want AI to help us excel.

Day 0: Starting With Claude Code and a Bad Idea

This was our first time starting in Claude Code and we wanted to know what it could actually do. What capabilities could enhance our workflow?

The first idea: an internal chat system. Something that makes it easier for the team to search documents and pull from our knowledge base. Straightforward. Useful. Safe.

We started building it. And... it wasn't responding properly. I don't know why. The chat was supposed to work but it just wasn't cooperating. Messages going in, nothing useful coming back.

But here's the thing — while we were fighting with the chat, I realized something. A document search chatbot? That's nice. But it's a waste of what this thing can actually do.

My team doesn't need a fancier search bar. They need something no human brain can do alone.

Day 1: The Pivot Nobody Planned

Instead of fixing the chat, I changed direction completely.

MPF. Hong Kong's mandatory retirement fund system. Every working person has one. Every insurance advisor needs to understand fund performance, market conditions, when to recommend switching. And there is no tool for this. Not because nobody wants one — because the data is a mess.

There's no official API. The government publishes Excel files that are always behind. Fund house websites look like they were built in 2008. My team was trying to learn the funds and rebalance their portfolios based on... memory? Whatever they saw last quarter? Vibes?

So the new direction: an internal learning platform for MPF performance. Not a chatbot. A financial brain. Something that could track every fund, read the news, and help us make better decisions.

We decided the direction and started putting it together that same day. Database schema. Price tracking tables. News tables. The skeleton of something real.

Day 2: Every Door Was Locked

This is where it got painful.

We started building the database and trying to figure out how to actually get the data in. Fund prices. News articles. Market updates. We needed all of it.

And we couldn't get any of it.

The fund price sources were either stale, JavaScript-rendered, which breaks scrapers (I had no idea what that meant at the time, but that's the learning process), or behind paywalls. The news APIs either cost $449 a month or blocked cloud server IPs. We tried deep search. We tried Playwright to scrape websites directly. We tried everything we could think of.

Every approach hit a wall. Stale data. Blocked requests. Rate limits. Wrong formats. It was one of those days where you feel like you're running full speed into a series of locked doors. The app had a beautiful database schema with nowhere to get the data to fill it.

Day 3: A Friend, a Dinner, and a Miracle

I had dinner with a friend. We were talking about the project and the data problem. And somewhere in the conversation it hit me — I hadn't turned on Brave Search.

I knew about Brave Search as a tool for Claude Code. I knew it could search the web and find things that regular API calls couldn't. But I didn't want to pay for it. I thought it was a premium feature.

Turns out they give you free credits every month. And you can set a cap on usage so you never go over. Technically free.

I went home and turned it on.

God, it was a miracle.

Brave Search didn't just find news articles. It found data sources I didn't know existed. It found undocumented API endpoints on official sites — backend endpoints that the public pages use to populate fund tables. Nobody documented these. No blog post. No Stack Overflow answer.

One endpoint gave me current prices for all the funds. The other gave me historical prices going back to 2000. Twenty-six years of daily data. Sitting right there behind an endpoint that Brave Search surfaced because it could actually crawl and reason about what it found.

Everything clicked. The prices started flowing. The news pipeline started classifying articles. The database wasn't empty anymore.

One tool. One dinner conversation. That's all it took to unlock the entire project.

Day 4: Why AI Can Never Fully Take Over

So everything was working. Prices loading. News classified. Charts rendering. I was feeling good.

Then I looked at the fund count. Twenty-five.

That's wrong. AIA's MPF scheme has twenty funds. Not twenty-five.

What happened? The AI had searched for fund data and found old records — discontinued funds that haven't existed for years. It pulled them in alongside the active ones. No error. No warning. Just five extra funds that shouldn't be there, sitting in the database looking perfectly legitimate.

I had to go to the official website. Manually. Count the funds. One by one. Cross-reference each name. Remove the dead ones. Correct the names that didn't match.

This is why AI can never fully take over. It can search faster than you. It can build faster than you. But it can't tell you that the data it found is out of date. It doesn't know that "China Fund" was discontinued in 2019. You know that because you work in this industry and you checked.

The human in the loop isn't a nice-to-have. It's the thing that keeps the system honest.
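That manual count has since become a standing guardrail in the pipeline. A minimal sketch of the idea, with made-up fund names (this is an illustration, not the app's actual code):

```python
def reconcile_funds(ingested: set[str], official: set[str]):
    """Compare what the AI ingested against a hand-checked official fund list.

    Returns funds to purge (not on the official list) and funds the
    pipeline missed. All fund names below are hypothetical.
    """
    ghosts = ingested - official   # discontinued or misnamed funds to remove
    missing = official - ingested  # real funds the pipeline failed to pick up
    return ghosts, missing

ingested = {"Growth Fund", "Asian Bond Fund", "China Fund"}   # what the AI found
official = {"Growth Fund", "Asian Bond Fund", "Stable Fund"}  # counted by a human

ghosts, missing = reconcile_funds(ingested, official)
print(ghosts)   # {'China Fund'}
print(missing)  # {'Stable Fund'}
```

The official list still has to be counted by a person, but once it exists, the check runs on every ingest and phantom funds never silently reappear.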

Day 5: Two AIs Walk Into a Debate

With clean data and working pipelines, it was time to build the thing I actually wanted. The decision engine.

Here's the idea: instead of one AI looking at everything and making a call, what if two AIs searched independently? Different data. Different logic. Different philosophy. Then they come together and argue their way to a conclusion.

The Quant Agent looks only at numbers. Sharpe ratios, Sortino ratios, drawdowns, volatility, momentum. No feelings. No headlines. Pure math.
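Those numbers are standard quant fare. As a rough illustration of what the agent consumes (a sketch with illustrative defaults, not the app's code):

```python
import numpy as np

def quant_metrics(prices, risk_free_rate=0.02, periods=252):
    """Core metrics a quant agent might reason over.

    prices: daily fund prices, oldest first. The 2% risk-free rate and
    252 trading days per year are illustrative assumptions.
    """
    prices = np.asarray(prices, dtype=float)
    returns = np.diff(prices) / prices[:-1]        # daily simple returns
    excess = returns - risk_free_rate / periods    # excess return per day

    # Sharpe penalizes all volatility; Sortino penalizes only downside moves.
    sharpe = np.sqrt(periods) * excess.mean() / returns.std(ddof=1)
    downside = returns[returns < 0]
    sortino = np.sqrt(periods) * excess.mean() / downside.std(ddof=1)

    # Max drawdown: worst peak-to-trough decline over the whole series.
    running_max = np.maximum.accumulate(prices)
    max_drawdown = ((prices - running_max) / running_max).min()

    return {"sharpe": float(round(sharpe, 2)),
            "sortino": float(round(sortino, 2)),
            "max_drawdown": float(round(max_drawdown, 4))}
```

Momentum and volatility slot in the same way. The point is that this agent sees only these numbers, never a headline.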

The News Agent reads every classified article. Geopolitical risk, market sentiment, regulatory changes. No numbers. Pure narrative.

Then they debate. Where does the math agree with the news? Where do they contradict? What's being missed?

A final agent, the Mediator, reads the entire debate transcript and makes the final call on what the reference portfolio should look like.

The whole thing runs in about a minute.
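Structurally, the debate is a small orchestration layer. A hedged sketch, with `llm` standing in for whatever model client you use (Vercel AI Gateway, a vendor SDK) and the prompts heavily abridged:

```python
from typing import Callable

def run_debate(llm: Callable[[str, str], str],
               quant_summary: str, news_summary: str, profile: str) -> str:
    """llm(system_prompt, user_prompt) -> model reply. Prompts are illustrative."""
    quant_view = llm("You are the Quant Agent. Argue only from the numbers.",
                     f"Metrics: {quant_summary}\nInvestor profile: {profile}")
    news_view = llm("You are the News Agent. Argue only from the narrative.",
                    f"News digest: {news_summary}\nInvestor profile: {profile}")
    debate = llm("Compare the two views: where do they agree, contradict, "
                 "or miss something?",
                 f"QUANT:\n{quant_view}\n\nNEWS:\n{news_view}")
    # The Mediator reads the transcript; 'hold' is an explicitly valid answer.
    return llm("You are the Mediator on a risk committee. Set the reference "
               "portfolio, or answer 'hold' if no change is justified.",
               f"DEBATE TRANSCRIPT:\n{debate}")
```

Passing the model client in as a function keeps the whole pipeline testable with a stub before you spend a single token.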

The first time it ran live, it looked at the data, looked at the news, debated... and said hold. Don't change anything.

The reasoning: "Short-term macro noise doesn't justify rebalancing for a 28-year-old with a 35+ year horizon."

That was the moment I knew this was real. Not because it made a trade. Because it decided not to. A system that can say no — that's not a toy. That's a tool.

What It Became

That was a lot for five days. And I keep improving it. Since then, the app has grown into something I didn't plan:

  • 20 active MPF funds tracked with daily prices and full quant metrics
  • Defensive-first prompts — after the debate system once held 100% equity during a geopolitical crisis, I rewrote all four agents to think like a risk committee, not a portfolio manager
  • A self-learning scorer that evaluates past decisions and feeds lessons back into future ones
  • A second fund module already in the works — more on that in the next post
  • Six automated crons running while I sleep — prices, news, metrics, weekly debates, monthly reports
  • Discord alerts for every rebalance decision so the team sees it in real time
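The cron side is just the platform's built-in scheduler. On Vercel that means a `vercel.json` along these lines; the paths and schedules here are illustrative, not the app's real configuration:

```json
{
  "crons": [
    { "path": "/api/cron/prices", "schedule": "30 10 * * 1-5" },
    { "path": "/api/cron/news",   "schedule": "0 */6 * * *" },
    { "path": "/api/cron/debate", "schedule": "0 1 * * 1" }
  ]
}
```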

The total AI cost? Less than a coffee per month. The entire intelligence layer — news classification, debates, monthly reports — runs through Vercel AI Gateway at fractions of a cent per request.

The Takeaways

  1. Start with a business case, not a feature. We didn't build a chatbot because chatbots are cool. We built a decision engine because our team needed better MPF recommendations. The business need drove every feature.
  2. Your first idea is probably too small. We started with "search documents." We ended with "two AIs debating portfolio allocation." Don't limit yourself to what sounds reasonable — ask what would actually change how your team works.
  3. The tools you're ignoring might be the unlock. Brave Search was sitting right there the whole time. One dinner conversation and five minutes of setup unblocked the entire project. Check what you already have access to before you build workarounds.
  4. AI can't count your funds. It pulled in 25 when there were 20. It doesn't know what's discontinued. The human checking the official source — that's not overhead. That's the product.

I'll keep updating you as this evolves. The debate system is getting smarter every week. The backtester is proving the defensive approach works. And the team is actually using it.

If you're curious how I got Claude Code running in Hong Kong in the first place, that's a whole other story.

What tool does your team need that doesn't exist yet?
