Hi All,
As part of today’s weekly BrainDrive development update, @DJJones and I discussed and decided that we are going to open source Model Match.
Below is a recording of that part of the discussion, followed by an AI-powered overview for those who prefer to read instead of watch.
Questions, comments, concerns, and ideas welcome as always. Just hit the reply button.
Thanks!
Dave W.
Should BrainDrive Model Match Be Open Source? We’ve Decided.
We just wrapped a long discussion about whether or not to open source BrainDrive Model Match—the engine we’re building to help evaluate and compare AI models in a transparent and customizable way. It wasn’t a quick decision, but we’re excited to share: we’re open sourcing it.
Here’s the thinking that led us there:
The Case Against Open Sourcing
We’re not naive about the risks. In fact, this mirrors our early debates around open sourcing BrainDrive Core itself. Anyone can clone an idea. AI makes reverse engineering faster than ever. And there’s always the risk that someone will fork your work, slap a logo on it, and grab the spotlight—especially if they’ve got Big Tech backing.
If we kept Model Match closed-source, it would be harder for bad actors to game the system. We could reveal only the results—not the prompts or configurations—making it tougher to over-optimize for the leaderboard. We’d be playing things closer to the vest to keep the rankings honest.
But we don’t want to play defense.
The Case For Open Sourcing
Transparency is a core value. Our mission is to make it easy to build, control, and benefit from your own AI system—not just for us, but for the whole community.
Open sourcing Model Match sends a clear message:
We aren’t gaming our own system.
We aren’t being paid to rank models higher.
We have nothing to hide.
It also invites the community to help improve it.
If someone wants to build a better therapy evaluator, adapt it for addiction recovery, or tweak it for education—they can. Post your settings in the forum. Submit a pull request. Use it for your own evaluations. Let’s build a library of evaluations that anyone can extend or remix.
That’s how we go from good to great.
Where We’re Headed
Model Match won’t just be code. It’ll become an engine powered by community-built configurations. No coding required—just create a config file to define your own evaluation method.
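To make the idea concrete, a community-built config might look something like the sketch below. Every field name here is hypothetical—the actual Model Match schema hasn’t been finalized—but it illustrates the kind of thing a no-code evaluation definition could capture:

```yaml
# Hypothetical Model Match evaluation config — illustrative only.
# Field names and structure are assumptions, not the final schema.
name: therapy-conversation-eval
description: Compare how models handle supportive-listening prompts

models:
  - ollama/llama3
  - openai/gpt-4o

prompts:
  - id: active-listening
    text: "A user says they feel overwhelmed at work. Respond supportively."

scoring:
  method: rubric        # e.g., rubric, pairwise, or judge-model
  criteria:
    - empathy
    - safety
    - clarity
```

A file like this is all a community member would need to share for others to rerun, tweak, or extend the same evaluation.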
Eventually, we’ll add:
- A visual playground for testing new configs
- A BrainDrive-powered AI assistant to help you build evaluations
- A curated library of use-case-specific evaluations on BrainDrive.ai
This approach gives you freedom and flexibility. You can create your own evaluator or use one from the community. And if you’re a builder, you can still stand out—by creating high-quality evaluations or value-added tools that sit on top of Model Match.
Ownership & Licensing
Model Match will be released under the MIT license—just like BrainDrive Core.
It’s yours to run, fork, and extend. Just don’t pretend to be BrainDrive—we’ve got trademarks to protect the integrity of the brand.
We Want Your Feedback
This is a community-first decision. We’re doing this to build trust, expand reach, and invite collaboration.
So if you’ve got ideas for how to improve Model Match—or want to help us build the first wave of evaluation configs—drop them in the forum. Let’s shape this together.
Final Thoughts
Open source isn’t just a license. It’s a philosophy. We’re not here to dominate—we’re here to participate in a decentralized AI future.
“No mo’ is the moat.”
We believe the best way to protect our mission is to make BrainDrive—and its tools—so good, so open, and so widely used that the only thing worth doing… is building with us.
Let’s build together.
—The BrainDrive Team