@DJJones - heads up, the Context Garden Generator just got a significant fix and your review queue got bigger (in a good way).
What Happened
We’d been seeing 4 of 11 Pass 1 batches fail consistently when running CGG on BrainDrive-Core, which meant only 58% of the codebase was getting analyzed (360 of 619 files). We assumed it was rate limiting.
It wasn’t. Today we added diagnostics to capture the raw model responses on failure, and discovered the model was returning perfectly valid JSON every time — just wrapped in prose:
I'll analyze the BrainDrive-Core repository to identify behavioral groupings...
```json
{ "behaviors": [...] }
```
The parser only handled responses that started with a code fence. When the response started with prose instead, the valid JSON inside was silently discarded. Every retry hit the same issue, because the model consistently adds that preamble.
The Fix
A 25-line _extract_json() function that handles four response patterns: bare JSON, fenced JSON, prose + fenced JSON, and prose + bare JSON. Plus batch diagnostics output so we’d catch issues like this immediately in the future.
Results
| | Before (v3) | After (v4) |
|---|---|---|
| Pass 1 batches succeeded | 7/11 | 11/11 |
| File coverage | 360/619 (58%) | 619/619 (100%) |
| Behaviors found | 40 | 63 |
| Puzzle pieces generated | 40 | 63 |
The 23 entirely new behaviors come from the previously lost files: database migrations, accessibility/WCAG, developer tools, the drag-drop system, the animation system, responsive layout, data export/backups, testing infrastructure, and more.
What This Means For You
The v4 output is ready for your quality review — 63 puzzle pieces at ~/BrainDrive-Core-context-v4/. The Chat Plugin output (16 pieces) is still at ~/BrainDrive-Chat-Plugin-context/.
The tag taxonomy will need expansion for the new behaviors (many new tags the taxonomy doesn’t cover yet), but that’s a refinement pass after your review.
Code is committed and pushed: 6175b2e on braindrive-context-garden-generator, project docs updated in the Library.