Here is the recording from today’s dev call livestream, followed by an AI-powered summary of the discussion. We livestream every Monday at 10 a.m. EST. We’d love it if you’d join us live, and we’re also happy to address any questions, concerns, and ideas you have here in the forum. Just hit the reply button.
Thanks
Dave
Recording:
AI-Powered Summary:
What BrainDrive Is (quick overview given on call)
MIT-licensed, modular alternative to ChatGPT that you own and control.
Tasks timestamp fix: background task dates now reflect local timezone (not UTC).
System info export: Profile → Open System Info → downloadable JSON with non-sensitive environment details (OS, browser, commit, plugins & versions) to include in GitHub issues. Keys/chat content never included.
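The export described above could be collected with nothing more than the standard library. A minimal sketch, assuming hypothetical inputs for the commit hash and plugin list (the actual BrainDrive field names weren’t shown on the call); note that no keys or chat content are touched:

```python
import json
import platform
import sys

def collect_system_info(commit: str, plugins: dict) -> str:
    """Gather non-sensitive environment details for a GitHub issue.

    `commit` and `plugins` are assumed inputs; keys and chat
    content are never read here.
    """
    info = {
        "os": platform.system(),          # e.g. "Linux", "Windows"
        "os_version": platform.release(),
        "python": sys.version.split()[0],
        "commit": commit,
        "plugins": plugins,               # plugin name -> version
    }
    return json.dumps(info, indent=2)

print(collect_system_info("abc1234", {"OpenRouter": "0.3.1"}))
```

The downloadable JSON stays a flat, human-readable object, so users can inspect exactly what they are attaching to an issue.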
Models & Providers
Ollama models + background tasks: downloads now tracked in Tasks; progress also mirrored on the model page.
OpenRouter plugin:
Shows costs, token info, availability; quick model testing.
API key stored encrypted; masked in UI.
Optional plugin—skip if running fully local.
Personas
Personas = named system prompts + parameters for chats.
Quick edit: clicking a persona card opens editor.
Parameters framework expanded internally to enable future settings.
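As a rough illustration of the persona idea (a sketch only; the real BrainDrive data model wasn’t shown), a persona bundles a named system prompt with generation parameters that get merged into each chat request:

```python
from dataclasses import dataclass, field

@dataclass
class Persona:
    """A named system prompt plus generation parameters (hypothetical shape)."""
    name: str
    system_prompt: str
    parameters: dict = field(default_factory=dict)  # e.g. temperature

def build_chat_request(persona: Persona, user_message: str) -> dict:
    """Merge a persona into a provider-agnostic chat request."""
    return {
        "messages": [
            {"role": "system", "content": persona.system_prompt},
            {"role": "user", "content": user_message},
        ],
        **persona.parameters,
    }

coach = Persona("Coach", "You are an encouraging career coach.",
                {"temperature": 0.7})
req = build_chat_request(coach, "How do I find my why?")
```

Expanding the parameters dict is what the internal framework change above enables: future settings can ride along without touching the chat code.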
Plugin Install UX (big cleanup)
Old “long landing page” replaced with compact drop-link/drag-file UI and immediate feedback.
Supports:
GitHub URL install
Local .zip install (for private/custom plugins)
Security notice present (to be unified across flows).
Page Builder (recap)
Drag-and-drop pages from plugin modules; can mix modules from multiple plugins.
Publish to “Your Pages” with one click.
Demo showed non-chat experiences (e.g., small game) built via the same system.
Chat Interface Updates
Model picker includes OpenRouter models (filterable, e.g., “free”) and local Ollama models.
Persona selector integrated; persona adherence bugs from 2 weeks ago fixed.
“CAG” / Context Augmentation (First Step of “Chat with Docs”)
New document upload to chat context:
Accepts text/ASCII files (e.g., .md). (PDF support is planned via a PDF-reader plugin, along with offline reading of full-text documents.)
Two modes:
One-shot (no DB save; ephemeral)
Save to conversation (stored with chat; lives in DB)
Internals (MVP): document is segmented into window-fit segments; injected as context to the LLM (no vectors/metadata yet).
Demo validated: baseline “What is BrainDrive?” answer was wrong before; accurate after loading the whitepaper.
Note: temperature was high during some tests; hallucination caveat shown in UI.
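The MVP internals described above (segment the document to fit the context window, inject the segments as plain context, no vectors or metadata) could look roughly like this; character counts stand in for real token counting, and the prompt wording is invented for illustration:

```python
def segment_document(text: str, max_chars: int = 8000) -> list:
    """Naively split a document into window-fit segments.

    A real implementation would count tokens with the model's
    tokenizer; characters are a stand-in here.
    """
    paragraphs = text.split("\n\n")
    segments, current = [], ""
    for para in paragraphs:
        if current and len(current) + len(para) + 2 > max_chars:
            segments.append(current)
            current = para
        else:
            current = f"{current}\n\n{para}" if current else para
    if current:
        segments.append(current)
    return segments

def inject_context(segments: list, question: str) -> str:
    """Prepend the segments to the user question as plain context."""
    context = "\n\n".join(segments)
    return (f"Use the following document to answer.\n\n"
            f"{context}\n\nQuestion: {question}")
```

In the one-shot mode nothing is persisted; in the save-to-conversation mode the same segments would simply be stored alongside the chat in the DB.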
DevRel, Docs & Community
Plugins now live in the Community (ad-hoc marketplace for now).
Intro course: “How to own your own AI system” — 9 lessons, text course done; videos planned. Designed for non-technical setup to fully working BrainDrive.
Navaneeth is building YFinder, the first community use-case plugin (it helps users “find their why”); rock-paper-scissors was his first test plugin.
Avatar System (Novel Knowledge Packaging) — Concept & Early Demo
Problem with traditional RAG: setup complexity (DBs, embeddings, migrations), non-technical friction, varied local hardware.
Avatar idea: Modular, tradeable knowledge bases as PNG images containing embedded data (chunks + lightweight vector/graph structure). Think “trading cards for knowledge.”
Vision:
Create “Dave Jones — BrainDrive Dev” avatar; add more cards (e.g., D&D) and compose them at chat time.
Share/sell knowledge cards; build a decentralized knowledge economy where creators benefit.
Personas apply over avatars; supports multiple avatars combined (e.g., compare two sports teams’ stats).
Conversations/history could be stored inside the avatar; future editor to curate avatar contents (add/remove/fix).
Proof-of-concept:
Server-side MCP source + plugin; image drag-and-drop → upload → use as context.
Worked in late-week demo; minor demo glitch live (still pre-pre-beta).
Next steps:
Build creator/editor UI for avatars.
Backend hooks to let plugins replace/extend pipeline stages (true modular backend).
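One plausible way to realize the “knowledge in a PNG” idea above is an ancillary text chunk, since PNG lets unknown chunks ride along without breaking image viewers. This is a sketch only; the `knowledge` keyword, the JSON payload shape, and the actual BrainDrive avatar format are all assumptions, not the shipped design:

```python
import json
import struct
import zlib

def _chunk(ctype: bytes, data: bytes) -> bytes:
    """Serialize one PNG chunk: length, type, data, CRC."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

def minimal_png() -> bytes:
    """A valid 1x1 grayscale PNG to stand in for the card art."""
    ihdr = struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0)
    idat = zlib.compress(b"\x00\x00")  # filter byte + one pixel
    return (b"\x89PNG\r\n\x1a\n" + _chunk(b"IHDR", ihdr)
            + _chunk(b"IDAT", idat) + _chunk(b"IEND", b""))

def embed_knowledge(png: bytes, payload: dict) -> bytes:
    """Insert the payload as a tEXt chunk just before IEND."""
    text = b"knowledge\x00" + json.dumps(payload).encode()
    iend = png.rfind(b"IEND") - 4  # back up over the length field
    return png[:iend] + _chunk(b"tEXt", text) + png[iend:]

def extract_knowledge(png: bytes) -> dict:
    """Walk the chunks and pull the payload back out."""
    pos = 8  # skip the PNG signature
    while pos < len(png):
        (length,) = struct.unpack(">I", png[pos:pos + 4])
        ctype = png[pos + 4:pos + 8]
        data = png[pos + 8:pos + 8 + length]
        if ctype == b"tEXt" and data.startswith(b"knowledge\x00"):
            return json.loads(data[len(b"knowledge\x00"):])
        pos += 12 + length  # length + type + data + CRC
    raise ValueError("no knowledge chunk found")
```

Because the payload lives in its own chunk, the file stays a normal, viewable image, which is what makes the “trading cards for knowledge” sharing model workable.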
Updated default welcome message with quick links (Docs, Community, BrainDrive) in header/footer across the app.
White-label work took longer than expected due to a bug that was found and fixed.
YFinder & Ikigai Use Case
YFinder plugin: guided chat to derive a personal “Why” (based on Simon Sinek).
Next: add Ikigai (what you love/are good at/the world needs/can be paid for).
Persona system: store Why/Ikigai into a persona for contextual advice (career moves, goals, etc.).
Idea: daily “personal dashboard” powered by Avatar system—reminders, journaling, wins/losses, motivational nudges; no extra DB tables needed if backed by personas/avatars.
Training & Course Content
“How to Own Your AI System” course on the forum (10 lessons; currently text + new videos).
5 of 9 videos produced using Descript; editing agents/automation highlighted.
Broader note: rapid improvements in image/audio tooling.
Roadmap & Foundation Status
Five-phase roadmap on docs; “foundation + polish” underway.
Current foundation pieces:
Installer v1 (done), Chat UI
Local models (Ollama) & API models (OpenRouter)
Personas
CAG (demoed), RAG in progress next
Page Builder (drag-and-drop)
Plugin system (1-click install/update/delete from any GitHub)
Docs site & training materials
Planned: BrainDrive Concierge (AI support)
RAG (Retrieval-Augmented Generation) Plan
Architecture:
Separate background processes (likely FastAPI); move beyond Docker-only; clean port management.
RAG will be a non-default plugin (to keep first-run simple/light).
UX goals:
Single chat page; when RAG is installed, new controls appear (no extra page).
Adopt “+” menu near input (like OpenAI/Claude) for uploads/knowledge connections and mobile friendliness.
Build “Projects/Knowledge Bases” (collections of docs) via a RAG button/flow.
Background processing: chunking (target ~2k tokens), embeddings, and auto Q&A generation at ingest for faster, validated retrieval.
“Connect to” selector at input to ground chats in one or multiple projects.
Page Builder config: set a default project so a page auto-loads its knowledge base.
CAG vs RAG:
CAG: minimal processing; drops chunks into context; no LLM at ingest.
RAG: full indexing pipeline (embeddings, chunks, Q&A) with persistent vector DB for any future chats.
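The CAG-vs-RAG contrast above can be sketched in a few lines. Everything here is illustrative: the whitespace split approximates the ~2k-token chunk target, and `embed` and `gen_questions` are injected stand-ins for whatever local or API model eventually backs embeddings and auto Q&A generation:

```python
from dataclasses import dataclass

@dataclass
class IndexedChunk:
    text: str
    embedding: list   # vector stored for retrieval
    questions: list   # auto-generated Q&A for validated retrieval

def chunk_by_tokens(text: str, max_tokens: int = 2000) -> list:
    """Approximate the ~2k-token target by splitting on words;
    a real pipeline would use the embedding model's tokenizer."""
    words = text.split()
    return [" ".join(words[i:i + max_tokens])
            for i in range(0, len(words), max_tokens)]

def ingest(text: str, embed, gen_questions) -> list:
    """RAG-style ingest: unlike CAG, every chunk is embedded and
    paired with generated questions at ingest time, so any future
    chat can retrieve against the persistent index."""
    return [IndexedChunk(c, embed(c), gen_questions(c))
            for c in chunk_by_tokens(text)]
```

CAG would stop at `chunk_by_tokens` and drop the chunks straight into context; RAG pays the LLM/embedding cost once at ingest so retrieval is cheap and reusable afterward.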
UI Considerations
Follow established patterns (Claude/OpenAI): keep RAG/CAG actions in the input area’s “+” menu.
Helps avoid clutter and works well on mobile.
Community & Outreach
Clip CAG demo and post to community forum; solicit format requests/bug reports.
Share progress update on r/localLLaMA (follow-up to contest entry).
GitHub Project board shows to-dos; community contributions invited.
Near-Term Focus (as discussed)
Dave J: Implement/install RAG backend processes; then wire up UI controls on the main chat page.
Dave W: Finish training videos; coordinate outreach; plan Concierge spec; show YFinder soon.
Navaneeth: Wrap Ikigai; then start on Concierge (his ML/fine-tuning background suits it).