Chat w/ Documents Plugin Development Updates

Hi Folks,

Today I spoke with Beck about starting development on our Chat with Your Documents RAG plugin. The goal of this plugin is to make it easy for you to:

  • Upload and chat with your documents 100% privately with the local model of your choice.
  • Create groups of documents, for example around health or personal finance, that allow you to create a page in BrainDrive and chat with only those documents (similar to the Projects functionality in ChatGPT).
  • Associate documents with different personas that you create in BrainDrive, so a specific model can work off of your documents automatically as part of its context.

In the future our plan is to also make this easy to export to BrainDrive Memory as well.

Here is the full recording of the video discussion going over this with Beck.

Questions, comments, concerns welcome as always.

Thanks,
Dave W.

Hi Guys,

Below is the recording of my call with Beck today to discuss his progress on the new chat with documents functionality he is building for BrainDrive Owners. It’s followed by an AI-powered summary for those who prefer to read instead of watch.

Questions, comments, concerns, and ideas welcome as always, just hit the reply button.

Thanks,
Dave W.

:rocket: What We’re Building

The new “Chat With Your Documents” feature is more than just a chatbot—it’s a fully modular, open-source tool that lets you:

  • Upload and manage collections of documents
  • Chat directly with those documents (context-aware)
  • Perform full-text search across your private data
  • Run everything locally or on your own cloud setup

:brick: System Overview

The system is structured around three main views:

  1. Dashboard – Manage your document collections.
  2. Chat – Create sessions and ask questions about a specific collection.
  3. Search – Perform keyword-based search across your files, similar to a personalized Google Search.

This reflects our core philosophy: put your data at the center—not the chatbot.

:page_facing_up: Under the Hood

Document Processing:
We’re using a cutting-edge open-source library called spaCy Layout. Unlike basic text extractors, it preserves layout, tables, lists, and even section structure. That means better context and more accurate answers.

Chunking Strategy:
Documents are split into sections of ~250–350 tokens (around 200–400 words). Each chunk is enriched with contextual metadata—like where it fits in the document—so the AI knows not just what a chunk says, but why it matters. This is inspired by Anthropic’s work on contextual retrieval.
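As a rough illustration of the idea (this is a simplified sketch, not the actual pipeline: the real system counts model tokens rather than words, and uses a small LLM to generate each chunk’s context line):

```python
from dataclasses import dataclass, field

@dataclass
class Chunk:
    text: str
    metadata: dict = field(default_factory=dict)

def chunk_document(text: str, doc_title: str, max_words: int = 300) -> list[Chunk]:
    """Split a document into ~max_words chunks, tagging each with contextual
    metadata (document title, position) so the retriever knows not just what
    a chunk says, but where it came from."""
    words = text.split()  # crude whitespace split; stands in for a real tokenizer
    chunks: list[Chunk] = []
    for start in range(0, len(words), max_words):
        body = " ".join(words[start:start + max_words])
        chunks.append(Chunk(
            text=body,
            metadata={
                "doc_title": doc_title,
                "chunk_index": len(chunks),
                # In the real pipeline a small LLM would write a one-line
                # summary here (Anthropic-style contextual retrieval).
                "context": f"Part {len(chunks) + 1} of '{doc_title}'",
            },
        ))
    return chunks
```

The metadata travels with the chunk into the vector store, so it can be prepended to the chunk text at embedding time or at answer time.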

Embedding & Storage:
We combine:

  • ChromaDB for semantic (vector) search
  • BM25 for keyword-based (lexical) search

This hybrid approach gives the best of both worlds—precision when needed, flexibility when keywords are vague.
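One common way to fuse the vector and keyword result lists is reciprocal rank fusion (RRF); the sketch below is a generic illustration, not necessarily the exact scoring the plugin uses:

```python
def reciprocal_rank_fusion(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Fuse multiple ranked lists of chunk IDs (e.g. one from ChromaDB's
    vector search, one from BM25) into a single ranking. Each list
    contributes 1 / (k + rank) per document; higher totals rank first."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)
```

RRF only needs ranks, not raw scores, which sidesteps the problem that BM25 scores and cosine similarities live on different scales.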

Models Involved:

  • Main chat model: an 8B-parameter model served via Ollama, hosted on Google Cloud
  • Embeddings: mxbai-embed-large (1024 dimensions)
  • Context generator for chunks: smaller 3B parameter LLM, deployed separately for performance

Everything runs locally or on self-hosted cloud infra with autoscaling (e.g., Google Cloud Run), so you only pay when it’s used.

:test_tube: Current Status

:white_check_mark: Feature-complete across backend and UI
:warning: Some document types (like Google Docs) still being debugged
:turtle: Upload/processing time can be slow for large documents—due to advanced structure extraction and context generation
:speech_balloon: Answers are generally accurate, but improvements are ongoing

:soon: What’s Next

  1. Make it work for every file type reliably
  2. Optimize speed, especially during document upload and indexing
  3. Add inline source referencing + clickable document navigation
  4. Integrate fully into BrainDrive UI as a plugin
  5. Make chunking configurable for advanced owners

:hammer_and_wrench: Built With You in Mind

Everything is modular and open-source. Swap out the document processor. Tune the chunk size. Host the models locally. Or just run the whole thing with a single script.

Whether you’re a Katie building content libraries or an Adam crafting custom AI workflows, this tool gives you real control over your documents.


Want to test it early or contribute? Drop a comment below or visit the GitHub repo (link coming soon).

Your AI. Your Rules.
— The BrainDrive Team

Hi All,

I had another conversation with Beck yesterday on the Chat w/ Documents project.

I have shared the video and an AI powered summary of our conversation below, and also wanted to update everyone on a couple of important points from the call.

  1. Extracting the text with structured data (like the fact that info is in a table) is a resource-intensive process. There are faster and simpler ways to do it, but you lose the structure, which makes the chat with documents capabilities less accurate.
  2. With the above in mind, we are working on the following solutions: a) a full-powered extraction algorithm that can run in the cloud or on a very powerful computer; b) a stripped-down extraction process that will be less performant but will still run locally on the average modern computer; c) a middle-ground version that balances speed and performance and can run on more expensive local AI setups.

Full discussion below. Questions, comments, concerns, and ideas welcome as always. Just hit the reply button.

Thanks
Dave W.

Video Recording:

AI Powered Summary:

:brain: Chat With Your Documents: New Approach, Clear Trade-Offs

We’ve been working hard on improving our Chat with Your Documents feature — and in this update, we’re laying out what’s changed, why it matters, and how we’re evolving our approach going forward.

If you’d rather read than watch the latest dev video, here’s the key summary:


:hammer_and_wrench: What We Found

Our initial goal was ambitious: extract rich, structured information from documents locally, using open-source tools like spaCy layout. But we ran into three main challenges:

  • Slow processing times

  • Failed chunk generation on large files

  • Inconsistent results, especially on modest machines

The core issue? There’s a big difference between extracting plain text and extracting structured text (like tables, headers, and layout-aware elements). The structured approach is far more accurate — but also far more demanding on local hardware.


:bulb: What We’re Doing Next

We’re moving to a tiered system with clear trade-offs:

  1. Basic Local Mode
    → Extracts plain text only
    → Fastest, most lightweight, works on any modern laptop
    → Limited accuracy (no structure, no context)

  2. Intermediate Mode
    → Adds contextual chunking for better RAG (Retrieval-Augmented Generation)
    → Slightly longer processing, still local-friendly

  3. Advanced Cloud Mode
    → Full structured extraction with headers, tables, and context
    → 4–5x slower locally, but fast and smooth in the cloud
    → Ideal for serious use (e.g., production apps or high-quality knowledge bases)
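The tier choice above could be expressed as a simple heuristic; the thresholds and function names here are illustrative assumptions, not BrainDrive’s actual selection logic:

```python
from enum import Enum

class ExtractionMode(Enum):
    BASIC = "basic"                # plain text only; fastest, runs anywhere
    INTERMEDIATE = "intermediate"  # adds contextual chunking for better RAG
    ADVANCED = "advanced"          # full structured extraction (tables, headers)

def pick_mode(has_gpu: bool, cloud_available: bool) -> ExtractionMode:
    """Hypothetical tier selection: prefer the cloud for full structured
    extraction, fall back to contextual chunking on capable local machines,
    and plain text everywhere else."""
    if cloud_available:
        return ExtractionMode.ADVANCED
    if has_gpu:
        return ExtractionMode.INTERMEDIATE
    return ExtractionMode.BASIC
```

Exposing the mode as an explicit setting would also let advanced owners force a slower local tier if they never want documents leaving their machine.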


:cloud: Why Cloud-First Makes Sense

Running the most advanced version locally is possible — but only on expensive hardware (think $10,000+ workstations). For most owners, the better option is:

  • Keep your interface and vector database local (if you want)

  • Offload heavy document parsing to a hosted service we’re building

  • Maintain ownership, control, and exit rights — even in the cloud

This is still self-hosted AI, not Big Tech AI.


:wrench: What’s Coming

We’re spinning off the structured document processing into a standalone API service. You’ll be able to:

  • Upload documents

  • Get back structured chunks ready for vector storage

  • Use it standalone or plug it into your BrainDrive setup

It’s open-source, transparent, and respects your data sovereignty.


:date: Timeline & Next Call

We expect the first version of the cloud-based document processor to be live within one week. We’re meeting again on Thursday, July 3rd to check in and finalize the integration.


:compass: Bigger Picture

This shift reflects a broader insight:

Local AI is great—but cloud-hosted, open-source AI is often the practical default.

Like WordPress, BrainDrive can run locally… but most people host it in the cloud. We’ll continue to support both paths.


Let us know what you think, and if you’d like to test things out when they go live. And as always — your AI, your rules.

— Dave
co-creator, BrainDrive

Hi Guys,

Dave J., Beck, and I got together this morning to discuss moving Beck’s Chat w/ Your Documents over into a BrainDrive plugin. Here’s the recording of the video followed by an AI powered summary of what was discussed.

Questions, comments, concerns and ideas welcome as always. Just hit the reply button.

Thanks,
Dave W.

  • Purpose: Discussion around integrating Beck’s standalone “Chat with Documents” app as a plugin into BrainDrive.
  • Participants: Main contributors were Beck (developer of the original app), Dave (BrainDrive plugin architect), and the meeting lead/coordinator.

:package: Beck’s Current Application

  • A standalone app that supports:
    • Document upload and indexing
    • Document chat functionality
    • Uses SpaCy for extraction
    • Embedding model: mxbai-embed-large
    • Embeddings stored in ChromaDB
    • Uses Qwen3 8B as the chat model
    • Has its own front-end UI
  • Goal: Integrate this app as a plugin within BrainDrive.

:jigsaw: Plugin Development Status (Beck)

  • Started plugin development from scratch using the BrainDrive Plugin template.
  • Not reusing other BrainDrive components/services yet.
  • Current focus: Rebuilding front-end UI in React and connecting to the backend.
  • Plugin Structure:
    • Uses collections → documents → chat sessions (identified via IDs)
    • Plans to support:
      • Creating collections
      • Uploading documents
      • Initiating chats
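The collections → documents → chat sessions hierarchy described above can be sketched as a simple ID-linked data model (field names are illustrative, not the plugin’s actual schema):

```python
from dataclasses import dataclass, field
import uuid

def new_id() -> str:
    """Opaque string IDs, matching the ID-based linking described above."""
    return uuid.uuid4().hex

@dataclass
class Document:
    name: str
    id: str = field(default_factory=new_id)

@dataclass
class ChatSession:
    title: str
    collection_id: str  # a session is always scoped to one collection
    id: str = field(default_factory=new_id)

@dataclass
class Collection:
    name: str
    id: str = field(default_factory=new_id)
    documents: list[Document] = field(default_factory=list)

    def add_document(self, name: str) -> Document:
        doc = Document(name=name)
        self.documents.append(doc)
        return doc
```

Because sessions reference collections only by ID, the chat backend and the document store can live in separate services, which matters for the modular-backend discussion below.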

:arrows_counterclockwise: Development Workflow & Challenges

  • Beck’s approach:
    • Complete React front-end from scratch
    • Once UI is built, will test it as a plugin
  • Not sure how to test the plugin in isolation yet
  • A few issues:
    • Outdated/missing scripts in BrainDrive installation documentation (e.g., build_plugins.sh)
    • Confusion about Conda environment: unnecessary for front-end, only needed for backend

:brick: Plugin Structure Discussion

  • Dave’s input:
    • Recommends making “Chat with Documents” a modular backend service
    • This would allow it to be reused in other plugins/pages
    • Could expose a set of APIs or an MCP server for document search/chat
  • Beck agrees: multiple paths possible
    • All-in-one Plugin (collections + docs + chat)
    • Or decoupled backend that other plugins can consume

:brain: Code Sharing & Integration

  • Dave offers:
    • His BrainDrive Chat plugin repo as a base, to help Beck avoid starting from scratch
    • Plugin is being refactored and will be ready by Monday
  • This plugin supports:
    • AI chat
    • Personas
    • History
    • Could be extended to support document chat
  • Beck can decide whether to integrate or continue his standalone implementation

:mag: Technical Detail: Search API

  • Beck demonstrated:
    • A /search endpoint requiring:
      • query
      • collection_id
      • top_k
      • hybrid flag
    • Web UI template shows how the API is used
    • Can be reused in other contexts (e.g., BrainDrive memory)
  • Dave mentions using Plugin State to manage persistent state across navigation in BrainDrive
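A minimal pure-Python sketch of the handler such a /search endpoint might call, using the parameters listed above (the FastAPI routing layer and the real BM25/ChromaDB lookups are omitted; the score dictionaries stand in for them):

```python
from dataclasses import dataclass

@dataclass
class SearchRequest:
    query: str
    collection_id: str
    top_k: int = 5
    hybrid: bool = True  # combine BM25 (lexical) with vector (semantic) scores

def search(req: SearchRequest,
           bm25_scores: dict[str, float],
           vector_scores: dict[str, float]) -> list[str]:
    """Toy search handler: each dict maps chunk IDs to relevance scores for
    req.query within req.collection_id (in the real service these come from
    BM25 and ChromaDB). Returns the top_k chunk IDs."""
    if req.hybrid:
        ids = set(bm25_scores) | set(vector_scores)
        scored = {i: bm25_scores.get(i, 0.0) + vector_scores.get(i, 0.0)
                  for i in ids}
    else:
        scored = dict(vector_scores)
    ranked = sorted(scored, key=scored.get, reverse=True)
    return ranked[:req.top_k]
```

Keeping the handler free of web-framework code is what makes it reusable in other contexts like BrainDrive Memory, as noted above.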

:pushpin: Agreements & Next Steps

  1. Job 1: Get “Chat with Documents” plugin working in BrainDrive.
  2. Job 2: Make it usable by other plugins (modular backend or MCP API).
  3. Job 3: Integrate with BrainDrive Memory and AI Providers long-term.
  • Beck’s immediate next step:
    • Finish building the UI for the plugin
    • Push to his own repo for testing
    • Target: by Friday or Saturday
  • Group reconvenes Monday at 11 AM for review and next actions

:compass: Final Notes

  • No critical blockers so far
  • Everyone aligned on phased approach
  • Dave and Beck to collaborate on integration and modular backend reuse
  • Next sync: Monday at 11 AM

@DJJones Beck has the following question that I am moving here to the forum and letting him know to watch for the response here.

Could you pass this to David? GitHub - bekmuradov/BrainDrive-Plugins (development of BrainDrive plugins). It’s a chat with your documents plugin.

I can successfully install it as a plugin and see it in the Plugin Manager page (even update it), but it doesn’t show up on the BrainDrive Studio page.

I think some configurations are missing, not sure. Maybe Dave can spot it and let me know.

And then he just came back with:

Update: I was able to make it show on the Studio page (plugins list). Turns out I had to push the build folder to GitHub too.

If David has time, have him look at it anyway, because I get “Error Rendering Module”.

This is the cache issue, correct?

Thanks,
Dave W.

Beck this may help:

Let me know thanks!

Dave W.

I am not sure how far along you are at this point. When you install the plugin, and also in the Plugin Manager (module details), you can view the test plugin.

It is failing due to this:
ChatWithYourDocuments

Failed to instantiate

Error: Function component test failed: Cannot read properties of undefined (reading ‘bind’)

When I wrap up a few things I have going at the moment, I will delve into it more if needed.

Hi All,

Here is the recording from Beck, Dave J, and my call yesterday discussing Beck’s progress in integrating the Chat w/ Your Documents functionality into BrainDrive as a plugin.

Beck has it up and running in BrainDrive which is exciting! He also is helping us identify and work through updates and additions that need to be made to our developer documentation as a part of this process which is great.

Video of our discussion followed by an AI Powered Overview below:

Thanks,
Dave W.

:white_check_mark: What Beck Built

  • A fully functional Chat with Your Documents plugin running inside BrainDrive.
  • Frontend built as a class-based React component (required by the BrainDrive plugin system).
  • Backend runs locally, with the model currently hosted on Google Cloud.
  • Document upload and session handling already working:
    • Users can upload documents.
    • Sessions are saved and reused.
    • Chat interface mimics previous standalone version but fully integrated as a plugin.

:test_tube: Development Workflow

  • Beck created a local development server for faster iteration.
  • Uses hot reload and local testing before pushing into BrainDrive.
  • Caching issues noted: sometimes required a full plugin delete & reinstall to reflect changes.

:wrench: Next Features (Planned)

  • Add streaming support for real-time response rendering.
  • Improve UI formatting (currently raw Markdown).
  • Add:
    • Chat session deletion
    • Content preview and document download
    • A separate “Doc Search” only component (non-chat)

:jigsaw: Developer Experience Feedback from Beck

  • Plugin architecture is solid once understood.
  • Lack of documentation was the biggest barrier initially:
    • Needed clearer guidance on required structure (e.g. class-based components).
    • Wanted better documentation and examples for Service Bridges (especially Events).
  • Suggested adding a component library or standardizing UI via Tailwind.
  • Once past setup hurdles, found plugin development straightforward and powerful.

:soon: Our Response / Next Steps

  • We’re creating:
    • A minimal example plugin for each Service Bridge (with isolated, working code).
    • Expanded developer documentation to accelerate onboarding.
  • Working toward a more robust plugin caching/update process.
  • Adding a component library and streamlining the UI system (moving toward Tailwind).

Hi All,

Dave J., Beck, and I spoke in detail about progress and vision for the Chat w/ Documents plugin.

Recording followed by AI powered summary below.

Questions, comments, ideas, etc welcome as always. Just hit the reply button.

Thanks,
Dave W.

1. Streaming Endpoint Implementation Issues

  • Current Challenges: A team member is having trouble implementing a streaming endpoint for chat functionality.
  • Technical Explanation:
    • React’s default streaming handling is insufficient; it waits for the full response before updating the UI.
    • Microsoft’s streaming library is used instead, with a postStreaming function in ApiServices.
    • Python FastAPI backend handles the response from the LLM and pushes it to React using this method.
  • Backend Code Review: Code is walked through together; team agrees it’s working except for streaming.
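The backend side of the pattern described above can be sketched as a generator that formats LLM tokens as Server-Sent Events; this is a generic illustration, not Beck’s actual code:

```python
from typing import Iterator

def sse_stream(tokens: Iterator[str]) -> Iterator[str]:
    """Wrap LLM tokens as Server-Sent Events lines. In a FastAPI backend this
    generator would be handed to StreamingResponse with
    media_type='text/event-stream'; on the React side a streaming fetch
    helper (such as Microsoft's fetch-event-source) consumes the events and
    updates the UI per token instead of waiting for the full answer."""
    for token in tokens:
        yield f"data: {token}\n\n"
    yield "data: [DONE]\n\n"  # sentinel so the client knows the stream ended
```

The key point is that nothing buffers the whole response: each token is flushed to the client as soon as the model emits it.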

2. Chat With Documents Functionality

  • Current State: Chat with Documents works but only displays the output once the LLM finishes generating the response.
  • Goal: Enable real-time streaming of tokens as they’re generated.
  • Plan: Use a microservice architecture—one backend service for chat with documents, and a plugin that connects it to BrainDrive.

3. Deployment and Hosting

  • Microservice Hosting: Document processing is offloaded to a cloud-hosted service.
  • Setup Requirements:
    • Developer must run the chat-with-docs backend locally (non-Docker).
    • Plugin connects BrainDrive to this backend.
  • Documentation: Needs updating to reflect the current installation process.

4. Architecture Discussion: Core vs Plugin

  • Architecture Clarification: Chat with Documents is currently a separate backend using FastAPI, Chromadb, SQLite, BM25 index, and multiple LLMs.
  • Integration Question: Can this system use BrainDrive’s models?
    • Answer: Not currently, but potentially in the future.
  • Team Preference: Default to local simplicity; if it adds complexity, keep it a plugin.

5. UX Philosophy and Feature Design

  • Strong Preference: Mimic ChatGPT/Claude project structure—known, familiar, intuitive UX.
  • UX Features Required:
    • Each document collection has its own page.
    • On that page: ability to upload/delete documents and chat interface showing chat history.
    • Default behavior must match ChatGPT and Claude’s “Projects” flow.
  • Avoid Over-Complexity:
    • No dropdown to pick collections on generic chat page.
    • Keep collections linked to specific pages.

6. Plugin vs Core Decision

  • Plugin Chosen: Final decision is to keep Chat with Documents as a plugin.
  • Reasoning:
    • Maintains a slim core.
    • Easier for future developers to customize without forking the project.
    • Installer will eventually handle backend microservice automatically.

7. Developer Alignment

  • Back-End Developer Adjustments:
    • Needs to update UI to mirror ChatGPT/Claude layout.
    • Chat interface should be visible on document collection page.
    • Chat history and document management must be unified per page.
  • Future Goal: Allow selecting different models per page and use BrainDrive’s models, but okay to defer.

8. Implementation Timeline and Next Steps

  • Immediate To-Dos:
    • Add streaming endpoint support.
    • Redesign UI to match ChatGPT-style chat with documents.
    • Update documentation.
  • Next Check-In: Tuesday at 11am.
  • Communication Plan: Use forum for interim questions or feedback.

Here is the screenshot of ChatGPT projects we discussed.

Hi Guys,

Here is the recording from Beck’s update on the Chat w/ Documents plugin, which he is making great progress on.

Going to be a really cool feature for BrainDrive owners.

Questions, comments and ideas welcome as always, just hit the reply button.

Thanks,
Dave W.

Video Recording of Call:

AI Powered Call Summary:

The meeting focused on the development and future integration of the “Chat with Your Documents” plugin for the BrainDrive platform. Key topics included a review of the plugin’s current state, solving a technical issue with response streaming, and planning a more streamlined installation process for plugins with backend components.


“Chat with Documents” Plugin Status

  • Beck presented the current version of the plugin, which features a UI to manage project files (upload, download, delete) and a chat interface.
  • A key issue was identified: chat responses do not stream back to the user in real-time. Instead, the UI waits for the full response to be generated before displaying it.
  • A request was made to sort the chat session history by most recent date to improve user experience.

Solving Streaming and Adding Model Selection

  • To solve the streaming issue, the team decided that Beck will integrate an existing component from the “AI Chat v2” plugin, which already has streaming functionality built-in. This utilizes the core platform’s “service bridges” and promotes reusable code.
  • The team discussed a future enhancement to allow users to select which language model to use from within the UI. This will also be achieved by implementing an existing model selection component from the core platform.

Simplifying Plugin Installation

  • The current installation process for the plugin is complex, requiring users to manually set up a local backend, manage dependencies, and run scripts.
  • The team planned a significant improvement to the BrainDrive platform that will simplify this process. The plugin installation service will be extended to automatically install and manage backend microservices.
  • The goal is to enable end-users to install a complex plugin and its backend with just a single link, without any manual setup.

Action Items

  • For Beck:
    • Update the “Chat with Documents” plugin to implement response streaming by reusing the existing core service from the platform.
    • Re-order the chat session history to be sorted by the most recent date.
    • After updating the plugin, begin work on the new, simplified installation service for plugins with backends.
  • For David Jones:
    • Provide Beck with his notes and pseudo-code for the planned installation service to guide development.
  • For David Waring:
    • Create a thread in the community forums with the video of this meeting and this summary to centralize all future communication.

The team agreed to use the forum for ongoing communication to keep everyone aligned. They will have daily calls that Beck can join as needed to sync up on development progress.

This is my third plan today with regard to installing microservers/services in BrainDrive. My first plan took into account MCP, Docker, and FastAPI, which turned into a much larger scope than we need at the moment. So I took a few shots at nailing it down to what I consider MVP.

It comes down to what is needed to install and use a FastAPI service inside BrainDrive by extending the current installer.
Comes down to what is needed to install and use a FastAPI service inside BrainDrive extending the current installer.

implementation-summary.md (9.9 KB)
quick-test-guide.md (26.2 KB)
simplified-microservice-schema.md (22.4 KB)

Update from Beck:

  1. The Chat with Your Documents plugin is done. I made it stream using both my own implementation and the service bridges. I did my own to understand how it was working and to better understand plugin development. Now it uses the service bridges’ streaming service.
  2. I will update the documentation on how to run Chat with Your Documents locally, and it will be ready for testing. I am also planning to add additional automated single-command script files for setting up and running the project.
  3. After that I will start working on the microservice installation process.

We have a followup call on Thursday which will be recorded and posted here.

Hi Guys,

Dave J, Beck and I spoke yesterday to update on project progress. Below is a recording of our call followed by an AI powered summary of the items discussed.

Recording:

AI Powered Summary:

  • Documentation mostly complete; install now works with one Docker command (still uses Docker/Docker Compose).
  • Works and tested; next step is installing required import plugin.

Microservices Installation Discussion

  • Beck’s plugin currently requires a separate backend service (FastAPI/Docker).
  • Goal: allow Brain Drive to install backend services automatically when a plugin requires them.
  • Dave shared forum docs outlining a high-level approach:
    • Separate microservice data in lifecycle manager.
    • On BrainDrive startup: start needed microservices; on shutdown: stop them.
    • Should support FastAPI, MCP, Docker.
  • Decision:
    • Backend plugins will be installed/managed like frontend plugins.
    • Beck will adapt his backend to fit this system so a non-technical user installs only once.
  • Possible separation between plugin and microservice so users can swap backends.
  • Beck will study Dave’s docs, explore backend plugin install flow, and may need plugin template updates for dependency declaration.
  • Estimate: 1–2 weeks for implementation.
  • Mid-point check-in set for next Thursday at 1 PM.
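The start-on-startup / stop-on-shutdown lifecycle described above could be sketched like this; the class and its wiring are illustrative assumptions, not BrainDrive’s actual implementation:

```python
import subprocess
from typing import Callable, List, Optional

class PluginServiceManager:
    """Hypothetical sketch: keep a registry of installed plugin backends and
    tie their lifetime to the BrainDrive backend's own lifecycle (e.g. via a
    FastAPI lifespan hook)."""

    def __init__(self, runner: Optional[Callable[[List[str]], None]] = None):
        # `runner` is injectable so tests don't need Docker; by default we
        # shell out to `docker compose`.
        self.services: List[str] = []  # plugin dirs containing docker-compose.yml
        self._run = runner or (lambda cmd: subprocess.run(cmd, check=True))

    def register(self, plugin_dir: str) -> None:
        self.services.append(plugin_dir)

    def start_all(self) -> None:
        # Called on BrainDrive backend startup.
        for d in self.services:
            self._run(["docker", "compose", "--project-directory", d, "up", "-d"])

    def stop_all(self) -> None:
        # Called on shutdown so plugin services don't outlive the backend.
        for d in self.services:
            self._run(["docker", "compose", "--project-directory", d, "down"])
```

Persisting the registry in the existing plugin database table (rather than in memory) is what would let the services come back automatically after a restart.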

Clarifying Frontend vs Backend Plugins

  • Current Beck plugin = frontend (React); backend = Python/FastAPI.
  • Frontend handles UI; backend does heavy lifting (vector DB, embeddings, LLM calls).
  • Security not a concern since Brain Drive runs locally.
  • Fully embedding backend logic into frontend is possible but would remove flexibility for local storage and service swapping.

I don’t have your code, so I just kind of tried to play it from memory based on the conversation. This is the flow I came up with:

  1. Remote Installer extracts files and validates structure
  2. Line 882-885: Creates UniversalPluginLifecycleManager
  3. Line 892: Calls universal_manager.install_plugin(plugin_slug, user_id, db)
  4. UniversalPluginLifecycleManager loads the specific plugin’s lifecycle manager
  5. BaseLifecycleManager.install_for_user() is called
  6. Plugin’s _perform_user_installation() is called
  7. Plugin’s _create_database_records() creates plugin, modules, and (with services) services

If the lines do not match, don’t get too hung up on it, because every Lifecycle Manager kind of has its own personality. I can dig some more tomorrow if needed. I don’t know yet what I am working on; it just depends on whether I finish getting Matt’s changes inserted tonight or tomorrow.

The plugin service installer is ready to test. For now it works only with Docker Compose.
The plugin service needs to have docker-compose.yml and Dockerfile files.

@davewaring
you can test it by downloading BrainDrive from this branch

Once BrainDrive is running, import this plugin:

I will send some variables over email. Please add them to BrainDrive backend/.env

For me, plugin installation failed with a timeout error, but it actually installed. You can double-verify the plugin installation by navigating to http://localhost:8000/ (this is the UI from the chat with your documents plugin).

@davewaring
Also, the first time you test chat with your documents it might take a little longer, because it needs to wake up the document processing instance.

Thanks Beck! I sent the files over to @DJJones that you emailed so he can add.

Have a good weekend.

Thanks,
Dave W.

Some further questions regarding next steps for the plugin service installer.

Currently, the installer supports Docker (Docker Compose), and the plugin service automatically runs only during the plugin import process. This means it doesn’t automatically start the plugin services the next time the BrainDrive backend runs (start, restart).

I think the BrainDrive backend already stores the plugin information in the database, so my question is where to place the logic for automatically starting plugin services.

I also think adding UI actions for “start,” “stop,” and “restart” could be useful.

Let me know @DJJones , @davewaring .

I am looking for the code where it actually saves plugin data to the database.

I started looking from:

  1. The plugin installation endpoint: BrainDrive/backend/app/plugins/lifecycle_api.py on the feature/plugin-services-runtime branch. (It has an unused db param.)
  2. Then I checked remote_installer.install_from_url in BrainDrive/backend/app/plugins/remote_installer.py, because the lifecycle_api install endpoints call remote_installer.install_from_url. I didn’t find any methods where it saves the plugin data to the database, except one: the remote installer has an update_plugin method that updates plugin data in the db, but I don’t see where update_plugin is used.
  3. I found that the plugin directory BrainDrive/backend/app/plugins has db_manager and repository files. However, I didn’t find where they are used (I used search-in-files in the VS Code IDE).

Could you point, to where it’s saving plugin data in plugin installation flow @DJJones ?