The Intelligence Engine
We threw the LMS model in the bin. Here is what we built from scratch, what it does, and why a 45-minute session now beats a four-hour course every time.
Collab365 was a Microsoft 365 training business. Fourteen years. 50,000 professionals through our doors. Clients including Microsoft, Nike, the NHS, and Deloitte. Just four people based in Telford. Highly profitable.
We didn't spend four grueling months wrestling our legacy data into submission just to throw that effort away and build another course library.
But for over a decade we had been asking our audience the very worst question you can ask a customer. We asked them: "What do you want to learn?"
They would reply: "Power Apps." So we would spend six months and thousands of dollars building a massive four-hour workshop. By the time it launched, the software had updated, the interfaces had moved, and dropout rates routinely hit 50%. Our business was entirely reactive. We were building static assets that mathematically decayed the second we hit publish.
The AI Wrapper Trap nearly killed us. The manual data migration had been an exhausting ordeal. But fighting through it forced a brutal realization. The traditional response in our industry is just to build "shorter courses". But that completely misses the point.
Nobody wakes up wanting to consume technical video. They want help making a decision on how to perfectly solve the exact problem blocking their project.
If you have been reading this series from the beginning, you know the exact conclusion we reached. To survive the AI era, we had to burn the traditional LMS to the ground. You cannot bolt AI onto a portal. You must build an Intelligence Engine.
What we finally built on top of that pristine data is called Collab365 Spaces. It is not a library. It is a terrifyingly efficient, autonomous machine. And here is exactly how it works.

The Engine Mechanics
A Collab365 Space groups people together by common domain. Think of an IT Manager implementing Copilot. Instead of guessing what that Avatar (our shorthand for a precisely defined customer persona) might want to watch, we deploy an intelligence pipeline to map and solve their exact friction points in real time.
Collab365 Spaces is currently in a closed Beta while our intelligence engine ingests 14 years of legacy content. As you browse the public platform today, you will see early AI-mapped problems sitting alongside heavy legacy courses that we migrated for baseline testing. Do not mistake the old migrated courses for the new AI-generated ecosystems.
1. The Chrome Extension (The Intake Vacuum)
An Intelligence Engine cannot rely on manual data entry. To capture reality as it happens, we built a custom Chrome Extension. Think of it as an 'Intake Vacuum'. While our team browses the web, they can instantly siphon critical forum posts, Microsoft update logs, and GitHub issues directly into our semantic database with a single click. The machine is fed real-world context instantly.
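The extension itself is proprietary, but the capture step is simple to sketch. As a rough illustration only, a one-click capture might normalize into an ingest payload like this (the field names, `SOURCE_TYPES` mapping, and `capture` function are all hypothetical, not the actual Collab365 schema):

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from urllib.parse import urlparse

# Hypothetical domain-to-source-type mapping; the real taxonomy is not public.
SOURCE_TYPES = {
    "github.com": "github_issue",
    "learn.microsoft.com": "ms_update_log",
}

@dataclass
class CapturedItem:
    url: str
    title: str
    text: str
    source_type: str
    captured_at: str

def capture(url: str, title: str, raw_text: str) -> dict:
    """Normalize a one-click browser capture into an ingest payload."""
    domain = urlparse(url).netloc.lower()
    return asdict(CapturedItem(
        url=url,
        title=title.strip(),
        # Collapse whitespace so the semantic database embeds clean text.
        text=" ".join(raw_text.split()),
        source_type=SOURCE_TYPES.get(domain, "forum_post"),
        captured_at=datetime.now(timezone.utc).isoformat(),
    ))
```

The point is not the code; it is that every capture arrives already tagged with its source and timestamp, so the engine never has to guess where a piece of context came from.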
2. The Admin Copilot & On-Page Workflows
We do not manage this massive engine with command-line scripts. We built a proprietary Admin Copilot straight into the platform interface. It operates as a fully aware administrative agent.

We can literally talk to our entire knowledge base and database in natural language. An admin can ask the Copilot to perform live web searches, invoke highly specialized "skills", or trigger massive background AI orchestrations directly from the page UI. If we are researching a problem, the Copilot can instantly read the on-page content, autonomously generate custom prompts based on that exact context, and hand them to us to pass into other external workflows. It provides absolute command-and-control without ever leaving the browser.
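A skills-based copilot usually boils down to a registry that maps skill names to functions which receive the current page context. Here is a minimal sketch of that pattern; the skill names, the `invoke` helper, and the stand-in implementations are all hypothetical, not the actual Admin Copilot API:

```python
# Hypothetical skill registry; real skills would call an LLM, not string ops.
SKILLS = {}

def skill(name):
    """Decorator that registers a function as a named, invocable skill."""
    def register(fn):
        SKILLS[name] = fn
        return fn
    return register

@skill("summarise_page")
def summarise_page(context):
    # Stand-in for an LLM summary: truncate the on-page content.
    return context["page_text"][:80] + "..."

@skill("draft_prompt")
def draft_prompt(context):
    # Generate a custom prompt from the exact on-page context.
    return f"Using this page about {context['topic']}, research: {context['question']}"

def invoke(name, context):
    """Dispatch a skill by name against the current page context."""
    if name not in SKILLS:
        raise KeyError(f"No such skill: {name}")
    return SKILLS[name](context)
```

The design choice that matters is that every skill takes the on-page context as input, which is what lets the Copilot generate prompts grounded in exactly what the admin is looking at.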
3. Problem Discovery & The Shopping List
Once triggered, the AI engine aggressively interrogates the data. It sizes the problem to see how large the addressable market is and extracts the exact jargon. But instead of blindly churning out a course, it autonomously writes a "Knowledge Shopping List". This is a structured list of the exact data points required to solve the problem.
The engine then executes Deep Research to fulfill that shopping list. This produces high-fidelity technical articles that can be consumed directly by the Avatar for quick reading (when a massive course isn't appropriate), or used by the system to generate an administrative Skeleton Recipe detailing the exact curriculum for our experts. The engine has done 95% of the heavy lifting before a single lesson is drafted.
4. The Human Anchor (Versioning & Visibility)
This is where we introduce the liability shield. The AI is structurally banned from acting as the final expert. The draft Recipe lands on the desk of a human specialist: the Human Anchor.
This is not a black box. Our engine enforces strict Versioning on every document, allowing the Human Anchor to easily audit exactly what the AI generated versus what a human has touched. The expert checks the logic, edits the Markdown, and uses our granular Visibility Controls to authorize the final publish. We get the velocity of a massive editorial team, but the human never surrenders the guarantee of truth.
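Auditing "what the AI generated versus what a human touched" is, mechanically, a diff between tagged versions. A minimal sketch of that audit, assuming a hypothetical `Version` record with an author tag (not our actual versioning model):

```python
import difflib
from dataclasses import dataclass

@dataclass
class Version:
    number: int
    author: str  # "ai" or "human"
    body: str

def audit_trail(versions):
    """For each revision, report who made it and the exact line changes."""
    trail = []
    for prev, curr in zip(versions, versions[1:]):
        diff = list(difflib.unified_diff(
            prev.body.splitlines(), curr.body.splitlines(),
            lineterm="", n=0))
        trail.append((curr.number, curr.author, diff))
    return trail
```

Because every version carries an author tag, the Human Anchor can see at a glance which lines the machine wrote and which lines a person corrected.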
5. Context-Aware Media Bots
Once the logic is verified, media production drops to zero latency. Our editors do not hunt for stock photos. They click "Add Image" inside our custom editor, and our context-aware AI bots instantly read the surrounding Markdown, ask for a style preference, and generate the exact visual asset required on the spot.
To provide multi-modal learning, the engine generates an optimized prompt alongside the verified text. We then hand over those assets to NotebookLM to instantly produce podcast-style audio supplements. Total production time is measured in seconds.
6. Answer Engine Optimization (AEO)
We do not just build our platform for humans. Every solution we generate is automatically structured for Answer Engine Optimization (AEO). This guarantees that when external Large Language Models (like ChatGPT, Claude, or Perplexity) crawl our Spaces, they can instantly parse our architecture, ingest our pristine logic, and cite us as a definitive source of truth.
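In practice, one common way to make content parseable by answer engines is schema.org structured data embedded as JSON-LD. This is an illustration of that general technique, not Collab365's actual markup:

```python
import json

def to_jsonld(question: str, answer: str, author: str) -> str:
    """Emit a schema.org QAPage block that answer engines can parse."""
    doc = {
        "@context": "https://schema.org",
        "@type": "QAPage",
        "mainEntity": {
            "@type": "Question",
            "name": question,
            "acceptedAnswer": {
                "@type": "Answer",
                "text": answer,
                "author": {"@type": "Person", "name": author},
            },
        },
    }
    return json.dumps(doc, indent=2)
```

A crawler that understands schema.org can lift the question, the verified answer, and the human author straight out of this block without parsing any surrounding HTML.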
7. The Pulse & Dynamic RSS
A static LMS decays the moment it is published. A Space stays alive. We built a daily autonomous workflow called The Pulse. Every morning, the Pulse wakes up, looks at the specific Avatar of the Space, reads the latest news and tech releases, and autonomously publishes only the information the Avatar actually needs to keep up.

To distribute this seamlessly, the platform generates a custom RSS Feed that can aggregate one or more Spaces. Members can subscribe and read their personalized, living intel directly in their favorite readers. This completely replaces the endless slog of manually compiling generic daily email digests.
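Merging several Spaces into one feed is standard RSS 2.0. A minimal sketch using Python's standard library (the item format and Space-prefix convention are assumptions for illustration):

```python
import xml.etree.ElementTree as ET

def build_feed(title, items):
    """Build an RSS 2.0 feed.

    items: list of (space, headline, link) tuples, newest first.
    """
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = title
    for space, headline, link in items:
        item = ET.SubElement(channel, "item")
        # Prefix each headline with its Space so one feed can merge many Spaces.
        ET.SubElement(item, "title").text = f"[{space}] {headline}"
        ET.SubElement(item, "link").text = link
    return ET.tostring(rss, encoding="unicode")
```

Any standard reader can subscribe to the resulting URL, which is what lets the Pulse land in a member's existing tools instead of another email inbox.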
8. The Invisible LMS Environment
We realized that while the traditional LMS content model was broken, the need to manage access was not. So we built an entirely new and invisible Membership Environment directly into the platform. It includes granular User and Team management, secure role-based access, and deep visibility controls. It gives us all the governance of an enterprise LMS, but without the massive bloat. It is designed purely to support a living intelligence rather than static SCORM packages.
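Role-based visibility like this typically reduces to comparing a user's role level against a document's visibility floor. A deliberately tiny sketch of that check (the role names and levels are hypothetical, not our actual access model):

```python
# Hypothetical role hierarchy; higher number means broader access.
ROLE_LEVELS = {"viewer": 1, "member": 2, "editor": 3, "admin": 4}

def can_view(user_role: str, doc_visibility: str) -> bool:
    """A document is visible if the user's role meets its visibility floor.

    Unknown roles get level 0 (no access); unknown visibility settings
    default to an impossibly high floor, failing closed.
    """
    return ROLE_LEVELS.get(user_role, 0) >= ROLE_LEVELS.get(doc_visibility, 99)
```

Failing closed on anything unrecognized is the important governance choice: a misconfigured document is hidden, never accidentally public.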
Closing The Circuit
Let's be completely honest for a second.
We didn't just build this to create better content. We built this to survive.
Remember the AI Wrapper Trap we talked about back in chapter three? The one where businesses are just bolting ChatGPT onto legacy data and crossing their fingers? Well, this is the exact opposite of that.
When we assemble those eight pieces, we completely change the operational math of the business. We took every single laborious friction point (anything that takes a human more than five minutes to do) and we automated it.
But here is the critical part. We still hand the final authority back to a human expert. We stripped out the grueling administrative slog, but we absolutely kept the liability shield.
The result? We no longer have an LMS. We have a proprietary intelligence asset. And I was able to build the entire engine from scratch in less than four months.
Now. Will it actually work?
The honest truth is we won't know for sure until we launch the public Beta in May and let real people inside. But here is the thing. Even if this very first version isn't perfect, we have given ourselves an unshakeable foundation to succeed.
Because we built this on a completely native AI foundation (rather than fighting with a decaying legacy LMS), if the market feedback tells us we need to pivot, we can. We don't need a massive development team to rewrite the codebase. We just adjust the instructions, redirect the orchestrator, and the entire platform pivots with us instantly.
But building an engine this ambitious introduced a terrifying new problem. Running constant AI workflows, deep research loops, and daily RAG orchestrations on traditional enterprise servers is violently expensive.
To make the Collab365 Intelligence Engine financially viable at scale, we had to do something radical. We had to throw out the servers entirely.