MicroCoreOS is a Python framework built on the Atomic Microkernel Architecture: an implementation where every feature lives in a single file, allowing AI agents to master any plugin in seconds, not minutes.
```python
from core.base_plugin import BasePlugin
from domains.users.models.user_model import UserModel, UserListResponse


class ListUsersPlugin(BasePlugin):
    def __init__(self, http, db, logger):
        self.http = http
        self.db = db
        self.logger = logger

    def on_boot(self):
        self.http.add_endpoint(
            path="/users",
            method="GET",
            handler=self.execute,
            tags=["Users"],
            response_model=UserListResponse,
        )
        self.logger.info("ListUsersPlugin: Endpoint /users registered with schema.")

    def execute(self, data: dict, context=None):
        # Select only safe columns -- never ship password hashes to the client.
        rows = self.db.query("SELECT id, name, email FROM users")
        users = [UserModel.from_row(row).to_dict() for row in rows]
        return {"success": True, "users": users}
```

In traditional layered architectures, adding a simple CRUD endpoint means explaining entities, repositories, factories, controllers, and DTOs to your AI. That's 6-8 files and 200+ lines of code for one endpoint.
> "I've identified 7 files to modify...
> Creating repository...
> Injecting into factory..."
- All logic, routing, and operations in one atomic plugin file.
- An auto-generated `AI_CONTEXT.md` describes everything available.
- The core knows zero business rules. It only handles DI and lifecycle.
- Add your own infrastructure (Redis, Kafka, S3) in minutes. Pure plug-and-play.
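As an illustration of what a drop-in tool might look like, here is a minimal key-value cache driver standing in for Redis. The class name and method surface are hypothetical, not the framework's confirmed API, and in a real tool the dict would be a thin wrapper around an actual Redis connection.

```python
class CacheTool:
    """Hypothetical drop-in tool: a key-value driver standing in for Redis.

    The in-memory dict is only for illustration; a real tool would hold
    a connection to external infrastructure instead of local state.
    """

    def __init__(self):
        self._store = {}

    def set(self, key: str, value) -> None:
        self._store[key] = value

    def get(self, key: str, default=None):
        return self._store.get(key, default)
```

Because a tool is just a class with a small, neutral surface, swapping the dict for a Redis client later changes nothing for the plugins that consume it.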
Three layers. Each one maps directly to a folder in your project — nothing hidden, nothing implicit.
No direct imports. No coupling. Plugins talk through the bus, never to each other. The EventBus is itself a Tool: just another stateless driver living in `tools/event_bus/`.
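A minimal sketch of such a bus driver, assuming a plain publish/subscribe surface (the document only shows `publish`; `subscribe` and the internal layout are assumptions):

```python
from collections import defaultdict
from typing import Any, Callable

class EventBus:
    """Sketch of a publish/subscribe driver like tools/event_bus/.

    Plugins never import each other; they only publish to topics and
    react to topics they subscribed to.
    """

    def __init__(self):
        # topic name -> list of handler callables
        self._subscribers: dict[str, list[Callable[[Any], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[Any], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, payload: Any) -> None:
        # Deliver to every subscriber of this topic; unknown topics are a no-op.
        for handler in self._subscribers[topic]:
            handler(payload)
```

A plugin that creates a user would simply call `bus.publish("users.created", user)`; any other plugin interested in that event subscribes to the topic, with no import between the two.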
Each in its own file. None imports another.
Still no direct import. Still self-contained.
Everything has its place. The kernel auto-discovers plugins in the `/domains` folder and wires them to the `/tools` they need: zero manual wiring, zero config files.
Your AI assistant knows exactly where to put new code. No more explaining layers, factories, or DI containers. The structure is the documentation.
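The auto-discovery step can be sketched as a simple filesystem scan that turns plugin files into importable module paths. The `*_plugin.py` naming convention and the function name below are illustrative assumptions, not the kernel's confirmed implementation:

```python
from pathlib import Path

def discover_plugins(domains_dir: str) -> list[str]:
    """Scan the domains folder and return dotted import paths for plugins.

    Assumes (for illustration) that plugin modules end in `_plugin.py`;
    the returned paths can then be loaded with importlib.import_module.
    """
    root = Path(domains_dir)
    modules = []
    for path in sorted(root.rglob("*_plugin.py")):
        # domains/users/plugins/list_users_plugin.py
        #   -> "domains.users.plugins.list_users_plugin"
        rel = path.relative_to(root.parent).with_suffix("")
        modules.append(".".join(rel.parts))
    return modules
```

Dropping a new file into `/domains` is enough for a scan like this to pick it up on the next boot; no registry edits, no config.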
~240 lines. Read it once, understand everything.
Stateless. Config reads from env/configmaps at boot — no mutable state, ever.
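A sketch of what boot-time, environment-only configuration could look like; the variable names and the `Config` shape are illustrative, not the framework's:

```python
import os
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen: the config is immutable after boot
class Config:
    db_url: str
    log_level: str

def load_config() -> Config:
    """Read configuration once from the environment at boot.

    Env var names and defaults here are hypothetical examples.
    """
    return Config(
        db_url=os.environ.get("DATABASE_URL", "sqlite:///app.db"),
        log_level=os.environ.get("LOG_LEVEL", "INFO"),
    )
```

Because the dataclass is frozen, any attempt to mutate configuration after boot raises immediately, which keeps the "no mutable state, ever" rule enforceable rather than aspirational.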
1 file = 1 feature. Self-contained — delete it and nothing breaks.
No service locators. No factory calls. No `container.get()`. Just declare what you need in your constructor: the kernel delivers it.
List what you need in `__init__`. The kernel reads your signature and knows exactly what to inject.
Boot time. Zero config files. The orchestrator resolves the dependency graph and delivers each tool instance automatically.
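Signature-driven injection of this kind can be sketched with the standard library's `inspect` module. The `resolve` helper and the `tools` registry below are hypothetical names used for illustration, not the kernel's actual internals:

```python
import inspect

def resolve(plugin_cls, tools: dict):
    """Inject tool instances into a plugin by matching constructor
    parameter names against a registry of booted tools (sketch)."""
    params = inspect.signature(plugin_cls.__init__).parameters
    kwargs = {
        name: tools[name]
        for name in params
        if name != "self" and name in tools
    }
    return plugin_cls(**kwargs)
```

Given the registry `{"db": ..., "logger": ..., "http": ...}`, a plugin whose constructor reads `def __init__(self, db, logger)` receives exactly `db` and `logger` and nothing else: the signature really is the contract.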
`self.tool` is all it takes. Use any tool anywhere in your plugin. Your constructor is the contract: the AI reads it and instantly knows what the plugin needs.
The constructor is not boilerplate — it's the dependency specification. One glance tells you everything a plugin needs to function.
```python
import unittest
from unittest.mock import MagicMock

from domains.users.plugins.create_user_plugin import CreateUserPlugin


class TestCreateUser(unittest.TestCase):
    def setUp(self):
        # 1. Trivial mocking of dependencies
        self.mock_db = MagicMock()
        self.mock_bus = MagicMock()
        # ...plus any other mocked tools the plugin declares
        self.plugin = CreateUserPlugin(db=self.mock_db, event_bus=self.mock_bus)

    def test_execute_success(self):
        self.mock_db.execute.return_value = 42
        result = self.plugin.execute({"name": "Test", "email": "test@test.com"})
        # ✅ Verify logic AND side effects (events, DB)
        assert result["success"] is True
        self.mock_db.execute.assert_called_once()
        self.mock_bus.publish.assert_called_with("users.created", result["user"])
```

Stop fighting with DI containers in your tests. Because every plugin is a simple class with an explicit constructor, you can test it in perfect isolation.
- **Mocking is trivial:** No complex setup. Just pass your mocks directly.
- **Side-effect verification:** Ensure DB calls and EventBus pings happen exactly as expected, without a real environment.
- **Develop as a product:** Program a feature in the morning, test it locally in the afternoon, drop it into the `domains/` folder at night.
- **Implicit integration:** If it passes its unit tests, the kernel handles the wiring. It will work.
How much context your AI needs to understand a single feature:

- **MicroCoreOS:** ~1,000 estimated tokens
- **Clean Architecture:** ~4,000 estimated tokens (4x more bloat)
- **Development cost:** 1 file per feature
Designed to be simple, auditable, and rigid where it matters.
- Zero business logic in the core. It is a neutral orchestrator that boots what you drop in `/domains`.
- Infrastructure drivers (DB, HTTP, Bus). Reusable, stateless, and neutral capabilities.
- 1 file = 1 feature. Pure business-logic implementation. AI agents understand the whole module in one read.
- Plugins communicate via the EventBus only. Decoupled by default, scalable by design.
- Dependencies are declared in the constructor. The kernel delivers what's requested.
- Just drop a file in a folder. The kernel finds it, boots it, and wires it.
Where MicroCoreOS is heading — from observability to a full tool marketplace.
- Integrated mapping of which plugins react to which events. Full observability from day one, with no external tools required.
- Drop a new plugin file and the kernel picks it up instantly, no restart required. A zero-downtime development loop, potentially in production too.
- A visual dashboard built on top of the Tracer Tool: see event flows, plugin timings, and system health in real time. Already in early form and being improved.
- A drop-in tool ecosystem (Redis, PostgreSQL, LLMs) as self-contained folders with their own manifests and AI instructions. Install via `uv` or copy-paste.
- Sidecar plugins via gRPC or WASM: write performance-critical modules in Go or Rust, orchestrated by the same MicroCoreOS kernel.
Stop wasting tokens and time. Join the 1-file revolution and build features that AI understands instantly.