Python Driver SDK (Tier 2)
Python drivers are .driver.py files. They run as a separate subprocess per active connector, communicating with MuxitServer over line-delimited JSON-RPC on stdin/stdout. Use this tier when you need the Python ecosystem (numpy, torch, transformers, instrument SDKs) — anything else is faster as a JS driver.
Requirements
- Python 3.10 or newer on the host machine. Muxit probes `MUXIT_PYTHON` (env var with full path) → `python3` on PATH → `python` on PATH, in that order. The first interpreter that reports a usable version wins.
- If no Python is found, Python drivers are silently skipped at scan time (with a one-line warning per `.driver.py`). The rest of Muxit keeps working.
Install Python from python.org or your system package manager. No further setup is needed for drivers without third-party dependencies.
Two ways to use Python
There are two paths, in order of how much Python you want to write:
- The generic `Python` driver — point a connector at any `.py` file in `workspace/python/` and call its top-level functions. No `Driver` class, no SDK boilerplate. Start here if you mostly want to glue an existing Python library to a connector. See Generic Python driver below.
- A bespoke `.driver.py` — subclass `Driver`, declare a typed `META` block, ship as a packaged `.muxdriver`. Use this when you're building a polished driver for distribution: typed property/action schema, opt-out of safety gates, custom packaging. See Structure below.
Both paths run on the same Tier 2 host (subprocess + JSON-RPC), share the same requirements.txt auto-install behaviour, and can be debugged the same way.
Generic Python driver
The muxit/python driver (built and shipped from drivers/py/python-runner/) is a generic dispatcher: it loads any .py file you point it at and exposes that file's top-level functions as actions / properties.
A new workspace ships with four working examples in workspace/python/ and matching connectors in workspace/connectors/:
- `hello.py` + `python-hello.js` — single function, no dependencies.
- `counter.py` + `python-counter.js` — adds state, demonstrates `get_<x>()` / `set_<x>(value)` properties.
- `http-probe.py` + `python-http-probe.js` — uses `requests`, demonstrates the per-script venv (sibling `http-probe.requirements.txt`).
- `chatterbox.py` + `python-chatterbox.js` — text-to-speech via the chatterbox-tts library. Heavy-deps demo: first activation pulls torch + transformers + chatterbox into the venv (a couple of GB, several minutes); each `speak()` writes a WAV under `workspace/data/chatterbox/` and returns the path.
Open workspace/python/README.md for the full conventions list.
Layout
```
workspace/
  python/
    chatterbox.py                  # your script — plain functions, no class
    chatterbox.requirements.txt    # optional, sibling, one package per line
    .venvs/
      chatterbox/                  # auto-created on first activation
```
Your script
```python
# workspace/python/chatterbox.py
from chatterbox import TTS

_tts = None

def init(config):
    """Optional. Called once when the connector activates. Load models here."""
    global _tts
    _tts = TTS()

def speak(text):
    return _tts.synthesize(text)

# Properties: get_<name>() / set_<name>(value)
def get_status():
    return "ready" if _tts is not None else "idle"

def shutdown():
    """Optional. Called when the connector disables / server stops."""
    pass
```
That's the whole contract — no imports of `muxit_driver`, no class, no JSON. Function signatures define what the connector can call.
Connector
```js
// workspace/connectors/chatterbox.js
export default {
  driver: "Python",
  config: {
    script: "chatterbox", // → workspace/python/chatterbox.py
  },
  methods: {
    speak: [(text) => connector().speak({ text }), "speak text out loud"],
  },
  properties: {
    status: () => connector().status,
  },
};
```
Lifecycle
- First activation: if `chatterbox.requirements.txt` exists, Muxit creates `workspace/python/.venvs/chatterbox/`, runs `pip install -r` against it, and stamps a hash of the requirements file. Output streams to the server console line-by-line.
- The driver subprocess imports `chatterbox.py` with the venv's `site-packages` prepended to `sys.path`, so `from chatterbox import TTS` resolves to the venv-installed package.
- Subsequent activations: if the requirements hash matches, pip is skipped (~150 ms warm start). Otherwise the install runs again (pip handles incremental updates).
- RPC calls from the connector dispatch to the matching function: `connector().speak({text})` → `module.speak(text=...)`; `connector().counter` (read) → `module.get_counter()`; `connector().counter = 5` → `module.set_counter(5)`.
- Disable / shutdown: optional `module.shutdown()` is called, then the subprocess exits.
Limitations vs. a bespoke Driver subclass
- Property and action types are not declared in the script — the connector's `methods` and `properties` blocks are the source of truth for what's exposed and how it's typed. Auto-generated schema (used by the AI prompt and the script editor's IntelliSense) is therefore looser; for a polished public driver, use the typed `Driver` subclass route.
- No `self.emit()` for streams from a generic-driver script — streaming use cases need the `Driver` subclass route.
- One subprocess per connector (same as the typed route).
If you outgrow the generic driver, the migration is mechanical: copy your functions into a Driver subclass, add a META block with typed properties / actions, package as a .muxdriver.
Structure
```python
from muxit_driver import Driver, run

class MyDriver(Driver):
    META = {
        "name": "MyDriver",
        "version": "1.0.0",
        "description": "Short description of what this driver does.",
        "group": "instruments",       # "instruments" | "motion" | "communication" | "utilities"
        "requiresSafetyGates": True,  # optional, default True
        "properties": {
            "voltage": {"type": "double", "access": "R/W", "unit": "V",
                        "description": "Output voltage"},
        },
        "actions": {
            "reset": {"description": "Reset device"},
            "set_curve": {
                "description": "Upload a setpoint curve",
                "args": {"points": {"type": "double[]", "description": "Setpoints in V"}},
            },
        },
        "streams": ["measurements"],
    }

    def init(self, config):
        # Heavy imports (torch, numpy, ...) belong HERE, not at module level.
        # Module-level code runs at scan time too, where deps may be missing.
        from somelib import Device
        self._dev = Device(config["serial"])

    def get(self, property):
        if property == "voltage":
            return self._dev.read_voltage()
        raise KeyError(property)

    def set(self, property, value):
        if property == "voltage":
            self._dev.write_voltage(float(value))
            return
        raise KeyError(property)

    def execute(self, action, args):
        if action == "reset":
            self._dev.reset()
            return
        if action == "set_curve":
            return self._dev.upload(args["points"])
        raise KeyError(action)

    def shutdown(self):
        self._dev.close()

if __name__ == "__main__":
    run(MyDriver)
```
Sync and `async def` methods are both supported — return a coroutine and the dispatcher will await it.
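The sync/async support reduces to checking whether a handler returned a coroutine. A sketch of the pattern (not the SDK's actual dispatch loop, which awaits inside its own running event loop):

```python
import asyncio
import inspect

def dispatch(handler, *args):
    """Call a handler; if it returned a coroutine, run it to completion.
    Sketch of the await-if-coroutine pattern — the real dispatcher awaits
    inside its own event loop rather than calling asyncio.run per request."""
    result = handler(*args)
    if inspect.iscoroutine(result):
        return asyncio.run(result)
    return result
```

This is why the same `execute()` signature works whether you write it as `def` or `async def`.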
Connector starter template
Ship a template.js next to your .driver.py. It's the connector starter content users see when creating a connector for your driver. The packager copies it into the .muxdriver at the package root. Required — node drivers.js build refuses to package a driver without it.
```
drivers/py/mydriver/
├── mydriver.driver.py
├── manifest.json
├── template.js        ← starter connector (required)
└── requirements.txt   ← optional runtime deps
```
Manifest
```json
{
  "formatVersion": 1,
  "id": "you/mydriver",
  "name": "MyDriver",
  "version": "1.0.0",
  "description": "...",
  "tier": 2,
  "category": "free",
  "group": "instruments",
  "entryPoint": "mydriver.driver.py",
  "author": { "name": "Your Name" },
  "tags": ["other"]
}
```
`tier` must be 2. `entryPoint` must end with `.driver.py`. `category` must be `"free"` — premium signing for Python drivers is not yet implemented.
Runtime dependencies
Place a requirements.txt next to your .driver.py. The packager bundles it into the .muxdriver. The first time a connector built on the driver is activated, Muxit:
- Creates a per-driver virtual environment under the package's cache dir (`workspace/drivers/.cache/<id>@<version>/.venv/`).
- Runs `pip install -r requirements.txt` against that venv, streaming output to the server console and emitting `driver.python.install` events on the EventBus so the UI can show progress.
- Stamps a SHA-256 hash of `requirements.txt` in `.venv/.muxit-req-hash`. Subsequent activations check the hash; same hash → skip pip; different hash → re-run pip (incremental, pip handles the delta).
- Launches the driver subprocess with the venv's `python`, so `import torch` (or whatever the driver imports inside `init()`) resolves to the venv-installed package.
The venv is per-driver — different drivers can pin conflicting versions of the same package without colliding. The venv lives in the package cache dir, so when the user upgrades or uninstalls the driver, the venv goes with it (no orphaned installs littering the system).
If requirements.txt is absent the venv step is skipped entirely and Muxit launches the driver directly with the system interpreter (faster startup, zero disk footprint).
Caveats
- Native wheels (torch, opencv) sometimes need system-level build tools or libraries. If pip fails, the error surfaces as the driver's init error with the full pip transcript above it.
- On Debian/Ubuntu the system Python may not include the `venv` module by default — `apt install python3-venv` once if you see `python -m venv` failing.
- Loose `.driver.py` files dropped into `workspace/drivers/` (dev workflow, no `.muxdriver` packaging) do not get a managed venv — there's no cache dir for the venv to live in. Use packaged drivers when you want auto-install.
Logging and streams
Inside your driver class:
```python
self.log("opened device", level="info")  # forwarded to the Muxit console
self.emit("measurements", {"v": 12.3})   # emits on stream "measurements"
```
Anything written to stderr is forwarded to the console verbatim (useful for tracebacks, library warnings). Anything `print()`-ed to stdout is logged with a "non-json stdout" hint — use `self.log()` instead.
Debugging
Three paths from "my driver does something weird" to "fixed it", in order of how invasive they are.
1. Read the error response
When your driver raises an exception, the user-facing error message now includes the location:
```
KeyError: 'unknown property: voltage'
  at get() in mydriver.driver.py:62
  at _coerce() in mydriver.driver.py:91
```
The first line is the exception; the rest is up to 3 frames of your code (SDK frames are filtered out). The full Python traceback also goes to the server console via stderr — that's the place to look when you need to see the entire chain.
2. Run the driver standalone
You don't need to start MuxitServer to reproduce a problem. Two single-line commands are enough:
```
# Verify your META block is valid
python3 mydriver.driver.py --scan

# Drive the dispatch loop manually — paste JSON-RPC requests, see responses
python3 -u mydriver.driver.py --instance dev
> {"id": 1, "method": "init", "params": {}}
{"event": "log", "level": "info", "message": "Hello"}
{"id": 1, "result": null}
> {"id": 2, "method": "execute", "params": {"action": "greet", "args": {"name": "alice"}}}
{"id": 2, "result": "hello alice"}
```
The `--scan` mode prints `{"meta": {...}}` and exits — useful for catching typos in your `META` dict before they show up at server start. The `--instance` mode is the same JSON-RPC loop the server drives, just with you typing the requests instead.
3. Attach a debugger (debugpy + VS Code)
For step-through debugging, set MUXIT_PYTHON_DEBUG=<port> in the connector's environment (e.g. 5678). The driver will pause at startup and wait for a debugger to attach to 127.0.0.1:<port> before it runs init.
```sh
# Add debugpy to your driver's deps:
echo "debugpy" >> requirements.txt
# Then in VS Code: "Run and Debug" → "Python: Attach" on port 5678.
```
Use cases: pausing inside `init` to inspect what the device returned the first time you connect, walking through a complicated `execute()` line by line, or just dropping a `breakpoint()` somewhere and seeing it fire when the dashboard pokes the connector. `pdb.set_trace()` does not work — the subprocess's stdin is busy with JSON-RPC, so pdb's interactive prompt has nowhere to read from.
If MUXIT_PYTHON_DEBUG is set but debugpy isn't installed, the driver prints a one-line hint to stderr and continues without debugging — you'll see it in the server console.
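That degrade-gracefully behaviour can be sketched with debugpy's standard `listen` / `wait_for_client` API (illustrative; `maybe_attach_debugger` is a hypothetical name, not the host's actual startup code):

```python
import os
import sys

def maybe_attach_debugger():
    """Sketch of the MUXIT_PYTHON_DEBUG behaviour described above: listen and
    block for a debugger when the env var is set, and fall back to a stderr
    hint when debugpy isn't installed. Returns True if a debugger attached."""
    port = os.environ.get("MUXIT_PYTHON_DEBUG")
    if not port:
        return False
    try:
        import debugpy  # third-party; ship it via requirements.txt
    except ImportError:
        print("MUXIT_PYTHON_DEBUG set but debugpy not installed; continuing",
              file=sys.stderr)
        return False
    debugpy.listen(("127.0.0.1", int(port)))
    debugpy.wait_for_client()  # blocks until the IDE attaches
    return True
```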
Common pitfalls
- `(non-json stdout)` in the server log — you used `print()` somewhere instead of `self.log()`. stdout is the JSON-RPC channel; tame it with `self.log()` or send debug output to `print(..., file=sys.stderr)` instead.
- `MissingMethodException` / `ImportError` at init — your heavy imports are at module level. Move them inside `init()` so scan time stays clean and the venv install runs before the import fires.
- Driver hangs on first init — pip is probably still working on a big native wheel. Watch the server console; the `[pip]`-prefixed lines tell you what's happening.
- `python -m venv` failed on Debian/Ubuntu — `apt install python3-venv` once.
Why heavy imports go inside init
Scan time runs the module with --scan to extract META. If you import torch at the top of the file, every server start has to import torch (and the deps must be installed even on machines that don't run that driver). Putting the import inside init keeps scans fast and keeps the catalog usable on machines that haven't installed your driver's deps yet.
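The pattern in code, with a stdlib module standing in for the heavy dependency (illustrative):

```python
class ScanFriendlyDriver:
    """Sketch of the lazy-import pattern: module level stays import-light so
    --scan works even on machines that never installed the heavy deps."""

    def init(self, config):
        # Deferred heavy import — runs at activation, after pip has installed
        # deps into the venv. `statistics` stands in for torch/numpy here so
        # the sketch stays stdlib-only.
        import statistics
        self._stats = statistics

    def mean(self, values):
        return self._stats.mean(values)
```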
Wire protocol (reference)
One JSON object per line on stdin/stdout:
| Direction | Shape | Meaning |
|---|---|---|
| host → driver | `{"id": N, "method": "init" \| "get" \| "set" \| "execute" \| "shutdown", "params": ...}` | RPC request |
| driver → host | `{"id": N, "result": ...}` | success |
| driver → host | `{"id": N, "error": "..."}` | failure |
| driver → host | `{"event": "log", "level": ..., "message": ...}` | structured log (async) |
| driver → host | `{"event": "stream", "name": ..., "data": ...}` | stream emit (async) |
The SDK's run() handles all of this for you — you only see the init / get / set / execute / shutdown Python methods.
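A single turn of that protocol fits in a few lines. A sketch of the request/response half (not `run()` itself, which also handles the async log and stream events):

```python
import json

def handle_line(line, handlers):
    """One turn of the wire protocol: parse a request line, dispatch to the
    matching method handler, and return the response line. Sketch only."""
    req = json.loads(line)
    try:
        result = handlers[req["method"]](req.get("params"))
        return json.dumps({"id": req["id"], "result": result})
    except Exception as exc:
        return json.dumps({"id": req["id"], "error": str(exc)})
```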
Limitations vs. C# (Tier 3)
- No premium signing — Python drivers are free-only.
- One subprocess per active connector — modest memory overhead vs in-process JS/DLL drivers.
- Subprocess startup adds ~100–500 ms to connector init (plus a one-off `pip install` on the very first activation if the driver ships a `requirements.txt`). Subsequent activations reuse the cached venv and start in ~150 ms.
- No transport factories: open serial / TCP from the driver's own Python code (`pyserial`, `socket`, ...). The host doesn't proxy transports the way it does for JS drivers.