Our vision: where we're headed
Today, most fermentation data systems tell you that something changed. We're building Strand to answer why it changed—and what to do next.
Our vision: a reinforcement loop that closes on every run. Ingest signals with provenance, update a mechanistic-causal graph, propose the next policy, observe results, then tighten both model and recommendations together. Each iteration makes the next one smarter.
This page describes where we're going, not what exists today. We're building toward this future one layer at a time.
Most tools stop at Level 1. We're architecting for Level 3 and building upward from where we are now.
"See all my data in one place"
Ingest CSV/XLSX files with column mapping. Store measurements with file provenance. Generate SPC charts with outlier detection. Flag and triage anomalies.
"Help me understand this run"
Chat with your causal graph: it spots drift early, lines up the runs that matter, and explains the why in plain English.
"What CAUSED Run 11 to outperform?"
Mechanistic models plus causal inference. Surfaces why things changed, proposes the next move, and tightens with every run across customers.
Mechanistic priors will seed the initial graph. Then observational data and every new run will update edge weights. We'll exploit natural variation in your lab—operator differences, timing noise, equipment drift—the way randomized trials do, checking each edge against physics before recommending a move.
Variance you already have will be treated as instrumented randomization. The model will learn causal lift per lever while tracking uncertainty and data provenance.
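To make that concrete, here is a toy sketch (synthetic numbers, hypothetical lever and signal names) of how operator-to-operator variation could serve as an instrument for estimating the causal lift of a single lever, rather than a confounded correlation:

```python
# Illustrative sketch: treating operator-to-operator variation as a natural
# instrument for a process lever (hypothetical names, synthetic numbers).
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Synthetic "lab history": which operator ran the batch (the instrument),
# when the feed was started (the lever), and the resulting titer (the outcome).
operator = rng.integers(0, 2, size=n)             # two operators with different habits
confounder = rng.normal(size=n)                   # unobserved batch-to-batch drift
feed_start_hr = 12 + 2.0 * operator + confounder + rng.normal(scale=0.5, size=n)
titer = 5.0 + 0.8 * feed_start_hr + 1.5 * confounder + rng.normal(scale=0.3, size=n)

def ols_slope(x, y):
    """Slope of a simple linear regression of y on x."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

# Naive regression is biased by the confounder; two-stage least squares
# uses only the variation in feed timing that the operator "randomized".
naive = ols_slope(feed_start_hr, titer)

stage1 = np.column_stack([np.ones(n), operator])
feed_hat = stage1 @ np.linalg.lstsq(stage1, feed_start_hr, rcond=None)[0]
iv_estimate = ols_slope(feed_hat, titer)

print(f"naive slope: {naive:.2f} titer/hr (confounded)")
print(f"IV slope:    {iv_estimate:.2f} titer/hr (closer to the true 0.8)")
```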
Model stack
Physical constraints bound edge directions, while Bayesian structure learning scores competing graphs.
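A toy sketch of what that scoring step could look like, with made-up variable names and a simple BIC-style score standing in for the real scorer: edges that contradict known physical direction are excluded before any candidate structure is scored.

```python
# Toy sketch: score candidate causal structures for one node, after
# discarding edges that contradict known physical direction.
import itertools
import numpy as np

rng = np.random.default_rng(1)
n = 150
data = {
    "feed_rate": rng.normal(size=n),
    "DO": rng.normal(size=n),
}
data["titer"] = 0.9 * data["feed_rate"] + 0.2 * rng.normal(size=n)

# Physics says titer cannot cause feed_rate or DO, so only these edges are allowed.
allowed_parents = {"titer": {"feed_rate", "DO"}}

def bic_score(y, parents):
    """BIC of a linear-Gaussian model y ~ parents (higher is better)."""
    cols = [np.ones_like(y)] + [data[p] for p in parents]
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid.var()
    loglik = -0.5 * len(y) * (np.log(2 * np.pi * sigma2) + 1)
    return loglik - 0.5 * X.shape[1] * np.log(len(y))

candidates = []
for k in range(len(allowed_parents["titer"]) + 1):
    for parents in itertools.combinations(sorted(allowed_parents["titer"]), k):
        candidates.append((bic_score(data["titer"], parents), parents))

for score, parents in sorted(candidates, reverse=True):
    print(f"titer <- {parents or '(no parents)'}: BIC {score:.1f}")
```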
Decision readiness
Counterfactual simulations estimate lift and risk bounds before a recommendation leaves the loop.
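Roughly, and with illustrative numbers only, that gate could look like the sketch below: sample the remaining uncertainty in an edge's effect, simulate the proposed change, and report expected lift alongside a downside bound.

```python
# Sketch: Monte-Carlo counterfactual check before a recommendation ships.
# Hypothetical numbers; the real loop would draw from the learned posterior.
import numpy as np

rng = np.random.default_rng(2)

# Posterior belief about the effect of raising feed_rate by +0.5 g/L/h
# (mean lift in titer per unit change, with remaining uncertainty).
effect_samples = rng.normal(loc=0.8, scale=0.25, size=10_000)
proposed_delta = 0.5
baseline_titer = 12.0

run_noise = rng.normal(scale=0.2, size=effect_samples.size)   # run-to-run noise
simulated_titer = baseline_titer + effect_samples * proposed_delta + run_noise

lift = simulated_titer - baseline_titer
expected_lift = lift.mean()
risk_bound = np.percentile(lift, 5)            # 5th percentile: plausible downside
ci_low, ci_high = np.percentile(lift, [2.5, 97.5])

print(f"expected lift: {expected_lift:+.2f} g/L")
print(f"95% interval:  [{ci_low:+.2f}, {ci_high:+.2f}] g/L")
print(f"downside (5th pct): {risk_bound:+.2f} g/L")
if risk_bound < 0:
    print("-> recommendation held back: downside risk not yet ruled out")
```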
We'll train on outcomes (titer, quality, yield) rather than proxies, then trace back to the levers you can actually adjust. Each recommendation will return its causal chain, expected lift, confidence interval, and what data would tighten it further.
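The shape such a recommendation could take (field names are placeholders, not a committed schema):

```python
# Illustrative shape of a recommendation payload (field names are placeholders).
from dataclasses import dataclass, field

@dataclass
class Recommendation:
    lever: str                        # the adjustable input being changed
    proposed_change: str              # human-readable description of the move
    causal_chain: list[str]           # path through the graph that justifies it
    expected_lift: float              # predicted outcome improvement
    confidence_interval: tuple[float, float]
    data_to_tighten: list[str] = field(default_factory=list)  # what to measure next

rec = Recommendation(
    lever="feed_rate",
    proposed_change="raise feed rate from 2.0 to 2.5 g/L/h",
    causal_chain=["feed_rate", "glucose_uptake", "growth_rate", "titer"],
    expected_lift=0.4,
    confidence_interval=(0.1, 0.7),
    data_to_tighten=["off-gas CO2 during the feed ramp"],
)
print(rec)
```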
Each run will be a step in an RL loop: align data to the graph, update causal beliefs, propose a policy, observe outcomes, and shrink uncertainty before the next move. Over time, the system learns what drives your process.
Ingest runs, normalize units, and map signals to the causal graph so every feature has provenance and confidence.
Mechanistic priors and causal inference update edge weights every run, so correlations are tested against physics.
Recommendations are treated as actions in a reinforcement loop, not one-off tips.
As telemetry and QC results arrive, the loop scores predictions vs. reality and tightens both priors and policy.
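Put together, one pass of the loop could look like the skeleton below, with stub functions standing in for the real components:

```python
# Skeleton of one loop iteration; each stub stands in for a real component.
def ingest(run_files):
    """Normalize units and map raw signals onto graph nodes, with provenance."""
    return {"feed_rate": 2.0, "titer": 12.0, "source": run_files}

def update_beliefs(graph, aligned_run):
    """Update edge-weight posteriors from the new run (causal inference step)."""
    graph["feed_rate->titer"]["n_runs"] += 1
    return graph

def propose_policy(graph):
    """Pick the lever change with the best risk-adjusted expected lift."""
    edge = graph["feed_rate->titer"]
    delta = 0.5 if edge["weight"] > 0 else -0.5
    return {"lever": "feed_rate", "delta": delta}

def observe(policy):
    """Run the batch (or read back telemetry/QC once it completes)."""
    return {"titer": 12.3}

def score(prediction, outcome):
    """Compare prediction vs. reality; the error feeds back into priors and policy."""
    return outcome["titer"] - prediction

graph = {"feed_rate->titer": {"weight": 0.8, "n_runs": 40}}
for run in ["run_012.csv", "run_013.csv"]:
    aligned = ingest(run)
    graph = update_beliefs(graph, aligned)
    policy = propose_policy(graph)
    predicted = aligned["titer"] + graph["feed_rate->titer"]["weight"] * policy["delta"]
    outcome = observe(policy)
    print(run, "prediction error:", round(score(predicted, outcome), 2))
```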
We'll encode fermentation biology as a directed graph seeded with mechanistic priors. Every run will update the edge weights via causal inference, so recommendations stay anchored in biology while adapting to your specific process and organism.
Example causal pathway: Upstream → Process → Outcomes
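A minimal sketch of how that seeded graph could be represented (illustrative nodes and prior weights, with networkx as a stand-in for the real store):

```python
# Minimal sketch: a directed graph seeded with mechanistic priors.
# Edge attributes (illustrative values) carry both the prior belief and
# how much evidence stands behind it; new runs would update these in place.
import networkx as nx

graph = nx.DiGraph()
priors = [
    ("feed_rate", "glucose_uptake", 0.9),
    ("glucose_uptake", "growth_rate", 0.7),
    ("dissolved_oxygen", "growth_rate", 0.6),
    ("growth_rate", "titer", 0.8),
]
for cause, effect, weight in priors:
    graph.add_edge(cause, effect, prior_weight=weight, posterior_weight=weight,
                   evidence_runs=0, source="mechanistic prior")

# After each run, a causal-inference update would nudge posterior_weight and
# increment evidence_runs, leaving the mechanistic prior intact for audit.
for cause, effect, attrs in graph.edges(data=True):
    print(f"{cause} -> {effect}: prior={attrs['prior_weight']}, "
          f"runs={attrs['evidence_runs']}")
```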
Four compounding advantages that will make the system smarter with every experiment across every customer.
Biology will be encoded as causal structure, not relearned from scratch. Mechanistic priors will keep the model grounded while data keeps it honest.
We'll extract causation from variance you already have: operator differences, timing noise, equipment drift. It will function like a randomized trial without the experimental burden. Inspired by JURA Bio's causal inference work.
Every run across all customers will strengthen the shared causal structure. We'll share graph updates, not raw data, to remain privacy-preserving while accelerating learning.
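For illustration, a shared update could carry only aggregated edge-level statistics (field names hypothetical), pooled across sites without any raw measurements leaving a lab:

```python
# Illustrative sketch of a cross-customer update: only aggregated edge-level
# statistics leave the site, never raw measurements or batch records.
from dataclasses import dataclass

@dataclass(frozen=True)
class EdgeUpdate:
    cause: str
    effect: str
    effect_mean: float      # local posterior mean for the edge weight
    effect_var: float       # local posterior variance
    n_runs: int             # how many runs back this estimate

def pool(updates):
    """Precision-weighted pooling of per-site edge estimates (one simple choice)."""
    weights = [1.0 / u.effect_var for u in updates]
    mean = sum(w * u.effect_mean for w, u in zip(weights, updates)) / sum(weights)
    var = 1.0 / sum(weights)
    return mean, var, sum(u.n_runs for u in updates)

site_a = EdgeUpdate("feed_rate", "titer", effect_mean=0.82, effect_var=0.04, n_runs=35)
site_b = EdgeUpdate("feed_rate", "titer", effect_mean=0.74, effect_var=0.09, n_runs=12)

mean, var, runs = pool([site_a, site_b])
print(f"pooled feed_rate -> titer: {mean:.2f} ± {var ** 0.5:.2f} (from {runs} runs)")
```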
Each loop will tighten the policy: better priors → better experiments → better priors. New customers will start where others left off, not from zero.
We'll strengthen data ingestion, provenance tracking, and mechanistic priors; begin running causal graph updates on every batch of telemetry; and start shipping counterfactual estimates with confidence bounds before recommendations go live.
We'll move from single recommendations to full run-sheet policies scored by RL rewards that balance lift, risk, and cost. Policies will be simulated against historical runs before deployment, with live confidence monitoring throughout execution.
Cross-customer causal learning with privacy-safe graph updates, automated counterfactual design, and fully closed-loop experimentation. Every run will sharpen both mechanistic priors and operational policy simultaneously.
If this vision resonates with how you think about fermentation, we'd love to talk.