Mostly Harmless

An AI lab building, securing, and teaching the systems that build the systems — quietly, and with a towel.

What we do

We build, secure, and teach AI systems that act on their own.

01 AI Security

We find what your model didn't expect to be asked. Adversarial testing, threat modeling, and security review for production AI. Most engagements start with a focused assessment — prompt injection, jailbreaks, agent tool misuse — and end with concrete hardening recommendations and a report your auditors will accept.

02 Training

Specialized training for technical teams putting AI into production. Practical, deep, and free of magical thinking. Topics range from secure prompting and evaluation harnesses to agent design and deployment operations. Delivered as half-day workshops, multi-day cohort programs, or bespoke curriculum for in-house teams.

03 Agentic AI

Designing and shipping autonomous systems that mostly behave themselves. From prototype to production. Research agents that synthesise sources, support agents that shorten ticket queues, internal automation that turns five-step processes into one. We work with any model provider and any agent framework — or build one ourselves when off-the-shelf doesn't fit.

04 Research

Public notes from the strange edges of the field — safety, alignment, and the experiments worth telling you about. We share what we learn: write-ups of new attack classes, evaluation methods, and the small tools we wish someone else had built. Read because it's interesting; hire us when it lines up with what you need.

An example

What our deliverables look like.

A pre-deployment threat model for an example multi-agent system — the same structure and depth as a real engagement. System context, trust boundaries, prioritised risks, and recommended mitigations. The system is fictional; the format isn't.

Download example

PDF · 240 KB · public example

Get in touch

Tell us what you're working on.

or email info@mhl42.ai