Principles
What these tools have in common.
The three CLIs were built independently for different problems, but they share a small set of convictions. If you find yourself disagreeing with one of these, the suite probably isn't the right fit for you — and that's fine.
0 — Agents do the operations; humans approve the commitments
These tools were designed for a world where an LLM agent does most of the operational work and a human is in the loop only for the gates that legitimately need a human. Drafting, scoring, proposing amendments, sending, tracking, verifying — agent. Signing a contract, accepting a final position, overruling an escalation — human, with a deliberate gesture.
Concretely, that shows up as: the sign-cli MCP server exposes every command to any MCP-aware client; per-signer approval tokens are scoped to a signer's email and have a TTL, so a runaway agent can't sign on a human's behalf; negotiation rounds are hash-chained so a human can audit what an agent did between sessions; findings carry severity scores so escalation to a human is automatic for the things humans actually need to see.
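As a minimal sketch of the token idea, consider a record scoped to one signer's email with an expiry. The field names, values, and helper below are illustrative assumptions, not sign-cli's actual token schema:

```python
import time

# Hypothetical token record; the field names are illustrative, not sign-cli's schema.
token = {
    "signer_email": "counsel@example.com",
    "issued_at": 1717000000,   # Unix seconds at issue time
    "ttl_seconds": 3600,       # token is useless an hour later
}

def token_allows(token: dict, signer_email: str) -> bool:
    """True only if the token is scoped to this signer and hasn't expired."""
    if token["signer_email"] != signer_email:
        return False                                 # wrong signer: refuse
    return time.time() < token["issued_at"] + token["ttl_seconds"]

# An agent presenting someone else's token, or a stale one, gets nothing.
assert not token_allows(token, "someone-else@example.com")
```

The scoping and the TTL are what keep the human gesture meaningful: the agent can prepare everything, but it can't mint or reuse an approval on a human's behalf.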
This is the principle every other principle on this page is in service of. If you only care about local-first or deterministic behavior because you want to operate the tools yourself, that works too — but the design is optimized for the agent-plus-human shape.
1 — Local-first by default
Every tool runs offline by default. If a step needs the network — calling an LLM, posting to a signature provider, hitting a remote PDF backend — it's behind an explicit flag, with the destination disclosed before the call goes out. Nothing phones home. There is no telemetry endpoint to disable because there is no telemetry.
2 — Deterministic where possible
Same input plus same configuration should produce the same output. Reviews don't change between runs. Drafts don't include a timestamp inside the contract body. Signature receipts hash-chain the events so a third party can replay the audit trail and verify it independently.
LLM augmentation is opt-in and isolated under its own keys in the output — the deterministic rule output is never overwritten by the model's suggestions.
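To illustrate that isolation, a review output might keep rule findings and model suggestions under separate keys. The shape below is an assumption for the sake of the example, not the tools' actual schema:

```python
# Illustrative output shape only; the real review schema may differ.
review = {
    "findings": [                  # deterministic rule output, stable across runs
        {"rule": "limitation-of-liability", "severity": 8, "clause": "9.2"},
    ],
    "llm": {                       # present only when LLM augmentation is explicitly enabled
        "suggestions": [
            {"clause": "9.2", "note": "Consider a mutual cap tied to fees paid."},
        ],
    },
}
```

Because the model's output lives under its own key, a consumer can ignore it entirely, and the deterministic findings stay byte-for-byte reproducible.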
3 — Audit-grade evidence baked in
These tools are useful precisely when you need to defend what happened. Negotiation rounds are hash-chained. Signature events are hash-chained and RFC 3161 timestamped. Draft outputs include the policy snapshot they were rendered against. If you ever have to reconstruct who agreed to what, on which day, and why, the evidence is on disk in a format another tool can read.
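Here is a rough sketch of what replaying a hash chain can look like, using only the Python standard library. The field names (prev, payload, hash) and the choice of SHA-256 over canonical JSON are assumptions for illustration; the actual receipt format may differ:

```python
import hashlib
import json

def verify_chain(events: list[dict]) -> bool:
    """Recompute each event's hash from its payload plus the previous link.

    Assumes each event carries 'prev', 'payload', and 'hash' fields; the
    real receipt format may name or serialize these differently.
    """
    prev = "0" * 64                        # genesis value for the first link
    for event in events:
        if event["prev"] != prev:
            return False                   # an event was removed or reordered
        material = json.dumps(event["payload"], sort_keys=True) + prev
        expected = hashlib.sha256(material.encode()).hexdigest()
        if event["hash"] != expected:
            return False                   # a payload was altered after the fact
        prev = expected
    return True
```

The point is that verification needs nothing beyond the events on disk: no vendor service has to be consulted to confirm the trail.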
4 — Composable over monolithic
Each CLI does one thing. The output of one tool is a standard file format that any other tool can read. There's no shared database, no daemon process, no proprietary glue. If a better DOCX-to-PDF converter shows up tomorrow, swap it in. If you'd rather sign with a notary than an e-signature provider, skip sign-cli entirely. The workflow doesn't care.
5 — Honest failure
When something can't work, the tool says so loudly and exits with a non-zero status. Documents don't get silently mangled. Fonts that aren't installed get flagged before conversion, not substituted invisibly. Hash-chain mismatches halt processing. There's no "best effort" mode where the tool quietly does the wrong thing — that's a category of bug that costs people trust, and trust is the whole point.
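The pattern is deliberately boring: check, report, exit non-zero. A hypothetical pre-flight check might look like the sketch below; the helper name, message text, and font scan are made up for illustration:

```python
import sys

def die(message: str, code: int = 1) -> None:
    """Print to stderr and exit non-zero; the caller decides what happens next."""
    print(f"error: {message}", file=sys.stderr)
    sys.exit(code)

# Hypothetical result of a font scan before DOCX-to-PDF conversion.
missing_fonts = ["Garamond Premier Pro"]
if missing_fonts:
    die(f"fonts not installed: {', '.join(missing_fonts)}; refusing to convert")
```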
6 — No new runtime dependencies, when avoidable
The Python CLI is stdlib-only at runtime — no Flask, no requests, no SDKs. The Node CLIs keep their dep trees small and audited. This isn't dogma; it's a forcing function for clarity. Fewer dependencies mean a smaller attack surface, fewer install surprises, and code that's easier to read end to end.
7 — Open source, MIT licensed
Every line of code is on GitHub under the MIT license. Fork it, vendor it, modify it for your firm's house style. The only thing we ask is the standard MIT attribution; that's it.
What we deliberately don't do
- No multi-tenant SaaS. Each install belongs to one user or one team. There is no shared backend to compromise.
- No clause-by-clause AI-generated drafts. The tools render templates with your policy's preferred language; they don't fabricate clauses on the fly.
- No "automatic" signing without explicit per-signer approval tokens. Every signature requires a human gesture, even when an agent kicked off the workflow.
- No telemetry or usage analytics. We don't know who installs these tools. We'd rather not.
- No ML-driven contract generation. LLM features (where they exist) augment a deterministic baseline; they don't replace it.
Where this comes from
Most legal-tech products optimize for vendor-side metrics: lock-in, daily active users, premium tiers. The three CLIs here optimize for the user-side metric instead — would I, the person doing actual contract work, trust this output enough to put my name on it? Everything in the design follows from that question.