Every claim your code makes — verified.
The architectural compliance engine for GitHub. SOC 2, ISO 27001, PCI DSS, PSD2, GDPR, HIPAA, EU AI Act, NIST AI RMF, DORA, NIS2, CCPA — verified against your actual codebase, not your marketing deck.
Powered by ReguNav — the compliance engine.
What ships today
EU AI Act, ISO 42001/27001/27701, GDPR, HIPAA, SOC 1/2, PCI DSS, NIST AI RMF/CSF, DORA, NIS2, CCPA + more
patent-safety · no-placeholder · trademark-consistency · HF model-card
Banking, healthcare, manufacturing, SaaS, retail, public sector, energy, defence, and more
Enterprise tier with R2 Object Lock COMPLIANCE mode (Principle #45)
Compliance reviews shouldn't be quarterly slide decks.
Five concrete patterns we see at tier-1 banks and Series B+ fintechs. Each one rooted in a real engine check that ships in the platform today.
Quarterly slide-deck reviews. Findings noted in spreadsheets that nobody updates. Audit-prep is 6 weeks of evidence-gathering.
Every PR runs the same checks an auditor would run. The evidence pack is the audit deliverable — already cryptographically signed.
Engineering ships 'SOC 2 certified' on a landing page before the audit closes. Legal finds out via a customer's vendor questionnaire.
Patent-safety check fires on every PR. Unsubstantiated claims block the merge. Drift can't ship.
HuggingFace / Kaggle artefacts ship without intended-use, eval results, or training-data lineage. EU AI Act Art. 13 / ISO 42001 8.1 silently fail.
HF model-card evaluator runs on every README.md change. Missing Article 13 mandatory-disclosure fields block the merge.
DPA points to v2; sub-processor list shows last year's vendors; the SLA page references a metric the runbook deprecated.
Trust artefacts are generated from the same data plane the engine reads. There is one source of truth, not five.
Vendor says BYOC. The Terraform module hasn't been touched in a year. Three providers depend on a private endpoint that's no longer documented.
Code Constitution's customer-mirror workflow runs in YOUR runner with YOUR secrets. The vault pattern is the architecture, not a doc.
Try the engine right here.
Paste a README, a marketing claim, an engineering note — anything. The same rule patterns ship in the production engine. Findings update on every keystroke.
- fail · SOC_2 · CC1.2 · L3:8 · soc2-certified-unsubstantiated · "SOC 2 certified": Unsubstantiated SOC 2 claim. Qualify with 'audit in progress (target QX 20YY)' or 'controls mapped'.
- fail · ISO_27001 · A.5.32 · L3:28 · iso27001-certified-unsubstantiated · "ISO 27001 certified": Unsubstantiated ISO 27001 claim. Qualify or remove. Use 'controls mapped' if applicable.
- warn · FTC · §5 · L3:49 · worlds-first-superlative · "World's first": Superlatives are FTC-actionable when unsubstantiated. Replace with a verifiable claim.
- fail · Internal · — · L6:8 · fake-phone-xxx · "+966 11 XXX": Placeholder phone number. Replace with real contact.
- warn · Internal · — · L6:16 · todo-marker · "XXX": Unresolved marker. Move to issue tracker or scope before shipping.
These rules are a subset of what ships in @regunav/engines/code-verification; the real engine runs in your CI on every PR with the full rule set plus custom exemptions.
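To make the rule-pattern idea concrete, here is a minimal sketch of a keystroke-friendly claim scanner. The rule ids and messages mirror the sample findings above; the regexes, the `Finding` shape, and the rule list itself are illustrative assumptions, not the production rule set.

```python
import re
from dataclasses import dataclass

@dataclass
class Finding:
    severity: str   # "fail" blocks the merge, "warn" annotates only
    rule_id: str
    line: int
    col: int        # 1-based column of the match
    match: str
    message: str

# Hypothetical subset of the demo rules; the shipped engine carries the
# full rule set plus per-repo exemptions.
RULES = [
    ("fail", "soc2-certified-unsubstantiated",
     re.compile(r"SOC 2 certified", re.I),
     "Unsubstantiated SOC 2 claim. Qualify or remove."),
    ("warn", "worlds-first-superlative",
     re.compile(r"world'?s first", re.I),
     "Superlatives are FTC-actionable when unsubstantiated."),
    ("warn", "todo-marker",
     re.compile(r"\b(TODO|XXX|FIXME)\b"),
     "Unresolved marker. Move to issue tracker before shipping."),
]

def scan(text: str) -> list[Finding]:
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for severity, rule_id, pattern, message in RULES:
            for m in pattern.finditer(line):
                findings.append(Finding(severity, rule_id, lineno,
                                        m.start() + 1, m.group(0), message))
    return findings
```

A deterministic scanner like this is cheap enough to re-run on every keystroke in the demo and on every PR in CI.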
Nine engines, one product.
Every capability below is in the build. Nothing is roadmap. Every check has a corresponding rule pack the auditor can reference.
SOC 2 / ISO 27001 / PCI DSS unsubstantiated-claim detection. FTC superlative-watch. USPTO trademark-symbol enforcement on first occurrence.
Blocks lorem-ipsum, fake phone numbers, unscoped TODOs in production code paths, and ship-blocking 'coming soon' UI text.
First-occurrence mark detection across marketing, docs, and product copy. Catches USPTO (15 U.S.C. §1057) / EUIPO Art. 9 hygiene drift.
EU AI Act Art. 10 / 13 / 15 + ISO 42001 8.1 + NIST AI RMF MAP-3.1 conformance against any HuggingFace model README.md with YAML frontmatter.
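A minimal sketch of the frontmatter half of such a check: parse the YAML block at the top of a model README.md and report missing keys. The `REQUIRED` set is an illustrative assumption; the real evaluator maps EU AI Act Art. 13 disclosures to a concrete field list.

```python
import re

# Hypothetical required-field list for illustration only.
REQUIRED = {"license", "intended_use", "training_data", "eval_results"}

def check_model_card(readme: str) -> set[str]:
    """Return the required frontmatter keys missing from a model README."""
    m = re.match(r"^---\n(.*?)\n---", readme, re.S)
    if not m:
        return set(REQUIRED)  # no YAML frontmatter at all: everything missing
    # Naive top-level key extraction; a real implementation would use a
    # YAML parser and validate values, not just key presence.
    present = {line.split(":", 1)[0].strip()
               for line in m.group(1).splitlines()
               if ":" in line and not line.startswith((" ", "\t", "-"))}
    return REQUIRED - present
```

Because the check is pure string inspection, it can run as a blocking status on every README.md change with no model download.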
Up to 50 annotations per check run, positioned at the exact line/column. Severity-coded. Auto-collapses to summary when count exceeds the GitHub cap.
Every run produces a content-addressed pack in R2: full violation list, framework refs, control refs, file paths, timestamps. The auditor pulls it directly.
Every state-changing decision is chain-hashed (sha256(prev_hash || event)). The replay engine reconstructs any prior state. Tamper-detection per row.
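The chain-hash construction above can be sketched in a few lines. The event serialisation and genesis value here are assumptions for illustration; the point is that recomputing the chain from row one detects any edited row.

```python
import hashlib
import json

def chain_append(prev_hash: str, event: dict) -> str:
    """sha256(prev_hash || event), with the event canonically serialised
    so replay is deterministic."""
    payload = prev_hash.encode() + json.dumps(event, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def verify_chain(events: list[dict], hashes: list[str],
                 genesis: str = "0" * 64) -> bool:
    """Replay every event and compare against the stored hash column."""
    h = genesis
    for event, stored in zip(events, hashes):
        h = chain_append(h, event)
        if h != stored:
            return False  # tamper detected at this row
    return True
```

Replaying to a prior timestamp is the same loop stopped early: the state after row N is fully determined by the first N events.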
Customer secrets stay in the customer's GitHub Secrets. The mirror workflow runs in the customer's runner. We never see HF / CF / AWS / GCP / Azure tokens.
Zero long-lived shared secrets. Five-minute OIDC tokens minted by GitHub, verified against the public JWKS, cross-checked against the calling repo.
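After signature verification against the JWKS (done by any standard JWT library), the cross-checks reduce to claim comparisons. This sketch assumes GitHub's documented OIDC claim names (`iss`, `aud`, `repository`, `exp`); the expected audience value is a hypothetical placeholder.

```python
import time

GITHUB_OIDC_ISSUER = "https://token.actions.githubusercontent.com"

def check_oidc_claims(claims: dict, expected_repo: str,
                      expected_aud: str) -> bool:
    """Validate an already-signature-verified OIDC token's claims:
    right issuer, right audience, right calling repo, not expired."""
    return (
        claims.get("iss") == GITHUB_OIDC_ISSUER
        and claims.get("aud") == expected_aud
        and claims.get("repository") == expected_repo
        and claims.get("exp", 0) > time.time()  # short-lived tokens expire fast
    )
```

Because the token is minted per-run and expires in minutes, there is nothing long-lived to rotate or leak.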
A category gap, not another scanner
Existing tools check what's in the code. Code Constitution checks whether your architecture delivers what your compliance framework requires.
The full pipeline, end to end.
Every push runs the same six-stage flow. Mouse over to pause; click a stage to inspect.
A push or PR-open event on a repo where the Code Constitution™ GitHub App is installed.
Benefits, quantified where we can.
Six benefit categories, three claims per side, traceable to public audit benchmarks. We don't publish customer metrics — your numbers will vary; the calculator below shows your scenario.
Every PR produces a signed, content-addressed evidence pack persisted to R2. Auditors consume directly. Replaces the manual screenshot-folder workflow with a deterministic artefact whose lineage is replayable.
Patent-safety check fires on every PR that touches marketing copy, README, docs, or product strings. Unsubstantiated SOC 2 / ISO / PCI claims are blocking-severity by default. Configurable via .codeconstitution/exemptions.yaml.
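The exemptions file schema is not documented on this page; a plausible shape, with hypothetical rule ids, globs, and keys chosen purely for illustration, might look like:

```yaml
# .codeconstitution/exemptions.yaml — hypothetical schema, for illustration
exemptions:
  - rule: soc2-certified-unsubstantiated
    path: "docs/legacy/*.md"        # example glob, not a documented default
    reason: "Pre-audit archive; claim is qualified in the page footer"
    expires: 2026-06-30             # exemptions should not be permanent
severity_overrides:
  worlds-first-superlative: warn    # downgrade from fail where substantiated
```

Scoped, expiring exemptions keep the default blocking behaviour honest: every carve-out is reviewable in the same PR flow as the code.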
Findings appear as inline annotations the engineer can ack or fix in the same pass. No 'compliance review meeting' three weeks later. Time-to-resolution drops because the context is still loaded.
Trust center auto-generates SIG, CAIQ, ISO 27001 SoA, NIST 800-171, CMMC L2, Cyber Essentials, ENISA pre-fills. Auditor portal gives external assessors a per-tenant read-only view + WORM-chained activity log.
Every AI system gets an obligation tracker keyed to the in-scope frameworks. Model-card evaluator enforces Art. 13 disclosure fields. Sub-processor + transfer disclosures auto-publish on the trust center.
The self-audit engine evaluates every metric on every deploy and persists drift findings to the ack ledger. Compliance ops moves from periodic-review to event-driven; engineers receive notifications when a metric crosses its baseline, not three months later.
One hub. Every registry, every cloud, every framework.
Code Constitution™ is the hub; spokes connect to the registries we evaluate, the clouds we mirror logs from, the frameworks we enforce, the billing rails we meter against, and the observability sinks customers fan to. Click any spoke to inspect.
HuggingFace
Model-card evaluation against EU AI Act Art. 10 / 13 / 15. Public API; no token required for read.
Audit-prep ROI calculator
Conservative 60% prep-time reduction (lower bound of published SOC 2 benchmarks). Adjust the inputs for your org. No data leaves the page.
Your scenario
- Audit-prep hours today: 1,600 hrs / yr
- Audit-prep hours with Code Constitution: 640 hrs / yr
- Hours saved: 960 hrs / yr
- ≈ work-days saved: 120 days / yr
- ≈ FTE equivalent: 0.48 FTE
Assumptions: conservative 60% prep-time reduction; uniform repo distribution; identical audit scope across audits. Your number will differ — talk to sales for a tailored estimate based on your audit history.
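The arithmetic behind the scenario above is straightforward; this sketch reproduces it with the stated 60% reduction, plus two implicit assumptions made explicit: an 8-hour work-day and a 2,000-hour FTE year.

```python
def roi(prep_hours_per_year: float,
        reduction: float = 0.60,          # conservative lower bound, as above
        hours_per_day: float = 8.0,       # assumed work-day
        fte_hours_per_year: float = 2000.0) -> dict:
    """Audit-prep savings under a flat prep-time-reduction factor."""
    saved = prep_hours_per_year * reduction
    return {
        "hours_with_tool": prep_hours_per_year - saved,
        "hours_saved": saved,
        "days_saved": saved / hours_per_day,
        "fte_equivalent": saved / fte_hours_per_year,
    }
```

Plugging in the 1,600-hour baseline reproduces the table: 640 hours remaining, 960 saved, 120 work-days, 0.48 FTE.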
Talk to sales →
Built for the people who own the outcome.
Six roles. Click any one to see the platform capabilities it touches. Qualitative outcomes only — we don't quote customer metrics we cannot publicly cite.
Continuous evidence instead of quarterly audits.
Auditors arrive expecting weeks of evidence-gathering. Code Constitution makes the evidence pack the audit deliverable — signed, content-addressed, and produced on every PR.
- Signed evidence pack per check run (R2-persisted)
- WORM audit chain — replayable to any prior timestamp
- Inline PR annotations keyed to SOC 2 CC1.2 / ISO 27001 A.5.32 / PCI DSS controls
- Auditor portal: per-tenant read-only browse of the artefact trail
Twelve sectors, one engine.
Each sector inherits a different framework matrix. The engine ships rule packs for every framework below; installing a sector pack turns on the matrix in one click.
Banking & Financial Services
Loan underwriting AI · KYC/AML model cards · transaction-monitoring drift
Healthcare
PHI de-identification · EU AI Act Annex III §5(d) medical-device AI
Insurance
Claims-decisioning AI · pricing-model fairness · sub-processor disclosure
Government & Public Sector
Annex III §2 critical infrastructure · NIS2 essential-entity reporting
SaaS / B2B Software
Sub-processor list · DPA addendum cycles · vendor questionnaires
AI Providers / Model Hosts
Model-card disclosure · GPAI Art. 53 + 55 obligations · Annex III gating
Manufacturing & Automotive
Industrial AI · safety-component AI · supply-chain attestation
Energy & Utilities
Critical infrastructure · NIS2 essential-entity · ICS / OT AI
Pharma
GxP-adjacent AI · clinical-trial AI · pharmacovigilance models
Defence & Aerospace
Annex III §3 critical defence systems · CMMC L2/L3 evidence
Telecom
Network security · roaming-fraud AI · subscriber-data handling
Retail & E-commerce
Recommendation systems · price-discrimination prohibition · payment card scope
Framework coverage
Each framework is shipped as a Rule Pack + Dictionary + Manifest by the ReguNav engine, evaluated against your codebase deterministically.
- CC6.1: every data-mutation endpoint has auth middleware
- A.9.4.2: admin routes carry MFA middleware
- AI systems registered with risk classification
- Req 4: payment endpoints enforce TLS 1.2+
- Art. 5(1)(f): PII columns encrypted at rest
- ICO-aligned data-subject rights endpoint
- §164.312(a)(1): user model has unique-ID constraint
- Art. 14: high-risk decisions have human-oversight gate
- MAP-3.1: AI system context documented
- PR.AC-1: identities & credentials managed
- Art. 9: dependency map present and consistent
- Art. 21: incident-notification chain documented
- Vulnerability disclosure policy present
- Data-subject rights endpoint exists

How it works
Bring your own LLM key
The engine is deterministic — no LLM required for any check. LLM is only used (optionally) for drafting fix PRs on non-whitelisted violations. Customers bring their own key (Anthropic, OpenAI, Gemini, Llama, or self-hosted). Prompts and completions never touch our infrastructure.
engine.deterministic = true
llm.byo_key = true
prompts.stored_by_us = false
Value pricing
Priced against the audit-cycle cost you avoid — typically $150k–$2M per company per year in consulting + delay.