Assurance Library
VoteChain Assurance
Internal pre-flight playbooks, published for transparency
14 playbooks → 60 seconds
AI Document Summarizer
The building-inspection version:
What are these documents?
Imagine a brand-new building is about to open. Before anyone walks through the doors, inspectors check everything — the fire alarms, the plumbing, the locks, the wheelchair ramps, the emergency exits. These 14 playbooks are the inspection checklists for VoteChain. Each one covers a different kind of check.
Why publish them?
Because when the checklists are public, anyone can look at them and say "hey, you forgot to check the sprinklers on the third floor." That's the whole point — invite the sharpest critics to find what was missed before opening day, not after.
What gets inspected?
- Can bad guys break in? Several playbooks imagine every kind of troublemaker — hackers, insiders, tricksters — and plan how to stop them. Think of it like testing whether every door locks, even from the inside.
- Does the math check out? The system uses special math (cryptography) to keep things private and tamper-proof. One whole playbook is just about making sure that math is done right — like double-checking every weld on a bridge.
- What if something breaks? Some playbooks test what happens when things go wrong on purpose — power failures, traffic overloads, servers going down. The building still has to stay safe even if the elevator stops working.
- Is it fair for everyone? One playbook specifically checks that the system works equally well for people with disabilities, people without smartphones, people in rural areas, and people who speak different languages. If the wheelchair ramp is steeper than the stairs, you failed.
- Does it keep secrets? A privacy inspection makes sure the system never accidentally learns who you are or how you voted — only that you were allowed to vote.
- Can anyone cheat and get away with it? A special team pretends to be the bad guys (a "red team") and tries to break the system on purpose. If they succeed, that's a good thing — it means the flaw was found before a real bad guy found it.
Some quick numbers:
- 14 separate inspection playbooks
- Covers 6 domains: security, cryptography, operations, privacy, usability, and fairness
- Written for the internal team first, then published for everyone to read
- Designed to run before outside auditors arrive — so the outside check starts from a higher bar
The idea is simple: don't wait for someone else to find what's broken. Find it yourself, fix it, prove you fixed it, then let the outside inspectors verify you didn't miss anything.
Want more detail? Slide the bar to the right. Or close this and browse the 14 individual playbooks below.
Here's what the 14 assurance playbooks cover:
These are internal pre-flight guides — the testing the engineering team runs on itself before independent auditors, red teams, and certification bodies arrive. They're published so anyone can critique the methodology, spot blind spots, and hold the project accountable to its own standards.
Security & Threat Analysis (3 playbooks)
- Threat Modeling turns every security claim in the PRD into explicit assumptions, failure modes, and a test plan. It uses structured attacker profiles — hackers, insiders, coercers, supply-chain attackers — and maps each one to the defenses that are supposed to stop them. The output is a risk register that tells auditors exactly where to focus.
- Attack Case Library catalogs specific adversarial test cases with expected defenses and patch patterns. Think of it as a public library of "ways someone might try to break this" — without including step-by-step exploit instructions. Each case has a known defense and a documented fix pattern.
- Penetration Testing scopes an internal staging pentest focused on system invariants, configuration drift, and safe failure behavior. It defines what's in-scope (the verification chain, the bulletin board, the gateway) and what the testers should look for: broken assumptions, not just open ports.
Cryptography & Protocol Integrity (2 playbooks)
- Crypto + Protocol Review prepares the entire cryptographic stack for external audit. It checks that every signature and proof is bound to the right context (election ID, jurisdiction, challenge nonce), reviews domain separation for hashes, pressure-tests the privacy model for correlation leaks, and documents every key ceremony and rotation procedure.
- EWP Conformance Testing defines a conformance surface for the Election Web Protocol — the set of behaviors any compliant implementation must exhibit. It includes test vectors (known inputs with expected outputs), invariant tests, and interoperability checks across deployment modes.
Adversarial Exercises (2 playbooks)
- Red Team Exercises go beyond technical pentesting. These are end-to-end scenario drills that combine technical attacks, social engineering, and operational failures. The red team tries to break the system; the blue team defends it; purple team debriefs extract lessons. Exercises include blind engagements where defenders don't know an attack is coming.
- VDP / Bug Bounty Readiness prepares the project for public vulnerability disclosure. It covers intake workflows, triage SLAs, safe-harbor legal language, reward tiers, and internal dry runs to make sure the pipeline works before external researchers start submitting reports.
Code & Operations (3 playbooks)
- Secure Code Review provides an implementation checklist covering protocol correctness, input parsing, key management, logging hygiene, and privacy-by-default patterns. It's what a reviewer should check line-by-line before any code is merged.
- Operational Audit covers everything outside the code: key ceremonies, change control processes, incident response runbooks, evidence handling chain-of-custody, and operator access controls. If the code is perfect but someone leaves an admin password on a sticky note, this playbook catches it.
- Monitoring + Non-Equivocation validates that the bulletin board cannot show different histories to different observers without detection. It defines monitor responsibilities, gossip protocols, alerting thresholds, and includes controlled equivocation simulations — deliberately feeding inconsistent data to see if monitors catch it.
Privacy & Usability (2 playbooks)
- Privacy Audit validates that the implementation matches the privacy model: no PII on-chain, no correlation channels that could deanonymize voters, bounded metadata precision, access-controlled evidence bundles, and receipt-freeness (receipts prove inclusion but cannot prove how someone voted).
- Usability: Phishing Defense tests whether real users can tell a legitimate VoteChain gateway from a fake one. It covers gateway authenticity signals, manifest verification UX, and measures how often users fall for simulated phishing attempts during controlled testing.
Equity & Resilience (2 playbooks)
- Equity and Access Testing measures whether verification and casting paths create a two-tier system. It compares failure rates, verification times, and support burden across demographic cohorts, device types, network conditions, and enrollment channels. If any group experiences measurably worse outcomes, the system fails this check.
- Load + Failover Drills stress-test degraded mode: what happens when nodes go down, traffic spikes to 2x peak, or network partitions isolate regions. They validate that rate limiting is fair, continuity objectives are met, and no voter is turned away because of an infrastructure failure.
Why publish internal playbooks?
Because "trust us, we tested it" is not a trust model. Publishing the methodology means anyone — cryptographers, election officials, security researchers, journalists — can evaluate whether the testing was rigorous enough, whether the right questions were asked, and whether anything important was left out. The goal is to make the assurance process itself auditable, not just the system it's testing.
Close this and browse the 14 individual playbooks for full checklists, expected outputs, and acceptance criteria.
Technical summary of the assurance surface:
14 internal playbooks covering the full assurance lifecycle for VoteChain (permissioned voter verification chain) and EWP (Election Web Protocol). Designed to be executed before third-party (3P) audits and red-team engagements, raising the baseline before external work begins. Published for public methodology review.
Threat Analysis & Attack Surface
- Threat Modeling: STRIDE-derived methodology adapted for election systems. Asset inventory (enrollment keys, gateway signing keys, bulletin board (BB) signed tree head (STH) keys, trustee threshold shares, chaincode governance keys). Trust-boundary decomposition across voter client, poll device, enrollment authority, gateway, bulletin board, chain nodes, monitors, and oversight portal. Attacker profiles: network-only, physical, privileged insider, supply-chain, coercer. Claim-to-mechanism matrix mapping each PRD security property to its enforcement mechanism, assumptions, detection strategy, and recovery path. Output: risk register driving red-team and audit focus. A sketch of a single register entry follows this list.
- Attack Case Library: Cataloged adversarial test cases (safe negative tests) with expected defense behavior and patch patterns. Categories: credential forgery, enrollment replay, nullifier collision injection, gateway impersonation, BB fork/equivocation, threshold ceremony disruption, device attestation bypass, cross-jurisdiction replay. Each case includes preconditions, attack steps (without full exploit chain), expected system response, and regression test anchor.
- Penetration Testing: Internal staging pentest scoped to verification chain invariants, BB API surface, gateway authentication, node configuration drift, and degraded-mode edge cases. Focus on broken assumptions rather than generic OWASP findings. Scope excludes ballot-casting (separate system). Output: findings mapped to risk register entries with severity, exploitability, and remediation SLA.
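To make the Threat Modeling playbook's claim-to-mechanism matrix concrete, here is a minimal sketch of a single risk-register entry. The field names and example values (the duplicate-verification claim, the nullifier mechanism) are illustrative assumptions, not the project's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class RiskRegisterEntry:
    """One row of the claim-to-mechanism matrix."""
    claim: str                    # security property asserted in the PRD
    mechanism: str                # what is supposed to enforce it
    assumptions: list[str]        # conditions under which the mechanism holds
    attacker_profiles: list[str]  # which structured attacker profiles threaten it
    detection: str                # how a violation would be noticed
    recovery: str                 # what happens after detection
    test_refs: list[str] = field(default_factory=list)  # linked attack cases / pentest findings

# Illustrative entry (hypothetical values, not the real register):
entry = RiskRegisterEntry(
    claim="Each eligible voter is verified at most once per election",
    mechanism="Nullifier derived from the credential; duplicates rejected by chaincode",
    assumptions=["nullifier derivation is collision-resistant",
                 "chaincode upgrade governance holds"],
    attacker_profiles=["privileged insider", "supply-chain"],
    detection="Monitor alert on duplicate-nullifier acceptance",
    recovery="Quarantine affected records; publish incident evidence bundle",
    test_refs=["attack case: nullifier collision injection",
               "pentest: chain invariant checks"],
)
```

Sorting the register by attacker profile or by untested assumptions is what lets the output drive red-team and audit focus.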
Cryptographic & Protocol Assurance
- Crypto + Protocol Review: Audit-readiness preparation for external cryptographic review. Covers: binding verification (every signature/ZKP binds election_id, jurisdiction_id, manifest_id, challenge nonce, nullifier), domain separation for all hash functions, privacy leakage matrix (time/location metadata correlation, IP-to-voter mapping in Mode 3, receipt-freeness under coercion), non-equivocation proof semantics (BB consistency proofs, monitor gossip, anchor linkage), key ceremony review (generation, distribution, rotation, revocation for HSM-backed and threshold keys), and ZK circuit review (constraint system completeness, soundness assumptions, trusted setup ceremony if Groth16, universal updatable setup if KZG-based PLONK, transparent setup if STARKs). A binding and domain-separation sketch follows this list.
- EWP Conformance Testing: Defines the EWP conformance surface: the set of protocol behaviors any compliant implementation must exhibit across deployment modes (Mode 1/2/3). Test vector format: deterministic inputs with expected outputs for credential issuance, ZKP generation/verification, nullifier derivation, BB inclusion/consistency proofs, receipt generation, and cross-node consensus. Invariant tests: properties that must hold regardless of implementation (e.g., nullifier uniqueness, proof non-malleability, receipt non-coercibility). Interoperability matrix across reference implementations.
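The binding-verification and domain-separation checks in the Crypto + Protocol Review reduce to a simple discipline: every digest commits to a purpose tag plus the full context before anything is signed or proven. A minimal sketch, assuming SHA-256 and length-prefixed fields; the tag strings and field order are illustrative, not the EWP encoding.

```python
import hashlib

def bound_message(domain_tag: bytes, election_id: str, jurisdiction_id: str,
                  manifest_id: str, challenge_nonce: bytes, payload: bytes) -> bytes:
    """Build the byte string a signature or proof must commit to.

    Domain separation: each hash use gets its own tag, so a digest produced for
    one purpose can never be replayed as a digest for another. Length-prefixing
    each field removes ambiguity between concatenated values.
    """
    def lp(b: bytes) -> bytes:
        return len(b).to_bytes(4, "big") + b

    parts = [lp(domain_tag), lp(election_id.encode()), lp(jurisdiction_id.encode()),
             lp(manifest_id.encode()), lp(challenge_nonce), lp(payload)]
    return hashlib.sha256(b"".join(parts)).digest()

# Identical payloads in different contexts yield different digests,
# which is exactly what cross-jurisdiction replay defenses rely on.
d1 = bound_message(b"ewp/receipt/v1", "election-2026", "county-12", "manifest-7", b"\x01" * 16, b"leaf")
d2 = bound_message(b"ewp/receipt/v1", "election-2026", "county-13", "manifest-7", b"\x01" * 16, b"leaf")
assert d1 != d2
```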
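The EWP conformance surface reduces to files of deterministic vectors plus a harness that replays them against each implementation in the interoperability matrix. A sketch under the assumption that vectors are stored as JSON objects with name, input, and expected fields; none of these names come from the protocol spec.

```python
import json
from pathlib import Path
from typing import Callable

def run_vectors(vector_file: Path, operation: Callable[[dict], dict]) -> list[str]:
    """Replay deterministic test vectors against one implementation operation.

    Each vector is {"name": ..., "input": {...}, "expected": {...}}.
    Returns the names of failing vectors; an empty list means the
    implementation is conformant for this operation.
    """
    failures = []
    for vector in json.loads(vector_file.read_text()):
        if operation(vector["input"]) != vector["expected"]:
            failures.append(vector["name"])
    return failures

# Usage sketch (hypothetical file name and implementation module):
# failures = run_vectors(Path("vectors/nullifier_derivation.json"), impl.derive_nullifier)
# assert not failures, f"non-conformant vectors: {failures}"
```

Invariant tests complement the vectors: instead of fixed outputs, they assert properties (nullifier uniqueness, proof non-malleability) over randomized inputs.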
Adversarial Exercises
- Red Team Exercises: Full-scope adversarial engagement framework. Scenario library covering technical (chain-level attacks, ZKP forgery attempts, BB fork injection), human (social engineering of operators, phishing poll workers), and operational (insider key compromise, supply-chain substitution) attack paths. Exercise types: announced purple-team drills (collaborative), blind red-team engagements (defenders unaware), and tabletop exercises for scenarios too dangerous to simulate live. Measurable outcomes: time-to-detect, time-to-contain, coverage gaps, and patch-loop closure verification.
- VDP / Bug Bounty Readiness: Pre-launch vulnerability disclosure program preparation. Intake pipeline: submission form, triage SLA (24h acknowledge, 72h severity assessment), severity classification (CVSS + election-context modifiers), safe-harbor legal language, reward structure (up to $500K for critical chain-level findings). Internal dry runs: planted vulnerabilities processed through the full pipeline to validate triage accuracy, communication templates, and payout mechanics before public launch.
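As a small illustration of how the intake SLAs could be checked during an internal dry run, here is a sketch that flags reports breaching the 24-hour acknowledgement or 72-hour severity-assessment windows. The record fields are assumptions; the playbook does not specify the VDP tooling.

```python
from datetime import datetime, timedelta

ACK_SLA = timedelta(hours=24)       # acknowledge within 24h
SEVERITY_SLA = timedelta(hours=72)  # severity assessment within 72h

def sla_breaches(report: dict, now: datetime) -> list[str]:
    """Return which SLAs a single disclosure report has breached."""
    breaches = []
    received = report["received_at"]
    if (report.get("acknowledged_at") or now) - received > ACK_SLA:
        breaches.append("acknowledgement")
    if (report.get("severity_assessed_at") or now) - received > SEVERITY_SLA:
        breaches.append("severity assessment")
    return breaches

# Dry run: a planted report was acknowledged on time but never assessed.
report = {"received_at": datetime(2025, 5, 1, 9, 0),
          "acknowledged_at": datetime(2025, 5, 1, 12, 0)}
print(sla_breaches(report, now=datetime(2025, 5, 4, 10, 0)))  # ['severity assessment']
```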
Implementation & Operations
- Secure Code Review: Line-level implementation checklist. Protocol correctness (does the code match the spec?), input parsing (malformed payloads, boundary values, encoding edge cases), key material handling (in-memory exposure, zeroization, HSM offload), logging hygiene (no PII in logs, structured audit events, tamper-evident log chains), dependency audit (supply-chain risk for cryptographic libraries, pinned versions, reproducible builds). A log-chain sketch follows this list.
- Operational Audit: Non-code assurance surface. Key ceremony procedures (multi-party generation, secure transport, rotation schedules, emergency revocation). Change control (chaincode upgrades require multi-party approval + 30-day public comment). Incident response (runbooks, communication templates, evidence preservation chain-of-custody, post-mortem publication SLA). Operator access controls (principle of least privilege, session recording, break-glass procedures with mandatory post-incident review).
- Monitoring + Non-Equivocation: BB transparency validation. Monitor responsibilities: STH polling, anchor verification against VoteChain, consistency proof verification, cross-monitor gossip. Controlled equivocation simulations: deliberately inject inconsistent STH views and verify detection latency, alert propagation, and public disclosure timeline. Alert thresholds: missing consistency proof = high severity, STH divergence = critical, monitor downtime > N minutes = escalation. Evidence capture for all detected anomalies.
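The tamper-evident log chains called for under logging hygiene come down to hash chaining: each audit event commits to the hash of the previous entry, so any later edit breaks every subsequent link. A minimal sketch assuming JSON-serialized events; the real audit log format is not specified here.

```python
import hashlib
import json

def chain_events(events: list[dict]) -> list[dict]:
    """Attach a prev_hash to each audit event so earlier entries cannot be silently rewritten."""
    chained, prev = [], "0" * 64
    for event in events:
        entry = {"event": event, "prev_hash": prev}
        prev = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
        chained.append(entry)
    return chained

def verify_chain(chained: list[dict]) -> bool:
    """Recompute every link; False means an entry before the last was altered, inserted, or dropped.

    In practice the head hash is also anchored externally so the newest entry cannot be rewritten either.
    """
    prev = "0" * 64
    for entry in chained:
        if entry["prev_hash"] != prev:
            return False
        prev = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
    return True

log = chain_events([{"action": "key_rotation", "operator": "op-7"},
                    {"action": "config_change", "operator": "op-2"}])
assert verify_chain(log)
log[0]["event"]["operator"] = "op-9"   # tamper with an early entry
assert not verify_chain(log)
```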
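For the controlled equivocation simulations, the core detection rule is simple: two signed tree heads claiming the same tree size but different root hashes cannot both describe one append-only log, so any such pair is fork evidence (a critical alert under the thresholds above). A sketch assuming monitors gossip STHs as small records; signature verification is elided.

```python
from collections import defaultdict

def detect_equivocation(observed_sths: list[dict]) -> list[tuple[dict, dict]]:
    """Compare STHs gossiped by different monitors and return conflicting pairs as evidence."""
    by_size = defaultdict(list)
    for sth in observed_sths:
        by_size[sth["tree_size"]].append(sth)

    evidence = []
    for sths in by_size.values():
        for i, a in enumerate(sths):
            for b in sths[i + 1:]:
                if a["root_hash"] != b["root_hash"]:
                    evidence.append((a, b))   # capture both views for the evidence bundle
    return evidence

# Simulation harness deliberately feeds monitors inconsistent views:
views = [{"monitor": "m1", "tree_size": 1024, "root_hash": "aa..."},
         {"monitor": "m2", "tree_size": 1024, "root_hash": "bb..."}]
assert detect_equivocation(views)  # fork detected -> critical alert
```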
Privacy & Usability
- Privacy Audit: Privacy impact assessment methodology. Data-flow tracing across enrollment, verification, casting, fraud-case evidence, and monitoring paths. Field inventory: every data field across on-chain events, BB leaves/STHs, receipts, logs, metrics, and support systems — tagged with necessity, retention, access control, and correlation risk rating. Correlation risk testing: time/location linkage precision, network identifier leakage (IP-to-voter in Mode 3 gateways), operational log pseudonymity validation, recovery-flow centralization risks. Receipt-freeness review: can a receipt be used to prove vote content to a coercer? A field-inventory check sketch follows this list.
- Usability: Phishing Defense: User testing protocol for gateway authenticity signals and manifest verification UX. Simulated phishing campaigns against test cohorts measuring click-through rates, credential-entry rates, and time-to-recognition. Evaluation of visual trust indicators, certificate pinning UX, and out-of-band verification flows. Acceptance criteria: phishing success rate below defined threshold before public deployment.
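The field inventory lends itself to a mechanical check: every field in every data path must carry the required tags, and anything rated high correlation risk needs a recorded justification. A sketch assuming the inventory is a flat list of dicts; the tag names and example fields are illustrative.

```python
REQUIRED_TAGS = {"necessity", "retention", "access_control", "correlation_risk"}

def audit_inventory(fields: list[dict]) -> list[str]:
    """Flag entries missing required tags or carrying unjustified high correlation risk."""
    findings = []
    for f in fields:
        missing = REQUIRED_TAGS - f.keys()
        if missing:
            findings.append(f"{f.get('name', '?')}: missing tags {sorted(missing)}")
        if f.get("correlation_risk") == "high" and not f.get("justification"):
            findings.append(f"{f['name']}: high correlation risk without justification")
    return findings

inventory = [
    {"name": "receipt.leaf_hash", "necessity": "required", "retention": "election+2y",
     "access_control": "public", "correlation_risk": "low"},
    {"name": "gateway.client_ip", "necessity": "operational", "retention": "72h",
     "access_control": "ops-only", "correlation_risk": "high"},  # the Mode 3 IP-to-voter concern
]
print(audit_inventory(inventory))  # flags client_ip until a justification is recorded
```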
Equity & Resilience
- Equity and Access Testing: Parity measurement across demographic cohorts, device classes (low-end phones, assistive tech, no-smartphone path), network conditions (offline, intermittent, high-latency), and enrollment channels (assisted, mobile unit, in-office). Key metrics: verification time distribution (median, p95), failure/escalation rates, provisional ballot rates, support burden per 1K voters. Threshold rules: if any cohort metric exceeds the baseline by more than a defined percentage, the test fails. Includes end-to-end exercises with representative users covering disability access, low digital literacy, and non-English speakers. A parity-check sketch follows this list.
- Load + Failover Drills: Stress testing at 2x peak projected load. Scenarios: node failure (single, multi, partition), network degradation (latency injection, packet loss), geographic isolation, and cascading failure chains. Validation targets: rate limiting fairness (no demographic skew under load), continuity objectives (RTO/RPO per component), degraded-mode voter experience (provisional path remains functional), and data integrity under partial-write scenarios. Output: capacity model with headroom recommendations and failover runbook validation.
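The equity threshold rule (any cohort exceeding the baseline by more than a defined percentage fails the test) can be expressed directly. The sketch below computes the p95 verification time per cohort and compares it against the best-performing cohort; the 20% tolerance and the cohort labels are placeholder assumptions.

```python
import statistics

def p95(samples: list[float]) -> float:
    return statistics.quantiles(samples, n=20)[-1]  # 95th percentile

def parity_failures(times_by_cohort: dict[str, list[float]], tolerance: float = 0.20) -> list[str]:
    """Return cohorts whose p95 verification time exceeds the baseline by more than `tolerance`."""
    p95s = {cohort: p95(samples) for cohort, samples in times_by_cohort.items()}
    baseline = min(p95s.values())   # the best-performing cohort sets the bar
    return [cohort for cohort, value in p95s.items() if value > baseline * (1 + tolerance)]

# Example: the assistive-tech path is more than 20% slower at p95, so the check fails.
cohorts = {
    "smartphone":     [4.1, 4.3, 4.8, 5.0, 5.2] * 10,   # seconds to complete verification
    "assistive_tech": [6.0, 6.5, 7.1, 7.4, 8.2] * 10,
}
assert parity_failures(cohorts) == ["assistive_tech"]
```

The same comparison applies to failure rates, escalation rates, and support burden; the metric changes but the threshold rule does not.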
Cross-cutting methodology:
Every playbook follows the same structure: inputs (what you need before starting), steps (what to do), expected outputs (deliverables with pass/fail criteria), and patch loop (how findings feed back into the risk register and re-test cycle). The intent is that internal execution raises the baseline high enough that external audit time is spent on novel findings rather than known gaps.
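One way to picture that shared structure is as a small harness where each playbook run carries its inputs, steps, and findings, and the patch loop pushes findings back into the risk register and onto a re-test queue. The names below are a hypothetical shape for such tooling, not anything the playbooks prescribe.

```python
from dataclasses import dataclass, field

@dataclass
class PlaybookRun:
    playbook: str
    inputs: list[str]            # what was needed before starting
    steps_completed: list[str]   # what was actually done
    findings: list[dict]         # each maps to a risk-register entry and carries a severity

@dataclass
class PatchLoop:
    risk_register: dict = field(default_factory=dict)   # entry_id -> status
    retest_queue: list[str] = field(default_factory=list)

    def ingest(self, run: PlaybookRun) -> None:
        """Feed findings back into the register and schedule the re-test that closes the loop."""
        for finding in run.findings:
            entry_id = finding["register_entry"]
            self.risk_register[entry_id] = "open"
            self.retest_queue.append(f"{run.playbook}::{entry_id}")

    def close(self, entry_id: str) -> None:
        """Close only after the fix lands and the originating test passes again."""
        self.risk_register[entry_id] = "closed"
```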
Each playbook below contains the full checklist, expected outputs, and acceptance criteria. Browse individually or read sequentially.
These documents are written for internal engineering teams to do pre-flight testing before independent third parties run official audits, red teams, and certification.
They are published to invite critique and to make assurance requirements concrete. They intentionally avoid step-by-step exploit instructions. Do not use them to attack systems you do not own or explicitly control.
Attack Case Library
Public adversarial test cases (safe negative tests) with expected defenses and patch patterns.
Threat Modeling
Turn claims into assumptions, failure modes, and a test-driven risk register.
EWP Conformance Testing
Define a conformance surface, vectors, and invariant tests for interoperability.
Crypto + Protocol Review
Audit-readiness checklist: bindings, privacy leakage, non-equivocation, ceremonies.
Secure Code Review
Implementation review checklist for protocol correctness, parsing, keys, logs, privacy.
Penetration Testing
Internal staging pentest plan focused on invariants, configs, and safe failure behavior.
Red Team Exercises
End-to-end scenario library (technical + human + ops) with measurable outcomes.
VDP / Bug Bounty Readiness
Intake, triage, SLAs, safe harbor, and internal dry runs before going public.
Operational Audit
Keys, ceremonies, change control, incident response, and evidence handling requirements.
Privacy Audit
Validate privacy claims, inventory fields, and identify correlation channels early.
Usability: Phishing Defense
User testing plan for gateway authenticity signals and manifest verification UX.
Equity and Access Testing
Measure failure/latency parity across cohorts; validate alternate paths are first-class.
Monitoring + Non-Equivocation
Monitor operations and controlled equivocation simulations with evidence capture.
Load + Failover Drills
Stress test degraded mode, safe rate limiting, and continuity objectives under failure.