Technical reference for security reviewers

How SkinID is built, in detail.

For security engineers, CISOs, and technical buyers running due diligence. This page describes the protocols, key lifecycle, server architecture, and operational controls in enough detail to evaluate the security model without an NDA. For source-level review, contact support@skinid.ch.

Rollout status (as of May 2026). The architecture below is implemented in code and self-tested. The chip-bound zero-knowledge path (sections 2 to 4) becomes the live encryption regime for a customer the moment their chip is provisioned. The operator panel, audit log, and server hardening (sections 6 to 8) are in production now. Some controls in section 11 (independent penetration test, SOC 2 Type II) are roadmap items, not delivered; they are flagged inline so a reviewer is never confused about what is live versus planned.
Contents
  1. The stack at a glance
  2. Chip and provisioning protocol
  3. Authentication on every use
  4. Credential encryption and key derivation
  5. Account recovery (3 paths)
  6. Operator access and multi-party approval
  7. Tamper-evident audit log
  8. Server architecture and hardening
  9. Cryptographic primitive choices
  10. Threat model in detail
  11. Compliance map
  12. How to verify our claims

1. The stack at a glance

Five components, all under our control:

No third-party services on the critical path. No CDN-hosted JavaScript. No cloud key management. No analytics. The server is in Switzerland, the data is in Switzerland, the company is in Switzerland.

2. Chip and provisioning protocol

A factory-fresh DESFire chip ships with a single DES master key on the PICC (the chip's master application), all zeros. The first time a user taps a fresh chip, the server walks the chip through a 13-round-trip provisioning protocol that converts it to AES, installs a per-chip key, and writes a vault wrap key into an encrypted file.

Provisioning sequence

PCD (server) to chip (DESFire EV3), one command and response per step:

  1.  Authenticate(0x0A), legacy DES       -> 0xAF E_K(RndB)
  2.  E_K(RndA || rotL(RndB))              -> 0x00 E_K(rotL(RndA))
  3.  ChangeKey(PICC master -> AES)        -> 0x00
  4.  AuthenticateEV2First                 -> 0xAF E_K(RndB)
  5.  E_K(RndA || rotL(RndB))              -> 0x00 E_K(rotL(RndA) || TI || PDcap || PCDcap)
  6.  CreateApplication(SKI)               -> 0x00
  7.  SelectApplication(SKI)               -> 0x00
  8.  AuthenticateEV2First                 (3 round trips, default app key 0)
  9.  ChangeKey(K_chip)                    -> 0x00
  10. AuthenticateEV2First                 (3 round trips with the newly installed K_chip)
  11. CreateStdDataFile(0x01)              -> 0x00
  12. WriteData(K_vault, 32 B)             -> 0x00

At this point the chip is SkinID-bound. K_chip protects the file that holds K_vault. The server keeps a MASTER_KEY-encrypted copy of K_chip and K_vault for disaster recovery only.
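The rotL in the challenge-response steps is the DESFire one-byte left rotation applied to the random nonces. A minimal sketch:

```python
def rot_left(block: bytes) -> bytes:
    """DESFire rotL: rotate a block left by one byte (RndB -> RndB')."""
    return block[1:] + block[:1]
```

Each side proves knowledge of K by returning the other side's nonce rotated: the server sends E_K(RndA || rotL(RndB)) and checks that the chip answers with E_K(rotL(RndA)).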

Per-chip secrets generated at provisioning

Both keys are versioned. Rotation = re-provision; the old chip's keys become unreachable when the customer activates a fresh chip via the recovery workflow.

3. Authentication on every use

When a user wants to log into a website, the chain looks like this:

  1. Trigger. Browser extension detects a login form, or the user explicitly requests autofill. Native client opens an NFC session.
  2. Read UID. Chip is tapped. Native client reads the 7-byte UID and the 28-byte GET_VERSION response. Forwards both to the server.
  3. Lookup. Server finds the chip in provisioned_chips, decrypts K_chip from server-side ciphertext using MASTER_KEY-derived KEK.
  4. Mutual authentication (3 round trips). Server initiates AuthenticateEV2First. APDUs are relayed through the native client to the chip and back. After completion both sides hold session keys (KSesAuthENC, KSesAuthMAC) and a 4-byte transaction identifier (TI).
  5. Read vault key. Server issues ReadData(file=1, offset=0, len=32) in CommMode.FULL. The chip's response is encrypted with KSesAuthENC + MAC'd with KSesAuthMAC. Server decrypts to recover K_vault.
  6. Originality check (optional, cached). First time we see a chip, the server reads the NXP originality signature (cmd 0x3C), verifies it against the published NXP DESFire EV3 public key on secp224r1. Result is cached in provisioned_chips.originality_verified_at.
  7. Decrypt the credential. The credential ciphertext (stored in credentials_v2.password_blob) is decrypted using the per-credential key derived from K_vault (see §4).
  8. Serve plaintext to client. Decrypted credential goes back over TLS to the browser extension, which fills the form. The chip can now leave the field.

Total wall-clock time: ~3 seconds end-to-end on iPhone Core NFC, ~1.5 seconds on a USB reader on Mac.

What we do not have: persistent server-side caching of K_vault. Each authentication round trip goes back to the chip. Sessions are short and bounded; once the user lifts their hand, decryption capability ends.
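The no-caching property can be pictured with a small sketch (illustrative only, not the production code; the real server keys sessions to the EV2 transaction identifier, for which a plain TTL stands in here): K_vault lives in memory only for one bounded tap session and is zeroised when the session closes.

```python
import time

class TapSession:
    """Holds K_vault in memory only for the duration of one chip tap."""

    def __init__(self, k_vault: bytes, ttl_s: float = 10.0):
        self._k = bytearray(k_vault)              # mutable so we can zeroise it
        self._deadline = time.monotonic() + ttl_s

    def key(self) -> bytes:
        if time.monotonic() > self._deadline:
            self.close()
            raise TimeoutError("tap session expired")
        return bytes(self._k)

    def close(self) -> None:
        # best-effort zeroisation; the deadline is also forced into the past
        for i in range(len(self._k)):
            self._k[i] = 0
        self._deadline = float('-inf')
```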

4. Credential encryption and key derivation

Every credential is encrypted with a unique key. The chain of derivations:

# For credential row id N belonging to user U on rp R:
import os, struct
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

cred_salt = struct.pack('<Q', N)                 # 8 bytes, unique per cred
cred_key  = HKDF(algorithm=hashes.SHA256(), length=32, salt=cred_salt,
                 info=b'skinid-vault-password-v1').derive(K_vault)
aad       = f'u={U};rp={R};c={N}'.encode()       # associated data
nonce     = os.urandom(12)                       # fresh per-write
ct        = ChaCha20Poly1305(cred_key).encrypt(nonce, plaintext, aad)
blob      = bytes([0x01]) + nonce + ct           # 1 + 12 + len(pt) + 16 bytes
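For symmetry, a sketch of the decrypt path (the function name is illustrative; the real helpers live in vault_crypto.py). Poly1305 authenticates both the ciphertext and the AAD, so a tampered blob or a swapped user/rp/credential binding is rejected before any plaintext is released:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
from cryptography.exceptions import InvalidTag

def open_blob(cred_key: bytes, blob: bytes, aad: bytes) -> bytes:
    # version byte 0x01 = ChaCha20-Poly1305 with a 12-byte nonce
    if blob[0] != 0x01:
        raise ValueError(f"unknown blob version {blob[0]:#x}")
    nonce, ct = blob[1:13], blob[13:]
    return ChaCha20Poly1305(cred_key).decrypt(nonce, ct, aad)  # raises InvalidTag
```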

Why each design choice

Reference implementation lives in vault_crypto.py, ~262 lines, with a self-test that exercises round-trip, tamper detection, AAD swap rejection, wrong-key rejection, and FIDO/password domain separation.

5. Account recovery (three paths)

If the user loses their chip, three independent paths exist. Customers choose any one at signup.

Path A: backup chip

At signup we offer to provision a second chip. K_vault is the same for both chips (it is the customer's vault key, not a per-chip key). The backup chip lives in a drawer, a safe deposit box, or with a trusted family member. Tapping the backup at /recover swaps the primary, retires the lost chip's K_chip, and provisions a fresh backup slot.

Path B: printable Shamir key

K_vault is split via Shamir Secret Sharing over GF(2^8) using the Rijndael polynomial 0x11B. Default 3-of-5 split. The 5 shares are printed on paper at signup; the customer stores them separately. Three shares reconstruct K_vault and let us provision a fresh chip.
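The split and reconstruct arithmetic can be sketched as follows. This is a compact reimplementation for review, not the audited crypto.py code; it uses the same field (GF(2^8) reduced by the Rijndael polynomial 0x11B) and evaluates shares at x = 1..n, recovering the secret by Lagrange interpolation at x = 0.

```python
import os

def gf_mul(a: int, b: int) -> int:
    """Multiply in GF(2^8) modulo the Rijndael polynomial 0x11B."""
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        hi = a & 0x80
        a = (a << 1) & 0xFF
        if hi:
            a ^= 0x1B        # reduce: x^8 = x^4 + x^3 + x + 1
        b >>= 1
    return p

def gf_inv(a: int) -> int:
    """Inverse via a^254 (the group of units has order 255)."""
    r = 1
    for _ in range(254):
        r = gf_mul(r, a)
    return r

def split(secret: bytes, k: int, n: int) -> list[tuple[int, bytes]]:
    """k-of-n split; each byte gets its own random degree-(k-1) polynomial."""
    shares = [(x, bytearray()) for x in range(1, n + 1)]
    for byte in secret:
        coeffs = [byte] + list(os.urandom(k - 1))   # constant term = secret byte
        for x, ys in shares:
            y = 0
            for c in reversed(coeffs):               # Horner evaluation at x
                y = gf_mul(y, x) ^ c
            ys.append(y)
    return [(x, bytes(ys)) for x, ys in shares]

def reconstruct(shares: list[tuple[int, bytes]]) -> bytes:
    """Lagrange interpolation at x = 0 using any k shares."""
    out = bytearray()
    for pos in range(len(shares[0][1])):
        byte = 0
        for i, (xi, yi) in enumerate(shares):
            num = den = 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = gf_mul(num, xj)            # 0 - xj == xj in char 2
                    den = gf_mul(den, xi ^ xj)
            byte ^= gf_mul(yi[pos], gf_mul(num, gf_inv(den)))
        out.append(byte)
    return bytes(out)
```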

Each share is hashed with SHA-256 and a per-user, per-index salt; we store only the hash. The user provides the share bytes, we hash and verify.

Path C: KYC + cooling-off + multi-operator approval

Customer has neither backup chip nor Shamir key. They submit a recovery request with proof of identity (passport / national ID). After a documented cooling-off period (default 7 days), two operators of role senior or super must independently approve. A single reject vote rejects the request. On execution, a one-time activation code is generated; the customer enters it on a fresh chip, which gets provisioned with their existing vault key (derived from the recovered Shamir or, in last resort, from the encrypted-under-MASTER_KEY backup copy).

This last branch is the only situation where the server can technically decrypt customer credentials. It requires:

Each step is audit-logged and notifiable to the customer.
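The one-time activation code at the end of Path C can be sketched as follows (names and layout are assumptions; the point is that the code is shown to the customer exactly once and only its hash is stored server-side):

```python
import hashlib, hmac, secrets

def issue_activation_code() -> tuple[str, bytes]:
    """Return (code shown once to the customer, digest stored server-side)."""
    code = secrets.token_urlsafe(24)
    return code, hashlib.sha256(code.encode()).digest()

def redeem(stored_digest: bytes, presented: str) -> bool:
    return hmac.compare_digest(
        stored_digest, hashlib.sha256(presented.encode()).digest())
```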

6. Operator access and multi-party approval

Roles

support: Read dashboard, chip inventory, users; change own password; ship chips (factory to shipped); transition shipped to activating.
senior: All of support + read audit log + vote on approval requests + lock/unlock user accounts (with approval gate when the user has stored credentials).
super: All of senior + create/disable other operators + retire activated chips (peer-approved).

Multi-operator approval

High-risk actions are gated behind a 2-of-N voting workflow:

The requester's vote is auto-counted. A second senior+ operator must vote approve before the action runs. A single reject vote rejects the entire request immediately. Requests expire after 7 days. All votes are recorded in the tamper-evident audit log with operator id, timestamp, and free-text comment.
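The voting rules above reduce to a small state machine. A sketch under assumed names (the production workflow also records timestamps, comments, and expiry, omitted here):

```python
from dataclasses import dataclass, field

APPROVALS_REQUIRED = 2   # 2-of-N

@dataclass
class ApprovalRequest:
    requester: str
    votes: dict[str, str] = field(default_factory=dict)

    def __post_init__(self):
        self.votes[self.requester] = 'approve'   # requester's vote auto-counted

    def vote(self, operator: str, decision: str) -> str:
        assert decision in ('approve', 'reject')
        self.votes[operator] = decision
        return self.status()

    def status(self) -> str:
        if 'reject' in self.votes.values():
            return 'rejected'                    # one reject kills the request
        if sum(v == 'approve' for v in self.votes.values()) >= APPROVALS_REQUIRED:
            return 'approved'
        return 'pending'
```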

Operator session security

7. Tamper-evident audit log

Every operator action is logged to activity_log. Each row carries a SHA-256 hash chained to the previous row's hash:

row_hash = SHA-256(
  prev_hash || \x1f || str(user_id) || \x1f || action || \x1f
  || details || \x1f || str(timestamp) || \x1f || str(operator_id)
  || \x1f || before_state_json || \x1f || after_state_json
  || \x1f || reason || \x1f || ip_address
)

Modifying any historical row breaks the chain at that row and at every row after it. /operator/api/audit/verify_chain walks the entire log and reports any mismatches. Pair with periodic external pinning of the latest last_hash (Slack channel, off-server log, public transparency log) for evidentiary value: if the on-server chain matches the external pin, you have cryptographic proof the log was not silently rewritten.

Each row carries: actor (operator + user id), action type, before/after JSON-serialised state, free-text reason, IP address (real client IP via ProxyFix, not the nginx loopback), and the timestamp.

This is the single mechanism that makes insider abuse detectable. An operator who silently fixes their own audit trail to hide an unauthorised action would have to recompute every row's hash from that point forward, which would mismatch the externally-pinned hash.
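The chain walk performed by /operator/api/audit/verify_chain can be sketched like this (the field order follows the recipe above; the genesis value and function names are assumptions for illustration):

```python
import hashlib

SEP = b'\x1f'
GENESIS = b'\x00' * 32

def row_hash(prev_hash: bytes, fields: list[str]) -> bytes:
    """SHA-256 over the previous hash and the 0x1f-separated row fields."""
    return hashlib.sha256(
        prev_hash + SEP + SEP.join(f.encode() for f in fields)).digest()

def verify_chain(rows: list[tuple[bytes, list[str]]]):
    """Return the index of the first broken row, or None if the chain holds."""
    prev = GENESIS
    for idx, (stored, fields) in enumerate(rows):
        if row_hash(prev, fields) != stored:
            return idx
        prev = stored
    return None
```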

8. Server architecture and hardening

Process

HTTP defense layers

Code hygiene

9. Cryptographic primitive choices

Use                     Algorithm                                           Library
Chip mutual auth        AES-128 EV2First (NXP DESFire EV3)                  desfire.py
Chip secure messaging   AES-128 CBC + AES-128 CMAC                          desfire.py
Vault wrap key          32 bytes random, written to chip                    os.urandom
Per-credential key      HKDF-SHA256(K_vault, salt=cred_id, info=type)       pyca/cryptography
Credential AEAD         ChaCha20-Poly1305                                   pyca/cryptography
FIDO2 assertion         ECDSA P-256 + SHA-256 (RFC 8152)                    pyca/cryptography
Originality verify      ECDSA secp224r1, NXP-signed                         pyca/cryptography
Recovery key sharing    Shamir over GF(2^8), Rijndael poly 0x11B            crypto.py
Server master KEK       HKDF-SHA256 from /opt/skinid/.encryption_key        crypto.py
Operator passwords      PBKDF2-SHA256, 600 000 iterations, 16-byte salt     operator_auth.py
Audit log chaining      SHA-256 hash chain                                  operator_models.py
Backups at rest         age (X25519 + ChaCha20-Poly1305)                    filippo.io/age
TLS                     TLS 1.3, HSTS preload                               nginx
CSPRNG                  getrandom(2) on Linux, SecRandomCopyBytes on iOS, SecureRandom on Android    stdlib
Constant-time compare   hmac.compare_digest                                 Python stdlib

Every primitive is replaceable. Each encrypted blob carries a one-byte version so we can rotate algorithms without flag-day breakage. pyca/cryptography is the underlying library for everything except DESFire (which we wrote against the NXP datasheet) and Shamir (small-enough scope that we wrote it explicitly).

10. Threat model in detail

Stolen DB: Ciphertext only. K_vault lives on the chip. The server-side encrypted backup of K_vault requires MASTER_KEY (off-server) to unwrap.
Stolen DB + MASTER_KEY: Disaster-recovery branch only. Used only when the customer's chip is destroyed AND self-recovery via Shamir or backup chip has failed AND multi-operator approval has been granted. Audit-logged.
Stolen chip alone: No DB access; no value to the attacker. The chip won't talk meaningfully without a SkinID server in the loop.
Stolen chip + DB: Full access, but only while the attacker physically holds the chip. The customer revokes by contacting support or by tapping a backup chip.
Phishing: Passkeys are origin-bound (FIDO2), so phishing fails by construction. Passwords are stored under origin-keyed entries; the extension matches the origin before serving.
Server compromise (RCE): No customer credentials are decryptable without chip taps. An attacker can intercept taps from users who authenticate during the compromise window, but cannot exfiltrate the entire vault as plaintext.
Network eavesdropping: TLS 1.3, HSTS preload, certificate pinning on the iOS native client.
Insider attack: Operators cannot decrypt vaults. Recovery completion requires multi-operator approval. The tamper-evident audit log makes silent rewriting detectable.
Supply chain (ours): All JS is self-hosted. The single dependency that is a real risk surface is pyca/cryptography; we track its CVEs and ship within 7 days of a critical advisory.
Supply chain (NXP): DESFire EV3 has a documented originality signature, which we verify. We have no countermeasure for a state-actor-level NXP backdoor; this is a residual risk for any DESFire-based authenticator.
Coercion of a single user: Out of scope. If someone forces you to scan, the system can't tell.
Physical removal of the chip: Out of scope. Body autonomy is yours.
Compromise of the user's primary device while the chip is tapped: Bounded. The window of decryptability ends when the user lifts their hand.
State-level adversary with arbitrary code execution on the user's device: Out of scope. No consumer authenticator defends against this.

11. Compliance map

12. How to verify our claims

This document describes the architecture as of 2026-05. Material changes will be noted at the bottom of /security.