
Threat Modelling Connected Products: A Practical Guide for Product Teams

How to threat-model an IoT product without academic overhead — STRIDE applied to firmware, cloud, and mobile, with a worked example for a smart-lock product.

Threat modelling has a reputation for academic overhead — multi-day workshops, 80-page reports, and very little that ends up in the codebase. It does not have to be that way. The version we use on real projects produces a one-page artefact, takes half a day, and pays back the first time someone asks “did we think about that?”.

The shape of the artefact

We capture five things, on one page, for every connected product:

  1. Assets — what is worth defending
  2. Trust boundaries — where data crosses from one zone of control to another
  3. Adversaries — who would attack, why, and what tools they have
  4. Threats — what can go wrong at each boundary
  5. Mitigations — what we will do about it, with a status

That’s the whole document. It lives in the repo as THREAT_MODEL.md next to the architecture diagram, is reviewed every sprint, and is updated whenever the architecture changes.
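A skeleton for that file might look like the following. The section names are the five above; the status vocabulary and table columns are our convention, not a standard — adjust to taste:

```markdown
# Threat Model — <product name>

## Assets
- ...

## Trust boundaries
- ...

## Adversaries
- ...

## Threats
| ID  | Boundary | STRIDE category | Description |
|-----|----------|-----------------|-------------|

## Mitigations
| Threat ID | Mitigation | Status (planned / implemented / accepted-risk) |
|-----------|------------|------------------------------------------------|

## Change log
- YYYY-MM-DD — initial model
```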

STRIDE, briefly

STRIDE is a memorable framing for the threats themselves:

  • Spoofing — pretending to be something or someone you are not
  • Tampering — modifying data or code in transit or at rest
  • Repudiation — performing an action and later denying it, because the system kept no record to prove otherwise
  • Information disclosure — leaking data that should be private
  • Denial of service — making the system unavailable
  • Elevation of privilege — getting more access than you should

Walk each STRIDE category against each trust boundary. Most threats fall out within an hour.
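The walk can be mechanised as a checklist: cross every trust boundary with every STRIDE category and ask what could go wrong in that cell. A minimal Python sketch — the boundary names are placeholders, substitute your product’s own:

```python
# Generate the STRIDE-per-boundary checklist a workshop walks through.
STRIDE = [
    "Spoofing",
    "Tampering",
    "Repudiation",
    "Information disclosure",
    "Denial of service",
    "Elevation of privilege",
]

def stride_checklist(boundaries):
    """Return one (boundary, category) row per threat question to ask."""
    return [(b, s) for b in boundaries for s in STRIDE]

rows = stride_checklist(["phone <-> lock (BLE)", "phone <-> cloud (HTTPS)"])
print(len(rows))  # 2 boundaries x 6 categories = 12 questions
```

Most cells get a one-line answer or an explicit “not applicable”; the empty cells are where the hour of discussion goes.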

A worked example: BLE smart lock

Imagine a BLE smart-lock product: the phone app pairs with the lock, generates time-limited guest codes, sends lock/unlock commands, and syncs audit logs to a cloud backend.

Assets

  1. The lock itself (physical asset and the door it protects)
  2. The cryptographic keys on the lock (long-lived device key, session keys)
  3. Audit logs (forensic value, regulatory value)
  4. Guest codes (time-bounded, but compromise of an active code is a real-world break-in)

Trust boundaries

  1. Phone app ↔ lock (over BLE)
  2. Phone app ↔ cloud backend (over HTTPS)
  3. Lock ↔ cloud backend (over BLE-via-phone or direct cellular if equipped)
  4. Cloud backend ↔ admin dashboard (over HTTPS)
  5. Lock ↔ physical attacker with screwdriver — the boundary engineers forget

Adversaries

  1. Opportunistic remote attacker — botnet operator scanning the internet for default credentials
  2. Targeted local attacker — someone within BLE range with knowledge of the product
  3. Insider — disgruntled employee with admin dashboard access
  4. Sophisticated physical attacker — someone with the device on a bench, JTAG probe, and time

STRIDE walk

For boundary 1 (phone ↔ lock over BLE):

  • S: an attacker’s device clones the lock’s BLE address and impersonates the lock to capture credentials from the phone. Mitigation: BLE pairing with mutual authentication, device identity tied to a factory-provisioned X.509 cert.
  • T: attacker MITMs an unlock command. Mitigation: end-to-end encrypted commands signed with session keys derived from mutual auth, replay protection via nonce + counter.
  • R: user denies sending an unlock command. Mitigation: signed audit log on the lock with timestamp, session ID, and command hash.
  • I: passive sniffer captures guest code or session keys. Mitigation: BLE LE Secure Connections, session keys never derive from static device key alone.
  • D: BLE jamming or repeated reconnect attempts exhaust battery. Mitigation: rate-limit reconnect on the lock, fall back to physical keyway after N failures, alert via secondary channel.
  • E: app gains admin privileges through a bug in the lock’s BLE service. Mitigation: BLE service requires authenticated session for admin operations, integer overflow checks on length fields, fuzzed in CI.
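The replay protection mentioned under tampering can be sketched as a monotonic counter bound into each command’s MAC. This is an illustrative stdlib-only sketch — a real lock would derive the session key from the mutual-auth handshake and use authenticated encryption, not bare HMAC over plaintext commands:

```python
import hashlib
import hmac

class CommandChannel:
    """Sketch of counter-based replay protection over an authenticated session.

    `session_key` is assumed to come from the BLE mutual-auth handshake;
    the lock rejects any command whose counter does not strictly increase.
    """

    def __init__(self, session_key: bytes):
        self.key = session_key
        self.last_counter = 0  # lock-side monotonic counter

    def tag(self, counter: int, command: bytes) -> bytes:
        # Bind the counter into the MAC so it cannot be stripped or swapped.
        msg = counter.to_bytes(8, "big") + command
        return hmac.new(self.key, msg, hashlib.sha256).digest()

    def accept(self, counter: int, command: bytes, tag: bytes) -> bool:
        # Constant-time MAC check first, then strict counter monotonicity.
        if not hmac.compare_digest(self.tag(counter, command), tag):
            return False
        if counter <= self.last_counter:
            return False  # replayed or stale command
        self.last_counter = counter
        return True

lock = CommandChannel(b"\x01" * 32)
t = lock.tag(1, b"UNLOCK")
print(lock.accept(1, b"UNLOCK", t))  # True: fresh command
print(lock.accept(1, b"UNLOCK", t))  # False: replay of the same counter
```

The counter also has to survive reboots (stored in flash or monotonic hardware counter), otherwise a power cycle re-opens the replay window.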

For boundary 5 (physical attacker):

  • S: clone the device by reading flash. Mitigation: flash encryption with per-device key, secure boot, JTAG fuses blown in production.
  • T: replace firmware to bypass auth. Mitigation: secure boot chain, signed firmware verified against fuse-stored public key.
  • I: extract stored guest codes. Mitigation: codes hashed at rest, never stored as plaintext.
  • E: fault injection attack on secure boot. Mitigation: shielded crypto routines, tamper detection, periodic re-verification of boot integrity at runtime.
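The “hashed at rest” mitigation for guest codes deserves a concrete shape: short numeric codes have little entropy, so they need at least a salted, deliberately slow hash, or an attacker who dumps flash can brute-force them quickly. A stdlib PBKDF2 sketch — the iteration count is illustrative and should be tuned to the MCU’s crypto budget, and binding in a device-unique secret would further raise the bar:

```python
import hashlib
import hmac
import os

def hash_code(code, salt=None, iterations=50_000):
    """Salted PBKDF2 of a guest code; returns (salt, digest) for storage."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", code.encode(), salt, iterations)
    return salt, digest

def verify_code(code, salt, stored, iterations=50_000):
    # Constant-time comparison so timing does not leak digest prefixes.
    candidate = hashlib.pbkdf2_hmac("sha256", code.encode(), salt, iterations)
    return hmac.compare_digest(candidate, stored)

salt, stored = hash_code("483915")
print(verify_code("483915", salt, stored))  # True
print(verify_code("000000", salt, stored))  # False
```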

This is half a page of artefact. It is enough.

How to run the workshop

Half a day with the right four people:

  1. The lead firmware engineer
  2. The lead cloud engineer
  3. A security-aware engineer (internal or external)
  4. The product manager who can say “we’re not protecting against that — too expensive”

In the room:

  • 30 minutes: identify assets and trust boundaries on a whiteboard
  • 30 minutes: pick adversaries and rate them by capability
  • 90 minutes: STRIDE walk each boundary, capture threats
  • 60 minutes: assign mitigations, set status (planned, implemented, accepted-risk)
  • 30 minutes: write up the one-page artefact

If a workshop runs longer than four hours, the team is overthinking it. Iteration after the first ship beats perfection at the planning stage.

What to do with the artefact

Three things, monthly:

  1. Track mitigation status — “Planned” should not stay “Planned” for two consecutive months
  2. Review with new architecture changes — every PR that crosses a trust boundary triggers a re-read
  3. Test against it — penetration testing scoped against the actual threats, not generic OWASP IoT Top 10 (though the latter is also useful)
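The first of those checks is easy to automate in CI. A sketch that scans mitigation lines for stale “planned” entries — the line format here is a made-up convention for illustration; adapt the parsing to however your own THREAT_MODEL.md records status:

```python
import datetime
import re

# Hypothetical line format: "- [planned since 2026-03-01] <mitigation>"
PLANNED = re.compile(r"\[planned since (\d{4}-\d{2}-\d{2})\]\s*(.+)")

def stale_mitigations(lines, today, max_age_days=60):
    """Return mitigations that have sat in 'planned' beyond the allowed age."""
    stale = []
    for line in lines:
        m = PLANNED.search(line)
        if not m:
            continue
        since = datetime.date.fromisoformat(m.group(1))
        if (today - since).days > max_age_days:
            stale.append(m.group(2))
    return stale

doc = [
    "- [planned since 2026-01-10] Flash encryption with per-device key",
    "- [implemented] Signed firmware verified at boot",
]
print(stale_mitigations(doc, datetime.date(2026, 5, 9)))
```

Fail the pipeline when the list is non-empty and “Planned” stops being a place where mitigations go to die.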

What we hand over

For every connected product engagement we ship a populated THREAT_MODEL.md in the repo, with:

  • The five sections above filled in
  • Each mitigation cross-referenced to the code or config that implements it
  • A list of accepted risks signed off by the product owner
  • A change-log section tracking model evolution

The document beats any number of slide decks. It is the artefact your security auditor reads first, the one a new engineer reads on day three, and the one your incident response team will ask for at 3 AM.

If you want a second pair of eyes on a threat model — or a half-day session to build one — we run these as fixed-scope engagements.

By Diglogic Engineering · May 9, 2026

