
How to Secure a Device You Can't Turn Off

Pacemakers can't be patched like phones. Explore the impossible constraints of medical device security and the solutions keeping patients alive and protected.

Hyle Editorial

You cannot patch a pacemaker the way you patch a phone. You cannot turn it off for maintenance. You cannot recall it without surgery. Medical device security must be designed to work within constraints that no other industry faces. In 2024, approximately 3 million people worldwide live with implantable cardiac devices, each one a computer running legacy software embedded in human tissue, connected wirelessly to a world of evolving cyber threats.

The numbers are stark: the average pacemaker battery lasts 8-12 years, meaning any security update that increases power consumption by even 1% could shave months off a patient's device lifespan. A typical smartphone receives security patches monthly. An implantable cardioverter-defibrillator (ICD) might receive one or two firmware updates across its entire operational life—if any at all.

This creates a fundamental tension that no other cybersecurity domain encounters: how do you defend a device that must remain functional 100% of the time, cannot be physically accessed, has severe energy constraints, and where every modification requires regulatory approval?

The Four Constraints That Change Everything

Constraint 1: Battery Life is Non-Negotiable

Every wireless transmission, every encryption operation, every millisecond of processor activity draws from a finite energy reservoir that cannot be recharged without surgery. Consider the power budget of a modern pacemaker:

$$P_{total} = P_{pacing} + P_{sensing} + P_{telemetry} + P_{crypto} + P_{idle}$$

Where telemetry and cryptographic operations can consume up to 30% of total battery capacity over the device lifetime. A single wireless session for a firmware update might consume 0.1% of remaining battery life—acceptable once, potentially catastrophic if repeated.
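To make the budget concrete, here is a back-of-the-envelope sketch of how a small increase in average draw translates into lost device lifetime. All figures (battery capacity, per-subsystem currents) are illustrative assumptions, not values from any real device datasheet:

```python
# Hypothetical power-budget sketch; all numbers are illustrative.
BATTERY_CAPACITY_MAH = 1200          # pacemaker battery, order of magnitude
HOURS_PER_YEAR = 24 * 365

# Assumed average current draw per subsystem, in microamps
draw_ua = {
    "pacing":    8.0,
    "sensing":   2.0,
    "telemetry": 2.5,
    "crypto":    1.0,
    "idle":      0.5,
}

def lifetime_years(draw: dict, capacity_mah: float) -> float:
    """Battery life = capacity / total average draw."""
    total_ma = sum(draw.values()) / 1000.0   # uA -> mA
    return capacity_mah / total_ma / HOURS_PER_YEAR

baseline = lifetime_years(draw_ua, BATTERY_CAPACITY_MAH)

# A 1% increase in total draw shortens the lifetime by roughly 1%:
inflated = {k: v * 1.01 for k, v in draw_ua.items()}
months_lost = (baseline - lifetime_years(inflated, BATTERY_CAPACITY_MAH)) * 12
print(f"baseline: {baseline:.1f} years, 1% extra draw costs {months_lost:.1f} months")
```

With these assumed numbers the baseline lands near 10 years, and a 1% increase in draw costs on the order of a month of lifetime, which is the arithmetic behind the "even 1% could shave months off" claim above.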

[!INSIGHT] Unlike conventional IoT devices where security can be "baked in" through continuous updates, implantable devices must ship with security architectures designed to remain effective for 10+ years without modification. The attack surface of 2034 must be anticipated in 2024.

Constraint 2: No Downtime Tolerance

A typical server can be taken offline for patching during maintenance windows. A pacemaker patient's heart does not recognize maintenance windows. The device must remain operational even during security updates, creating the requirement for "hot patching"—updating firmware while the device continues to deliver life-sustaining therapy.

The technical challenge is immense. The update process requires:

  1. Dual-bank memory architecture: Maintaining two complete firmware images, with instant fallback if the update fails
  2. Atomic state preservation: Ensuring cardiac rhythm detection algorithms never lose more than one beat of data
  3. Rollback capability: Returning to the previous firmware version within milliseconds if anomalies occur
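The dual-bank pattern above can be sketched as a toy device model. This is a simplified illustration (class and method names are hypothetical); real cardiac firmware update logic is far more involved and regulator-reviewed:

```python
# Sketch of a dual-bank firmware update with verification and rollback.
import hashlib

class DualBankDevice:
    def __init__(self, firmware: bytes):
        self.banks = [firmware, firmware]    # two complete images
        self.active = 0                      # index of the running bank

    def _digest(self, image: bytes) -> str:
        return hashlib.sha256(image).hexdigest()

    def update(self, new_image: bytes, expected_digest: str) -> bool:
        """Write to the inactive bank; switch only if verification passes.
        Therapy keeps running from the active bank throughout."""
        inactive = 1 - self.active
        self.banks[inactive] = new_image
        if self._digest(self.banks[inactive]) != expected_digest:
            return False                     # active bank untouched, no switch
        self.active = inactive               # atomic switch
        return True

    def rollback(self) -> None:
        """Fall back to the previous image in a single state change."""
        self.active = 1 - self.active

dev = DualBankDevice(b"v1-firmware")
ok = dev.update(b"v2-firmware", hashlib.sha256(b"v2-firmware").hexdigest())
```

The key property is that the running image is never modified in place: a failed verification leaves the device exactly where it was, and `rollback` is a pointer flip rather than a reflash.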

This complexity explains why firmware updates for cardiac devices require separate FDA review before deployment, a process that often takes 18-24 months from development to release.

Constraint 3: Physical Access Requires Surgery

In traditional security architecture, physical access is the last line of defense—if all else fails, you can physically disconnect or replace the compromised device. For implantable medical devices (IMDs), physical access costs $15,000-$50,000 and carries a 1-3% risk of serious complication.

"In no other domain must the defender accept that the adversary may eventually have physical proximity to the target, while the defender cannot physically intervene without causing harm to an innocent party."

Dr. Kevin Fu, University of Michigan Medical Device Security Center

This asymmetry forces security architects to assume that determined attackers will eventually achieve physical proximity. The defense model must account for attacks launched from centimeters away, not just across networks.

Constraint 4: FDA Regulatory Burden

Every software change to a Class III medical device requires FDA review. A security patch that modifies one line of code triggers the same regulatory pathway as a new device feature. The premarket review process for software modifications (a PMA supplement for Class III devices, rather than the lighter 510(k) clearance used for lower-risk classes) typically requires:

  • Verification testing across all device configurations
  • Validation of clinical safety
  • Documentation of software development lifecycle compliance
  • Biocompatibility assessment if patient contact changes

[!NOTE] In 2017, the FDA issued a safety communication about St. Jude Medical's Merlin@home pacemaker transmitters, revealing vulnerabilities that could theoretically drain device batteries. The fix required patients to visit clinics for firmware updates—a process that took over 6 months to deploy across the installed base.

Security Strategies That Actually Work

Network Segmentation and Air Gaps

The most effective protection for high-risk IMDs remains the oldest trick in security: don't connect what doesn't need connecting. Modern pacemaker programmers (the external devices that communicate with implants) operate on dedicated short-range telemetry protocols, such as the MICS/MedRadio medical implant band, rather than standard WiFi or Bluetooth.

The communication range is deliberately limited to approximately 2-3 meters, and modern protocols require the programmer to physically touch or nearly touch the patient's chest to initiate sessions. This "security by proximity" transforms the attack model from "anyone on the internet" to "someone in the same room with specialized equipment."
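One way a device can enforce proximity is to gate session acceptance on received signal strength: a programmer several meters away simply cannot produce a strong enough signal. The sketch below uses a standard log-distance path-loss model; the transmit power, reference loss, and threshold are illustrative assumptions, not values from any real protocol:

```python
# "Security by proximity" sketch: accept a session only if the estimated
# distance to the programmer is within a few meters. All constants are
# illustrative assumptions.
TX_POWER_DBM = 0.0        # assumed programmer transmit power
PATH_LOSS_EXP = 2.0       # free-space path-loss exponent
LOSS_AT_1M_DB = 40.0      # assumed reference loss at 1 m
MAX_RANGE_M = 3.0

def estimated_distance_m(rssi_dbm: float) -> float:
    """Log-distance model: rssi = tx - loss@1m - 10*n*log10(d)."""
    return 10 ** ((TX_POWER_DBM - LOSS_AT_1M_DB - rssi_dbm) / (10 * PATH_LOSS_EXP))

def accept_session(rssi_dbm: float) -> bool:
    return estimated_distance_m(rssi_dbm) <= MAX_RANGE_M
```

With these constants, a reading of -45 dBm maps to under 2 meters and is accepted, while -60 dBm maps to roughly 10 meters and is refused. Real implementations combine this with physical wake-up mechanisms rather than relying on signal strength alone, since an attacker with a high-gain antenna can distort the estimate.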

Passive Allow-Lists: The Anti-App Store

Unlike smartphones that run arbitrary applications, implantable devices operate on strict allow-list principles. The device will only execute code that was present at manufacture time, digitally signed by the manufacturer, and validated against a hardware root of trust.

The boot process verification follows a chain:

$$\text{ROM}_{bootloader} \rightarrow \text{Verify}(\text{Hash}(\text{Firmware}_{app})) \rightarrow \text{Execute}(\text{Firmware}_{app})$$

Any modification to the firmware image breaks the cryptographic chain and causes the device to halt—not ideal for a life-sustaining system, which is why multiple redundant verification paths exist.
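The verification chain can be sketched as follows. This is a minimal illustration that uses an HMAC over the firmware digest as a stand-in for a real asymmetric signature scheme; the key name and return values are hypothetical:

```python
# Minimal secure-boot sketch: a ROM-resident verifier checks a signed
# firmware digest against a hardware-anchored key before executing.
import hashlib
import hmac

ROOT_OF_TRUST_KEY = b"burned-into-hardware-at-manufacture"  # hypothetical

def sign(firmware: bytes) -> bytes:
    """Manufacturer-side signing of the firmware digest."""
    digest = hashlib.sha256(firmware).digest()
    return hmac.new(ROOT_OF_TRUST_KEY, digest, hashlib.sha256).digest()

def boot(firmware: bytes, signature: bytes) -> str:
    """ROM bootloader: verify the image, then execute or halt."""
    expected = sign(firmware)
    if not hmac.compare_digest(expected, signature):
        return "HALT"                 # chain broken: refuse to run
    return "EXECUTE"

fw = b"pacing-app-v1"
result = boot(fw, sign(fw))           # untampered image: "EXECUTE"
```

A single flipped bit in the image changes the digest, the signature check fails, and `boot` halts, which is exactly why production devices pair this check with the redundant fallback paths described above.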

Rolling Cryptographic Keys with Physical Reinitialization

One solution to the key management problem uses the physical programming session as a key renewal opportunity. Each time a patient visits their cardiologist, the programmer can exchange new cryptographic keys with the implant, effectively "rotating" credentials without requiring battery-draining wireless activity.

This approach:

  • Limits the window of vulnerability for any stolen credentials
  • Uses scheduled medical appointments (every 6-12 months) as security checkpoints
  • Leverages the physical security of clinical environments
  • Adds no additional patient burden
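A minimal sketch of this rotation, assuming a hypothetical key store and derivation scheme (real devices would use a vetted KDF and authenticated key agreement, not a bare hash):

```python
# Sketch of in-clinic key rotation: each programming session installs a
# new long-term key, so stolen credentials expire by the next visit.
import hashlib
import os
import time

class ImplantKeyStore:
    def __init__(self, initial_key: bytes):
        self.long_term_key = initial_key
        self.rotated_at = time.time()

    def rotate_in_clinic(self, programmer_nonce: bytes) -> bytes:
        """Derive the next long-term key from the current one plus fresh
        entropy contributed by the programmer during the physical session."""
        new_key = hashlib.sha256(self.long_term_key + programmer_nonce).digest()
        self.long_term_key = new_key
        self.rotated_at = time.time()
        return new_key

store = ImplantKeyStore(os.urandom(32))
old_key = store.long_term_key
store.rotate_in_clinic(os.urandom(16))
```

Because rotation only happens over the short-range in-clinic link, it costs no extra wireless sessions and inherits the physical security of the clinical setting.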

Fail-Safe Communication Modes

Modern IMDs implement emergency communication modes that bypass normal authentication in life-threatening scenarios. If the device detects ventricular fibrillation, it will accept programming commands from any compatible programmer within range—the logic being that an unauthenticated defibrillation is preferable to death from cardiac arrest.
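The policy decision can be expressed as a small rule: authentication is required by default, but a detected life-threatening rhythm opens an unauthenticated path for therapy commands only. This is a purely illustrative sketch; the rhythm and command names are hypothetical:

```python
# Sketch of the availability-over-confidentiality policy described above.
EMERGENCY_RHYTHMS = {"ventricular_fibrillation", "ventricular_tachycardia"}
EMERGENCY_COMMANDS = {"defibrillate", "pace_emergency"}

def accept_command(authenticated: bool, command: str, rhythm: str) -> bool:
    """Allow unauthenticated therapy commands only during a detected
    life-threatening arrhythmia; everything else needs authentication."""
    if rhythm in EMERGENCY_RHYTHMS and command in EMERGENCY_COMMANDS:
        return True                   # availability wins: treat first
    return authenticated
```

Note the bypass is scoped to therapy delivery: an unauthenticated reprogramming command is still refused even during an emergency, limiting what an attacker could do by inducing or spoofing the emergency state.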

[!INSIGHT] This fail-safe mechanism represents a fundamental philosophical difference between medical device security and enterprise security: in medicine, availability always takes precedence over confidentiality. A "denied" message that results in patient death is a security failure, not a security success.

The Emerging Threat Landscape

The deployment of home monitoring systems has expanded the attack surface dramatically. Devices like the Merlin@home transmitter upload pacemaker data to manufacturer clouds via cellular or WiFi connections. While the implant itself remains short-range, the ecosystem now includes internet-connected components.

In 2022, researchers demonstrated that compromised home monitoring units could be leveraged to send malicious signals to nearby pacemakers—extending the theoretical attack range from "same room" to "same building." The attack chain would require:

  1. Initial compromise of the home monitoring device (standard IoT vulnerabilities)
  2. Proximity to the pacemaker patient (within ~5 meters)
  3. Execution of a replay or command injection attack
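A standard mitigation for the replay step in that chain is challenge-response freshness: the implant binds each command to a nonce it issued for the current session, so a captured message is useless later. The sketch below is illustrative, not a real device protocol:

```python
# Nonce-based replay protection sketch for the attack chain above.
import hashlib
import hmac
import os

SHARED_KEY = os.urandom(32)           # assumed provisioned at a clinic visit

def issue_challenge() -> bytes:
    """Implant side: fresh nonce per session, so old tags go stale."""
    return os.urandom(16)

def respond(challenge: bytes, command: bytes) -> bytes:
    """Programmer side: MAC the command bound to the session challenge."""
    return hmac.new(SHARED_KEY, challenge + command, hashlib.sha256).digest()

def verify(challenge: bytes, command: bytes, tag: bytes) -> bool:
    """Implant side: accept only tags computed over the live challenge."""
    return hmac.compare_digest(respond(challenge, command), tag)

c1 = issue_challenge()
tag = respond(c1, b"set_rate:60")     # accepted in the live session
c2 = issue_challenge()                # next session: old tag is rejected
```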

While no real-world attacks on pacemakers have been documented, the research community has responsibly disclosed multiple vulnerabilities that manufacturers have patched—demonstrating that the threat model is not merely theoretical.

Implications for the Future

The constraints of implantable security are driving innovation that benefits the broader IoT ecosystem. Techniques pioneered in medical devices—including ultra-low-power cryptography, formal verification of safety-critical code, and compromise-resilient architectures—are now being adopted in automotive systems, industrial control, and aerospace applications.

The regulatory landscape is also evolving. The FDA's 2014 guidance on medical device cybersecurity, updated in 2023, now requires manufacturers to demonstrate a "reasonable assurance of safety and effectiveness" that includes cybersecurity considerations throughout the device lifecycle, not just at point of sale.

For patients, the calculus remains favorable: the risk of device malfunction due to cyber attack remains orders of magnitude lower than the mortality risk from untreated cardiac conditions. The security community's job is to maintain that ratio as the threat landscape evolves.

Conclusion

Medical device security operates in a constraint space that would be considered unacceptable in any other domain: no downtime, no physical access, minimal energy budget, and regulatory barriers to every modification. The solutions that work—network segmentation, cryptographic rigidity, physical proximity requirements, and integration of security with clinical workflows—represent a mature security philosophy that prioritizes patient safety above theoretical security perfection.

Key Takeaway: The most secure medical device is one that does exactly what it was designed to do, communicates only with authenticated and physically proximate devices, and treats any deviation from expected behavior as a potential safety event requiring defensive response. In this domain, security is not about preventing all attacks—it is about ensuring that the patient survives both the device and any attempt to compromise it.

Sources: FDA Guidance on Medical Device Cybersecurity (2023), University of Michigan Medical Device Security Center publications, IEEE Symposium on Security and Privacy medical device research (2017-2024), St. Jude Medical Merlin@home FDA Safety Communication (2017), Medtronic Pacemaker Security Whitepapers, Journal of Cardiovascular Electrophysiology device programming guidelines
