
skin mounted medical devices

Bionic M2M: Are skin-mounted M2M devices the future of eHealth?

In the popular American TV series that aired on ABC in the 70s, Steve Austin is the “Six Million Dollar Man”, a former astronaut with bionic implants. The show and its spinoff, The Bionic Woman (Lindsay Wagner playing a former tennis player who was rebuilt with bionic parts similar to Austin’s after a parachuting accident), were hugely successful.

Modern M2M communication has expanded beyond a one-to-one connection and changed into a system of networks that transmits data to personal appliances using wireless data networks.

M2M networks are much more than remote meter reading.

The fastest growing M2M segment in Germany, with an average annual growth of 47 percent, will be consumer electronics, with over 5 million M2M SIM cards. The main growth driver is “tracking and tracing”. (Research by E-Plus)

What would happen if the personal appliance was part of the person?

Physiological measurement and stimulation techniques that exploit interfaces to the skin have been of interest for over 80 years, beginning in 1929 with electroencephalography from the scalp.

A new class of electronics based on transparent, flexible, 50-micron silicon film laminates onto the skin with conformal contact and adhesion based on van der Waals interaction. See: Epidermal Electronics, John Rogers et al., Science 2011.

This new class of device is mechanically invisible to the user, is accurate compared to traditional electrodes and has RF connectivity. The thin 50-micron film serves as a temporary support for manual mounting of these systems on the skin, in an overall construct that is directly analogous to a temporary transfer tattoo, as can be seen in the picture above.

Film-mounted devices can provide high-quality signals with information on all phases of the heartbeat, EMG (muscle activity) and EEG (brain activity). Using silicon RF diodes, devices can provide short-range RF transmission at 2 GHz. Note the antenna on the device.

After the device is mounted on the skin, the PVA (polyvinyl alcohol) backing can be washed away and the device peeled back with a pair of tweezers. When completely removed, the system collapses on itself because of its extreme deformability and skin-like physical properties.

Because of their inherently transparent, unguarded, low-cost and mass-deployed nature, epidermal-mounted medical devices invite new threats that are not mitigated by current security and wireless technologies.

Skin-mounted devices might also become attack vectors themselves, allowing a malicious attacker to apply a device to the spine, and deliver low-power stimuli to the spinal cord.

How do we secure epidermal electronics devices on people?

Let’s start with some definitions:

  • Verification asks whether the device is built and configured for its intended use (for example, measuring EMG activity and communicating the data to an NFC (near field communication) device).
  • Validation means the ability to assess the security state of the device, i.e. whether or not it has been compromised.
  • RIMs (Reference Integrity Measurements) enable vendors and healthcare providers to define the desired target configuration of a device, for example whether it is configured for RF communications.

There are 3 key threats when it comes to epidermal electronics:

  1. Physical attacks: reflashing before application to the skin in order to modify the intended use.
  2. Compromise of credentials: brute-force attacks as well as malicious cloning of credentials.
  3. Protocol attacks against the device: MITM on first network access, DoS, remote reprogramming.

What are the security countermeasures against these threats?  We can consider a traditional IT security model and a trusted computing model.

Traditional IT security model?

Very large numbers of low-cost, distributed devices renders an  access-control security model inappropriate. How would a firewall on an epidermal electronics device enforce intended use, and manage access-control policies? What kind of policies would you want to manage? How would you enforce installation of the internal firewall during the manufacturing process?

Trusted computing model?

A “trusted computing model”  may be considered as an alternative security countermeasure to access control and policy management.

An entity can be “trusted” if it predictably and observably behaves in the expected manner for its intended use. But what does “intended use” mean in the case of epidermal electronics that are used for EKG, EEG and EMG measurements on people?

Can the traditional, layered, trusted computing models used in the telecommunications world be used to effectively secure cheap, low-cost, epidermal electronics devices?

In an M2M trusted computing model there are 3 methods: autonomous validation, remote validation and semi-autonomous validation. We will examine each and try to determine how effective it is as a security countermeasure for the key threats of epidermal electronics. See: “Security and Trust for M2M Communications” – Inhyok Cha, Yogendra Shah, Andreas U. Schmidt, Andreas Leicher, Mike Meyerstein.

Autonomous validation

This is essentially the trust model used for smart cards, where the result of local verification is true or false.

Autonomous validation does not depend on the patient herself or the healthcare provider. Local verification is assumed to have occurred before the skin-mounted device attempts communication or performs a measurement operation.

Autonomous validation makes 3 fundamental assumptions – all 3 are wrong in the case of epidermal electronics:

  1. The local verification process is assumed to be perfectly secure since the results are not shared with anyone else, neither the patient nor the healthcare provider.
  2. We assume that the device itself is completely trusted in order to enforce security policies.
  3. We assume that a device failing self-verification cannot deviate from its “intended use”.

Device-based security can be broken, and cheap autonomous skin-mounted devices can be manipulated, probably much more easily than cell phones since, for now at least, they are much simpler. Wait until 2015 when we have dual-core processors on a film.

In addition, autonomous validation does not mitigate partial compromise attacks (for example – the device continues to measure EMG activity but also delivers mild shocks to the spine).
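To make this concrete, here is a minimal Python sketch of what a purely local, boolean self-check looks like; the firmware images, reference hash and boot routine are invented for illustration and are not taken from any real epidermal electronics platform:

```python
import hashlib

def autonomous_validation(firmware_image: bytes, expected_sha256: str) -> bool:
    # Local self-check: hash the firmware and compare it to a stored reference.
    # The True/False result never leaves the device, so neither the patient nor
    # the healthcare provider can tell whether the check ran or what it found.
    return hashlib.sha256(firmware_image).hexdigest() == expected_sha256

def boot(firmware_image: bytes, expected_sha256: str) -> str:
    # Enforcement relies on the very device being verified: a compromised image
    # that patches this routine, or that still matches the reference hash while
    # misusing the hardware, simply reports "running".
    if autonomous_validation(firmware_image, expected_sha256):
        return "running: EMG sampling + RF reporting"
    return "disabled"

if __name__ == "__main__":
    good_image = b"emg-sampler-v1"                        # stand-in for a real firmware image
    reference = hashlib.sha256(good_image).hexdigest()
    print(boot(good_image, reference))                    # running
    print(boot(b"emg-sampler-v1+stimulator", reference))  # disabled, but only if this code path is intact
```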

Remote validation

Remote validation has connectivity, scalability and availability issues. It is probably a very bad idea to rely on network availability in order to remotely validate a skin-mounted epidermal electronics device.

In addition to the network and server infrastructure required to support remote validation, there would also have to be a huge database of RIMs to enable vendors and healthcare providers to define the target configurations of devices.

Run-time verification is meaningless if it is not directly followed by validation, which requires frequent handshaking with central service providers, which in turn increases traffic and creates additional vulnerabilities, such as side-channel attacks.

Remote validation of personally-mounted devices compromises privacy, since the configuration may be virtually unique to a particular person, and interception of validation messages could reveal the identity, based on location, even without decrypting payloads.

Discrimination by vendors also becomes possible, as manipulation and control of the RIM databases could lock out other applications/vendors.

Semi-Autonomous Validation

Semi-autonomous validation divides verification and enforcement between the device and the healthcare provider.

In semi-autonomous validation, the device verifies itself locally and then sends the results in a network message to the healthcare provider, who can decide whether to notify the user/patient if the device has been compromised or does not match its intended use.

Such a system needs to ensure authentication, integrity, and confidentiality of messages sent from epidermal electronics devices to the healthcare provider.

RIM certificates are a key part of semi-autonomous validation and would be signed by a trusted third party/CA.

Semi-autonomous validation also allows for more granular delegation of control to the device itself or the healthcare provider – depending on the functionality.
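A minimal Python sketch of this flow, assuming a symmetric per-device key shared with the healthcare provider (a real deployment would use asymmetric keys and RIM certificates signed by a trusted third party/CA); the field names and configuration values are invented for illustration:

```python
import hashlib
import hmac
import json

# Assumption: a per-device key provisioned at manufacture and known to the provider.
DEVICE_KEY = b"per-device-secret-provisioned-at-manufacture"

# RIM: the vendor/healthcare provider's reference configuration for this device class.
RIM = {"use": "EMG", "radio": "NFC", "stimulation": False}

def device_report(config: dict, key: bytes) -> dict:
    """Device side: verify locally, then report the measured configuration, not just a boolean."""
    payload = json.dumps(config, sort_keys=True).encode()
    return {"config": config,
            "mac": hmac.new(key, payload, hashlib.sha256).hexdigest()}

def provider_validate(report: dict, key: bytes, rim: dict) -> str:
    """Provider side: check the integrity of the report, then compare it against the RIM."""
    payload = json.dumps(report["config"], sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, report["mac"]):
        return "reject: report tampered with or wrong key"
    if report["config"] != rim:
        return "alert: device does not match its intended use"
    return "ok"

if __name__ == "__main__":
    print(provider_validate(device_report(RIM, DEVICE_KEY), DEVICE_KEY, RIM))
    rogue = dict(RIM, stimulation=True)   # e.g. spinal stimulation quietly enabled
    print(provider_validate(device_report(rogue, DEVICE_KEY), DEVICE_KEY, RIM))
```

The point is the division of labor: the device does the measuring, while the provider, who holds the RIM, does the judging and the patient notification.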

Summary

Epidermal electronics devices are probably going to play a big part in the future of healthcare, monitoring vital signs in a simple, cheap and non-invasive way. These are medical devices mounted directly on the skin and used today primarily for measuring vital signs; they are not a Windows PC or Android smartphone that can be rebooted if there is a problem.

As their computing capabilities develop, current trusted computing/security models will be inadequate for epidermal electronics devices, and attention needs to be devoted as soon as possible to building a security model (probably semi-autonomous) that will mitigate threats from malicious attackers.

References

  1. Inhyok Cha, Yogendra Shah, Andreas U. Schmidt, Andreas Leicher, Mike Meyerstein, “Security and Trust for M2M Communications”.
  2. John Rogers et al., “Epidermal Electronics”, Science, 2011.

How to secure patient data in a healthcare organization

If you are a HIPAA covered entity, or a business associate vendor to a HIPAA covered entity, the question of securing patient data is central to your business. If you are a big organization, you probably don’t need my advice, since you have a lot of money to spend on expensive security and compliance consultants.

But if you are a small to mid-size hospital, nursing home or medical device vendor without a large budget for security compliance, the natural question to ask is “How can I do this for as little money as possible?”

You can do some research online and then hire a HIPAA security and compliance consultant who will walk you through the security safeguards in CFR 45 Appendix A and help you implement as many items as possible. This seems like a reasonable approach, but the more safeguards you implement, the more money you spend; moreover, you do not necessarily know if your security has improved, since you have not examined your value at risk, i.e. how much money it will cost you if you have a data security breach.

If you read CFR 45 Appendix A carefully, you will note that the standard wants you to do a top-down risk analysis, risk management and periodic information security activity review.

The best way to do that top-down risk analysis is to build probable threat scenarios, considering what could go wrong: an employee stealing a hard disk, for the information on it, from a nursing station in an ICU where a celebrity is recuperating, or a hacker sniffing the hospital wired LAN for PHI.

Threat scenarios as an alternative to compliance control policies

When we perform a software security assessment of a medical device or healthcare system, we think in terms of “threat scenarios” or “attack scenarios”, and the result of that thinking manifests itself in planning, penetration testing, security countermeasures, and follow-up for compliance. The threat scenarios are not “one size fits all”.  The threat scenarios for an AIDS testing lab using medical devices that automatically scan and analyze blood samples, or an Army hospital using a networked brain scanning device to diagnose soldiers with head injuries, or an implanted cardiac device with mobile connectivity are all totally different.

We evaluate the medical device or healthcare product from an attacker point of view, then from the management team point of view, and then recommend specific cost-effective, security countermeasures to mitigate the damage from the most likely attacks.

In our experience, building a security portfolio on attack scenarios has 3 clear benefits:
  1. A robust, cost-effective security portfolio based on attack analysis results in robust compliance over time.
  2. Executives relate well to the concepts of threat modeling and attack analysis. Competing, understanding the value of their assets, taking risks and protecting themselves from attackers is really, at the end of the day, why executives get the big bucks.
  3. Threat scenarios are a common language between IT, security operations teams and the business line managers.

This last benefit is extremely important in your healthcare organization, since business delegates security to IT and IT delegates security to the security operations teams.

As I wrote in a previous essay, “The valley of death between IT and security”, there is a fundamental disconnect between IT operations (built on maintaining predictable business processes) and security operations (built on mitigating vulnerabilities).

Business executives delegate information systems to IT and information security to security people on the tacit assumption that they are the experts in information systems and security.  This is a necessary but not sufficient condition.

In the current environment of rapidly evolving types of attacks (hacktivism, nation-state attacks, credit card attacks mounted by organized crime, script kiddies, competitors, malicious insiders and more), it is essential that IT and security communicate effectively about the types of attacks their organization may face and the potential business impact.

If you have any doubt about the importance of IT and security talking to each other, consider that leading up to 9/11, the CIA  had intelligence on Al Qaeda terrorists and the FBI investigated people taking flying lessons, but no one asked the question why Arabs were learning to fly planes but not land them.

With this fundamental disconnect between 2 key maintainers of information protection, it is no wonder that organizations are having difficulty effectively protecting their assets – whether Web site availability for an online business, PHI for a healthcare organization or intellectual property for an advanced technology firm.

IT and security  need a common language to execute their mission, and I submit that building the security portfolio around most likely threat scenarios from an attacker perspective is the best way to cross that valley of death.

There seems to be a tacit assumption with many executives that regulatory compliance is already a common language of security for an organization.  Compliance is a good thing as it drives organizations to take action on vulnerabilities but compliance checklists like PCI DSS 2.0, the HIPAA security rule, NIST 800 etc, are a dangerous replacement for thinking through the most likely threats to your business.  I have written about insecurity by compliance here and here.

Let me illustrate why compliance control policies are not the common language we need by taking an example from another compliance area – credit cards.

PCI DSS 2.0 has an obsessive preoccupation with anti-virus. It does not matter if you have a 16 quad-core Linux database server that is not attached to the Internet, with no removable devices and no Windows connectivity. PCI DSS 2.0 wants you to install ClamAV and open the server up to the Internet for the daily anti-virus signature updates. This is an example of a compliance control policy that is not rooted in a probable threat scenario and that creates additional vulnerabilities for the business.

Now, consider some deeper ramifications of compliance control policy-based security.

When a QSA or HIPAA auditor records an encounter with a customer, he records the planning, penetration testing, controls, and follow-up, not under a threat scenario, but under a control item (like access control). The next auditor that reviews the compliance posture of the business needs to read about the planning, testing, controls, and follow-up and then reverse-engineer the process to arrive at which threats are exploiting which vulnerabilities.

Other actors such as government agencies (DHS for example) and security researchers go through the same process. They all have their own methods of churning through the planning, test results, controls, and follow-up, to reverse-engineer the data in order to arrive at which threats are exploiting which vulnerabilities.

This ongoing process of “reverse-engineering” is the root cause of a series of additional problems:

  • Lack of overview of the security threats and vulnerabilities that really count
  • Insufficient connection to best-practice security controls, and no indication of which controls to follow or which have been followed
  • No connection between controls and security events, except a circumstantial one
  • No ability to detect and warn of negative interactions between countermeasures (for example, a firewall configuration that blocks Internet access but also blocks operating system updates, enabling malicious insiders or outsiders to back-door into the systems from inside the network and compromise firewalled services)
  • No archiving or demoting of less important and solved threat scenarios (since the data models are control-based)
  • Lack of overview of the security status of a particular business, only a series of historical observations, disclosed or not disclosed. Is Bank of America getting better at data security or worse?
  • An excess of event data that cannot possibly be read by the security and risk analyst at every encounter
  • Confidentiality and privacy borders that are hard to define, since the borders are defined in terms of networks, systems and applications, not confidentiality and privacy.

Using value at risk to figure out how much a breach will really cost you

Your threat scenarios must consider asset values (your patient information, systems, management attention, reputation), vulnerabilities, threats and possible security countermeasures. Threat analysis as a methodology does not look for ROI or ROSI (there is no ROI for security anyhow) but considers the best and cheapest way to reduce asset value at risk.
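As a back-of-the-envelope illustration of that arithmetic, here is a Python sketch; the asset values, probabilities, damage fractions and residual-risk factors are invented numbers, not figures from any real assessment:

```python
def value_at_risk(assets, threats, mitigations=None):
    # Sum of (asset value x annual probability x damage fraction) per threat,
    # scaled down by the residual-risk factor of any mitigation applied to it.
    mitigations = mitigations or {}
    total = 0.0
    for threat, asset, probability, damage in threats:
        total += assets[asset] * probability * damage * mitigations.get(threat, 1.0)
    return total

if __name__ == "__main__":
    assets = {"patient records": 500_000, "reputation": 300_000}   # asset values in $
    threats = [
        # (threat, asset hit, annual probability, fraction of asset value damaged)
        ("stolen disk from nursing station", "patient records", 0.10, 0.60),
        ("LAN sniffing of PHI",              "patient records", 0.05, 0.30),
        ("public breach disclosure",         "reputation",      0.05, 0.50),
    ]
    baseline = value_at_risk(assets, threats)
    # Residual risk factors after two cheap countermeasures (disk and transit encryption).
    mitigated = value_at_risk(assets, threats, {
        "stolen disk from nursing station": 0.1,
        "LAN sniffing of PHI": 0.2,
    })
    print(f"baseline value at risk: ${baseline:,.0f}/year")
    print(f"after encryption:       ${mitigated:,.0f}/year")
    print(f"annual risk reduction:  ${baseline - mitigated:,.0f}")
```

The precision of the numbers matters less than the discipline: every proposed safeguard can be compared against the reduction in value at risk it actually buys.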

And, as we asked at the start of the article, the question is “How can I do this for as little money as possible?”


Anatonme – a hand-held device for improving patient-doctor communications

From a recent article in Healthcare Global.

Studies suggest that 30-50 percent of patients are likely to give up treatments early.  Microsoft Research has developed an innovative, hand-held medical device called Anatonme to help patients understand their issue and complete their treatment plan more often.

We’ve been doing research and development into private, controlled social networking to reinforce private communications between doctor and patient. It’s gratifying to see Microsoft Research doing work in this area.

Private social networking for doctors and patients provides highly effective secure data sharing between doctors and patients. It allows patient-mediated input of data before visits to the office, making the clinical data more accurate and complete and boosting the trust between doctor/healthcare worker and patient.

A private social network has a controlled 1 to N (doctor to patients) topology and physiological and emotional context, unlike Facebook that has a distracting social graph and entertainment context.

A private social network for doctors and patients also provides powerful information exchange and search:

  1. Captures critical events on a timeline (for example blood pressure or dizziness), enabling the doctor to respond in a timely fashion.
  2. Reconciles differences between what the doctor ordered and what the patient did.
  3. Granular access control for sharing of data between doctor, patient and referrals.

If you’re interested in hearing more – contact us.


Beyond the firewall

Beyond the firewall – data loss prevention

What a simple idea. It doesn’t matter how they break into your network or servers – if attackers can’t take out your data, then you’ve mitigated the threat.

Data loss prevention is a category of information security products that has matured from Web / email content filtering products into technologies that can detect unauthorized network transfer of valuable digital assets such as credit cards. This paper reviews the motivation for and the taxonomies of advanced content flow monitoring technologies that are being used to audit network activity and protect data inside the network.

Motivation – why prevent data loss?

The majority of hacker attacks and data loss events are not on the IT infrastructure but on the data itself.  If you have valuable data (credit cards, customer lists, ePHI) then you have to protect it.

Content monitoring has traditionally meant monitoring of employee or student surfing and filtering out objectionable content such as violence, pornography and drugs. This sort of Web content filtering became “mainstream” with wide-scale deployments in schools and larger businesses by commercial closed-source companies such as McAfee and Bluecoat and Open Source products such as Censor Net and Spam Assassin. Similar signature-based technologies are also used to perform intrusion detection and prevention.

However, starting in 2003, a new class of content monitoring products started emerging that is aimed squarely at protecting firms from unauthorized “information leakage”, “data theft” or “data loss” no matter what kind of attack was mounted. Whether the data was stolen by hackers, leaked by malicious insiders or disclosed via a Web application vulnerability, the data is flowing out of the organization. The attack vector in a data loss event is immaterial if we focus on preventing the data loss itself.

The motivation for using data loss prevention products is economic, not behavioral; transfer of digital assets such as credit cards and PHI by trusted insiders or trusted systems can cause much more economic damage to a business than viruses.

Unlike viruses, once a competitor steals data you cannot reformat the hard disk and restore from backup.

Companies often hesitate from publicly reporting data loss events because it damages their corporate brand, gives competitors an advantage and undermines customer trust no matter how much economic damage was actually done.

Who buys DLP (data loss prevention)?

This is an interesting question. On one hand, we understand that protecting intellectual property, commercial assets and compliance-regulated data like ePHI and credit cards is  essentially an issue of  business risk management. On the other hand, companies like Symantec and McAfee and IBM sell security products to IT and information security managers.

IT managers focus on maintaining predictable execution of business processes, not dealing with unpredictable, rare, high-impact events like data loss. Information security managers find DLP technology interesting (and even titillating, since it detects details of employee behavior, good and bad), but an information security manager who buys DLP technology is essentially admitting that his perimeter security (firewall, IPS) and policies and procedures are inadequate.

While data loss prevention may be a problematic sale for IT and information security staffers, it plays well into the overall risk analysis,  risk management and compliance processes of the business unit.

Data loss prevention for senior executives

There seem to be three schools of thought on this with senior executives:

  1. One common approach is to ignore the problem and brush it under the compliance carpet using a line of reasoning that says “If I’m PCI DSS/HIPAA compliant, then I’ve done what needs to be done, and there is no point spending more money on fancy security technologies that will expose even more vulnerabilities”.
  2. A second approach is to perform passive data loss detection and monitor the flow of data (like email and file transfers) without notifying employees or the whole world. Anomalous detection events can then be used to improve business processes and mitigate system vulnerabilities. The advantage of passive monitoring is that neither employees nor hackers can detect a Layer 2 sniffer device, and a sniffer is immune to configuration and operational problems in the network. If it can’t be detected on the network, then this school of thought has plausible deniability.
  3. A third approach takes data loss prevention a step beyond security and turns it into a competitive advantage. A smart CEO can use data loss prevention system as a deterrent and as a way of enhancing the brand (“your credit cards are safer with us because even if the Saudi hacker gets past our firewall and into the network, he won’t be able to take the data out”).

A firewall is not enough

Many firms now realize that a firewall is not enough to protect digital assets inside the network and look towards incoming/outgoing content monitoring. This is because: 

  1. The firewall might not be properly configured to stop all the suspicious traffic.

  2. The firewall doesn’t have the capability to detect all types of content, especially embedded content in tunneled protocols.

  3. The majority of hacker attacks and data loss events are not on the IT infrastructure but on the data itself.

  4. Most hackers do not expect creative defenses so they assume that once they are in, nobody is watching their nasty activities.

  5. The firewall itself can be compromised. As we see more and more day-0 attacks and trusted insider threats, it is good practice to add additional independent controls.

Detection

Sophisticated incoming and outgoing (data loss prevention or DLP) content monitoring technologies basically use three paradigms for detecting security events:

  1. AD- Anomaly Detection – describes normal network behavior and flags everything else
  2. MD- Misuse Detection – describes attacks and flags them directly
  3. BA – Burglar alarm – describes abnormal network behavior (“detection by exception”)

In anomaly detection, new traffic that doesn’t match the model is labeled as suspicious or bad and an alert is generated. The main limitation of anomaly detection is that if it is too conservative, it will generate too many false positives (false alarms) and over time the analyst will ignore it. On the other hand, if a tool rapidly adapts the model to evolving traffic, then too few alerts will be generated and the analyst will again ignore it.
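A minimal Python sketch of the idea, using a mean/standard-deviation baseline of outbound bytes per host per hour; the traffic figures and the threshold are invented for illustration:

```python
import statistics

def build_baseline(observations):
    # Model "normal" as the mean and standard deviation of bytes sent per host per hour.
    return statistics.mean(observations), statistics.stdev(observations)

def is_anomalous(value, baseline, k=3.0):
    # Flag anything more than k standard deviations away from the mean.
    mean, stdev = baseline
    return abs(value - mean) > k * stdev

if __name__ == "__main__":
    # Hourly outbound byte counts for one workstation during a normal week (invented figures).
    history = [120_000, 95_000, 130_000, 110_000, 105_000, 125_000, 118_000]
    baseline = build_baseline(history)
    print(is_anomalous(115_000, baseline))     # False: ordinary traffic
    print(is_anomalous(2_500_000, baseline))   # True: a bulk transfer worth an alert
    # Tuning k is the whole game: too low and the analyst drowns in false positives,
    # too high (or a baseline that adapts too fast) and real exfiltration blends into "normal".
```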

Misuse detection describes attacks and flags them directly, using a database of known attack signatures; the engine constantly tries to match the actual traffic against the database. If there is a match, an alert is generated. The database typically contains rules for:

  1. Protocol Stack Verification – RFC conformance checks, ping of death, stealth scanning etc.
  2. Application Protocol Verification – WinNuke, invalid packets that cause DNS cache corruption etc.
  3. Application Misuse – misuse that causes applications to crash or enables a user to gain super-user privileges, typically due to buffer overflows or implementation bugs.
  4. Intruder detection – known attacks can be recognized by the effects caused by the attack itself. For example, Back Orifice 2000 sends traffic on its default port, 31337.
  5. Data loss detection – for example by file types, compound regular expressions, linguistic and/or statistical content profiling (a minimal sketch follows this list). Data loss prevention or detection needs to work at a much higher level than intrusion detection, since it needs to understand file formats and analyze the actual content, such as Microsoft Office attachments in a Web mail session, as opposed to doing simple pattern matching of an HTTP request string.
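Here is a minimal Python sketch of that last category: a compound regular expression plus a Luhn checksum to pick real card numbers out of outbound content. The pattern and test strings are illustrative; a production DLP engine would also unpack file formats before matching:

```python
import re

CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_valid(digits: str) -> bool:
    # Luhn checksum: weeds out random digit runs that merely look like card numbers.
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_card_numbers(text: str):
    # Yield only candidates that pass the checksum, to keep false positives down.
    for match in CARD_PATTERN.finditer(text):
        digits = re.sub(r"[ -]", "", match.group())
        if luhn_valid(digits):
            yield digits

if __name__ == "__main__":
    body = "Order confirmed, card 4111 1111 1111 1111, ref 1234 5678 9012 3456."
    print(list(find_card_numbers(body)))   # only the Luhn-valid number is flagged
```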

Using a burglar alarm model, the analyst needs a deep understanding of the network and of what should not happen on it. He builds rules that model how the monitored network should conceptually work, in order to generate alerts when suspicious traffic is detected. The richer the rules database, the more effective the tool. The advantage of the burglar alarm model is that a good network administrator can leverage his knowledge of servers, segments and clients (for example a Siebel CRM server which is a client to an Oracle database server) in order to focus in and manage by exception.
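A minimal Python sketch of such a rule, using the CRM/database example above; the host names, port and flow list are invented for illustration:

```python
# Flows the administrator expects to see (illustrative names, not a real site).
ALLOWED_FLOWS = {
    ("siebel-crm", "oracle-db", 1521),   # the CRM server talking to its database
    ("webmail",    "smtp-gw",   25),
}
MONITORED_HOSTS = {"siebel-crm", "oracle-db"}

def burglar_alarm(src, dst, port):
    # Alert only on traffic that touches a monitored server and is not on the expected list.
    if src in MONITORED_HOSTS or dst in MONITORED_HOSTS:
        if (src, dst, port) not in ALLOWED_FLOWS:
            return f"ALERT: unexpected flow {src} -> {dst}:{port}"
    return None

if __name__ == "__main__":
    print(burglar_alarm("siebel-crm", "oracle-db", 1521))    # None: the flow we modelled
    print(burglar_alarm("oracle-db", "198.51.100.7", 443))   # the database server talking to the Internet
```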

What about prevention?

Anomaly detection is an excellent way of identifying network vulnerabilities, but a customer cannot prevent extrusion events based on general network anomalies such as the use of anonymous FTP. Misuse detection has a conceptual problem of its own: unless a detected event can be prevented (either internally with a TCP reset, or by notifying the router or firewall), the usefulness of the device is limited to forensics collection.

What about security management?

SIM, or security information management, consolidates reporting, analysis, event management and log analysis. There are a number of tools in this category; Netforensics is one. SIM systems do not perform detection or prevention functions; they manage and receive reports from other systems. Checkpoint, for example, is a vendor that provides this functionality through partnerships.

Summary

There are many novel DLP/data loss prevention products; most provide capabilities far ahead of both business and IT infrastructure management, which are only now beginning to look towards content monitoring behind the firewall.

DLP (Data loss prevention) solutions join an array of content and application-security products around the traditional firewall. Customers are already implementing a multitude of network security products for Inbound Web filtering, Anti-virus, Inbound mail filtering and Instant Messaging enforcement along with products for SIM and integrated log analysis.

The industry has reached the point where the need to simplify and reduce IT security implementation and operational costs becomes a major purchasing driver, perhaps more dominant than any single best-of-breed product.

Perhaps data loss prevention needs to become a network security function that is part of the network switching fabric; providing unified network channel and content security.

Software Associates helps healthcare customers design and implement such a unified network channel and enterprise content security solution today, enabling customers to easily define policies such as “No Instant Messaging on our network” or “Prevent patient data from leaving the company over any channel that is not an authorized SSH client/server”.

For more information contact us.



Apps vs. the Web, enemy or friend?

Saw this item on Gigaom.

George Colony, the chairman and CEO of Forrester Research, re-ignited a minor firestorm recently, with a presentation at the LeWeb conference in which he argued that the web is dead, and being replaced by the app economy — with mobile and smartphone apps that leverage the cloud or other services rather than the open web.

I have written here and here about the close correlation between Web application security and Web performance.

I know that Mr. Colony has sparked some strong sentiment in the community, in particular from Dave Winer:

If I can’t link in and out of your world, it’s not even close to a replacement for the web. It would be as silly as saying that you don’t need oceans because you have a bathtub. How nice your bathtub is. Try building a continent around it.

Of course, that is neither true nor relevant.

Many apps are indeed well connected, and the apps that are not wired-in don’t have to be wired; the app is simply doing something useful for the individual consumer (like iAnnotate displaying a PDF file of music on an iPad or Android tablet).

iAnnotate turns your iPad into a world-class productivity tool for reading, annotating, organizing, and sending PDF files. Join the 100,000s of users who turn to iAnnotate for their PDF annotating needs. We designed iAnnotate to suit your individual workflow.

I became even more cognizant that apps may overtake the open Web over the past 2 weeks, when Google Apps was going through some rough spots and it was almost impossible to read email to software.co.il or access our calendars… except from our Android tablets and Nexus S smartphones. Chrome and Google Apps were almost useless but the Android devices just chugged on.

There is a good reason why apps are overtaking the open browser-based web.

They are simply more accessible, easier to use and faster.

This is no surprise as I noted last year:

The current rich Web 2.0 application development and execution model is broken.

Consider that a Web 2.0 application has to serve browsers and smart phones. It’s based on a heterogeneous server stack with 5-7 layers (database, database connectors, middleware, scripting languages like PHP, Java and C#, application servers, web servers, caching servers and proxy servers). On the client side there is an additional heterogeneous stack of HTML, XML, Javascript, CSS and Flash.

On the server-side, we have

  • 2-5 languages (PHP, SQL, tcsh, Java, C/C++, PL/SQL)
  • Lots of interface methods (hidden fields, query strings, JSON)
  • Server-side database management (MySQL, MS SQL Server, Oracle, PostgreSQL)

On the client side, we have

  • 2-5 languages (Javascript, XML, HTML, CSS, Java, ActionScript)
  • Lots of interface methods (hidden fields, query strings, JSON)
  • Local data storage – often duplicating session and application data stored on the server data tier.

A minimum of 2 languages on the server side (PHP, SQL) and 3 on the client side (Javascript, HTML, CSS) turns developers into frequent searchers for answers on the Internet (many of which are incorrect), driving up the frequency of software defects relative to a single-language development platform where the development team has a better chance of attaining maturity and proficiency. More bugs mean more security vulnerabilities.

More bugs in this complex, broken execution stack mean more things will go wrong, and as devices and apps are almost universally accessible now, customers like you and me will not tolerate 2 weeks of downtime from a Web 2.0 service provider. If we have the alternative of using an app on a tablet device, we will take that alternative and not look back.


Killed by code

I think it’s only a question of time before we have a drive-by execution of a politician with an ICD (implanted cardiac device).

I’ve been talking to our medical device customers about mobile security of implanted devices for over a year now.

I  gave a talk about mobile medical device security at the Logtel Mobile security conference in Herzliya a year ago and discussed proof of concept attacks on implanted cardiac devices with mobile connectivity.

Mocana is a company with a pretty impressive line of security products for embedded devices – working at the firmware layer, it appears. Mocana secures the “Internet of Things” – the 20 billion non-PC devices that are increasingly connecting to networks across every sector of our economy, including Smartphones, Datacom, Smartgrid, Federal, Consumer and Medical. These devices already outnumber workstations on the Internet by about five to one, representing a $900 billion market that’s growing twice as fast as the PC market.

The Mocana Deviceline blog reports that “Alarmed by new research showing the increasing vulnerability of wireless implanted medical devices, two members of Congress have asked for hearings on the security of these devices.”

Mobile and medical and regulatory is a pretty sexy area and I’m not surprised that politicians are picking up on the issues. After all, there was an episode of CSI New York last year that used the concept of an EMP to kill a person with an ICD, although I imagine that a radio exploit of an ICD or embedded insulin pump might be hard to identify unless the device itself was logging external commands.

Congress was more concerned about the regulatory issues than the patient safety and security issues:

Representatives Anna Eshoo (D-CA) and Ed Markey (D-MA), both members of the House Energy and Commerce Committee, sent a letter last August asking the GAO to study the safety and reliability of wireless healthcare technology and report on the extent to which the FCC is:

  • Identifying the challenges and risks posed by the proliferation of medical implants and other devices that make use of broadband and wireless technology.
  • Taking steps to improve the efficiency of the regulatory processes applicable to broadband and wireless enabled medical devices.
  • Ensuring wireless enabled medical devices will not cause harmful interference to other equipment.
  • Overseeing such devices to ensure they are safe, reliable, and secure.
  • Coordinating its activities with the Food and Drug Administration.

At  Black Hat August 2011, researcher Jay Radcliffe, who is also a diabetic, reported how he used his own equipment to show how attackers could compromise instructions to wireless insulin pumps.

Radcliffe found that his monitor had no verification of the remote signal. Worse, the pump broadcasts its unique ID, so he was able to send the device a command that put it into SUSPEND mode (a DoS attack). That meant Radcliffe could overwrite the device configuration to inject more insulin. And unlike other medications, insulin cannot be removed from the body once injected; the only recourse is to eat or drink something sugary.

The FDA’s position, that it is sufficient to warn medical device makers that they are responsible for updating equipment after it is sold, and the downplaying of the threat by industry groups like the Advanced Medical Technology Association, are not constructive.

Following the proof-of-concept attack on ICDs by Daniel Halperin from the University of Washington, Kevin Fu from U. Mass Amherst et al. (“Pacemakers and Implantable Cardiac Defibrillators: Software Radio Attacks and Zero-Power Defenses”), this is a strident wake-up call to medical device vendors to implement more robust protocols and tighten up the software security of their devices.
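As a sketch of what “more robust protocols” could mean at the command level, here is a minimal Python example of authenticating programmer commands with a shared key and a replay counter; the key provisioning, message fields and dosage command are assumptions for illustration, not any vendor’s actual protocol:

```python
import hashlib
import hmac
import json

DEVICE_KEY = b"key-provisioned-when-device-is-paired"   # assumption: shared with the programmer

def sign(command: dict, counter: int, key: bytes) -> dict:
    # Programmer side: bind the command to a fresh counter value and MAC both.
    body = json.dumps(command, sort_keys=True).encode() + str(counter).encode()
    return {"command": command, "counter": counter,
            "mac": hmac.new(key, body, hashlib.sha256).hexdigest()}

class PumpReceiver:
    """Reject commands that are unsigned, tampered with, or replayed."""
    def __init__(self, key: bytes):
        self.key = key
        self.last_counter = 0

    def accept(self, message: dict) -> bool:
        body = json.dumps(message["command"], sort_keys=True).encode() + str(message["counter"]).encode()
        expected = hmac.new(self.key, body, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, message["mac"]):
            return False          # wrong key or modified command
        if message["counter"] <= self.last_counter:
            return False          # replayed command
        self.last_counter = message["counter"]
        return True

if __name__ == "__main__":
    pump = PumpReceiver(DEVICE_KEY)
    print(pump.accept(sign({"bolus_units": 2}, 1, DEVICE_KEY)))   # True: authentic, fresh command
    print(pump.accept(sign({"bolus_units": 2}, 1, DEVICE_KEY)))   # False: replay of an old command
    print(pump.accept(sign({"suspend": True}, 2, b"attacker")))   # False: not signed with the device key
```

On a battery-powered implant the cost of the cryptography itself matters, which is exactly the trade-off the Halperin/Fu paper explores with its zero-power defenses.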


Medical device security

What is more important – patient safety or the health of the enterprise hospital Windows network?  What is more important – writing secure code or installing an anti-virus?

Software Associates specializes in helping medical device vendors achieve HIPAA compliance and improve the data and software security of their products in hospital and mobile environments.

A threat analysis was performed on a medical device used in intensive care units. The threat analysis used the PTA (Practical Threat Analysis) methodology.

Our analysis considered threats to three assets: medical device availability, the hospital enterprise network and patient confidentiality/HIPAA compliance. Following the threat analysis, a prioritized plan of security countermeasures was built and implemented including the issue of propagation of viruses and malware into the hospital network (See Section III below).

Installing anti-virus software on a medical device is less effective than implementing other security countermeasures that mitigate more severe threats – ePHI leakage, software defects and USB access.

A novel benefit of our approach is derived by providing the analytical results as a standard threat model database, which can be used by medical device vendors and customers to model changes in risk profile as technology and operating environment evolve. The threat modelling software can be downloaded here.
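As an illustration of what a record in such a threat model database might look like, here is a minimal Python sketch; the field names, asset values and probabilities are invented and are not the PTA schema or figures from the actual assessment:

```python
from dataclasses import dataclass, field

@dataclass
class Threat:
    name: str
    assets: dict                    # asset name -> value at stake ($)
    probability: float              # estimated annual probability of occurrence
    damage: float                   # fraction of asset value lost if the threat materializes
    countermeasures: list = field(default_factory=list)   # (name, cost, residual risk factor)

    def risk(self) -> float:
        residual = 1.0
        for _name, _cost, factor in self.countermeasures:
            residual *= factor
        return sum(self.assets.values()) * self.probability * self.damage * residual

if __name__ == "__main__":
    usb_malware = Threat(
        name="malware introduced via USB service update",
        assets={"device availability": 200_000, "hospital enterprise network": 400_000},
        probability=0.15,
        damage=0.5,
    )
    print(f"unmitigated risk:    ${usb_malware.risk():,.0f}/year")
    # Re-running the model after a change in technology or operating environment
    # is just a matter of editing the record and recomputing.
    usb_malware.countermeasures.append(("disable auto-run, signed updates only", 10_000, 0.2))
    print(f"with countermeasure: ${usb_malware.risk():,.0f}/year")
```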



Why Microsoft Windows is a bad idea for medical devices

I’m getting some push back on LinkedIn on my articles on banning Microsoft Windows from medical devices that are installed in hospitals – read more about why Windows is a bad idea for medical devices here and here.

Scott Caldwell tells us that the FDA doesn’t rule “out” or “in” any particular technology, including Windows Embedded.

Having said that, Microsoft has very clear language in their EULA regarding the use of Windows Embedded products:

“The Products are not fault-tolerant and are not designed, manufactured or intended for any use requiring fail-safe performance in which the failure of a Product could lead to death, serious personal injury, severe physical or environmental damage (“High Risk Activities”).”

Medical device vendors that use Windows operating systems for less critical devices, or for the user interface, are actually increasing the threat surface for a hospital, since any Windows host can be a carrier of malware that can take down the entire hospital network, regardless of its primary mission function, be it a user-friendly UI at a nursing station or an intensive care monitor at the bedside.

Medical device vendors that use Microsoft IT systems management “best-practices” often  take the approach of “bolting-on” third party solutions for anti-virus and software distribution instead of developing robust, secure software, “from the ground up” with a secure design, threat analysis, software security assessment and secure software implementation.

Installing third-party security solutions that need to be updated in the field may be inapplicable to an embedded medical device, as the MDA (Medical Device Amendments of 1976) clearly states:

These devices may enter the market only if the FDA reviews their design, labeling, and manufacturing specifications and determines that those specifications provide a reasonable assurance of safety and effectiveness. Manufacturers may not make changes to such devices that would affect safety or effectiveness unless they first seek and obtain permission from the FDA.

It’s common knowledge that medical device technicians use USB flash drives and notebook computers to update medical devices in the hospital. Given that USB devices and Windows computers are notoriously vulnerable to viruses and malware, there is a reasonable threat that a field update may infect the Windows-based medical device. If the medical device is isolated from the rest of hospital network, then the damage is  localized, but if the medical device is networked to an entire segment, then all other Windows based computers on that segment may be infected as well – propagating to the rest of the hospital in a cascade attack.

It’s better to get the software security right than to try to bolt on security after the implementation. Imagine that you had to buy the brakes for a new car and install them yourself after you got that bright new Lexus.

It is not unusual for medical device vendors to fall victim to the same Microsoft marketing messages used with enterprise IT customers – “lower development costs, and faster time to market” – when in fact Windows is so complex and vulnerable that the smallest issue may take a vendor months to solve. For example, try to get Windows XP to load a wireless driver without the shell. Things that may take months to research and resolve in Windows are often easily solved in Linux with some expertise and a few days’ work. That’s why you have professional medical device software security specialists like Software Associates.

With Windows, you get an application up and running quickly, but it is never as reliable and secure as you need.

With Linux, you need expertise to get up and running, and once it works, it will be as reliable and secure as you want.

Yves Rutschle says that outlawing Microsoft Windows from medical devices in hospitals “sounds too vendor-dependant to be healthy” (sic). (Seems to me that this would make the medical device industry LESS vendor-dependent, not more, considering the number of embedded Linux options out there.)

Yves suggests that instead, the FDA should create a “proper medical device certification cycle. If you lack of inspiration, ask the FAA how they do it, and maybe make the manufacturers financially responsible for any software failure impact, including death of a patient“. (The FDA does not certify medical devices, they grant pre-market approval).

I like a free market approach but consider this:

(Held) The MDA’s pre-emption clause bars common-law claims challenging the safety or effectiveness of a medical device marketed in a form that received premarket approval from the FDA. Pp. 8–17.

Maybe the FDA should learn from the FAA but in the meantime, it seems to me if the FDA pre-market validation process had an item requiring a suitable operating system EULA, that would pretty much solve the problem.


The ethical aspects of data security

Ethical breaches or data breaches.

I was standing in line at Ben Gurion airport, waiting for my bag to be x-rayed. A conversation started with a woman standing next to me in line. The usual sort – “Where are you traveling and what kind of work do you do?”. I replied that I was traveling to Warsaw and that I specialize in data security and compliance – helping companies prevent trusted insider theft and abuse of sensitive data.

She said, “well sure, I understand exactly what you mean – you help enforce ethical behavior of people in the organization”.

I stopped for a moment and asked her, hold on – “what kind of business are you in”? She said – “well, I worked in the GSS for years training teams tasked with protecting high echelon politicians and diplomats. I understand totally the notion of enforcing ethical behavior”. And now? I asked. Now, she said, ” I do the same thing, but on my own”.

Let’s call my new friend “Sarah”.

Sarah’s ethical approach was for me, a breath of fresh air. Until that point, I had defined our data security practice as an exercise in data collection, risk analysis and implementation of the appropriate technical security countermeasures to reduce the risk of data breach and abuse. Employees, competitors and malicious attackers are all potential attackers.  The objective is to implement a cost-effective portfolio of data security countermeasures – policies and procedures, software security assessments, network surveillance, data loss prevention (DLP) and encryption at various levels in the network and applications.

I define security as protecting information assets.

Sarah defines security as protecting ethical behavior.

In my approach to data security, employee behavior is an independent variable, something that might be observed but certainly, not something that can be controlled. Since employees, contractors and business partners tend to have their own weaknesses and problems that are not reported on the balanced score card of the company, my strategy for data security posits that it is more effective to monitor data than to monitor employees and prevent unauthorized transfer or modification of data instead of trying to prevent irrational or criminal behavior of people who work in the extended enterprise.

In Sarah’s approach to data security, if you make a set of rules and train for and enforce ethical behavior with good management, sensing and a dose of fear in the workplace, you have cracked the data security problem.

So – who is right here?

Well – we’re both right, I suppose.

The answer is that without asset valuation and analysis of asset vulnerabilities, protecting a single asset class (human resources, data, systems or network) while ignoring others, may be a mistake.

Let’s examine two specific examples in order to test the truth of this statement.

Consider a call center with 500 customer service representatives. They use a centralized CRM application, and they have telephones and email connectivity. Each customer service representative has a set of accounts that she handles. A key threat scenario is leaking customer account information to unauthorized people – private investigators, reporters, paparazzi etc. The key asset is customer data, but the key vulnerability is the people who breach ethical behavior on the way to breaching customer data.

In the case of customer service representatives breaching customer privacy, Sarah’s strategy of protecting ethical behavior is the best security countermeasure.

Now, consider a medical device company with technology that performs imaging analysis and visualization. The company deploys MRI machines in rural areas and uses the Internet to provide remote expert diagnosis for doctors and patients who do not have access to big city hospitals. The key asset transmitted from the systems for remote diagnosis is PHI (protected health information), and the key vulnerabilities are in the network interfaces, the application software and the operating systems that the medical device company uses.

In  the case of remote data transfer and distributed/integrated systems, a combined strategy of software security, judicious network design and operating system selection (don’t use Microsoft Windows…) is the correct way to protect the data.

My conversation with Sarah at the airport gave me a lot of food for thought.

Data loss prevention (DLP technology) is great  and  ethical employee behavior is crucial but they need to work hand in glove.

Where there are people, there is a need to mandate, monitor and reinforce ethical behavior using  a clearly communicated corporate strategy with employees and contractors. In an environment where users require freedom and flexibility in using applications such as email and search, the ethical behavior for protecting company assets starts with company executives who show from personal example that IT infrastructure is to be used to further the company’s business and improving customer service and not for personal entertainment, gain or gratification.

It’s the simple things in life that count.


Why outlawing Windows from embedded medical devices is a good idea

In a previous post The Microsoft Monoculture as a threat to national security, I suggested that the FDA might consider banning Windows as an operating system platform for medical devices and their accompanying information management systems.

One of my readers took umbrage at the notion of legislating one monoculture (Microsoft) with another (Linux) and how the Linux geeks are hooked on the CLI just like Windows users are hooked on a GUI.

The combination of large numbers of software vulnerabilities, user lock-in created by integrating applications with Windows, the complexity of Microsoft products and their code, and Microsoft’s predatory trade practices is diametrically different from Linux and the FOSS movement.

One of the biggest threats to medical devices in hospitals is the widespread use of USB flash disk drives and Windows notebooks to update medical device software. With the infamous auto-run feature that Windows applies to USB drives, flash memory is an easy attack vector for propagating malware via Windows-based medical devices into a hospital network. This is one (and not the only) reason why I am campaigning against the use of Windows in medical devices.

This  has nothing to do with the CLI or GUI of the operating system and personal preferences for a user interface.

This has everything to do with manufacturing secure embedded medical devices that must survive in most demanding, heterogeneous and mission critical environment one can imagine – a modern hospital.

I never advocated mandating Linux by law for medical devices.

It might be possible to mandate a complex set of software security requirements instead of outlawing Windows in embedded medical devices, as a more politically correct but far more costly alternative for the FDA and the US taxpayer.

Regardless of the politics involved (and they are huge…) – if the FDA were to remove Windows from an approved list of embedded medical device operating systems – the costs to the FDA would decrease since the FDA would need less Windows expertise for audits and the threat surface they would have to cover for critical events would be smaller.
