Tag Archives: healthcare


Why Google is a bad idea for security and compliance

Dear consultant,

I worry because so many of the best practices documents I read say that we need to store data in the cloud in Canada if we do business in Canada. See page 19 here – Health privacy in Canada

Sincerely – consumer healthcare product manager

Dear consumer healthcare product manager –

First of all – don’t worry, be happy! And thanks for sharing.

Everyone uses Google to ask questions. That includes biomed security and compliance specialists like me (Danny Lieberman) and my company (Software Associates) in Israel.

The problems start when clients consult Google for their data security and privacy compliance matters. Unlike healthcare questions, where very large numbers of people ask and answer and the wisdom of the crowds kicks in, data security and privacy compliance is a niche market – and it’s very political.

The bottom line is that you do not have to host locally in Canada – until they change the law.

There is no specific requirement in Canadian law for in-country hosting (as there is in France).

Unfortunately – as elsewhere in the world – there is a certain amount of misinformed and/or politically-motivated media discussion following the Snowden affair.

People who write these documents like to point at the US Patriot Act as a reason for in-country hosting – without bothering to note what the Patriot Act really is: a US law intended to Provide Appropriate Tools Required to Intercept and Obstruct Terrorism, including lone-wolf terrorists.

The suggestion that the NSA will intercept the depersonalized consumer health records you collect in your application as part of the war on individual terrorists borders on the absurd.

Suppose you have a user who is obese and/or has Type II diabetes and/or is pregnant and/or loves to dance Zumba.  Is that information part of the NSA threat model for lone wolf terrorists?

I don’t think so.

The document in question makes an absurd suggestion on page 19: that individual doctors’ offices are more secure than a Tier 1 cloud service provider.

The data loss risk in a doctor’s office is several orders of magnitude higher than in a Microsoft, Amazon or Rackspace cloud hosting facility.

Since the document is misleading from a security and compliance perspective – misleading regarding the Patriot Act and incorrect regarding data loss risk – we cannot rely on it as a source of so-called “security best practices”.

In general – it is not best practice to use Google for security and compliance best practice.


Danny Lieberman – security and compliance specialist for biomed companies

Tell your friends and colleagues about us. Thanks!
Share this

Why big data for healthcare is dangerous and wrong

The McKinsey Global Institute recently published a report entitled Big data: The next frontier for innovation, competition, and productivity.

The McKinsey Global Institute report on big data is no more than a lengthy essay in fallacies, inflated hyperbole and faulty assumptions – lacking evidence for its claims and ignoring the two most important stakeholders in healthcare, namely doctors and patients.

They just gloss over the security and privacy implications of putting up a big target with a sign that says “Here is a lot of patient healthcare data – please come and steal me”.

System efficiency does not improve patient health

In health care, big data can boost efficiency by reducing systemwide costs linked to undertreatment and overtreatment and by reducing errors and duplication in treatment. These levers will also improve the quality of care and patient outcomes.

To calculate the impact of big-data-enabled levers on productivity, we assumed that the majority of the quantifiable impact would be on reducing inputs.

We held outputs constant—i.e., assuming the same level of health care quality. We know that this assumption will underestimate the impact as many of our big-data-enabled levers are likely to improve the quality of health by, for instance, ensuring that new drugs come to the market faster…

They don’t know that.

The MGI report does not offer any correlation between reduction in systemwide costs and improving the quality of care of the individual patient.

The report deals with the macroeconomics of the pharmaceutical and healthcare organization industries.

In order to illustrate why systemwide costs are not an important factor in the last mile of healthcare delivery, let’s consider the ratio of system overhead to primary care teams at Kaiser-Permanente – one of the largest US HMOs. At KP (according to their 2010 annual report), out of 167,000 employees, there were 16,000 doctors and 47,000 nurses.

Primary care teams account for under 40 percent of KP head-count (63,000 of 167,000). Arguably, big-data analytics might enable KP management to deploy services in a more effective way, but it does virtually nothing for the minority of the head-count that actually encounters patients on a day-to-day basis.
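Recomputing the primary-care share directly from the annual-report figures quoted above:

```python
# Head-count figures from the KP 2010 annual report, as quoted above.
total_employees = 167_000
doctors = 16_000
nurses = 47_000

primary_care = doctors + nurses          # 63,000
share = primary_care / total_employees

print(f"{primary_care:,} of {total_employees:,} employees = {share:.0%}")
# 63,000 of 167,000 employees = 38%
```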

Let’s not improve health, let’s make it cheaper to keep a lot of people sick

Note the sentence “assuming the same level of health care quality”. In other words: we don’t want to improve health, we want to reduce the costs of treating obese people who eat junk food and ride in cars instead of walking – instead of fixing the root causes. Indeed, MGI states later in their report:

Some actions that can help stem the rising costs of US health care while improving its quality don’t necessarily require big data. These include, for example, tackling major underlying issues such as the high incidence and costs of lifestyle and behavior-induced disease.

Let’s talk pie in the sky about big data and ignore costs and ROI

…the use of large datasets has the potential to play a major role in more effective and cost-saving care initiatives, the emergence of better products and services, and the creation of new business models in health care and its associated industries.

Being a consulting firm, MGI stays firmly seated on the fence and commits itself only to fluffy generalities about the potential to save costs with big data. The term ROI (return on investment) is not mentioned even once, because it would ruin their argument. As a colleague in the IT division of the Hadassah Medical Organization in Jerusalem told me yesterday: “Hadassah management has no idea how much storing all that vital-sign data from smartphones will cost. As a matter of fact, we don’t even have the infrastructure to store big data.”

It’s safe to wave a lot of high-falutin rhetoric around about $300BN of value creation (whatever that means) when you don’t have to justify a return on investment or ask grass-roots stakeholders if the research is crap.

MGI does not explain how that potential might be realized. It sidesteps a discussion of the costs of storing and analyzing big data, never asks if big data helps doctors make better decisions and it glosses over low-cost alternatives related to educating Americans on eating healthy food and walking instead of driving.

The absurdity of automated analysis

…we included savings from reducing overtreatment (and undertreatment) in cases where analysis of clinical data contained in electronic medical records was able to determine optimal medical care.

MGI makes an absurd assumption that automated analysis of clinical data contained in electronic medical records can determine optimal medical care.

This reminds me of a desert island joke.

A physicist and economist were washed up on a desert island. They have a nice supply of canned goods but no can-opener. To no avail, the physicist experiments with throwing the cans from a high place in the hope that they will break open (they don’t). The economist tells his friend “Why waste your time looking for a practical solution, let’s just assume that we have a can-opener!”.

The MGI report just assumes that we have a big-data can-opener and that big data can be analyzed to optimize medical care. (By the way, they do not even attempt to offer any quantitative indicators for optimization – like reducing the number of women who develop lymphedema after treatment for breast cancer. Lymphedema is a pandemic in Western countries, affecting about 140 million people worldwide.)

In Western countries, secondary lymphedema is most commonly due to cancer treatment. Between 38 and 89% of breast cancer patients suffer from lymphedema due to axillary lymph node dissection and/or radiation. See:

  1. Brorson, H.; Ohlin, K.; Olsson, G.; Svensson, B.; Svensson, H. (2008). “Controlled Compression and Liposuction Treatment for Lower Extremity Lymphedema”. Lymphology 41: 52–63.
  2. Kissin, M.W.; Guerci della Rovere, G.; Easton, D. et al. (1986). “Risk of lymphoedema following the treatment of breast cancer”. Br. J. Surg. 73: 580–584.
  3. Segerstrom, K.; Bjerle, P.; Graffman, S. et al. (1992). “Factors that influence the incidence of brachial oedema after treatment of breast cancer”. Scand. J. Plast. Reconstr. Surg. Hand Surg. 26: 223–227.

More is not better

We found very significant potential to create value in developed markets by applying big data levers in health care. CER (comparative effectiveness research) and CDS (clinical decision support) were identified as key levers and can be valued based on different implementations and timelines.

Examples include joining different data pools as we might see at financial services companies that want to combine online financial transaction data, the behavior of customers in branches, data from partners such as insurance companies, and retail purchase history. Also, many levers require a tremendous scale of data (e.g., merging patient records across multiple providers), which can put unique demands upon technology infrastructures. To provide a framework under which to develop and manage the many interlocking technology components necessary to successfully execute big data levers, each organization will need to craft and execute a robust enterprise data strategy.

The American Recovery and Reinvestment Act of 2009 provided some $20 billion to health providers and their support sectors to invest in electronic record systems and health information exchanges to create the scale of clinical data needed for many of the health care big data levers to work.

Why McKinsey is dead wrong about the efficacy of analyzing big EHR data

  1. The notion that more data is better (the approach taken by Google Health and Microsoft, endorsed by the Obama administration, and blindly adopted by MGI in their report).
  2. EHR is based on textual data and is not organized around the patient’s clinical issues.

Meaningful machine analysis of EHR is impossible

Current EHR systems store large volumes of data about diseases and symptoms in unstructured text, codified using systems like SNOMED-CT. Codification is intended to enable machine readability and analysis of records and to serve as a standard for system interoperability.

Even if the data were perfectly codified, it is impossible to achieve meaningful machine diagnosis from medical interview data that was uncertain to begin with and was not collected and validated using evidence-based methods.
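A minimal sketch of what such a codified record looks like (the note and the code below are illustrative examples of the style, not a real patient record or a verified SNOMED-CT export):

```python
# One EHR encounter: free text plus codified concepts (illustrative example).
encounter = {
    "note": ("Patient reports fatigue and thirst; suspect poorly controlled "
             "type 2 diabetes. Ordered HbA1c, advised diet review."),
    "codes": [
        {"system": "SNOMED-CT", "code": "44054006",
         "display": "Type 2 diabetes mellitus"},
    ],
}

# Machines can read the codes...
coded = {c["display"] for c in encounter["codes"]}

# ...but the hedging ("suspect"), the reasoning and the plan live only in
# the unstructured note, outside the reach of automated analysis.
print(coded)
```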

More data is less valuable for a basic reason

A fundamental observation about utility functions is that their shape is typically concave: increments of magnitude yield successively smaller increments of subjective value [2].

In prospect theory, concavity is attributed to the notion of diminishing sensitivity, according to which the more units of a stimulus one is exposed to, the less one is sensitive to additional units.

Under conditions of uncertainty in a medical diagnosis process, less information – as long as it is relevant – enables a better and faster decision, since the human brain has less data to process.
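The concavity claim is easy to see with any concave utility function – a sketch using log utility (my choice of function for illustration, not one taken from the cited paper):

```python
import math

def utility(x):
    """Concave utility: subjective value of x units of a stimulus."""
    return math.log(1 + x)

# Marginal value of each additional unit shrinks as exposure grows.
increments = [utility(x + 1) - utility(x) for x in range(5)]
print([round(d, 3) for d in increments])
# [0.693, 0.405, 0.288, 0.223, 0.182] – successively smaller increments
```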

Unstructured EHR data is not organized around the patient’s issues

When a doctor examines and treats a patient, he thinks in terms of “issues”, and the result of that thinking manifests itself in planning, tests, therapies, and follow-up.

In current EHR systems, when a doctor records the encounter, he records planning, tests, therapies, and follow-up – but not under a main “issue” entity, since there is no place for it.

The next doctor who sees the patient needs to read about the planning, tests, therapies, and follow-up, and then mentally reverse-engineer the process to figure out which issue is ongoing. Again, he manages the patient according to that issue and records everything as unstructured text unrelated to the issue itself.

Other actors – national registries, epidemiological data extraction, and the rest – go through the same process. Each has its own method of churning through planning, tests, therapies, and follow-up to reverse-engineer what the issue is, only to discard it again.

The “reverse-engineering” problem is the root cause for a series of additional problems:

  • Lack of overview of the patient
  • No connection to clinical guidelines, no indication of which guidelines to follow or which have been followed
  • No connection between prescriptions and diseases, except circumstantial
  • No ability to detect and warn for contraindications
  • No archiving or demoting of less important and solved problems
  • Lack of overview of status of the patient, only a series of historical observations
  • In most systems, no search capabilities of any kind
  • An excess of textual data that cannot possibly be read by every doctor at every encounter
  • Confidentiality borders are very hard to define
  • Very rigid and closed interfaces, making extension with custom functionality very difficult
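An issue-centric record – where planning, tests, therapies, and follow-up hang off an explicit “issue” entity instead of being reverse-engineered from free text – can be sketched as a data structure (a hypothetical schema, not any existing EHR standard):

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Issue:
    """A clinical issue that owns everything recorded about it."""
    name: str
    status: str = "active"            # active / resolved / demoted
    guideline: Optional[str] = None   # which clinical guideline is being followed
    plans: List[str] = field(default_factory=list)
    tests: List[str] = field(default_factory=list)
    therapies: List[str] = field(default_factory=list)
    follow_up: List[str] = field(default_factory=list)

# The next doctor reads the issue directly instead of reverse-engineering it
# from unstructured text scattered across encounters.
htn = Issue(name="Hypertension", guideline="local HTN protocol")
htn.tests.append("24h ambulatory blood pressure")
htn.therapies.append("ACE inhibitor, low dose")
print(htn.name, htn.status, htn.therapies)
```

An overview of the patient then falls out of the schema for free: the list of active issues, each carrying its own status, guideline and history.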


MGI states that their work is independent and has not been commissioned or sponsored in any way by any business, government, or other institution. True – but MGI does have consulting gigs with IBM and HP, which have vested interests in selling technology and services for big data.

The analogies used in the MGI report and their tacit assumptions probably work for retail in understanding sales trends of hemlines and high heels but they have very little to do with improving health, increasing patient trust and reducing doctor stress.

The study does not cite a single interview with a primary care physician or even a CEO of a healthcare organization that might support or validate their theories about big data value for healthcare. This is shoddy research, no matter how well packaged.

The MGI study makes cynical use of “framing” to influence readers’ perception of the importance of its research. By citing a large number like $300BN, it leads readers to assume that the impact of big data is, well, big. They don’t pay attention to the other stuff – like “well, it’s only a potential savings” or “we never considered whether primary care teams might benefit from big data” (they don’t).

At the end of the day, $300BN in value from big data healthcare is no more than a round number. What we need is less data and more meaningful relationships with our primary care teams.


[2] Current Directions in Psychological Science, Vol. 14, No. 5. http://faculty.chicagobooth.edu/christopher.hsee/vita/Papers/WhenIsMoreBetter.pdf


Bionic M2M: Are Skin-mounted M2M devices – the future of eHealth?

In the popular American TV series that aired on ABC in the 70s, Steve Austin is the “Six Million Dollar Man” – a former astronaut with bionic implants. The show and its spin-off, The Bionic Woman (Lindsay Wagner playing a former tennis player, rebuilt with bionic parts similar to Austin’s after a parachuting accident), were hugely successful.

Modern M2M communication has expanded beyond a one-to-one connection and changed into a system of networks that transmits data to personal appliances using wireless data networks.

M2M networks are much more than remote meter reading.

The fastest growing M2M segment in Germany, with average annual growth of 47 percent, will be consumer electronics, with over 5 M2M SIM-cards. The main growth driver is “tracking and tracing”. (Research by E-Plus.)

What would happen if the personal appliance was part of the person?

Physiological measurement and stimulation techniques that exploit interfaces to the skin have been of interest for over 80 years, beginning in 1929 with electroencephalography from the scalp.

A new class of electronics based on transparent, flexible 50-micron silicon film laminates onto the skin with conformal contact and adhesion based on van der Waals interaction. See: “Epidermal Electronics”, John Rogers et al., Science, 2011.

This new class of device is mechanically invisible to the user, is accurate compared to traditional electrodes and has RF connectivity. The thin 50-micron film serves as a temporary support for manual mounting of these systems on the skin, in an overall construct directly analogous to a temporary transfer tattoo, as can be seen in the above picture.

Film-mounted devices can provide high-quality signals with information on all phases of the heartbeat, EMG (muscle activity) and EEG (brain activity). Using silicon RF diodes, devices can provide short-range RF transmission at 2 GHz. Note the antenna on the device.

After mounting it onto the skin, one can wash away the PVA and peel the device back with a pair of tweezers.  When completely removed, the system collapses on itself because of its extreme deformability and skin-like physical properties.

Due to their inherently transparent, unguarded, low-cost and mass-deployed nature, epidermally mounted medical devices invite new threats that are not mitigated by current security and wireless technologies.

Skin-mounted devices might also become attack vectors themselves, allowing a malicious attacker to apply a device to the spine, and deliver low-power stimuli to the spinal cord.

How do we secure epidermal electronics devices on people?

Let’s start with some definitions:

  • Verification means determining whether the device is built/configured for its intended use (for example, measuring EMG activity and communicating the data to an NFC (near field communications) device).
  • Validation means the ability to assess the security state of the device – whether or not it has been compromised.
  • RIMs (Reference Integrity Measurements) enable vendors/healthcare providers to define the desired target configurations of devices – for example, whether a device is configured for RF communications.
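A minimal sketch of a RIM in use, assuming the simplest possible scheme – the reference measurement is a hash of the approved firmware image (model names and images below are hypothetical):

```python
import hashlib

# Vendor-published Reference Integrity Measurements: expected SHA-256 digest
# of the approved firmware image per device model (hypothetical values).
RIM_DB = {
    "emg-patch-v1": hashlib.sha256(b"approved-firmware-v1").hexdigest(),
}

def verify(model: str, firmware_image: bytes) -> bool:
    """Verification: does the device's measured state match its RIM?"""
    measured = hashlib.sha256(firmware_image).hexdigest()
    return RIM_DB.get(model) == measured

print(verify("emg-patch-v1", b"approved-firmware-v1"))    # True
print(verify("emg-patch-v1", b"reflashed-evil-firmware")) # False
```

Validation is then the question of who performs this check and who sees the result – which is exactly where the validation models differ.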

There are 3 key threats when it comes to epidermal electronics:

  1. Physical attacks: reflashing before application to the skin in order to modify intended use.
  2. Compromise of credentials: brute-force attacks as well as malicious cloning of credentials.
  3. Protocol attacks against the device: MITM on first network access, DoS, remote reprogramming.

What are the security countermeasures against these threats?  We can consider a traditional IT security model and a trusted computing model.

Traditional IT security model?

Very large numbers of low-cost, distributed devices render an access-control security model inappropriate. How would a firewall on an epidermal electronics device enforce intended use and manage access-control policies? What kind of policies would you want to manage? How would you enforce installation of the internal firewall during the manufacturing process?

Trusted computing model?

A “trusted computing model”  may be considered as an alternative security countermeasure to access control and policy management.

An entity can be “trusted” if it predictably and observably behaves in the expected manner for its intended use. But what does “intended use” mean in the case of epidermal electronics that are used for EKG, EEG and EMG measurements on people?

Can the traditional, layered, trusted computing models used in the telecommunications world be used to effectively secure cheap, low-cost, epidermal electronics devices?

In an M2M trusted computing model there are 3 methods: autonomous validation, remote validation and semi-autonomous validation. We will examine each and try to determine how effective each model is as a security countermeasure for the key threats to epidermal electronics. See: “Security and Trust for M2M Communications” – Inhyok Cha, Yogendra Shah, Andreas U. Schmidt, Andreas Leicher, Mike Meyerstein.

Autonomous validation

This is essentially the trust model used for smart cards, where the result of local verification is true or false.

Autonomous validation does not depend on the patient herself or the healthcare provider. Local verification is assumed to have occurred before the skin-mounted device attempts communication or performs a measurement operation.

Autonomous validation makes 3 fundamental assumptions – all 3 are wrong in the case of epidermal electronics:

  1. The local verification process is assumed to be perfectly secure since the results are not shared with anyone else, neither the patient nor the healthcare provider.
  2. We assume that the device itself is completely trusted in order to enforce security policies.
  3. We assume that a device failing self-verification cannot deviate from its “intended use”.

Device-based security can be broken, and cheap autonomous skin-mounted devices can be manipulated – probably much more easily than cell phones, since for now at least they are much simpler. Wait until 2015, when we have dual-core processors on a film.

In addition, autonomous validation does not mitigate partial compromise attacks (for example – the device continues to measure EMG activity but also delivers mild shocks to the spine).
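The autonomous model reduces to a local boolean check gating the device’s own operation – a sketch (hypothetical device logic) that also shows the weakness: enforcement lives on the very device we are trying to trust:

```python
import hashlib

EXPECTED = hashlib.sha256(b"factory-firmware").hexdigest()

class SkinDevice:
    def __init__(self, firmware: bytes):
        self.firmware = firmware

    def local_verification(self) -> bool:
        # Result is true/false and is never reported to patient or provider.
        return hashlib.sha256(self.firmware).hexdigest() == EXPECTED

    def start_measurement(self) -> str:
        if not self.local_verification():
            raise RuntimeError("self-verification failed; refusing to operate")
        return "EMG measurement started"

print(SkinDevice(b"factory-firmware").start_measurement())
# A reflashed device can simply skip or fake local_verification() –
# nothing outside the device would ever notice.
```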

Remote validation

Remote validation has connectivity, scalability and availability issues. It is probably a very bad idea to rely on network availability in order to remotely validate a skin-mounted epidermal electronics device.

In addition to the network and server infrastructure required to support remote validation, there would also have to be a huge database of RIMs, to enable vendors and healthcare providers to define the target configurations of devices.

Run-time verification is meaningless if it is not directly followed by validation, which requires frequent handshaking with central service providers, which in turn increases traffic and creates additional vulnerabilities, such as side-channel attacks.

Remote validation of personally-mounted devices compromises privacy, since a configuration may be virtually unique to a particular person, and interception of validation messages could reveal identity based on location even without decrypting payloads.

Discrimination by vendors also becomes possible, as manipulation and control of the RIM databases could lock out other applications/vendors.

Semi-Autonomous Validation

Semi-autonomous validation divides verification and enforcement between the device and the healthcare provider.

In semi-autonomous validation, the device verifies itself locally and then sends the results in a network message to the healthcare provider who can decide if he needs to notify the user/patient if the device has been compromised or does not match the intended use.

Such a system needs to ensure authentication, integrity, and confidentiality of messages sent from epidermal electronics devices to the healthcare provider.

RIM certificates are a key part of semi-autonomous validation and would be signed by a trusted third party/CA.

Semi-autonomous validation also allows for more granular delegation of control to the device itself or the healthcare provider – depending on the functionality.
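The semi-autonomous split can be sketched as follows, using an HMAC as a stand-in for the certificate-based signing that RIM certificates would actually provide (the device key, IDs and message format are all hypothetical):

```python
import hashlib
import hmac
import json

DEVICE_KEY = b"per-device-secret"  # provisioned at manufacture (hypothetical)

def device_report(device_id: str, verified: bool) -> dict:
    """Device side: verify locally, then send an authenticated result."""
    msg = json.dumps({"device": device_id, "verified": verified}).encode()
    tag = hmac.new(DEVICE_KEY, msg, hashlib.sha256).hexdigest()
    return {"msg": msg, "tag": tag}

def provider_check(report: dict) -> str:
    """Provider side: authenticate the message, then decide enforcement."""
    expected = hmac.new(DEVICE_KEY, report["msg"], hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, report["tag"]):
        return "reject: report not authentic"
    result = json.loads(report["msg"])
    if result["verified"]:
        return "ok"
    return f"notify patient: device {result['device']} may be compromised"

print(provider_check(device_report("emg-patch-007", True)))   # ok
print(provider_check(device_report("emg-patch-007", False)))
```

Splitting the roles this way keeps enforcement off the untrusted device while avoiding the per-message round-trips of full remote validation.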


Epidermal electronics devices are probably going to play a big part in the future of healthcare, monitoring vital signs in a simple, cheap and non-invasive way. These are medical devices, used today primarily for measuring vital signs, that are mounted directly on the skin – not a Windows PC or Android smartphone that can be rebooted if there is a problem.

As their computing capabilities develop, current trusted computing/security models will be inadequate for epidermal electronics devices, and attention needs to be devoted as soon as possible to building a security model (probably semi-autonomous) that will mitigate threats from malicious attackers.


  1. “Security and Trust for M2M Communications” – Inhyok Cha, Yogendra Shah, Andreas U. Schmidt, Andreas Leicher, Mike Meyerstein.
  2. “Epidermal Electronics” – John Rogers et al., Science, 2011.