Tag Archives: Privacy


Privacy, Security, HIPAA and you.

Medical devices, mobile apps, Web applications – storing data in the cloud, sharing with hospitals and doctors. How do I comply with HIPAA? What applies to me – the Security Rule, the Privacy Rule or both?

Consider a common use case these days: you’re a medical device vendor whose device stores health information in the cloud, with a web and/or mobile application that enables doctors and hospitals to access the data from your device as part of their healthcare services. If you operate in the United States, which HIPAA regulations apply? Do you need to comply with the Privacy Rule, the Security Rule or both?

There is a good deal of confusion regarding the HIPAA Privacy and Security Rules and how things work. In this article, we will examine the original content of the HIPAA regulation and explain who needs to do what.

What is the Privacy Rule?

The HIPAA Final Rule (enacted in January 2013) has two pieces – the Privacy Rule and the Security Rule.

The Privacy Rule establishes standards for the protection of health information. The Security Rule establishes security standards for protecting health information that is held or transferred in electronic form. The Privacy Rule broadly defines ‘‘protected health information’’ as individually identifiable health information maintained or transmitted by a covered entity in any form or medium. The Privacy Rule is located at 45 CFR Part 160 and Subparts A and E of Part 164.

Who needs to comply with the Privacy Rule?

By law, the HIPAA Privacy Rule applies only to covered entities – health plans, health care clearinghouses, and certain health care providers. However, most health care providers and health plans do not carry out all of their health care activities and functions by themselves. Instead, they often use the services of a variety of other persons or businesses – and transfer or exchange health information in electronic form in order to use these services. These “persons or businesses” are called “business associates”, defined in 45 CFR 164.502(e), 164.504(e), 164.532(d) and (e); see also 45 CFR §§ 160.102 and 164.500.

What is the Security Rule?

The Security Rule operationalizes the Privacy Rule by addressing the technical and non-technical safeguards that the “covered entities” and their business associates must implement in order to secure individuals’ “electronic protected health information” (EPHI). The Security Rule is located at 45 CFR Part 160 and Subparts A and C of Part 164.

Who needs to comply with the Security Rule?

Since it’s an operational requirement, the Security Rule (by law) applies to covered entities, business associates and their subcontractors. While the Privacy Rule applies to protected health information in all forms, the Security Rule applies only to electronic health information systems that maintain or transmit individually identifiable health information. Safeguards for protected health information in oral, written, or other non-electronic forms are unaffected by the Security Rule.

Business associate liability

Section 13404 of the HITECH Act creates direct liability for impermissible uses and disclosures of protected health information by a business associate of a covered entity “that obtains or creates” protected health information “pursuant to a written contract or other arrangement described in § 164.502(e)(2)” and for compliance with the other privacy provisions in the HITECH Act.

Section 13404 does not create direct liability for business associates with regard to compliance with all requirements under the Privacy Rule (i.e., it does not treat them as covered entities). Rather, under the final rule, a business associate is directly liable under the Privacy Rule for uses and disclosures of protected health information that are not in accord with its business associate agreement or the Privacy Rule.

Permitted use of EPHI by a business associate

While a business associate does not have health care operations, it is permitted by § 164.504(e)(2)(i)(A) to use and disclose protected health information as necessary for its own management and administration if the business associate agreement permits such activities, or to carry out its legal responsibilities. Other than the exceptions for the business associate’s management and administration and for data aggregation services relating to the health care operations of the covered entity, the business associate may not use or disclose protected health information in a manner that would not be permissible if done by the covered entity (even if such a use or disclosure is permitted by the business associate agreement).

Taken from the Federal Register

General Definitions

See § 160.103 for HIPAA general definitions used by the law – definitions of business associates, protected health information and more.

Summary

  • The Privacy Rule establishes standards for the protection of health information.
  • The Security Rule establishes operational security standards for protecting health information that is held or transferred in electronic form.
  • The Security Rule applies only to electronic health information systems that maintain or transmit individually identifiable health information. Safeguards for protected health information in oral, written, or other non-electronic forms are unaffected by the Security Rule.
  • Business associates do not have direct liability for compliance with all requirements under the Privacy Rule (i.e., they are not treated as covered entities). A business associate is directly liable under the Privacy Rule for uses and disclosures of protected health information that are not in accord with its business associate agreement or the Privacy Rule.

 


Why big data for healthcare is dangerous and wrong

The McKinsey Global Institute recently published a report entitled Big data: The next frontier for innovation, competition, and productivity.

The McKinsey Global Institute (MGI) report on big data is no more than a lengthy essay in fallacies: inflated hyperbole, faulty assumptions, claims lacking evidence, and disregard for the two most important stakeholders of healthcare – namely doctors and patients.

They just gloss over the security and privacy implications of putting up a big target with a sign that says “Here is a lot of patient healthcare data – please come and steal me”.

System efficiency does not improve patient health

In health care, big data can boost efficiency by reducing systemwide costs linked to undertreatment and overtreatment and by reducing errors and duplication in treatment. These levers will also improve the quality of care and patient outcomes.

To calculate the impact of big-data-enabled levers on productivity, we assumed that the majority of the quantifiable impact would be on reducing inputs.

We held outputs constant—i.e., assuming the same level of health care quality. We know that this assumption will underestimate the impact as many of our big-data-enabled levers are likely to improve the quality of health by, for instance, ensuring that new drugs come to the market faster…

They don’t know that.

The MGI report does not offer any correlation between reduction in systemwide costs and improving the quality of care of the individual patient.

The report deals with the macroeconomics of the pharmaceutical and healthcare organization industries.

In order to illustrate why systemwide costs are not an important factor in the last mile of healthcare delivery, let’s consider the ratio of system overhead to primary care teams at Kaiser Permanente (KP) – one of the largest US HMOs. According to their 2010 annual report, out of 167,000 employees, 16,000 were doctors and 47,000 were nurses.

Doctors and nurses together account for under 40 percent of KP head-count. Arguably, big-data analytics might enable KP management to deploy services in a more effective way, but it would do virtually nothing for the minority of the headcount that actually encounters patients on a day-to-day basis.

Let’s not improve health, let’s make it cheaper to keep a lot of people sick

Note the sentence – “assuming the same level of health care quality”. In other words, we don’t want to improve health; we want to reduce the costs of treating obese people who eat junk food and ride in cars instead of walking, rather than fixing the root causes. Indeed, MGI states later in their report:

Some actions that can help stem the rising costs of US health care while improving its quality don’t necessarily require big data. These include, for example, tackling major underlying issues such as the high incidence and costs of lifestyle and behavior-induced disease.

Let’s talk pie in the sky about big data and ignore costs and ROI

…the use of large datasets has the potential to play a major role in more effective and cost-saving care initiatives, the emergence of better products and services, and the creation of new business models in health care and its associated industries.

Being a consulting firm, MGI stays firmly seated on the fence and only commits itself to fluffy generalities about the potential to save costs with big data. The term ROI (return on investment) is not mentioned even once, because it would ruin their argument. As a colleague in the IT division of the Hadassah Medical Organization in Jerusalem told me yesterday, “Hadassah management has no idea how much storing all that vital-sign data from smart phones will cost. As a matter of fact, we don’t even have the infrastructure to store big data”.

It’s safe to wave a lot of high-falutin rhetoric around about $300BN value-creation (whatever that means), when you don’t have to justify a return on investment or ask grass-level stakeholders if the research is crap.

MGI does not explain how that potential might be realized. It sidesteps a discussion of the costs of storing and analyzing big data, never asks whether big data helps doctors make better decisions, and glosses over low-cost alternatives like educating Americans on eating healthy food and walking instead of driving.

The absurdity of automated analysis

…we included savings from reducing overtreatment (and undertreatment) in cases where analysis of clinical data contained in electronic medical records was able to determine optimal medical care.

MGI makes an absurd assumption that automated analysis of clinical data contained in electronic medical records can determine optimal medical care.

This reminds me of a desert island joke.

A physicist and economist were washed up on a desert island. They have a nice supply of canned goods but no can-opener. To no avail, the physicist experiments with throwing the cans from a high place in the hope that they will break open (they don’t). The economist tells his friend “Why waste your time looking for a practical solution, let’s just assume that we have a can-opener!”.

The MGI report just assumes that we have a big data can-opener and that big data can be analyzed to optimize medical care. (By the way, they do not even attempt to offer any quantitative indicators for optimization – like reducing the number of women who come down with lymphedema after treatment for breast cancer – and lymphedema is a pandemic in Western countries, affecting about 140 million people worldwide.)

In Western countries, secondary lymphedema is most commonly due to cancer treatment. Between 38 and 89% of breast cancer patients suffer from lymphedema due to axillary lymph node dissection and/or radiation. See:

  1. Brorson, H.; Ohlin, K.; Olsson, G.; Svensson, B.; Svensson, H. (2008). “Controlled Compression and Liposuction Treatment for Lower Extremity Lymphedema”. Lymphology 41: 52–63.
  2. Kissin, M.W.; Guerci della Rovere, G.; Easton, D.; et al. (1986). “Risk of lymphoedema following the treatment of breast cancer”. Br. J. Surg. 73: 580–584.
  3. Segerstrom, K.; Bjerle, P.; Graffman, S.; et al. (1992). “Factors that influence the incidence of brachial oedema after treatment of breast cancer”. Scand. J. Plast. Reconstr. Surg. Hand Surg. 26: 223–227.

More is not better

We found very significant potential to create value in developed markets by applying big data levers in health care. CER (comparative effectiveness research) and CDS (clinical decision support) were identified as key levers and can be valued based on different implementations and timelines.

Examples include joining different data pools as we might see at financial services companies that want to combine online financial transaction data, the behavior of customers in branches, data from partners such as insurance companies, and retail purchase history. Also, many levers require a tremendous scale of data (e.g., merging patient records across multiple providers), which can put unique demands upon technology infrastructures. To provide a framework under which to develop and manage the many interlocking technology components necessary to successfully execute big data levers, each organization will need to craft and execute a robust enterprise data strategy.

The American Recovery and Reinvestment Act of 2009 provided some $20 billion to health providers and their support sectors to invest in electronic record systems and health information exchanges to create the scale of clinical data needed for many of the health care big data levers to work.

Why McKinsey is dead wrong about the efficacy of analyzing big EHR data

Two flawed assumptions underlie MGI’s analysis:

  1. The notion that more data is better (the approach taken by Google Health and Microsoft, endorsed by the Obama administration and blindly adopted by MGI in their report).
  2. The assumption that EHR data, which is textual and not organized around the patient’s clinical issues, can be meaningfully analyzed.

Meaningful machine analysis of EHR is impossible

Current EHR systems store large volumes of data about diseases and symptoms in unstructured text, codified using systems like SNOMED-CT [1]. Codification is intended to enable machine readability and analysis of records and to serve as a standard for system interoperability.

Even if the data were perfectly codified, it is impossible to achieve meaningful machine diagnosis from medical interview data that was uncertain to begin with and was not collected and validated using evidence-based methods.

More data is less valuable for a basic reason

A fundamental observation about utility functions is that their shape is typically concave: increments of magnitude yield successively smaller increments of subjective value [2].

In prospect theory [3], concavity is attributed to the notion of diminishing sensitivity, according to which the more units of a stimulus one is exposed to, the less sensitive one is to additional units.

Under conditions of uncertainty in a medical diagnostic process, less information – as long as it is relevant – enables a better and faster decision, since the human brain has less data to process.
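To make the diminishing-sensitivity argument concrete, here is a toy calculation (my illustration, not from the MGI report; the square-root utility is an arbitrary choice of concave function):

```python
# Toy illustration of diminishing sensitivity: with a concave utility
# function (here sqrt, chosen arbitrarily), each additional unit of
# information adds less subjective value than the one before it.
import math

def utility(units_of_information: float) -> float:
    return math.sqrt(units_of_information)  # any concave function shows the effect

for n in (1, 10, 100, 1000):
    marginal_gain = utility(n + 1) - utility(n)
    print(f"value of one more unit on top of {n:4d}: {marginal_gain:.4f}")

# The marginal gain shrinks from ~0.41 at n=1 to ~0.016 at n=1000 -
# the sense in which more data is worth successively less.
```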

Unstructured EHR data is not organized around the patient’s issues

When a doctor examines and treats a patient, he thinks in terms of “issues”, and the result of that thinking manifests itself in planning, tests, therapies, and follow-up.

In current EHR systems, when a doctor records the encounter, he records planning, tests, therapies, and follow-up, but not under a main “issue” entity, since there is no place for it.

The next doctor who sees the patient needs to read about the planning, tests, therapies, and follow-up and then mentally reverse-engineer the process to figure out which issue is ongoing. He, in turn, manages the patient according to that issue and records everything as unstructured text unrelated to the issue itself.

Other actors – national registries, epidemiological data extraction and the rest – go through the same process. They all have their own methods of churning through planning, tests, therapies, and follow-up to reverse-engineer the data and arrive at what the issue is, only to discard it again.

The “reverse-engineering” problem is the root cause for a series of additional problems:

  • Lack of overview of the patient
  • No connection to clinical guidelines, no indication of which guidelines to follow or which have been followed
  • No connection between prescriptions and diseases, except circumstantial
  • No ability to detect and warn for contraindications
  • No archiving or demoting of less important and solved problems
  • Lack of overview of status of the patient, only a series of historical observations
  • In most systems, no search capabilities of any kind
  • An excess of textual data that cannot possibly be read by every doctor at every encounter
  • Confidentiality borders are very hard to define
  • Very rigid and closed interfaces, making extension with custom functionality very difficult

Summary

MGI states that their work is independent and has not been commissioned or sponsored in any way by any business, government, or other institution. True, but McKinsey does have consulting gigs with IBM and HP, which have vested interests in selling technology and services for big data.

The analogies used in the MGI report and their tacit assumptions probably work for retail in understanding sales trends of hemlines and high heels but they have very little to do with improving health, increasing patient trust and reducing doctor stress.

The study does not cite a single interview with a primary care physician or even a CEO of a healthcare organization that might support or validate their theories about big data value for healthcare. This is shoddy research, no matter how well packaged.

The MGI study makes cynical use of “framing” in order to influence the reader’s perception of the importance of the research. When a large number like $300BN is cited, readers assume that the impact of big data is, well, big. They don’t pay attention to the other stuff – like “well, it’s only a potential savings” or “we never considered whether primary care teams might benefit from big data” (they don’t).

At the end of the day, $300BN in value from big data healthcare is no more than a round number. What we need is less data and more meaningful relationships with our primary care teams.

[1] http://www.nlm.nih.gov/research/umls/Snomed/snomed_main.html

[2] Current Directions in Psychological Science, Vol. 14, No. 5. http://faculty.chicagobooth.edu/christopher.hsee/vita/Papers/WhenIsMoreBetter.pdf

[3] Kahneman, D.; Tversky, A. (1979). “Prospect Theory: An Analysis of Decision under Risk”. Econometrica 47: 263–291.


The Private Social Network for healthcare

In his post on the Pathcare blog, “I trust you to keep this private”, Danny Lieberman talked about the roles that trust, security and privacy play in online healthcare interactions. In this post, Danny talks about healthcare privacy challenges in social networks and describes how to implement a private social network for healthcare without government privacy regulation and IT balls and chains.

Online interactions with our HMO

We have online interactions with our healthcare organizations: accessing a Web portal for medical history, scheduling visits etc. Our PHI (protected health information) is hopefully well-secured by our healthcare provider under government regulation (HIPAA in the US, and the Data Protection Directive in the EU). In the name of privacy, however, healthcare providers often take security to absurd extremes – witness the following anecdote:

I tried using online medical services with my provider in Hawaii but they could not respond due to my not being in Hawaii. What good are online diagnostic services when the patient is not in his/her home state?

Well now, I thought, that’s why Al Gore invented the Internet so that we could access healthcare services anywhere, anytime. Guess not. With our healthcare provider, we interact with the IT department. Bummer. On Facebook we interact with our friends. Compassion.

A healthcare provider’s business model requires them to protect your health information from disclosure. This is generally interpreted as doing as little as possible to help you be healthy. Social media business models require them to maximize distribution of your content. This means that your privacy is up to you and the people you connect with.

It seems obvious to me that privacy regulation cannot work in social media, because the connectivity is so high. There is no central data center where you can install IPS and DLP systems and implement all of the 45 CFR Part 164 Appendix A administrative, physical and technical safeguards. In that case, let’s get back to basics. We agree that privacy in our healthcare interactions is critical.

What is privacy?

pri·va·cy/ˈprīvəsē/

  1. The state or condition of being free from being observed or disturbed by other people.
  2. The state of being free from public attention

Healthcare privacy by design

Just like you are alone with your doctor in his office, we can build a private social network where the topology of the network guarantees privacy. We describe a star topology where one doctor interacts with many patients. We guarantee online privacy in our star-topology network with three simple principles:

  1. Each doctor has his own private network of patients.
  2. In the private network, patients do not interact with other patients (interact as in friending, messaging etc.). We can expand the definition a bit by allowing a patient to friend another person in a caregiver role, but this is the only exception to the rule.
  3. A doctor’s private network does not overlap with other doctors’ networks, although doctors connect with each other for referrals.

This is a private network for healthcare by design.
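Here is a minimal sketch of how these three principles might be enforced in code (an illustration only; the class and method names are hypothetical, and the caregiver exception to principle 2 is omitted for brevity):

```python
# Sketch of the star topology: one doctor at the hub, patients at the
# spokes, no patient-to-patient edges, no overlap between networks.

class PrivateNetwork:
    def __init__(self, doctor: str):
        self.doctor = doctor               # principle 1: one network per doctor
        self.patients: set[str] = set()

    def add_patient(self, patient: str, all_networks: list["PrivateNetwork"]) -> None:
        # Principle 3: a patient may not appear in another doctor's network.
        for network in all_networks:
            if network is not self and patient in network.patients:
                raise ValueError(f"{patient} already belongs to another network")
        self.patients.add(patient)

    def can_interact(self, a: str, b: str) -> bool:
        # Principle 2: interaction happens only along doctor-patient spokes,
        # never patient-to-patient.
        members = self.patients | {self.doctor}
        return self.doctor in (a, b) and {a, b} <= members
```

Because privacy falls out of the topology itself, no per-post privacy settings are needed to keep one patient’s data invisible to another.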

What makes it a private social network is the use of the same social apps we use in social media like Twitter and Facebook: friending, short messaging, status updates, groups, content sharing and commenting/liking.

A doctor uses a private social network for healthcare with the same 3 basic primitives of public social networking: Connect (or friend), Follow and Share.

One of the things that excites me the most about private social networks for healthcare is the potential to make the information technology go away and put the focus back on the patient-physician interaction and the quality of clinical care:

  • Doctors save time in interviews because patients can record events and experiences before they come in to the office.
  • Data is more accurate since patients can record critical events like falls and BP drops, in proximity to the event itself.
  • Better data makes physician decisions easier and faster.
  • Better data is good for health and easier and faster is good for business.

What a beautiful business model – compassion, care and great business!


Can I use Dropbox for storing healthcare data?

First of all, I’m a great fan of Dropbox. It’s easy to use, fast, and runs on Windows, Mac and Linux – which means you can share files with colleagues and patients for consultations, because that old assumption (that a lot of vendors still make, by the way) that everyone is on Windows just isn’t true these days. People have Windows 7, Macs, Ubuntu 12.04, Android smart phones and iPads – and they all run Dropbox.

When you have multiple Dropbox clients configured, your files will be instantly synchronized between all your devices when they come online. I use it daily to exchange files between my Android phone, Android tablet and Ubuntu desktop. Any change performed in the monitored folder is immediately synchronized with the other devices. My colleague Sharon, who has an iPad 3 and an iMac, is synchronized with me, and we can quickly exchange files regarding cases we are working on together, especially leading up to our weekly review meeting.

Dropbox – public by design

Dropbox is easy, but is it private? The short answer is that you should not store PHI (protected health information) on Dropbox, since they share data with third-party applications and service providers. But the deeper reason not to use Dropbox for sharing healthcare information with patients is simply that it is not private by design: everyone who shares a folder in your Dropbox sees all the files in it.

From the Dropbox Privacy policy:

We may collect and store the following information when running the Dropbox Service:

Information You Provide.   When you register an account, we collect some personal information, such as your name, phone number, credit card or other billing information, email address and home and business postal addresses.

Personal Information.   In the course of using the Service, we may collect personal information that can be used to contact or identify you (“Personal Information”). Personal Information is or may be used: (i) to provide and improve our Service, (ii) to administer your use of the Service, (iii) to better understand your needs and interests, (iv) to personalize and improve your experience, and (v) to provide or offer software updates and product announcements.

Service Providers, Business Partners and Others.   We may use certain trusted third party companies and individuals to help us provide, analyze, and improve the Service (including but not limited to data storage, maintenance services, database management, web analytics, payment processing, and improvement of the Service’s features). These third parties may have access to your information only for purposes of performing these tasks on our behalf and under obligations similar to those in this Privacy Policy.

Third-Party Applications.   We may share your information with a third party application with your consent…

Data retention. We may retain and use your information as necessary to comply with our legal obligations, resolve disputes, and enforce our agreements.

Privacy of healthcare information by design

If you want to have complete control and privacy of data that you share with patients, you need a controlled, private social network for healthcare that ensures no overlap between patients and no overlap between physician networks.  This is privacy by design.

 


How to keep secrets in healthcare online


The roles of trust, security and privacy in healthcare. If President Obama had told his psychiatrist he was gay, you can bet that it would be on Facebook in 5 minutes. So much for privacy.

pri·va·cy/ˈprīvəsē/

Noun:

The state or condition of being free from being observed or disturbed by other people.

The state of being free from public attention

When it comes to healthcare information, there have always been two circles of trust – the trust relationship with your physician and the trust that you place in your healthcare provider/insurance company/government health service.

With social networks like Facebook, a third circle of trust has been created: the circle of trust between you and your friends in the social network.

Patient-doctor privacy

When we share our medical situation with our doctor, we assume we can trust her to keep it private in order to help us get well. Otherwise, we might never share information regarding those pains in the right side of our abdomen – and discover, after an ultrasound has been done, that our fatty liver is closely related to imbibing too many pints of beer and vodka chasers with the mates after work, while telling the missus we’ve been working late at the office.

Healthcare provider – patient privacy

When we share medical information with our healthcare provider, we trust their information security as being strong enough to protect our medical information from a data breach. Certainly – as consumers of healthcare services, it’s impossible for us to audit the effectiveness of their security portfolio.

With our healthcare provider, revealing personal information depends on how much we trust them, and that trust depends on how good a job they do on information security – how effectively they implement the right management, technical and physical safeguards.

If you’re not sure about the privacy, trust and security triangle, just consider Swiss banks.

Millions of people have online healthcare interactions – asking doctors questions online, sharing experiences in forums, interacting with doctors using social media tools like blogs and groups and, of course, asking Dr. Google.

Privacy among friends

When we share medical information with our friends on Facebook/Google+ or Twitter we trust them to keep it private within our own personal parameters of vulnerability analysis.

Note that there is feeling secure (but not being secure – chatting about your career in crime on Facebook) and being secure while not feeling secure (not wanting to use your credit card online – face it, with over 300 million credit cards breached in the past 5 years, chances are, your credit card is out there and it doesn’t seem to make a difference now, does it?).

Trust between two people interacting (whether it’s face-to-face or on Facebook) is key to sharing sensitive information, since it mitigates or eliminates the damage of unexpected disclosure.

Let’s illustrate the notion of personal trust as a security countermeasure for unexpected disclosure with a story:

Larry interacts with his lawyer Sarah regularly, once a week or more. It’s a professional relationship, and over time Larry and Sarah gain each other’s trust; in addition to contracts and commercial terms and conditions, the conversations encompass children, career and life. Larry knows Sarah is divorced and is empathetic to the challenges of being a full-time mother and corporate lawyer. Come end of year, Larry sends Sarah a box of chocolates wishing her a successful and prosperous New Year. Sarah’s 14-year-old daughter, who is pushing her to start dating again, sees the gift package and concludes that Mom has a new beau. Sarah now has to go into damage-control mode with a teenage daughter. It may take Larry months (if ever…) to regain Sarah’s trust. That is the damage of unexpected disclosure of private information.

Unlike with a healthcare provider, on Facebook we interact only with our friends.

We have digital interactions with our healthcare provider, accessing a Web portal for medical history, scheduling visits and lab tests online etc. These are interactions unrelated to the personal relationship with our physician. The data in these interactions is regulated by governments and secured by healthcare provider information security organizations.

Your healthcare provider’s business model requires them to protect your health information from disclosure.

In our digital interactions on Facebook or Twitter, there is no organizational element to the security, trust and privacy equation – only the personal element. This is because your Gmail, tweets and Facebook conversations are the content that drives Google, Twitter and Facebook advertising revenues.

Social media business models require them to distribute as much of your content as possible.

So, is there a reasonable solution to ensure private healthcare interactions on social networks?

The answer,  I believe, lies in getting back to the dictionary definition of privacy, and creating a private social network for healthcare that enables you, your doctor and family to “be free from being observed or disturbed by other people”.


The Tao of GRC

I have heard of military operations that were clumsy but swift, but I have never seen one that was skillful and lasted a long time. Master Sun (Chapter 2 – Doing Battle, the Art of War).

The GRC (governance, risk and compliance) market is driven by three factors: government regulation such as Sarbanes-Oxley, industry compliance such as PCI DSS 1.2, and growing numbers of data security breaches and Internet acceptable-usage violations in the workplace. Some $14BN a year is spent in the US alone on corporate-governance-related IT.

It’s a space that’s hard to ignore.

Are large internally-focused GRC systems the solution for improving risk and compliance? Or should we go outside the organization to look for risks we’ve never thought about and discover new links and interdependencies?

This article introduces a practical approach that will help the CISOs/CSOs in any sized business unit successfully improve compliance and reduce information value at risk. We call this approach “GRC 2.0” and base it on 3 principles.

1.    Adopt a standard language of GRC
2.    Learn to speak the language fluently
3.    Go green – recycle your risk and compliance

GRC 1.0

The term GRC (Governance, Risk and Compliance) was first coined by Michael Rasmussen. GRC products like Oracle GRC Suite and Sword Achiever cost in the high six figures and enable large enterprises to automate the workflow and documentation management associated with costly and complex GRC activities.

GRC – an opportunity to improve business process

GRC regulation comes in three flavors: government legislation, industry regulation and vendor-neutral security standards. Government legislation such as SOX, GLBA, HIPAA and the EU privacy laws was enacted to protect the consumer by requiring better governance and a top-down risk analysis process. PCI DSS 2.0, a prominent example of industry regulation, was written to protect the card associations by requiring merchants and processors to use a set of security controls for the credit card number, with no risk analysis. The vendor-neutral standard ISO 27001 helps protect information assets using a comprehensive set of people, process and technical controls, with an audit focus.

The COSO view is that GRC is an opportunity to improve the operation:

“If the internal control system is implemented only to prevent fraud and comply with laws and regulations, then an important opportunity is missed…the same internal controls can also be used to systematically improve businesses, particularly in regard to effectiveness and efficiency.”

GRC 2.0

The COSO position makes sense, but in practice it’s difficult to attain process improvement through enterprise GRC management.

Unlike ERP, GRC lacks generally accepted principles and metrics. Where finance managers routinely use VaR (value at risk) calculations, information security managers are uncomfortable assessing risk in financial terms. The finance department has a quarterly close, but information security staffers fight a battle that ebbs and flows and never ends. This creates silos – IT governance for the IT staff and consultants, and a fraud committee for the finance staff and auditors.

GRC 1.0 assumes a fixed structure of systems and controls. The problem is that, in reducing the organization to passive executors of defense rules in their procedures and firewalls, we ignore the extreme ways in which attack patterns change over time. Any control policy that is presumed optimal today is likely to be obsolete tomorrow. Learning about changes must be at the heart of day-to-day GRC management.

A fixed control model of GRC is flawed because it disregards a key feature of security and fraud attacks – namely that both attackers and defenders have imperfect knowledge in making their decisions. Recognizing that our knowledge is imperfect is the key to solving this problem. The goal of the CSO/CISO should be to develop a more insightful approach to GRC management.

The first step is to get everyone speaking the same language.

Adopt a standard language of GRC – the threat analysis base class

We formalize this language using a threat analysis base class which, like any other class, has attributes and methods. Attributes come in two sub-types – threat entities and people entities.

Threat entities

Assets have value, fixed or variable, in dollars, euros, rupees etc. Examples of assets are employees and the intellectual property contained in an office.

Vulnerabilities are weaknesses or gaps in the business. For example – a wood office building with a weak foundation, built in an earthquake zone.

Threats exploit vulnerabilities to cause damage to assets. For example – an earthquake is a threat to the employees and intellectual property stored on servers in the building.

Countermeasures have a cost, fixed or variable, and mitigate the vulnerability. For example – relocating the building and using a private cloud service to store the IP.

People entities

Business decision makers encounter vulnerabilities and threats that damage company assets in their business unit. In a process of continuous interaction and discovery, risk is part of the cost of doing business.

Attackers create threats and exploit vulnerabilities to damage the business unit. Some do it for the notoriety, some for the money and some do it for the sales channel.

Consultants assess risk and recommend countermeasures. It’s all about the billable hours.

Vendors provide security countermeasures. The effectiveness of vendor technologies is poorly understood and often masked with marketing rhetoric and pseudo-science.

Methods

The threat analysis base class prescribes 4 methods:

  • SetThreatProbability – estimated annual rate of occurrence of the threat
  • SetThreatDamageToAsset – estimated damage to the asset value, as a percentage
  • SetCountermeasureEffectiveness – estimated effectiveness of the countermeasure, as a percentage
  • GetValueAtRisk – returns the financial value at risk given the three estimates above (see the sketch below)
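A minimal sketch of what an instance might look like in Python (an illustration only: the value-at-risk formula here follows the classic annualized loss expectancy model – asset value times damage fraction times annual rate of occurrence, reduced by countermeasure effectiveness – which is one plausible implementation, not necessarily the calculation used by any particular GRC tool):

```python
# Sketch of the threat analysis base class described above.

class ThreatModel:
    def __init__(self, asset_value: float):
        self.asset_value = asset_value            # in dollars, euros, rupees...
        self.threat_probability = 0.0             # annual rate of occurrence
        self.threat_damage_to_asset = 0.0         # fraction of asset value, 0..1
        self.countermeasure_effectiveness = 0.0   # fraction mitigated, 0..1

    def set_threat_probability(self, annual_rate: float) -> None:
        self.threat_probability = annual_rate

    def set_threat_damage_to_asset(self, damage_fraction: float) -> None:
        self.threat_damage_to_asset = damage_fraction

    def set_countermeasure_effectiveness(self, effectiveness: float) -> None:
        self.countermeasure_effectiveness = effectiveness

    def get_value_at_risk(self) -> float:
        # Annualized value at risk, net of mitigation.
        return (self.asset_value
                * self.threat_damage_to_asset
                * self.threat_probability
                * (1.0 - self.countermeasure_effectiveness))

# Example: a $1M cardholder database, a breach expected once in 5 years
# causing 30% damage, with countermeasures that mitigate 80% of it.
pan_database = ThreatModel(asset_value=1_000_000)
pan_database.set_threat_probability(0.2)
pan_database.set_threat_damage_to_asset(0.3)
pan_database.set_countermeasure_effectiveness(0.8)
print(pan_database.get_value_at_risk())   # 12000.0 dollars per year
```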

Speak the language fluently

A language with eight words is not hard to learn, and it is easily accepted by the CFO, CIO and CISO since these are familiar business terms.

The application of our 8 word language is also straightforward.

Instances of the threat analysis base class are “threat models” and can be used across the entire gamut of GRC activities: Sarbanes-Oxley, which requires a top-down risk analysis of controls; ISO 27001, where controls are countermeasures that map nicely to vulnerabilities and threats (you bring the assets); and PCI DSS 1.2, where the PAN is an asset, the threats are criminals who collude with employees to steal cards, and the countermeasures are specified by the standard.

You can document the threat models in your GRC system (if you have one and it supports the 8 attributes). If you don’t have a GRC system, there is an excellent free piece of software to do threat modeling – available at http://www.ptatechnologies.com

Go green – recycle your threat models

Leading up to the Al Qaeda attack on the US on 9/11, the FBI investigated and the CIA analyzed, but no one bothered to discuss the impact of Saudis learning to fly but not land airplanes.

This sort of GRC disconnect between organizational silos is easily resolved by the common, politically neutral language of the threat analysis base class.

Summary

Effective GRC management requires neither better mathematical models nor complex enterprise software.  It does require us to explore new threat models and go outside the organization to look for risks we’ve never thought about and discover new links and interdependencies that may threaten our business.  If you follow the Tao of GRC 2.0 – it will be more than a fulfillment exercise.


Customer convenience or customer privacy

This is a presentation I gave at the UPU (Universal Postal Union) EPSG (Electronic Products and Services Working Group) working meeting in Bern on Feb 20, 2007. About 25 people from 20 countries were present, and it was a great experience for me to hear how postal operators see themselves and what they do in the B2C e-commerce space.
Click here to download the presentation


Lies of social networking

Is marketing age segmentation dead?

My sister-in-law Ella and her husband Moshe came over last night for coffee. Moshe and I sat outside on our porch so he could smoke his cigars, and we rambled over a bunch of topics: private networking, online banking and the Israeli stock market. Moshe grumbled about his stock broker not knowing about customer segmentation and using the same investment policy with all his clients. A few anecdotes like that and I realized:

Facebook doesn’t segment friends

There is an outstanding presentation from a Google researcher discussing this very point – the lack of segmentation in social networks:

http://www.slideshare.net/padday/the-real-life-social-network-v2

Almost every social networking site makes four assumptions, despite ample evidence that they’re wrong.

  1. Your friends are equally important
  2. Your friends are arranged into discrete groups
  3. You can manage hundreds of friends
  4. Friendship is reciprocal and equal

 

In fact:

  1. People tend to have 4–6 groups
  2. Each group has 2–10 people
  3. There are strong ties and weak ties
  4. Strong ties are always in the physical world and are < 6
  5. Weak ties in a business context are < 150

 


Medical device security trends

Hot spots for medical device software security

I think that 2011 is going to be an exciting year for medical device security as the FDA gets more involved in the approval and clearance process for software-intensive medical devices. Considering how much data is exchanged between medical devices and customer service centers, caregivers and primary clinical care teams – and how vulnerable this data really is – there is a huge amount of work to be done to ensure patient safety, patient privacy and delivery of the best medical devices to patients and their caregivers.

On top of a wave of new mobile devices and more compliance, some serious change is in the wings in Web services as well.

The Web application execution model is going to go through an inflection point in the next two years, transitioning from stateless HTTP, heterogeneous stacks on clients and servers, and message passing in the user interface (HTTP query strings) to WebSocket and HTML5, with the application running natively on the endpoint appliance rather than in a browser talking to a Web server.
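As a rough illustration of the difference, here is a trivial stateful WebSocket endpoint (a sketch using the third-party Python websockets package, recent versions of which accept a single-argument handler; nothing medical-device-specific about it): instead of a stateless request/response with parameters in a query string, the connection persists and either side can push messages at any time.

```python
# Minimal WebSocket echo server: one long-lived connection per client,
# so state can live across messages instead of in HTTP query strings.
import asyncio
import websockets

async def handler(websocket):
    async for message in websocket:              # messages arrive as they are pushed
        await websocket.send(f"ack: {message}")  # server can push back at any time

async def main():
    async with websockets.serve(handler, "localhost", 8765):
        await asyncio.Future()                   # run forever

asyncio.run(main())
```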

That’s why I believe we are in for interesting times.

Drivers

There are four key drivers for improving the software security of medical devices – some exogenous, like security; others product-oriented, like ease of use and speed of operation. Note that end-user concern for data security doesn’t seem to be a real market driver.

  1. Medical device quality (robustness, reliability, usability, ease of installation, speed of user interaction)
  2. Medical device safety (will the device kill the patient if the software fails, or be a contributing factor in harming the patient?)
  3. Medical device availability (will the device become unavailable to the user because of software bugs or security vulnerabilities that enable denial-of-service attacks?)
  4. Patient privacy (HIPAA, aka data security: does the device store ePHI, and can this ePHI be disclosed as a result of malicious attacks on the device by insiders and hackers?)

Against the backdrop of these 4 drivers, I see 4 key verticals: embedded devices, mobile applications, implanted devices and Web applications.

Verticals

Embedded devices (Device connected to patient)

  1. Operating systems, Windows vs. Linux
  2. Connectivity and integration into enterprise hospital networks: guidelines?
  3. Hardening the application versus bolting on security with anti-virus and network segmentation

Medical applications on mobile consumer devices (Device held in patient hand)

  1. iPhone and Android – for example, Epocrates for Android
  2. Software vulnerabilities that might endanger patient health
  3. Is the Apple Store, Android Market a back door for medical device software with vulnerabilities?
  4. Application Protocols/message passing methods
  5. Use of secure tokens for data exchange
  6. Use of distributed databases like CouchDB to store synchronized data at a head-end data provider and on the mobile device (see the replication sketch below). The vulnerability is primarily patient privacy, since a distributed setup like this probably increases total system reliability rather than decreasing it. For the sake of discussion, CouchDB is already installed on 10 million devices worldwide, and it is a given that data will be pushed out and stored at the endpoint handheld application.
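Here is a minimal sketch of pushing data from a head-end provider database to a device database using CouchDB’s standard _replicate endpoint (the host and database names are hypothetical, and authentication is omitted):

```python
# One-shot CouchDB replication between two databases; add
# "continuous": True to keep them synchronized as data changes.
import json
import urllib.request

replication = {
    "source": "http://headend.example.com:5984/vitals",
    "target": "http://device.example.com:5984/vitals",
}

request = urllib.request.Request(
    "http://headend.example.com:5984/_replicate",
    data=json.dumps(replication).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(request) as response:
    print(json.load(response))  # replication status document
```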

Implanted devices (Device inside patient)

  1. For example, ICDs (implanted cardiac defibrillators)
  2. Software bugs that result in vulnerabilities that might endanger patient health
  3. Design flaws (software, hardware, software+hardware) that might endanger patient health
  4. Vulnerability to denial-of-service and remote-control attacks when the ICD is connected for remote programming over GSM connectivity

Web applications  (Patient interacting with remote Web application using a browser)

  1. Software vulnerabilities that might endanger patient health because of a wrong diagnosis
  2. Application Protocols/message passing methods
  3. Use of secure tokens for data exchange
  4. Use of cloud computing as a service delivery model

In addition, there are several “horizontal” areas of concern where I believe the FDA may be involved or getting involved:

  1. Software security assessment standards
  2. Penetration testing
  3. Security audit
  4. Security metrics
  5. UI standards
  6. Message passing standards between remote processes

Controlled private networking

This evening I was added to a FB Group – apparently you don’t have to agree to be joined. FB Groups is a way to organize your contacts and get better control over your social networking. It looks pretty cool to me, but the New York Times suggests that Facebook Groups may engender even more privacy control issues for its users:

Mr. Zuckerberg said that other applications and services that use Facebook’s technology would be able to use Groups, and that Groups would help improve other parts of Facebook.
“Knowing the groups you are part of helps us understand the people who are most important to you, and that can help us rank items in the news feed,” he said.

Knowing this – would you use Facebook Groups for a business networking application, like sales professionals talking to clients? I don’t think so. FB will never give up their profiling data, since their revenue model is advertising-based. The low cost of running a private, controlled social network like Elgg in the cloud should make it a competitive alternative to FB Groups for a small business looking to leverage social networking to reduce the cost of customer support, marketing and distribution of material.

Marc Rotenberg, executive director of the Electronic Privacy Information Center, called the new service “double-edged”:

“Yes, it’s good to be able to segment posts for particular friends,” he said. “But you will also be revealing information to Facebook about the basis of your online connections.”
