
[Figure: Federal Healthcare Chart]

Healthcare data interoperability pain

Data without interoperability = pain.

What is happening in the US healthcare space is fascinating: stimulus funds (or what they call in the Middle East – “baksheesh”) are being paid to doctors to acquire an Electronic Health Records (EHR) system that demonstrates “meaningful use”. The term “meaningful use” is vaguely defined in the stimulus bill as programs that can enable data interchange, e-prescribing and quality indicators.

Our hospital recently spent millions on an EMR that does not integrate with any outpatient EMR. Where is the data exchanger and who deploys it? What button is clicked to make this happen! My practice is currently changing its EMR. We are paying big bucks for partial data migration. All the assurances we had about data portability when we purchased our original EMR were exaggerated to make a sale. Industry should have standards. In construction there are 2×4’s, not 2×3.5’s.
Government should not impinge on privacy and free trade, but they absolutely have a key role in creating standards that ensure safety and promote growth in industry.
Read more here: Healthcare interoperability pains

Mr. Obama’s biggest weakness is that he has huge visions but he can’t be bothered with the details, so he lets his team and party members hack out implementations – which is why his healthcare initiatives are on a very shaky footing, as the above doctor aptly noted. But perhaps something more profound is at work. The stimulus bill does not mention standards as a prerequisite for EHR, and I assume that the tacit assumption (like many things American) is that standards will “happen” due to the power of free markets. This is at odds with Mr. Obama’s political agenda of big socialistic government with central planning. As the doctor said: “government absolutely (must) have a key role in creating standards that ensure safety and promote growth in industry”. The expectation that this administration set is that they will take care of things, not that free markets will take care of things. In the meantime, standards are being developed by private-public partnerships like HITSP – “enabling healthcare interoperability”:

The Healthcare Information Technology Standards Panel (HITSP) is a cooperative partnership between the public and private sectors. The Panel was formed for the purpose of harmonizing and integrating standards that will meet clinical and business needs for sharing information among organizations and systems.

It’s notable that HITSP frames its mission as meeting clinical and business needs for sharing information among organizations and systems – not among patients and doctors. In the same spirit, managed-care organizations call people consumers so that they don’t have to think of them as patients.

I have written here, here and here about the drawbacks of packaging Federal money, defense contractors and industry lobbies as “private-public partnerships”.

You can give a doctor $20k of Federal money to buy EMR software, but if it doesn’t interact with the most important data source of all (the patient), everyone’s ROI (the doctor, the patient and the government) will approach zero.

Vendor-neutral standards are key to interoperability. If the Internet were built to HITSP style standards, there would be islands of Internet connectivity and back-patting press-releases, but no Internet.

The best vendor-neutral standards we have today are created by the IETF – a private group of volunteers, not by a “private-public partnership”.

The Internet Engineering Task Force (IETF) is a large open international community of network designers, operators, vendors, and researchers concerned with the evolution of the Internet architecture and the smooth operation of the Internet. It is open to any interested individual. The IETF Mission Statement is documented in RFC 3935.

However – vendor-neutral standards are a necessary but insufficient condition for “meaningful use” of data. There also has to be fast, cheap and easy-to-use access in the “last mile”. In healthcare, the last mile is the patient-doctor interaction.

About 10-15 years ago, interoperability in the telecommunications and B2B spaces was based on an EDI paradigm, with centralized messaging hubs for system-to-system document interchange. As mobile evolved into 3G, cellular applications made a hard shift to a distributed paradigm, with middleware-enabled interoperability from a consumer handset to all kinds of 3G services – location, games, billing, accounting and so on – running at the operator and its content partners.

The healthcare industry is still at the EDI stage of development, as we can see from organizations like WEDI and HIMSS:

The Workgroup for Electronic Data Interchange (WEDI)

Improve the administrative efficiency, quality and cost effectiveness of healthcare through the implementation of business strategies for electronic record-keeping, and information exchange and management...provide multi-stakeholder leadership and guidance to the healthcare industry on how to use and leverage the industry’s collective technology, knowledge, expertise and information resources to improve the administrative efficiency, quality and cost effectiveness of healthcare information.

What happened to quality and effectiveness of patient-care?

It is not about IT and cost-effectiveness of information (whatever that means). It’s about getting the doctor and her patient exactly the data they need, when they need it. That’s why the doctor went to medical school.

Compare EDI-style, message-hub centric protocols to RSS/Atom on the Web, where any Web site can publish content and any endpoint (browser or tablet device) can subscribe easily. As far as I can see, the EHR space is still dominated by the “message hub, system-to-system, health provider to health provider to insurance company to government agency” model, while in the meantime, tablets are popping up everywhere with interesting medical applications. All these interesting applications will not be worth much if they don’t enable the patient and doctor to share the data.
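To illustrate how low the barrier is on the subscribing side, here is a minimal sketch using the third-party feedparser library; the feed URL is hypothetical:

```python
# Minimal sketch of the subscriber side: any endpoint that speaks HTTP can
# pull a feed and read its entries. The URL below is hypothetical.
import feedparser  # third-party library: pip install feedparser

feed = feedparser.parse("https://clinic.example.org/patients/12345/vitals.atom")
for entry in feed.entries:
    print(entry.get("title"), entry.get("updated"))
```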

Imagine the impact of IETF style standards, lightweight protocols (like RSS/Atom) and $50 tablets running data sharing apps between doctors and patients.

Imagine vendor-neutral, standard middleware for EHR applications that would expose data to patients and doctors using an encrypted Atom protocol – very simple, very easy to implement, easy to secure, and with very clear privacy boundaries. Perhaps not my first choice for sharing radiology data, but a great way to share vital signs and significant events like falls and blood-pressure drops.
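To make that concrete, here is a minimal sketch of what publishing a blood-pressure reading as an Atom entry could look like, using only the Python standard library. The vitals namespace, URN scheme and patient identifier are hypothetical, and the “encrypted” part would simply be serving the feed over TLS:

```python
# Minimal sketch, not a real EHR API: publish one vital-signs reading as an
# Atom entry. The "vitals" namespace and URN scheme below are hypothetical.
from datetime import datetime, timezone
from xml.etree import ElementTree as ET

ATOM = "http://www.w3.org/2005/Atom"
VITALS = "http://example.org/ns/vitals"  # hypothetical extension namespace

def vital_signs_feed(patient_id, systolic, diastolic, pulse):
    ET.register_namespace("", ATOM)
    ET.register_namespace("v", VITALS)
    now = datetime.now(timezone.utc).isoformat()

    feed = ET.Element(f"{{{ATOM}}}feed")
    ET.SubElement(feed, f"{{{ATOM}}}title").text = "Vital signs"
    ET.SubElement(feed, f"{{{ATOM}}}id").text = f"urn:example:patient:{patient_id}:vitals"
    ET.SubElement(feed, f"{{{ATOM}}}updated").text = now

    entry = ET.SubElement(feed, f"{{{ATOM}}}entry")
    ET.SubElement(entry, f"{{{ATOM}}}title").text = "Blood pressure and pulse"
    ET.SubElement(entry, f"{{{ATOM}}}id").text = f"urn:example:patient:{patient_id}:vitals:{now}"
    ET.SubElement(entry, f"{{{ATOM}}}updated").text = now
    ET.SubElement(entry, f"{{{VITALS}}}systolic").text = str(systolic)
    ET.SubElement(entry, f"{{{VITALS}}}diastolic").text = str(diastolic)
    ET.SubElement(entry, f"{{{VITALS}}}pulse").text = str(pulse)

    # Serve this over HTTPS so transport encryption and authentication
    # define the privacy boundary.
    return ET.tostring(feed, encoding="unicode")

print(vital_signs_feed("12345", 138, 92, 71))
```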

This would be the big game changer for the entire healthcare industry. Not baksheesh. Not EDI. Not private-public partnerships.


Moving your data to the cloud – sense and sensibility

Data governance is a sine qua non for protecting your data in the cloud. Data governance is of particular importance for the cloud service delivery model, which is philosophically different from the traditional IT product delivery model.

In a product delivery model, it is difficult for a corporate IT group to quantify asset value and data security value at risk over time, due to changes in staff, business conditions, IT infrastructure, network connectivity and software applications.

In a service delivery model, payment is made for services consumed on a variable basis, as a function of transaction volume, storage or compute cycles. Data security and compliance requirements can be negotiated into the cloud service provider’s service level agreement. This makes quantifying the costs of security countermeasures relatively straightforward, since the security is built into the service, and it makes practical threat analysis models more accessible than ever.
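As an illustration of the kind of threat-analysis arithmetic this enables – the asset values, probabilities and costs below are invented for the example, not taken from any real engagement – a back-of-the-envelope value-at-risk comparison might look like this:

```python
# Toy value-at-risk comparison. All numbers are illustrative assumptions.
# Value at risk = asset value x annual threat probability x expected damage fraction.

assets = {
    # asset name: (asset value $, annual threat probability, damage fraction)
    "customer PHI database": (2_000_000, 0.15, 0.6),
    "billing records":       (  500_000, 0.10, 0.4),
}

countermeasure_annual_cost = 60_000  # e.g. encryption + monitoring priced into the SLA
mitigation_effect = 0.8              # assumed fraction of the risk the countermeasure removes

total_var = sum(value * prob * damage for value, prob, damage in assets.values())
residual_var = total_var * (1 - mitigation_effect)

print(f"Value at risk before countermeasure: ${total_var:,.0f}")
print(f"Residual risk after countermeasure:  ${residual_var:,.0f}")
print(f"Annual risk reduction vs. SLA cost:  ${total_var - residual_var:,.0f} vs ${countermeasure_annual_cost:,}")
```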

However – this leaves the critical question of data asset value and data governance. We believe that data governance is a primary requirement for moving your data to the cloud and a central data security countermeasure in the security and compliance portfolio of a cloud customer.

With increasing numbers of low-priced, high-performance SaaS, PaaS and IaaS cloud service offerings, it is vital that organizations start formalizing their approach to data governance. Data governance means defining data ownership, data access controls, data traceability and regulatory compliance – for example, for PHI (protected health information as defined for HIPAA compliance).

To build an effective data governance strategy for the cloud, start by asking and answering 10 questions, striking the right balance between common sense and data security requirements (a minimal sketch of a data-asset register follows the list):

  1. What is your most valuable data?
  2. How is that data currently stored – file servers, database servers, document management systems?
  3. How should that data  be maintained and secured?
  4. Who should have access to that data?
  5. Who really has access to that data?
  6. When was the last time you examined your data security/encryption polices?
  7. What do your programmers know about data security in the cloud?
  8. Who can manipulate your data? (include business partners and contractors)
  9. If that data leaked to unauthorized parties, how much would the damage cost the business?
  10. If you had a data breach – how long would it take you to detect the data loss event?
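
As a starting point for questions 1, 2, 4 and 9, a data-asset register can be as simple as the sketch below; the asset names, owners, stores and dollar figures are hypothetical placeholders:

```python
# Hypothetical data-asset register: the names, owners, stores and dollar
# values are illustrative placeholders, not from any real inventory.
from dataclasses import dataclass, field

@dataclass
class DataAsset:
    name: str
    store: str                     # where the data lives today (question 2)
    classification: str            # e.g. "PHI", "PII", "confidential", "public"
    owner: str                     # accountable business owner (question 1)
    authorized_roles: list = field(default_factory=list)  # who should have access (question 4)
    breach_cost_estimate: int = 0  # estimated damage if leaked (question 9)

register = [
    DataAsset("patient visit records", "hospital EMR database", "PHI",
              "chief medical officer", ["physician", "nurse"], 1_500_000),
    DataAsset("product design documents", "SharePoint", "confidential",
              "VP engineering", ["engineering"], 400_000),
]

# Questions 5 and 10 - who really has access, and how quickly a breach would
# be detected - can only be answered by comparing a register like this
# against actual access logs and monitoring data.
for asset in register:
    print(f"{asset.classification:<14} {asset.name:<25} owner={asset.owner}")
```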

A frequent question from clients regarding data governance strategy in the cloud is “what kind of data should be retained in local IT infrastructure?”

A stock response is that obviously sensitive data should remain in local storage. But instead, consider the cost/benefit of storing the data with an infrastructure cloud service provider and not exposing those sensitive data assets to trusted insiders, contractors and business partners.

Using a cloud service provider to store sensitive data may actually reduce the threat surface rather than increase it, and give you more control by centralizing and standardizing data storage as part of your overall data governance strategy.

You can RFP/negotiate robust data security controls in a commercial contract with cloud service providers – something you cannot easily do with employees.

A second frequently asked question regarding data governance in the cloud is “How can we protect our unstructured data from a data breach?”

The answer is that it depends on your business and your application software.

Although analysts like Gartner have asserted that over 80% of enterprise data is stored in unstructured files like Microsoft Office documents, this is clearly very dependent on the kind of business you’re in. Arguably, none of the big data breaches happened because someone stole Excel files.

If anything, the database threat surface is growing rapidly. Telecom/cellular service providers have far more data (CDRs, customer service records, etc.) in structured databases than in Office documents, and with more smartphones, Android tablets and Chrome OS devices, this will grow even more. As hospitals move to EMR (electronic medical records), this will also soon be the case across the entire healthcare system, where almost all sensitive data is stored in structured databases like Oracle, Microsoft SQL Server, MySQL or PostgreSQL.

Then there is the rapidly growing use of MapReduce/JSON database technology used by Facebook and Digg: CouchDB (with some 10 million installations) and MongoDB, which connect directly to Web applications. These NoSQL databases may be vulnerable to some of the traditional injection attacks that involve string concatenation. Developers are well advised to use native APIs for building safe queries and to patch frequently, since the technology is developing rapidly and, with large numbers of eyeballs, vulnerabilities are quickly being discovered and patched. Note the proactive approach the Apache Foundation is taking towards CouchDB security and a recent (Feb 1, 2011) release fixing a CouchDB cross-site scripting vulnerability.
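To illustrate the difference between string concatenation and a native query API – the database, collection and field names below are hypothetical, and the example uses MongoDB’s Python driver (pymongo) – a sketch might look like this:

```python
# Illustrative sketch only: the database, collection and field names are
# hypothetical. Requires the pymongo driver (pip install pymongo).
from pymongo import MongoClient

db = MongoClient("mongodb://localhost:27017")["demo"]

def find_user_unsafe(username: str):
    # String concatenation into server-side JavaScript ($where): input such as
    #   '" || "1" == "1'
    # changes the query logic - the classic injection pattern.
    return list(db.users.find({"$where": 'this.username == "' + username + '"'}))

def find_user_safe(username: str):
    # Native query API: the driver treats the input strictly as data,
    # so there is nothing to "break out" of.
    return list(db.users.find({"username": username}))
```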

So – consider these issues when building your data governance strategy for the cloud and start by asking and answering the 10 key questions for cloud data security.


The effectiveness of access controls

With all due respect to Varonis and to access controls in general (the SharePoint space alone is a fertile market for data security), the problem with internally-launched attacks is that they are carried out by the “right” people and/or by software agents that have the “right” access rights.

There are 3 general classes of internal attacks that are never going to be mitigated by access controls:

Trusted insider theft

A trivial example is a director of new technology development at a small high-tech startup, who has access to the entire company’s IP: the competitive analyses, patent applications and minutes of conversations with all the people who ever stopped in to talk about the startup’s technology. That same person has access by definition, but when he takes his data and sucks it out of the network using a back door, a proxy, an HTTP GET, or just a plain USB drive or a Gmail account, there is no way an Active Directory access control will be able to detect that as “anomalous behavior”.

Social engineering

Collusion between insiders, gaming the system, and taking advantage of friends or the DHL messengers who go in and out of the office all the time with their bags.

Side channel attacks

Detecting data at a distance with acoustic or TEMPEST attacks, for example, or watching parking-lot traffic patterns.
