Tag Archives: HIPAA


10 ways to detect employees who are a threat to PHI

Software Associates specializes in software security and privacy compliance for medical device vendors in Israel.   One of the great things about working with Israeli medical device vendors is the level of innovation, drive and abundance of smart people.

It’s why I get up in the morning.

Most people who don’t work in security assume that the field is very technical, yet really – it’s all about people. Data security breaches happen because people are greedy or careless. 100% of software vulnerabilities are bugs, and most of those are design bugs which could have been avoided or mitigated by 2 or 3 people talking through the issues during the development process.

I’ve been talking to several of my colleagues for years about writing a book on “Security anti-design patterns” – and the time has come to start. So here we go:

Security anti-design pattern #1 – The lazy employee

Lazy employees are often misdiagnosed by security and compliance consultants as being stupid.

Before you flip the bozo bit on a customer’s employee as being stupid, consider that education and IQ are not reliable indicators of dangerous employees who are a threat to company assets.

Lazy employees may be quite smart, but they’d rather rely on organizational constructs than actually think and execute – and occasionally get caught making a mistake.

I realized this while engaging with a client who has a very smart VP – he’s so smart he has succeeded in maintaining a perfect record of never actually executing anything of significant worth at his company.

As a matter of fact – the issue is not smarts but believing that organizational constructs are security countermeasures in disguise.

So – how do you detect the people (even the smart ones) who are threats to PHI, intellectual property and system availability:

  1. Their hair is better organized than their thinking
  2. They walk around the office with a coffee cup in their hand and when they don’t, their office door is closed.
  3. They never talk to peers who challenge their thinking.   Instead they send emails with a NATO distribution list.
  4. They are strong on turf ownership. A good sign of turf ownership issues is when subordinates have gotten into the habit of not challenging the coffee-cup-holding VP’s thinking.
  5. They are big thinkers.    They use a lot of buzz words.
  6. When an engineer challenges their regulatory/procedural/organizational constructs – the automatic answer is an angry retort: “That’s not your problem”.
  7. They use a lot of buzz-words like “I need a generic data structure for my device log”.
  8. When you remind them that they already have a generic data structure for their device log and a wealth of tools for data mining their logs – amazing free tools like Elasticsearch and R – they go back and whine a bit more about generic data structures for device logs.
  9. They seriously think that ISO 13485 is a security countermeasure.
  10. They’d rather schedule a corrective action session 3 weeks after a serious security event instead of fixing the issue the next day and documenting the root causes and changes.

If this post pisses you off (or if you like it), contact me – Danny Lieberman. I’m always interested in challenging projects with people who challenge my thinking.


The top 5 things a medical device vendor should do for HIPAA compliance

We specialize in software security assessments, FDA cyber-security and HIPAA compliance for medical device vendors in Israel.

The first question that every medical device vendor CEO asks us is “What is the fastest and cheapest way for us to be HIPAA-compliant”?

So here are the top 5 things a medical device vendor should do in order to achieve HIPAA compliance:

1. Don’t store EPHI

If you can, do not store EPHI in your system at all. That way you can side-step the entire HIPAA compliance process. (This is not to say that you don’t have to satisfy FDA cyber-security requirements or have strong software security in general – but that is a separate issue.)

What is EPHI? EPHI (electronic protected health information) is any combination of PII (personally identifiable information) and clinical data. OK – so what is the definition of PII from the perspective of HIPAA? Basically, PII is any combination of data that can be used to steal someone’s identity. More formally, here is a list of PHI identifiers (a minimal detection sketch follows the list):

  1. A name
  2. An address. The kind that FedEx or USPS understands
  3. Birth dates – age does not count.
  4. Phone numbers including (especially) mobile phone
  5. Email addresses
  6. Usernames of online services
  7. Social Security numbers
  8. Medical record numbers
  9. Health plan beneficiary number
  10. Account numbers
  11. Certificate/license numbers – any number that identifies the individual. A certificate on winning a spelling bee in Junior High doesn’t count.
  12. Vehicle identifiers and serial numbers, including license plate numbers
  13. Device identifiers and serial numbers that can be tied back to a person
  14. URLs – that can be tied back to a person using DNS lookups
  15. IP addresses – for example the IP address of a home router that can be used to look up and identify a person
  16. Biometric identifiers, including finger and voice prints
  17. Full face pictures

2. If you store EPHI do a threat analysis of your medical device

The HIPAA Security Rule and the FDA cyber security guidance are very clear on this point. You can learn more about threat modeling and analysis here, here and here. Regarding encryption and medical device security, read this.

3. Implement software configuration management and deployment tools

The best advice I can give a medical device vendor is to use Git. If you use Azure or are a Microsoft shop (our condolences – read here and here why Windows is a bad choice for medical devices) then TFS is a great solution that integrates nicely with Azure. Note that Azure is a great cloud solution for Linux as well. Don’t get me wrong – Microsoft does a lot of things right. But using Windows for medical devices is a really bad idea.
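
Your source control can even double as a security control. Below is a hypothetical sketch – the hook, pattern and policy are my own illustrations, not a standard Git feature – of a pre-commit hook in Python that refuses any commit whose staged files contain an obvious SSN-like pattern. Save it as .git/hooks/pre-commit and make it executable.

```python
#!/usr/bin/env python3
# Hypothetical pre-commit hook: refuse commits containing SSN-like strings.
# The pattern and policy are illustrative only - extend to suit your PHI rules.
import re
import subprocess
import sys

SSN = re.compile(rb"\b\d{3}-\d{2}-\d{4}\b")

def staged_files():
    # Files added, copied or modified in the index for this commit.
    out = subprocess.check_output(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"])
    return [line for line in out.decode().splitlines() if line]

def main():
    for path in staged_files():
        try:
            with open(path, "rb") as f:
                if SSN.search(f.read()):
                    print(f"Refusing commit: possible SSN in {path}")
                    sys.exit(1)
        except OSError:
            continue  # unreadable file - skip it
    sys.exit(0)

if __name__ == "__main__":
    main()
```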

4. Implement log monitoring

Monitoring your logs for peaks in CPU, memory or disk usage is a great way to know if you’re being attacked. But – if you have medical device logs and no one is home to answer the phone, then it’s a waste of time.
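
As a starting point, here is a minimal monitoring sketch in Python using the psutil library. The thresholds and the alert action are placeholder assumptions – wire the alert callback into whatever actually wakes a human up.

```python
import time
import psutil  # pip install psutil

# Illustrative thresholds - tune them to your device's normal workload.
LIMITS = {"cpu": 90.0, "memory": 90.0, "disk": 95.0}

def sample():
    """Take one reading of CPU, memory and disk usage, in percent."""
    return {
        "cpu": psutil.cpu_percent(interval=1),
        "memory": psutil.virtual_memory().percent,
        "disk": psutil.disk_usage("/").percent,
    }

def watch(alert, period=60):
    while True:
        for metric, value in sample().items():
            if value > LIMITS[metric]:
                alert(f"{metric} at {value:.0f}% exceeds limit of {LIMITS[metric]:.0f}%")
        time.sleep(period)

if __name__ == "__main__":
    watch(alert=print)  # replace print with an email/pager/SIEM notification
```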

5. Make sure the lights are on and someone is home

You’ve done a great job on your medical device software. You did Verification and Validation, you implemented threat modeling in your development process and you have logs. Just make sure someone knows that it’s their job to keep an eye on security events. If you get a notice from a customer, a ping from your log manager, or an email from your cloud provider that they’re gonna reboot your services because of VENOM – just make sure the lights are on and someone is home.

 

 

 

In summary

Robust security for your medical device is not fortune telling, but neither is it an organizational construct. The best way to think about your medical device is to think about something you would give a child (or a soldier on the battlefield). It has to be totally reliable and safe for the patient even under the most adverse conditions.

 


The facts of life for HIPAA business associates

If you are a biomed vendor and you collect any kind of PHI (protected health information) in your medical device or store information in the cloud (including public cloud services like Google Drive and Dropbox), you need to be aware of US healthcare information privacy regulation.

As a medical device vendor selling to healthcare providers, hospitals, physicians and health information providers in the US, you may be directly liable for violations of the HIPAA Security Rule for impermissible use and disclosure of PHI (protected health information) in any form, paper or digital.

You cannot hide behind your contract with the covered entity or sub-contract your services to another entity.

You must now comply with the HIPAA Security Rule yourself.

In the past you could rely on your business contract with your covered entity customer as a business associate.

The Final Rule makes business associates of covered entities directly liable for Federal penalties for failures to comply.

The Security Rule’s administrative, physical, and technical safeguards requirements in §§ 164.308, 164.310, and 164.312, as well as the Rule’s policies and procedures and documentation requirements in § 164.316, apply to business associates in the same manner as these requirements apply to covered entities; business associates are now civilly and criminally liable for violations of these provisions.

When a breach of patient privacy occurs, business associates and their sub-contractors must notify HHS if more than 500 records have been disclosed.

The HIPAA Final rule becomes effective March 26, 2013. Everyone has to comply by September 23, 2013.  That includes medical device vendors like you.

I’m a small biomed startup – what should I do?

Smaller or less sophisticated biomed vendors may not have engaged in the formal safeguards required by the HIPAA Security Rule, and may find the Final Rule new and even intimidating territory.

Software Associates specializes in software security and HIPAA compliance for biomed. We use a robust threat modeling process that analyzes multiple threat scenarios and generates best-fit, cost-effective safeguards – a highly effective way of achieving robust software security and HIPAA compliance.

We will help you achieve HIPAA compliance and implement the right safeguards for your product.

Please feel free to contact us at any time and ask for a free phone consultation.

 


How to secure patient data in a healthcare organization

If you are a HIPAA covered entity, or a business associate vendor to a HIPAA covered entity, the question of securing patient data is central to your business. If you are a big organization, you probably don’t need my advice – since you have a lot of money to spend on expensive security and compliance consultants.

But – if you are a small to mid-size hospital, nursing home or medical device vendor without a large budget for security compliance, the natural question you ask is “How can I do this for as little money as possible?”

You can do some research online and then hire a HIPAA security and compliance consultant who will walk you through the security safeguards in CFR 45 Appendix A and help you implement as many items as possible. This seems like a reasonable approach, but the more safeguards you implement, the more money you spend – and moreover, you do not necessarily know if your security has improved, since you have not examined your value at risk, i.e. how much money it will cost you if you have a data security breach.

If you read CFR 45 Appendix A carefully, you will note that the standard wants you to do a top-down risk analysis, risk management and periodic information security activity review.

The best way to do that top-down risk analysis is to build probable threat scenarios – considering what could go wrong – an employee stealing a hard disk from a nursing station in an ICU where a celebrity is recuperating, or a hacker sniffing the hospital wired LAN for PHI.

Threat scenarios as an alternative to compliance control policies

When we perform a software security assessment of a medical device or healthcare system, we think in terms of “threat scenarios” or “attack scenarios”, and the result of that thinking manifests itself in planning, penetration testing, security countermeasures, and follow-up for compliance. The threat scenarios are not “one size fits all”.  The threat scenarios for an AIDS testing lab using medical devices that automatically scan and analyze blood samples, or an Army hospital using a networked brain scanning device to diagnose soldiers with head injuries, or an implanted cardiac device with mobile connectivity are all totally different.

We evaluate the medical device or healthcare product from an attacker point of view, then from the management team point of view, and then recommend specific cost-effective, security countermeasures to mitigate the damage from the most likely attacks.

In our experience, building a security portfolio on attack scenarios has 3 clear benefits;
  1. A robust, cost-effective security portfolio based on attack analysis results in robust compliance over time.
  2. Executives relate well to the concepts of threat modeling and attack analysis. Competing, understanding the value of their assets, taking risks and protecting themselves from attackers is, at the end of the day, why executives get the big bucks.
  3. Threat scenarios are a common language between IT, security operations teams and the business line managers.

This last benefit is extremely important in your healthcare organization, since business delegates security to IT and IT delegates security to the security operations teams.

As I wrote in a previous essay “The valley of death between IT and security“, there is a fundamental disconnect between IT operations (built on maintaining predictable business processes) and security operations (built on mitigating vulnerabilities).

Business executives delegate information systems to IT and information security to security people on the tacit assumption that they are the experts in information systems and security.  This is a necessary but not sufficient condition.

In the current environment of rapidly evolving types of attacks (hacktivism, nation-state attacks, credit card attacks mounted by organized crime, script kiddies, competitors, malicious insiders and more…), it is essential that IT and security communicate effectively regarding the types of attacks that their organization may face and the potential business impact.

If you have any doubt about the importance of IT and security talking to each other, consider that leading up to 9/11, the CIA  had intelligence on Al Qaeda terrorists and the FBI investigated people taking flying lessons, but no one asked the question why Arabs were learning to fly planes but not land them.

With this fundamental disconnect between 2 key maintainers of information protection, it is no wonder that organizations are having difficulty effectively protecting their assets – whether Web site availability for an online business, PHI for a healthcare organization or intellectual property for an advanced technology firm.

IT and security  need a common language to execute their mission, and I submit that building the security portfolio around most likely threat scenarios from an attacker perspective is the best way to cross that valley of death.

There seems to be a tacit assumption with many executives that regulatory compliance is already a common language of security for an organization.  Compliance is a good thing as it drives organizations to take action on vulnerabilities but compliance checklists like PCI DSS 2.0, the HIPAA security rule, NIST 800 etc, are a dangerous replacement for thinking through the most likely threats to your business.  I have written about insecurity by compliance here and here.

Let me illustrate why compliance control policies are not the common language we need by taking an example from another compliance area – credit cards.

PCI DSS 2.0 has an obsessive preoccupation with anti-virus. It does not matter if you have a 16 quad-core Linux database server that is not attached to the Internet, with no removable devices and no Windows connectivity. PCI DSS 2.0 wants you to install ClamAV and open the server up to the Internet for the daily anti-virus signature updates. This is an example of a compliance control policy that is not rooted in a probable threat scenario and that creates additional vulnerabilities for the business.

Now, consider some deeper ramifications of compliance control policy-based security.

When a  QSA or HIPAA auditor records an encounter with a customer, he records the planning, penetration testing, controls, and follow-up, not under a threat scenario, but under a control item (like access control). The next auditor that reviews the  compliance posture of the business  needs to read about the planning, testing, controls, and follow-up and then reverse-engineer the process to arrive at which threats are exploiting which vulnerabilities.

Other actors such as government agencies (DHS for example) and security researchers go through the same process. They all have their own methods of churning through the planning, test results, controls, and follow-up, to reverse-engineer the data in order to arrive at which threats are exploiting which vulnerabilities.

This ongoing process of “reverse-engineering” is the root cause for a series of additional problems:

  • Lack of overview of the security threats and vulnerabilities that really count
  • Insufficient connection to best-practice security controls – no indication of which controls to follow or which have been followed
  • No connection between controls and security events, except circumstantial
  • No ability to detect and warn of negative interactions between countermeasures (for example – a firewall configuration that blocks Internet access but also blocks operating system updates, enabling malicious insiders or outsiders to back-door into the systems from inside the network and compromise firewalled services)
  • No archiving or demoting of less important and solved threat scenarios (since the data models are control-based)
  • Lack of overview of the security status of a particular business – only a series of historical observations, disclosed or not disclosed. Is Bank of America getting better at data security or worse?
  • An excess of event data that cannot possibly be read by the security and risk analyst at every encounter
  • Confidentiality and privacy borders that are hard to define, since the border definitions are networks, systems and applications, not confidentiality and privacy

Using value at risk to figure out how much a breach will really cost you

Your threat scenarios must consider asset values (your patient information, systems, management attention, reputation), vulnerabilities, threats and possible security countermeasures. Threat analysis as a methodology does not look for ROI or ROSI (there is no ROI for security anyhow) but considers the best and cheapest way to reduce asset value at risk.
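
To make “value at risk” concrete, here is a toy calculation with invented numbers. The formula – annual rate of occurrence times damage fraction times asset value, reduced by countermeasure effectiveness and increased by countermeasure cost – is one common formulation, an assumption on my part rather than a standard.

```python
# Toy value-at-risk arithmetic with invented numbers.
asset_value = 500_000          # cost of a full breach of the patient database ($)
annual_rate = 0.3              # estimated occurrences of the threat per year
damage = 0.4                   # fraction of asset value lost per occurrence
effectiveness = 0.8            # fraction of the damage the countermeasure stops
countermeasure_cost = 20_000   # annual cost of the countermeasure ($)

var_before = asset_value * annual_rate * damage                     # $60,000
var_after = var_before * (1 - effectiveness) + countermeasure_cost  # $32,000

print(f"VaR before: ${var_before:,.0f}, after: ${var_after:,.0f}")
```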

And – as we opened the article – the question is  “How can I do this for as little money as possible?”


Beyond the firewall

Beyond the firewall – data loss prevention

What a simple idea. It doesn’t matter how they break into your network or servers – if attackers can’t take out your data, then you’ve mitigated the threat.

Data loss prevention is a category of information security products that has matured from Web / email content filtering products into technologies that can detect unauthorized network transfer of valuable digital assets such as credit cards. This paper reviews the motivation for and the taxonomies of advanced content flow monitoring technologies that are being used to audit network activity and protect data inside the network.

Motivation – why prevent data loss?

The majority of hacker attacks and data loss events are not on the IT infrastructure but on the data itself.  If you have valuable data (credit cards, customer lists, ePHI) then you have to protect it.

Content monitoring has traditionally meant monitoring of employee or student surfing and filtering out objectionable content such as violence, pornography and drugs. This sort of Web content filtering became “mainstream” with wide-scale deployments in schools and larger businesses by commercial closed-source companies such as McAfee and Bluecoat and open source products such as CensorNet and SpamAssassin. Similar signature-based technologies are also used to perform intrusion detection and prevention.

However, starting in 2003, a new class of content monitoring products started emerging that is aimed squarely at protecting firms from unauthorized “information leakage”, “data theft” or “data loss” no matter what kind of attack was mounted. Whether the data was stolen by hackers, leaked by malicious insiders or disclosed via a Web application vulnerability, the data is flowing out of the organization. The attack vector in a data loss event is immaterial if we focus on preventing the data loss itself.

The motivation for using data loss prevention products is economic, not behavioral; transfer of digital assets such as credit cards and PHI by trusted insiders or trusted systems can cause much more economic damage to a business than viruses.

Unlike viruses, once a competitor steals data you cannot reformat the hard disk and restore from backup.

Companies often hesitate from publicly reporting data loss events because it damages their corporate brand, gives competitors an advantage and undermines customer trust no matter how much economic damage was actually done.

Who buys DLP (data loss prevention)?

This is an interesting question. On one hand, we understand that protecting intellectual property, commercial assets and compliance-regulated data like ePHI and credit cards is  essentially an issue of  business risk management. On the other hand, companies like Symantec and McAfee and IBM sell security products to IT and information security managers.

IT managers focus on maintaining predictable execution of business processes, not dealing with unpredictable, rare, high-impact events like data loss. Information security managers find DLP technology interesting (and even titillating – since it detects details of employee behavior, good and bad) but an information security manager who buys data loss prevention (DLP) technology is essentially admitting that his perimeter security (firewall, IPS) and policies and procedures are inadequate.

While data loss prevention may be a problematic sale for IT and information security staffers, it plays well into the overall risk analysis,  risk management and compliance processes of the business unit.

Data loss prevention for senior executives

There seem to be three schools of thought on this with senior executives:

  1. One common approach is to ignore the problem and brush it under the compliance carpet using a line of reasoning that says “If I’m PCI DSS/HIPAA compliant, then I’ve done what needs to be done, and there is no point spending more money on fancy security technologies that will expose even more vulnerabilities”.
  2. A second approach is to perform passive data loss detection and monitor the flow of data (like email and file transfers) without notifying employees or the whole world. Anomalous detection events can then be used to improve business processes and mitigate system vulnerabilities. The advantage of passive monitoring is that neither employees nor hackers can detect a Layer 2 sniffer device, and a sniffer is immune to configuration and operational problems in the network. If it can’t be detected on the network, this school of thought has plausible deniability.
  3. A third approach takes data loss prevention a step beyond security and turns it into a competitive advantage. A smart CEO can use data loss prevention system as a deterrent and as a way of enhancing the brand (“your credit cards are safer with us because even if the Saudi hacker gets past our firewall and into the network, he won’t be able to take the data out”).

A firewall is not enough

Many firms now realize that a firewall is not enough to protect digital assets inside the network and look towards incoming/outgoing content monitoring. This is because: 

  1. The firewall might not be properly configured to stop all the suspicious traffic.

  2. The firewall doesn’t have the capability to detect all types of content, especially embedded content in tunneled protocols.

  3. The majority of hacker attacks and data loss events are not on the IT infrastructure but on the data itself.

  4. Most hackers do not expect creative defenses so they assume that once they are in, nobody is watching their nasty activities.

  5. The firewall itself can be compromised. With more and more Day-0 attacks and trusted insider threats, it is good practice to add additional independent controls.

Detection

Sophisticated incoming and outgoing (data loss prevention or DLP) content monitoring technologies basically use three paradigms for detecting security events:

  1. AD- Anomaly Detection – describes normal network behavior and flags everything else
  2. MD- Misuse Detection – describes attacks and flags them directly
  3. BA – Burglar alarm – describes abnormal network behavior (“detection by exception”)

In anomaly detection, new traffic that doesn’t match the model is labeled as suspicious or bad and an alert is generated. The main limitation of anomaly detection is that if the model is too conservative, it will generate too many false positives (false alarms) and over time the analyst will ignore it. On the other hand, if a tool rapidly adapts the model to evolving traffic, too few alerts will be generated and the analyst will again ignore it.
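
A toy illustration of that trade-off in Python: flag any sample more than a few standard deviations from the recent mean. Lower the threshold and you drown in false positives; raise it and real events slip through. The traffic numbers are invented.

```python
import statistics

def anomalies(series, window=30, threshold=3.0):
    """Flag points more than `threshold` standard deviations away from
    the mean of the preceding `window` samples - a toy anomaly detector."""
    flagged = []
    for i in range(window, len(series)):
        history = series[i - window:i]
        mean = statistics.fmean(history)
        stdev = statistics.stdev(history) or 1e-9  # guard against zero stdev
        if abs(series[i] - mean) / stdev > threshold:
            flagged.append(i)
    return flagged

# e.g. bytes-per-minute leaving a subnet; only the final spike is flagged
traffic = [100, 104, 98, 101, 97, 103, 99, 102, 100, 98] * 3 + [5000]
print(anomalies(traffic))  # -> [30]
```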

Misuse detection describes attacks and flags them directly, using a database of known attack signatures; the system constantly tries to match the actual traffic against the database. If there is a match, an alert is generated. The database typically contains rules for:

  1. Protocol Stack Verification – RFC’s, ping of death, stealth scanning etc.
  2. Application Protocol Verification – WinNuke , invalid packets that cause DNS cache corruption etc.
  3. Application Misuse – misuse that causes applications to crash or enables a user to gain super user privileges; typically due to buffer overflows or due to implementation bugs.
  4. Intruder detection. Known attacks can be recognized by the effects caused by the attack itself. For example, Back Orifice 2000 sends traffic on its default port, 31337.
  5. Data loss detection – for example by file types, compound regular expressions, linguistic and/or statistical content profiling (see the sketch below). Data loss prevention or detection needs to work at a much higher level than intrusion detection – since it needs to understand file formats and analyze the actual content, such as Microsoft Office attachments in a Web mail session, as opposed to doing simple pattern matching of an HTTP request string.
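
As an illustration of item 5, here is a minimal Python sketch of content-level detection for one asset type – credit card numbers – combining a compound regular expression with a Luhn checksum to weed out random digit strings. A production DLP engine would add file-format decoding, linguistic profiling and much more.

```python
import re

# Candidate: 13-16 digits, optionally separated by spaces or dashes.
CANDIDATE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_ok(digits: str) -> bool:
    """Luhn checksum - rejects most random digit strings."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_card_numbers(text: str):
    for match in CANDIDATE.finditer(text):
        digits = re.sub(r"[ -]", "", match.group())
        if 13 <= len(digits) <= 16 and luhn_ok(digits):
            yield digits

# Only the Visa test number passes the checksum; the order quantity does not.
print(list(find_card_numbers("ref 4111 1111 1111 1111, qty 1234567890123")))
```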

Using a burglar alarm model, the analyst needs a deep understanding of the network and what should not happen with it. He builds rules that model how the monitored network should conceptually work, in order to generate alerts when suspicious traffic is detected. The richer the rules database, the more effective the tool. The advantage of the burglar alarm model is that a good network administrator can leverage his knowledge of servers, segments and clients (for example, a Siebel CRM server which is a client of an Oracle database server) in order to focus in and manage by exception.

What about prevention?

Anomaly detection is an excellent way of identifying network vulnerabilities, but a customer cannot prevent extrusion events based on general network anomalies such as usage of anonymous FTP. There is also a conceptual problem with misuse detection: unless a detected event can be prevented (either internally with a TCP reset, or by notifying the router or firewall), the usefulness of the device is limited to forensics collection.

What about security management?

SIM (security information management) consolidates reporting, analysis, event management and log analysis. There are a number of tools in this category – Netforensics is one. SIM systems do not perform detection or prevention functions – they manage and receive reports from other systems. Check Point, for example, is a vendor that provides this functionality through partnerships.

Summary

There are many novel DLP (data loss prevention) products; most provide capabilities far ahead of both business and IT infrastructure management, which are only now beginning to look towards content monitoring behind the firewall.

DLP (Data loss prevention) solutions join an array of content and application-security products around the traditional firewall. Customers are already implementing a multitude of network security products for Inbound Web filtering, Anti-virus, Inbound mail filtering and Instant Messaging enforcement along with products for SIM and integrated log analysis.

The industry has reached the point where the need to simplify and reduce IT security implementation and operational costs becomes a major purchasing driver, perhaps more dominant than any single best-of-breed product.

Perhaps data loss prevention needs to become a network security function that is part of the network switching fabric; providing unified network channel and content security.

Software Associates helps healthcare customers design and implement such a unified network channel and enterprise content security solution today, enabling customers to easily define policies such as “No Instant Messaging on our network” or “Prevent patient data leaving the company over any channel that is not an authorized SSH client/server”.

For more information contact us.



How to reduce risk of a data breach

Historical data in log files has little intrinsic value in the here-and-now process of event response and remediation, and compliance checklists have little direct value in protecting customers.

Software Associates specializes in helping medical device and healthcare vendors achieve HIPAA compliance and improve the data and software security of their products in hospital and mobile environments.

The first question any customer asks us regarding HIPAA compliance is how little he can spend. Not how much he should spend. This means we need simple and practical strategies to reduce the risk of data breaches.

There are 2 simple strategies to reduce the risk of data breach, one is technical, one is management:

  1. Use real-time detection of security events to directly protect your customers
  2. Build your security portfolio around specific threat scenarios (e.g. a malicious employee stealing IP, a business partner obtaining access to confidential commercial information, a software update exposing PHI etc.) and use the threat scenarios to drive your service and product acquisition process.

Use real-time detection to directly protect your customers

Systems like ERM, SIM and Enterprise information protection are enterprise software applications that serve the back-office business of security delivery; things like log analysis and saving on regulatory documentation. Most of these systems excel at gathering and searching large volumes of data while providing little evidence as to the value of the data or feedback into improving the effectiveness of the current security portfolio.

Enterprise IT security capabilities do not have  a direct relationship with improving customer security and privacy even if they do make the security management process more effective.

This is not a technology challenge but a conceptual one: it is impossible to achieve a meaningful machine analysis of security event data in order to improve customer security and privacy when the data was uncertain to begin with, and was not collected and validated using standardized evidence-based methods.

Instead of log analysis, we recommend real-time detection of events. Historical data in log files has little intrinsic value in the here-and-now process of event response and remediation.

  1. Use DLP (data loss prevention) and monitor key digital assets such as credit cards and PHI for unauthorized outbound transfer. In plain language – if you detect credit cards or PHI in plain text traversing your network perimeter or removable devices, then you have just detected a data breach in real time – far cheaper and faster than combing through your log files after discovering 3 months later that a Saudi hacker stole 14,000 credit cards from an unpatched server.
  2. Use your customers as early-warning sensors for exploits. Provide a human 24×7 hotline that answers on the 3rd ring for any customer who thinks they have been phished or had their credit card or medical data breached. Don’t put this service in the general message queue and never close the service. Most security breaches become known to a customer when they are not at work.

Build your security portfolio around specific threat scenarios

Building your security portfolio around most likely threat scenarios makes sense.

Nonetheless, current best practices are built around compliance checklists (PCI DSS 2.0, HIPAA security rule, NIST 800 etc…) instead of most likely threat scenarios.

PCI DSS 2.0 has an obsessive preoccupation with anti-virus. It does not matter if you have a 16 quad-core Linux database server that is not attached to the Internet, with no removable devices and no Windows connectivity. PCI DSS 2.0 wants you to install ClamAV and open the server up to the Internet for the daily anti-virus signature updates. This is an example of a compliance control item that is not rooted in a probable threat scenario.

When we audit a customer for HIPAA compliance or perform a software security assessment of an innovative medical device, we think in terms of “threat scenarios”, and the result of that thinking manifests itself in planning, penetration testing, security countermeasures, and follow-up for compliance.

In current regulatory compliance based systems like PCI DSS or HIPAA, when an auditor records an encounter with the customer, he records the planning, penetration testing, controls, and follow-up, not under a threat scenario, but under a control item (like access control). The next auditor that reviews the  compliance posture of the business  needs to read about the planning, testing, controls, and follow-up and then reverse-engineer the process to arrive at which threats are exploiting which vulnerabilities.

Other actors such as government agencies (DHS for example) and security researchers go through the same process. They all have their own methods of churning through the planning, test results, controls, and follow-up, to reverse-engineer the data in order to arrive at which threats are exploiting which vulnerabilities.

This ongoing process of “reverse-engineering” is the root cause for a series of additional problems:

  • Lack of overview of the security threats and vulnerabilities that really count
  • Insufficient connection to best-practice security controls – no indication of which controls to follow or which have been followed
  • No connection between controls and security events, except circumstantial
  • No ability to detect and warn of negative interactions between countermeasures (for example – a firewall configuration that blocks Internet access but also blocks operating system updates, enabling malicious insiders or outsiders to back-door into the systems from inside the network and compromise firewalled services)
  • No archiving or demoting of less important and solved threat scenarios (since the data models are control-based)
  • Lack of overview of the security status of a particular business – only a series of historical observations, disclosed or not disclosed. Is Bank of America getting better at data security or worse?
  • An excess of event data that cannot possibly be read by the security and risk analyst at every encounter
  • Confidentiality and privacy borders that are hard to define, since the border definitions are networks, systems and applications, not confidentiality and privacy

Problems in current Electronic Health Record systems

Software Associates specializes in helping medical device and healthcare technology vendors achieve HIPAA compliance and improve the data and software security of their products in hospital and mobile environments.

As I noted here and here, the security and compliance industry is no different from other industries in having fashions and trends. Two years ago, PHR (Personal Health Records) systems were fashionable and today they’re not – probably because the business model for PHR applications is unclear and unproven.

Outside of the personal fitness and weight-loss space, it’s doubtful that consumers will pay money for a Web 2.0 PHR application service to help them store personal health information, especially when they are paying their doctor/insurance company/HMO for services. The bad news for PHR startups is that it’s not really an app that runs well on Facebook, and on the other hand, the average startup is not geared to do big 18-24 month sales cycles with HCPs (health care providers) and insurance companies. But really, business models are the least of our problems.

There are 3 cardinal  issues with the current generation of EHR/EMR systems.

  1. EHR (Electronic Health Records) systems address the business IT needs of government agencies, hospitals, organizations and medical practices, not the healthcare needs of patients.
  2. PHR (Personal Health Records) systems are not integrated with the doctor-patient workflow.
  3. EHR systems are built on natural language, not on patient issues.

EHR – Systems are focused on business IT, not patient health

EHR systems are enterprise software applications that serve the business IT elements of healthcare delivery for healthcare providers and insurance companies; things like reducing transcription costs, saving on regulatory documentation, electronic prescriptions and electronic record interchange.1

This clearly does not have much to do with improving patient health and quality of life.

EHR systems also store large volumes of information about diseases and symptoms in natural language, codified using standards like SNOMED-CT2. Codification is intended to serve as a standard for system interoperability and enable machine-readability and analysis of records, leading to improved diagnosis.

However, it is impossible to achieve a meaningful machine diagnosis of natural language interview data that was uncertain to begin with, and not collected and validated using evidence-based methods3.

PHR – does not improve the quality of communications with the doctor

PHR (Personal Health Records) systems, on the other hand, are intended to help patients keep track of their personal health information. The definition of a PHR is still evolving. For some, it is a tool to view patient information in the EHR. Others have developed personal applications such as appointment scheduling and medication renewals. Some solutions such as Microsoft HealthVault and PatientsLikeMe allow data to be shared with other applications or specific people.

PHR applications have a lot to offer the consumer, but even award-winning applications like Epocrates that offer “clinical content” are not integrated with the doctor-patient workflow.

“Today, the health care system does not appropriately recognize the critical role that a patient’s personal experience and day-to-day activities play in treatment and health maintenance. Patients are experts at their personal experience; clinicians are experts at clinical care. To achieve better health outcomes, both patients and clinicians will need information from both domains – and technology can play a key role in bridging this information gap.”4

EHR – builds on natural language, not on patient issues

When a doctor examines and treats a patient, he thinks in terms of “issues”, and the result of that thinking manifests itself in planning, tests, therapies, and follow-up.

In current EHR systems, when a doctor records an encounter, he records planning, tests, therapies, and follow-up, just not under the main entity, the issue. The next doctor that sees the patient needs to read about the planning, tests, therapies, and follow-up and then mentally reverse-engineer the process to arrive at which issue is ongoing. Again, he manages the patient according to that issue and records information, but not under the main “issue” entity.
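
For contrast, here is a hypothetical sketch of what an issue-centric record could look like – the field names are my own illustration, not taken from any EHR standard. Planning, tests, therapies and follow-up hang directly off the issue, so the next doctor reads the issue first instead of reverse-engineering it.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Encounter:
    when: date
    plan: str
    tests: list[str] = field(default_factory=list)
    therapies: list[str] = field(default_factory=list)
    follow_up: str = ""

@dataclass
class Issue:
    title: str                  # e.g. "Type 2 diabetes"
    status: str = "active"      # active / resolved / demoted
    encounters: list[Encounter] = field(default_factory=list)

@dataclass
class PatientRecord:
    patient_id: str
    issues: list[Issue] = field(default_factory=list)

    def overview(self) -> list[str]:
        """One call answers 'what is going on with this patient?'"""
        return [f"{i.title} ({i.status})" for i in self.issues]
```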

Other actors such as public health registries and epidemiological researchers go through the same process. They all have their own methods of churning through planning, tests, therapies, and follow-up, to reverse-engineer the data in order to arrive at what the issue is.

This ongoing process of “reverse-engineering” is the root cause for a series of additional problems:

  • Lack of overview of the patient
  • No sufficient connection to clinical guidelines, no indication on which guidelines to follow or which have been followed
  • No connection between prescriptions and diseases, except circumstantial
  • No ability to detect and warn for contraindications
  • No archiving or demoting of less important and solved problems
  • Lack of overview of status of the patient, only a series of historical observations
  • In most systems, no sufficient search capabilities
  • An excess of textual data that cannot possibly be read by every doctor at every encounter
  • Confidentiality borders are very hard to define
  • Very rigid and closed interfaces, making extension with custom functionality very difficult

4 Patricia Brennan, “Incorporating Patient-generated Data in meaningful use of HIT” http://healthit.hhs.gov/portal/server.pt/


The Tao of GRC

I have heard of military operations that were clumsy but swift, but I have never seen one that was skillful and lasted a long time. Master Sun (Chapter 2 – Doing Battle, the Art of War).

The GRC (governance, risk and compliance) market is driven by three factors: government regulation such as Sarbanes-Oxley, industry compliance such as PCI DSS 1.2, and growing numbers of data security breaches and Internet acceptable-usage violations in the workplace. $14BN a year is spent in the US alone on corporate-governance-related IT.

It’s a space that’s hard to ignore.

Are large internally-focused GRC systems the solution for improving risk and compliance? Or should we go outside the organization to look for risks we’ve never thought about and discover new links and interdependencies?

This article introduces a practical approach that will help the CISOs/CSOs in any sized business unit successfully improve compliance and reduce information value at risk. We call this approach “GRC 2.0” and base it on 3 principles.

1. Adopt a standard language of GRC
2. Learn to speak the language fluently
3. Go green – recycle your risk and compliance

GRC 1.0

GRC (Governance, Risk and Compliance) was first coined by Michael Rasmussen. GRC products like Oracle GRC Suite and Sword Achiever cost in the high six figures and enable large enterprises to automate the workflow and documentation management associated with costly and complex GRC activities.

GRC – an opportunity to improve business process

GRC regulation comes in 3 flavors: government legislation, industry regulation and vendor-neutral security standards. Government legislation such as SOX, GLBA, HIPAA and EU privacy laws was enacted to protect the consumer by requiring better governance and a top-down risk analysis process. PCI DSS 2.0, a prominent example of industry regulation, was written to protect the card associations by requiring merchants and processors to use a set of security controls for the credit card number, with no risk analysis. The vendor-neutral standard ISO27001 helps protect information assets using a comprehensive set of people, process and technical controls with an audit focus.

The COSO view is that GRC is an opportunity to improve the operation:

“If the internal control system is implemented only to prevent fraud and comply with laws and regulations, then an important opportunity is missed…the same internal controls can also be used to systematically improve businesses, particularly in regard to effectiveness and efficiency.”

GRC 2.0

The COSO position makes sense, but in practice it’s difficult to attain process improvement through enterprise GRC management.

Unlike ERP, GRC lacks generally accepted principles and metrics. Where finance managers routinely use VaR (value at risk) calculations, information security managers are uncomfortable with assessing risk in financial measures. The finance department has quarterly close but information security staffers fight a battle that ebbs and flows and never ends. This creates silos – IT governance for the IT staff and consultants and a fraud committee for the finance staff and auditors.

GRC 1.0 assumes a fixed structure of systems and controls. The problem is that, in reducing the organization to passive executors of defense rules in their procedures and firewalls, we ignore the extreme ways in which attack patterns change over time. Any control policy that is presumed optimal today is likely to be obsolete tomorrow. Learning about changes must be at the heart of day-to-day GRC management.

A fixed control model of GRC is flawed because it disregards a key feature of security and fraud attacks – namely that both attackers and defenders have imperfect knowledge in making their decisions. Recognizing that our knowledge is imperfect is the key to solving this problem. The goal of the CSO/CISO should be to develop a more insightful approach to GRC management.

The first step is to get everyone speaking the same language.

Adopt a standard language of GRC – the threat analysis base class

We formalize this language using a threat analysis base class which (like any other class) has attributes and methods. Attributes have two sub-types – threat entities and people entities.

Threat entities

Assets have value, fixed or variable, in Dollars, Euros, Rupees etc. Examples of assets are employees and the intellectual property contained in an office.

Vulnerabilities are weaknesses or gaps in the business. For example – a wood office building with a weak foundation built in an earthquake zone.

Threats exploit vulnerabilities to cause damage to assets. For example – an earthquake is a threat to the employees and intellectual property stored on servers in the building.

Countermeasures have a cost, fixed or variable, and mitigate the vulnerability. For example – relocating the building and using a private cloud service to store the IP.

People entities

Business decision makers encounter vulnerabilities and threats that damage company assets in their business unit. In a process of continuous interaction and discovery, risk is part of the cost of doing business.

Attackers create threats and exploit vulnerabilities to damage the business unit. Some do it for the notoriety, some for the money and some do it for the sales channel.

Consultants assess risk and recommend countermeasures. It’s all about the billable hours.

Vendors provide security countermeasures. The effectiveness of vendor technologies is poorly understood and often masked with marketing rhetoric and pseudo-science.

Methods

The threat analysis base class prescribes 4 methods, sketched in code after the list:

  • SetThreatProbability – estimated annual rate of occurrence of the threat
  • SetThreatDamageToAsset – estimated damage to asset value, as a percentage
  • SetCountermeasureEffectiveness – estimated effectiveness of the countermeasure, as a percentage
  • GetValueAtRisk – computes the value at risk from the three estimates above
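
A minimal sketch of the base class in Python. The VaR formula is my assumption of how the four methods fit together, and the numbers in the example – the earthquake threat from the threat entities above – are invented.

```python
class ThreatModel:
    """Sketch of the threat analysis base class described above."""

    def __init__(self, asset_value, countermeasure_cost=0.0):
        self.asset_value = asset_value              # in currency units
        self.countermeasure_cost = countermeasure_cost
        self.threat_probability = 0.0               # annual rate of occurrence
        self.damage_to_asset = 0.0                  # fraction of asset value
        self.countermeasure_effectiveness = 0.0     # fraction of damage mitigated

    def set_threat_probability(self, aro):
        self.threat_probability = aro

    def set_threat_damage_to_asset(self, fraction):
        self.damage_to_asset = fraction

    def set_countermeasure_effectiveness(self, fraction):
        self.countermeasure_effectiveness = fraction

    def get_value_at_risk(self):
        raw = self.asset_value * self.threat_probability * self.damage_to_asset
        return raw * (1 - self.countermeasure_effectiveness) + self.countermeasure_cost

# Earthquake threat to the office, with invented numbers:
quake = ThreatModel(asset_value=2_000_000, countermeasure_cost=50_000)
quake.set_threat_probability(0.05)           # once in 20 years
quake.set_threat_damage_to_asset(0.6)        # 60% of asset value lost
quake.set_countermeasure_effectiveness(0.9)  # relocation mitigates 90%
print(quake.get_value_at_risk())             # 56,000
```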

Speak the language fluently

A language with 8 words is not hard to learn; it’s easily accepted by the CFO, CIO and CISO since these are familiar business terms.

The application of our 8 word language is also straightforward.

Instances of the threat analysis base class are “threat models” – and can be used in the entire gamut of GRC activities: Sarbanes-Oxley, which requires a top-down risk analysis of controls; ISO27001, where controls are countermeasures that map nicely to vulnerabilities and threats (you bring the assets); and PCI DSS 1.2, where the PAN is an asset, the threats are criminals who collude with employees to steal cards and the countermeasures are specified by the standard.

You can document the threat models in your GRC system (if you have one and it supports the 8 attributes). If you don’t have a GRC system, there is an excellent free piece of software to do threat modeling – available at http://www.ptatechnologies.com

Go green – recycle your threat models

Leading up to the Al Qaida attack on the US on 9/11, the FBI investigated, the CIA analyzed, but no one bothered to discuss the impact of Saudis learning to fly but not land airplanes.

This sort of GRC disconnect in organizations is easily resolved between silos, by the common, politically neutral language of the threat analysis base class.

Summary

Effective GRC management requires neither better mathematical models nor complex enterprise software.  It does require us to explore new threat models and go outside the organization to look for risks we’ve never thought about and discover new links and interdependencies that may threaten our business.  If you follow the Tao of GRC 2.0 – it will be more than a fulfillment exercise.


Healthcare data interoperability pain

Data without interoperability = pain.

What is happening in the US healthcare space is fascinating as stimulus funds (or what they call in the Middle East – “baksheesh”) are being paid to doctors to acquire an Electronic Health Records system that has “meaningful use”. The term “meaningful use” is vaguely  defined in the stimulus bill as programs that can enable data interchange, e-prescribing and quality indicators.

Our hospital recently spent millions on a emr that does not integrate with any outpatient emr. Where is the data exchanger and who deploys it? What button is clicked to make this happen! My practice is currently changing its emr. We are paying big bucks for partial data migration. All the assurances we had about data portability when we purchased our original emr were exaggerated to make a sale. Industry should have standards. In construction there are 2×4 ‘s , not 2×3.5 ‘s.
Government should not impinge on privacy and free trade but they absolutely have a key role in creating standards that ensure safety and promote growth in industry.
Read more here: Healthcare interoperability pains

Mr. Obama’s biggest weakness is that he has huge visions but he can’t be bothered with the details, so he lets his team and party members hack out implementations – which is why his healthcare initiatives are on a very shaky footing, as the above doctor aptly noted. But perhaps something more profound is at work. The stimulus bill does not mention standards as a pre-requisite for EHR, and I assume that the tacit assumption (like many things American) is that standards will “happen” due to the power of free markets. This is at odds with Mr. Obama’s political agenda of big socialistic government with central planning. As the doctor said: “government absolutely (must) have a key role in creating standards that ensure safety and promote growth in industry”. The expectation that this administration set is that they will take care of things, not that free markets will take care of things. In the meantime, standards are being developed by private-public partnerships like HITSP – enabling healthcare interoperability.

The Healthcare Information Technology Standards Panel (HITSP) is a cooperative partnership between the public and private sectors. The Panel was formed for the purpose of harmonizing and integrating standards that will meet clinical and business needs for sharing information among organizations and systems.

It’s notable that HITSP stresses its mission as meeting clinical and business needs for sharing information among organizations and systems. The managed-care organizations call people consumers so that they don’t have to think of them as patients.

I have written here, here and here about the drawbacks of packaging Federal money, defense contractors and industry lobbies as “private-public partnerships”.

You can give a doctor $20k of Federal money to buy EMR software, but if it doesn’t interact with the most important data source of all (the patient), everyone’s ROI (the doctor, the patient and the government) will approach zero.

Vendor-neutral standards are key to interoperability. If the Internet were built to HITSP style standards, there would be islands of Internet connectivity and back-patting press-releases, but no Internet.

The best vendor-neutral standards we have today are created by the IETF – a private group of volunteers, not by a “private-public partnership”.

The Internet Engineering Task Force (IETF) is a large open international community of network designers, operators, vendors, and researchers concerned with the evolution of the Internet architecture and the smooth operation of the Internet. It is open to any interested individual. The IETF Mission Statement is documented in RFC 3935.

However – vendor-neutral standards are a necessary but insufficient condition for “meaningful use” of data.  There also has to be fast, cheap and easy to use access in the “last mile”.  In healthcare – the last mile is the patient-doctor interaction.

About 10-15 years ago, interoperability in the telecommunications and B2B spaces was based on an EDI paradigm with centralized messaging hubs for system-to-system document interchange. As mobile evolved into 3G, cellular applications made a hard shift to a distributed paradigm with middleware-enabled interoperability from a consumer handset to all kinds of 3G services – location, games, billing, accounting etc. – running at the operator and its content partners.

The healthcare industry is still at the EDI stage of development – as we can see from organizations like WEDI and HIMSS.

The Workgroup for Electronic Data Interchange (WEDI)

Improve the administrative efficiency, quality and cost effectiveness of healthcare through the implementation of business strategies for electronic record-keeping, and information exchange and management...provide multi-stakeholder leadership and guidance to the healthcare industry on how to use and leverage the industry’s collective technology, knowledge, expertise and information resources to improve the administrative efficiency, quality and cost effectiveness of healthcare information.

What happened to quality and effectiveness of patient-care?

It is not about IT and cost-effectiveness of information (whatever that means). It’s about getting the doctor and her patient exactly the data they need when they need it.   That’s why the doctor went to medical school.

Compare EDI-style message-hub-centric protocols to RSS/Atom on the Web, where any Web site can publish content and any endpoint (browser or tablet device) can subscribe easily. As far as I can see, the EHR space is still dominated by the “message hub, system-to-system, health-provider to health-provider to insurance company to government agency” model, while in the meantime, tablets are popping up everywhere with interesting medical applications. All these interesting applications will not be worth much if they don’t enable the patient and doctor to share the data.

Imagine the impact of IETF style standards, lightweight protocols (like RSS/Atom) and $50 tablets running data sharing apps between doctors and patients.

Imagine vendor-neutral, standard middleware for EHR applications that would expose data for patients and doctors using an encrypted Atom protocol – very simple, very easy to implement, easy to secure and with very clear privacy boundaries. Perhaps not my first choice for sharing radiology data, but a great way to share vital signs and significant events like falls and BP drops.
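
As a sketch of how lightweight this could be, here is a minimal Atom entry for a vital-signs observation, built with Python’s standard library. The element layout follows the Atom format (RFC 4287); the identifier scheme and payload are illustrative assumptions, and in a real deployment the payload would be encrypted and the patient reference pseudonymous.

```python
from datetime import datetime, timezone
from xml.etree import ElementTree as ET

ATOM = "http://www.w3.org/2005/Atom"

def vital_signs_entry(patient_ref: str, bp: str, pulse: int) -> str:
    """Build one Atom entry carrying a vital-signs observation."""
    ET.register_namespace("", ATOM)
    entry = ET.Element(f"{{{ATOM}}}entry")
    ET.SubElement(entry, f"{{{ATOM}}}title").text = "Vital signs observation"
    ET.SubElement(entry, f"{{{ATOM}}}id").text = f"urn:example:obs:{patient_ref}"
    ET.SubElement(entry, f"{{{ATOM}}}updated").text = datetime.now(timezone.utc).isoformat()
    content = ET.SubElement(entry, f"{{{ATOM}}}content", {"type": "text"})
    content.text = f"BP {bp}, pulse {pulse}"
    return ET.tostring(entry, encoding="unicode")

print(vital_signs_entry("p-1234", bp="120/80", pulse=72))
```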

This would be the big game changer  for the entire healthcare industry.  Not baksheesh. Not EDI. Not private-public partnerships.


Risk assessment for your medical device

We specialize in  cyber-security and privacy compliance for medical device vendors in Israel like you.

We’ve assisted dozens of Israeli medical device software vendors that use Web, mobile, cloud and hospital IT networks in achieving cost-effective HIPAA compliance and meeting the FDA guidance on Premarket Submissions for Management of Cybersecurity in Medical Devices.

As part of our service to our trusted clients, we provide the popular PTA threat modeling tool free of charge – with 12 months of maintenance included and unlimited threat models.

If you’re not a client  – contact us now for a free phone consultation.

Software Associates threat models are used by thousands of professional security analysts all over the world who use PTA Professional in their risk and compliance practice.

Download the  free risk assessment software now.

What you get with the PTA Software:

  • It’s quantitative: enables business decision makers to state asset values, risk profile and controls in familiar monetary values. This takes security decisions out of the realm of qualitative risk discussion and into the realm of business justification.
  • It’s robust: enables analysts to preserve data integrity of complex multi-dimensional risk models versus Excel spreadsheets that tend to be unwieldy, unstable and difficult to maintain.
  • It’s versatile: enables organizations to reuse existing threat libraries in new business situations and perform continuous risk assessment and what-if analysis on control scenarios without jeopardizing the integrity of the data.
  • It’s effective: helps determine the most effective security countermeasures and their order of implementation, saving you money.
  • It’s data-based: built on a robust threat data model with the 4 dimensions of threats, assets, vulnerabilities and countermeasures
  • It’s management level: with a few clicks, you can produce VaR reports and be a peer in the boardroom instead of a staffer waiting in the hall.

 
