Tag Archives: Information security

How to secure patient data in a healthcare organization

If you are a HIPAA covered entity, or a business associate vendor to a HIPAA covered entity, the question of securing patient data is central to your business. If you are a big organization, you probably don’t need my advice, since you have plenty of money to spend on expensive security and compliance consultants.

But – if you are a small to mid-size hospital, nursing home or medical device vendor without a large budget for security compliance, the natural question you ask is “How can I do this for as little money as possible?”

You can do some research online and then hire a HIPAA security and compliance consultant who will walk you through the security safeguards in CFR 45 Appendix A and help you implement as many items as possible. This seems like a reasonable approach, but the more safeguards you implement, the more money you spend – and you do not necessarily know if your security has improved, since you have not examined your value at risk, i.e. how much money a data security breach will cost you.

If you read CFR 45 Appendix A carefully, you will note that the standard wants you to do a top-down risk analysis, risk management and periodic information security activity review.

The best way to do that top-down risk analysis is to build probable threat scenarios by considering what could go wrong – an employee stealing a hard disk from a nursing station in an ICU where a celebrity is recuperating, or a hacker sniffing the hospital wired LAN for PHI.
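As a sketch of what building such scenarios can look like in practice, here is a minimal Python representation of a threat scenario. The field names, example scenarios and numbers are illustrative assumptions, not a standard HIPAA schema:

```python
from dataclasses import dataclass

@dataclass
class ThreatScenario:
    """One probable threat scenario for a top-down risk analysis.

    Field names and example values are illustrative assumptions,
    not a standard HIPAA schema.
    """
    name: str
    asset: str           # what is at risk
    threat: str          # who or what attacks
    vulnerability: str   # the weakness being exploited
    likelihood: float    # rough probability of occurrence per year, 0..1
    impact: float        # rough damage to the asset if it occurs, 0..1

    def score(self) -> float:
        """Rough priority score: likelihood x impact."""
        return self.likelihood * self.impact

scenarios = [
    ThreatScenario(
        name="Stolen ICU hard disk",
        asset="PHI of a celebrity patient",
        threat="malicious insider at the nursing station",
        vulnerability="unencrypted local disk",
        likelihood=0.2, impact=0.9),
    ThreatScenario(
        name="Wired LAN sniffing",
        asset="PHI in transit",
        threat="hacker with physical access to the hospital LAN",
        vulnerability="unencrypted traffic on the wired network",
        likelihood=0.4, impact=0.5),
]

# Rank the scenarios so the riskiest one drives countermeasure planning first
for s in sorted(scenarios, key=ThreatScenario.score, reverse=True):
    print(f"{s.name}: {s.score():.2f}")
```

Even a toy ranking like this forces the conversation the standard actually asks for: which scenarios matter most, and why.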

Threat scenarios as an alternative to compliance control policies

When we perform a software security assessment of a medical device or healthcare system, we think in terms of “threat scenarios” or “attack scenarios”, and the result of that thinking manifests itself in planning, penetration testing, security countermeasures, and follow-up for compliance. The threat scenarios are not “one size fits all”.  The threat scenarios for an AIDS testing lab using medical devices that automatically scan and analyze blood samples, or an Army hospital using a networked brain scanning device to diagnose soldiers with head injuries, or an implanted cardiac device with mobile connectivity are all totally different.

We evaluate the medical device or healthcare product from an attacker point of view, then from the management team point of view, and then recommend specific cost-effective, security countermeasures to mitigate the damage from the most likely attacks.

In our experience, building a security portfolio on attack scenarios has 3 clear benefits:
  1. A robust, cost-effective security portfolio based on attack analysis results in robust compliance over time.
  2. Executives relate well to the concepts of threat modeling and attack analysis. Competing, understanding the value of their assets, taking risks and protecting themselves from attackers is, at the end of the day, why executives get the big bucks.
  3. Threat scenarios are a common language between IT, security operations teams and business line managers.

This last benefit is extremely important in your healthcare organization, since business delegates security to IT and IT delegates security to the security operations teams.

As I wrote in a previous essay, “The valley of death between IT and security”, there is a fundamental disconnect between IT operations (built on maintaining predictable business processes) and security operations (built on mitigating vulnerabilities).

Business executives delegate information systems to IT and information security to security people on the tacit assumption that they are the experts in information systems and security.  This is a necessary but not sufficient condition.

In the current environment of rapidly evolving types of attacks (hacktivism, nation-state attacks, credit card attacks mounted by organized crime, script kiddies, competitors, malicious insiders and more), it is essential that IT and security communicate effectively regarding the types of attacks that their organization may face and what the potential business impact is.

If you have any doubt about the importance of IT and security talking to each other, consider that leading up to 9/11, the CIA  had intelligence on Al Qaeda terrorists and the FBI investigated people taking flying lessons, but no one asked the question why Arabs were learning to fly planes but not land them.

With this fundamental disconnect between 2 key maintainers of information protection, it is no wonder that organizations are having difficulty effectively protecting their assets – whether Web site availability for an online business, PHI for a healthcare organization or intellectual property for an advanced technology firm.

IT and security  need a common language to execute their mission, and I submit that building the security portfolio around most likely threat scenarios from an attacker perspective is the best way to cross that valley of death.

There seems to be a tacit assumption with many executives that regulatory compliance is already a common language of security for an organization.  Compliance is a good thing as it drives organizations to take action on vulnerabilities but compliance checklists like PCI DSS 2.0, the HIPAA security rule, NIST 800 etc, are a dangerous replacement for thinking through the most likely threats to your business.  I have written about insecurity by compliance here and here.

Let me illustrate why compliance control policies are not the common language we need by taking an example from another compliance area – credit cards.

PCI DSS 2.0 has an obsessive preoccupation with anti-virus. It does not matter if you have a 16 quad-core Linux database server that is not attached to the Internet, with no removable devices and no Windows connectivity. PCI DSS 2.0 wants you to install ClamAV and open the server up to the Internet for daily anti-virus signature updates. This is an example of a compliance control policy that is not rooted in a probable threat scenario and that creates additional vulnerabilities for the business.

Now, consider some deeper ramifications of compliance control policy-based security.

When a  QSA or HIPAA auditor records an encounter with a customer, he records the planning, penetration testing, controls, and follow-up, not under a threat scenario, but under a control item (like access control). The next auditor that reviews the  compliance posture of the business  needs to read about the planning, testing, controls, and follow-up and then reverse-engineer the process to arrive at which threats are exploiting which vulnerabilities.

Other actors such as government agencies (DHS for example) and security researchers go through the same process. They all have their own methods of churning through the planning, test results, controls, and follow-up, to reverse-engineer the data in order to arrive at which threats are exploiting which vulnerabilities.

This ongoing process of “reverse-engineering” is the root cause for a series of additional problems:

  • Lack of overview of the security threats and vulnerabilities that really count
  • No sufficient connection to best practice security controls, no indication on which controls to follow or which have been followed
  • No connection between controls and security events, except circumstantial
  • No ability to detect and warn for negative interactions between countermeasures (for example – configuring a firewall that blocks Internet access but also blocks operating system updates and enables malicious insiders or outsiders to back-door into the systems from inside the network and compromise  firewalled services).
  • No archiving or demoting of less important and solved threat scenarios (since the data models are control based)
  • Lack of overview of security status of a particular business, only a series of historical observations disclosed or not disclosed.  Is Bank of America getting better at data security or worse?
  • An excess of event data that cannot possibly be read by the security and risk analyst at every encounter
  • Confidentiality and privacy borders are hard to define since the border definitions are networks, systems and applications not confidentiality and privacy.

Using value at risk to figure out how much a breach will really cost you

Your threat scenarios must consider asset values (your patient information, systems, management attention, reputation), vulnerabilities, threats and possible security countermeasures. Threat analysis as a methodology does not look for ROI or ROSI (there is no ROI for security anyhow) but considers the best and cheapest way to reduce asset value at risk.
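To make “the best and cheapest way to reduce asset value at risk” concrete, here is a small sketch. All dollar figures, rates and mitigation percentages are invented for illustration; the point is to rank countermeasures by dollars of risk removed per dollar spent, not by ROI:

```python
# Value at risk (VaR) here: expected yearly loss for a threat scenario,
# asset value x annual rate of occurrence x fraction of value damaged.
# All numbers below are invented for illustration.

asset_value = 500_000        # estimated dollar value of the patient data
annual_rate = 0.5            # estimated successful breaches per year
damage_fraction = 0.3        # fraction of asset value lost per breach

value_at_risk = asset_value * annual_rate * damage_fraction  # $75,000/year

# Candidate countermeasures: (name, yearly cost, fraction of VaR mitigated)
countermeasures = [
    ("Full-disk encryption on nursing stations", 10_000, 0.60),
    ("Network DLP appliance",                    40_000, 0.70),
    ("Annual staff security training",            5_000, 0.20),
]

def effectiveness(cm):
    """Dollars of VaR removed per dollar spent on the countermeasure."""
    name, cost, mitigation = cm
    return (value_at_risk * mitigation) / cost

# Cheapest risk reduction first
for name, cost, mitigation in sorted(countermeasures, key=effectiveness, reverse=True):
    reduced = value_at_risk * mitigation
    print(f"{name}: reduces VaR by ${reduced:,.0f}/yr at ${cost:,}/yr")
```

Note how the most expensive countermeasure (the DLP appliance) ends up last in this invented example – spending more does not necessarily reduce more risk.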

And – as we opened the article – the question is  “How can I do this for as little money as possible?”

Tell your friends and colleagues about us. Thanks!
Share this

Debugging security

There is an interesting analogy between debugging software and debugging the security of your systems.

As Brian W. Kernighan and Rob Pike wrote in “The Practice of Programming

As personal choice, we tend not to use debuggers beyond getting a stack trace or the value of a variable or two. One reason is that it is easy to get lost in details of complicated data structures and control flow; we find stepping through a program less productive than thinking harder and adding output statements and self-checking code at critical places. Clicking over statements takes longer than scanning the output of judiciously-placed displays. It takes less time to decide where to put print statements than to single-step to the critical section of code, even assuming we know where that is. More important, debugging statements stay with the program; debugging sessions are transient.

In programming, it is faster to examine the contents of a couple of variables than to single-step through entire sections of code.

Collecting security logs is key to information security management, not only for understanding what happened and why, but also in order to prove compliance with regulations such as the HIPAA security rule. The business requirement is that security logs be both relevant and effective.

  1. Relevant content of audit controls: for example, providing a detailed trace of an application whenever it elevates privilege in order to execute a system-level function.
  2. Effective audit reduction and report generation: given the large amount of data that must be analyzed in security logs, it’s crucial that critical events are separated from normal traffic and that concise reports can be produced in real time to help understand what happened, why it happened, how it was remediated and how to mitigate similar risks in the future.
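A toy illustration of audit reduction: filter the few critical events out of a large log stream instead of reading all of it. The log line format and severity scheme here are invented for this sketch:

```python
# Toy audit reduction: separate critical events from normal traffic.
# The log line format ("SEVERITY event-text") is invented for this sketch.
log_lines = [
    "INFO  user jsmith viewed chart 1044",
    "INFO  scheduled backup completed",
    "ALERT app elevated privilege to run a system-level function",
    "INFO  user jsmith logged out",
    "ALERT 5 failed logins for account admin from 10.0.0.7",
]

CRITICAL = ("ALERT", "CRITICAL", "EMERG")

def audit_reduce(lines):
    """Yield only the events a security analyst must see in real time."""
    for line in lines:
        severity = line.split(maxsplit=1)[0]
        if severity in CRITICAL:
            yield line

critical_events = list(audit_reduce(log_lines))
print(f"{len(critical_events)} of {len(log_lines)} events need attention")
for event in critical_events:
    print(event)
```

Real audit reduction involves correlation and normalization across sources, but the principle is the same: the analyst should see a handful of judiciously-placed signals, not the raw stream.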

In security log analysis, it is faster and definitely more effective for a security analyst to examine the contents of a few real-time events than to process gigabytes or terabytes of security logs (the equivalent of single-stepping through, or placing watch points in, sub-modules with hundreds or thousands of lines of code).

When you have to analyze security logs, it is easy to get lost in the details of complicated data and flows of events and find yourself drifting off in all kinds of directions, even as bells go off in the back of your mind warning that you are chasing ghosts in a futile and time-consuming exercise of investigation and security event debugging.

In order to understand this better, consider another analogy, this time from the world of search engines.

Precision and recall are key to effective security log analysis and effective software debugging.

In pattern recognition and information retrieval, precision is the fraction of retrieved instances that are relevant, while recall is the fraction of relevant instances that are retrieved. Both precision and recall are therefore based on an understanding and measure of relevance. When a program for recognizing the dogs in a scene correctly identifies four of the nine dogs but mistakes three cats for dogs, its precision is 4/7 while its recall is 4/9. When a search engine returns 30 pages, only 20 of which are relevant, while failing to return 40 additional relevant pages, its precision is 20/30 = 2/3 while its recall is 20/60 = 1/3. See Precision and recall in the Wikipedia.
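The worked numbers above can be checked with a few lines of code (the dog/cat and search-engine counts are taken straight from the example):

```python
from fractions import Fraction

def precision(true_positives, false_positives):
    """Fraction of retrieved instances that are relevant."""
    return Fraction(true_positives, true_positives + false_positives)

def recall(true_positives, false_negatives):
    """Fraction of relevant instances that are retrieved."""
    return Fraction(true_positives, true_positives + false_negatives)

# Dog recognizer: 4 of 9 dogs found, 3 cats mistaken for dogs
assert precision(4, 3) == Fraction(4, 7)
assert recall(4, 5) == Fraction(4, 9)

# Search engine: 20 relevant of 30 returned; 40 relevant pages missed
assert precision(20, 10) == Fraction(2, 3)
assert recall(20, 40) == Fraction(1, 3)
```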

In other words – it doesn’t really matter if you have to analyze a program with 100,000 lines of code or a log file with a terabyte of data – if you have good precision and good recall.

The problem is however, that the more data you have, the more difficult it is to achieve high precision and recall and that is why real-time events (or  debugging statements) are more effective in day-to-day security operations.


Tell your friends and colleagues about us. Thanks!
Share this

Problems in current Electronic Health Record systems

Software Associates specializes in helping medical device and healthcare technology vendors achieve HIPAA compliance and improve the data and software security of their products in hospital and mobile environments.

As I noted here and here, the security and compliance industry is no different from other industries in having fashion and trends.  Two years ago, PHR (Personal Health Records) systems were fashionable and today they’re not – probably because the business model for PHR applications is unclear and unproven.

Outside of the personal fitness and weight-loss space, it’s doubtful that consumers will pay money for a Web 2.0 PHR application service to help them store personal health information, especially when they are already paying their doctor/insurance company/HMO for services. The bad news for PHR startups is that it’s not really an app that runs well on Facebook, and on the other hand, the average startup is not geared to do big 18-24 month sales cycles with HCPs (health care providers) and insurance companies. But really, business models are the least of our problems.

There are 3 cardinal  issues with the current generation of EHR/EMR systems.

  1. EHR (Electronic Health Records) systems address the business IT needs of government agencies, hospitals, organizations and medical practices, not the healthcare needs of patients.
  2. PHR (Personal Health Records) systems are not integrated with the doctor-patient workflow.
  3. EHR systems are built on natural language, not on patient issues.

EHR – Systems are focused on business IT, not patient health

EHR systems are enterprise software applications that serve the business IT elements of healthcare delivery for healthcare providers and insurance companies; things like reducing transcription costs, saving on regulatory documentation, electronic prescriptions and electronic record interchange.1

This clearly does not have much to do with improving patient health and quality of life.

EHR systems also store large volumes of information about diseases and symptoms in natural language, codified using standards like SNOMED-CT2. Codification is intended to serve as a standard for system interoperability and enable machine-readability and analysis of records, leading to improved diagnosis.

However, it is impossible to achieve a meaningful machine diagnosis of natural language interview data that was uncertain to begin with, and not collected and validated using evidence-based methods3.

PHR – does not improve the quality of communications with the doctor

PHR (Personal Health Records) systems, on the other hand, are intended to help patients keep track of their personal health information. The definition of a PHR is still evolving. For some, it is a tool to view patient information in the EHR. Others have developed personal applications such as appointment scheduling and medication renewals. Some solutions such as Microsoft HealthVault and PatientsLikeMe allow data to be shared with other applications or specific people.

PHR applications have a lot to offer the consumer, but even award-winning applications like Epocrates that offer “clinical content” are not integrated with the doctor-patient workflow.

“Today, the health care system does not appropriately recognize the critical role that a patient’s personal experience and day-to-day activities play in treatment and health maintenance. Patients are experts at their personal experience; clinicians are experts at clinical care. To achieve better health outcomes, both patients and clinicians will need information from both domains – and technology can play a key role in bridging this information gap.”4

EHR – builds on natural language, not on patient issues

When a doctor examines and treats a patient, he thinks in terms of “issues”, and the result of that thinking manifests itself in planning, tests, therapies, and follow-up.

In current EHR systems, when a doctor records an encounter, he records planning, tests, therapies, and follow-up, just not under the main entity, the issue. The next doctor that sees the patient needs to read about the planning, tests, therapies, and follow-up and then mentally reverse-engineer the process to arrive at which issue is ongoing. Again, he manages the patient according to that issue and records information, but not under the main “issue” entity.

Other actors such as public health registries and epidemiological researchers go through the same process. They all have their own methods of churning through planning, tests, therapies, and follow-up, to reverse-engineer the data in order to arrive at what the issue is.

This ongoing process of “reverse-engineering” is the root cause for a series of additional problems:

  • Lack of overview of the patient
  • No sufficient connection to clinical guidelines, no indication on which guidelines to follow or which have been followed
  • No connection between prescriptions and diseases, except circumstantial
  • No ability to detect and warn for contraindications
  • No archiving or demoting of less important and solved problems
  • Lack of overview of status of the patient, only a series of historical observations
  • In most systems, no sufficient search capabilities
  • An excess of textual data that cannot possibly be read by every doctor at every encounter
  • Confidentiality borders are very hard to define
  • Very rigid and closed interfaces, making extension with custom functionality very difficult

4 Patricia Brennan, “Incorporating Patient-generated Data in meaningful use of HIT” http://healthit.hhs.gov/portal/server.pt/

Tell your friends and colleagues about us. Thanks!
Share this

Security sturm und drang – selling fear.

“Sturm und Drang is associated with literature or music aiming to frighten the audience or imbue them with extremes of emotion”.

The Symantec Internet Security Threat Report is a good example of the sturm und drang marketing endemic in the information security industry.

Vendors like Symantec sell fear, not security products, when they report on “Rises on Data Theft, Data Leakage, and Targeted Attacks Leading to Hackers’ Financial Gain”, without suggesting cost-effective security countermeasures.

1. Lumps consumers and enterprises together

“End users, whether consumers or enterprises, need to ensure proper security measures to prevent an attacker from gaining access to their confidential information, causing financial loss, harming valuable customers, or damaging their own reputation.”

Since when do consumers have customers? Consumers are insured for credit card theft, and PCI DSS certified merchants are protected from chargeback exposure with the acquiring bank. What financial losses do consumers and enterprises have in common?

2. Incorrectly classifies assets, incorrectly uses legal terms

“Symantec tracked the trade of stolen confidential information and captured data frequently sold on underground economy servers. These servers are often used by hackers and criminal organizations to sell stolen information, including social security numbers, credit cards, and e-mail address lists”.

Social security numbers are classified as PII (personally identifiable information) not confidential information. If Symantec is uncertain how to classify this asset, they should read the US State privacy laws and PCI DSS specification. As a matter of fact, the law does not protect confidential information – it protects a confidence relationship. Once the information is disclosed (and Social security numbers are frequently disclosed), a third party is not prevented from independently duplicating and using the information. See the Wikipedia.

3. Provides misleading data

“Increase in Data Breaches Help Facilitate Identity Theft”

By not quantifying the threat probability, Symantec deliberately misleads the reader into thinking that cyber threats are the main attack on PII.

Au contraire. The FTC says that most identify theft cases are caused by offline methods such as dumpster diving, stealing and pretexting. According to Applied Cybersecurity Research, “Internet-related identity theft accounted for about 9 percent of all ID thefts in the United States in 2005”.

4. Cites vulnerability stats without suggesting countermeasures

“Symantec documented 12 zero-day vulnerabilities during the second half of 2006”

What is the point of a threat model without security countermeasures?

a. What were the vulnerabilities, and do consumer PCs have the same vulnerabilities as corporate servers behind a Checkpoint firewall?

b. What are the most cost-effective security countermeasures?

c. Does Symantec recommend that consumers use the same security countermeasures and risk assessment procedures as business enterprises?

See the full report here:
Symantec Reports Rise in Data Theft, Data Leakage, and Targeted Attacks Leading to Hackers’ Financial Gain

Tell your friends and colleagues about us. Thanks!
Share this

The effectiveness of access controls

With all due respect to Varonis and access controls in general (Just the area of Sharepoint is a fertile market for data security), the problem of internally-launched attacks is that they are all done by the “right” people and / or by software agents who have the “right” access rights.

There are 3 general classes of internal attacks that are never going to be mitigated by access controls:

Trusted insider theft

A trivial example is a director of new technology development at a small high-tech startup, who has access to the entire company’s IP: the competitive analyses, patent applications and minutes of conversations with all the people who ever stopped in to talk about the startup’s technology. That same person has access by definition, and when he takes his data and sucks it out of the network – using a back door, a proxy, an HTTP GET, or just a plain USB drive or a Gmail account – there is no way an Active Directory access control will be able to detect that as “anomalous behavior”.

Social engineering

Collusion between insiders, gaming the system, taking advantage of friends and DHL messengers who go in and out of the office all the time with their bags.

Side channel attacks

Detecting data at a distance with acoustic or Tempest attacks, for example – or watching parking lot traffic patterns…

Tell your friends and colleagues about us. Thanks!
Share this

Risk in IT

Dissonance between IT and security management.

Mark Brewer wrote a thoughtful post on Risk in IT – I liked his use of the  term “resilient organizations”, although I have been using the term “robust organizations”.   The semantic difference between robustness and resilience may be related to the difference between IT and security management world-views.

“Risk in IT”  derives from a fundamental dissonance between information technology and security –

IT management is about planning and executing predictable business processes. Security is about planning for the unpredictable.

This fundamental dissonance often causes a cultural schism between IT/CIO and Security/CSO. In many organizations the dissonance is amplified by two additional factors – a) splitting of physical and information security into two separate operations silos and b) external regulatory compliance.

Compliance as it pertains to security, finance and IT is often conveniently boxed into politically safe silos. OP (organizational politics) is not a bad thing, but multiple risk silos result in multiple and usually redundant costs. In addition, compliance results in the management board adopting policies that are not organically their own – which is dangerous in its own right.

The short answer to these issues is that security needs to build into (not bolt onto) the business strategy and business process itself.

Tell your friends and colleagues about us. Thanks!
Share this

The physics of risk assessment

Quantity or quality –  that is the question!

There is a great deal of debate between the supporters of quantitative risk assessment and the supporters of qualitative risk assessment in the security and compliance business.

The qualitative people say that since it is impossible to estimate risk as an absolute number such as  “87 percent probability of your customer data being stolen by an angry employee”, they would rather rate that risk as “high”.

The quantitative people say that risk is a function of threat, ARO (annual rate of occurrence) and percent damage to the asset. If the annual rate of occurrence of an attack is twice a year on average and the damage per attack is 10% of the value of a customer list, then the expected yearly loss on your customer data would be 2.0 × 0.10 = 20 percent of its value. The qualitative folks are quick to retort that it’s impossible to estimate ARO, since most organizations don’t collect historical loss data for security and compliance events. (This is actually a good case to start collecting data now…) They also point out that it’s impossible to accurately estimate the value of an asset such as a customer list in dollars (you need to ask the right person – like the CFO…).
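The quantitative calculation is simple enough to write down directly. The ARO and damage figures are the illustrative numbers from the paragraph above; the customer list dollar value is invented:

```python
def annualized_risk(aro, damage_fraction):
    """Expected yearly loss as a fraction of asset value:
    annual rate of occurrence x fraction of asset value damaged per event."""
    return aro * damage_fraction

# The example from the text: 2 attacks/year, each damaging 10% of the asset
risk = annualized_risk(aro=2.0, damage_fraction=0.10)
print(f"{risk:.0%} of asset value per year")  # 20% of asset value per year

# With a dollar value from the CFO, the same figure becomes an expected loss
customer_list_value = 1_000_000  # invented for illustration
print(f"${customer_list_value * risk:,.0f} expected yearly loss")
```

The hard part is not the arithmetic, it is getting defensible inputs – which is exactly the qualitative camp’s objection.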

Since I am a physicist, I must say that I am biased towards physical models that can be calculated and observed.  I would start with three assumptions:

1. The estimated value of an asset is analogous to its momentum mv, the product of its mass and velocity. A very large database of 10-year-old customer data archived in the Colorado Rockies might have a large mass but almost zero velocity, and therefore low value. If the database had 100,000 transactions/day, then it would have a high velocity, correspondingly high momentum and high value. Note that this model runs counter to all privacy regulation, but I think it holds water from a practical perspective. No one ever said that our legislators were good at physics…

This physical analogy leads to some interesting conclusions. If an attacker were to steal 10 million customer records from the archive in the Colorado Rockies – the dollar value of the damage would actually be low in this model.   On the other hand, if  political attackers were to access the flight details of only one  passenger name record, the damage might be very high if it was disclosed that a US presidential candidate called Barack Obama, was using frequent flier mileage to get away for an intimate weekend with Janet Jackson. Or not…

2. The ability of an attacker to damage an asset is analogous to the force it can exert on the object we call an asset.

3. The ability of a security countermeasure to protect an asset is analogous to the force it can exert on the attacker.

Observed from an inertial reference frame, the net force on the object (the asset) is proportional to the rate of change of its momentum F = d (mv) / dt.

Force and momentum are vectors and the  resulting force is the vector sum of all forces present.

Newton’s Second  Law says that  “F = ma: the net force on an object is equal to the mass of the object multiplied by its acceleration.”

If the attacker manages to decelerate the asset to v=0, then the momentum of the asset is zero and it has been rendered inoperative. In a case like this, the damage to the asset is 100%.

If the asset runs faster than the attacker or another force (a security countermeasure) deflects the attacker, then the asset momentum is unchanged, and damage to the asset is 0%.

This simple-minded physical argument shows that risk is indeed a dependent variable:

Risk = the vector sum of the forces of the attackers and security countermeasures relative to the asset.
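Pushing the physical analogy one step further, the vector sum can be sketched numerically. The force vectors below are invented two-component illustrations; the point is only that attacker and countermeasure forces combine by vector addition:

```python
# Risk as a net force on the asset: the vector sum of attacker forces
# and opposing countermeasure forces. All vectors are invented
# two-component illustrations, not measured quantities.

def vector_sum(vectors):
    """Component-wise sum of a list of equal-length vectors (as tuples)."""
    return tuple(sum(components) for components in zip(*vectors))

attackers = [
    (3.0, 1.0),   # e.g. malicious insider
    (2.0, -1.0),  # e.g. external hacker
]
countermeasures = [
    (-2.5, 0.0),  # e.g. disk encryption, opposing the insider
    (-1.0, 0.5),  # e.g. network monitoring
]

net_force = vector_sum(attackers + countermeasures)
print(net_force)  # residual force on the asset: (1.5, 0.5)

# A net force of (0, 0) would mean the countermeasures fully deflect
# the attackers and the asset's momentum is unchanged (0% damage).
```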

As in physics,  we must observe and collect data if we want to be able to calculate risk.

1.  Asset value (momentum)

IT security and compliance people should ask their CFO how much the asset is worth in dollars.

2. Attacker force  relative to the asset

3. Countermeasure force – relative to the attacker.

No one said it was easy – which is why not everyone is doing quantitative risk assessment. But – that’s why we’re getting paid the big bucks – to calculate risk to the best of our abilities.


High School Physics – Newton’s Laws

Risk assessment – Practical threat analysis calculative method

Tell your friends and colleagues about us. Thanks!
Share this