All posts by Danny Lieberman

About Danny Lieberman

Born in Washington DC, lives in Israel. Danny has a graduate degree in solid state physics and is a professional software security analyst, serious amateur saxophonist and XC rider.

14 years after 9/11, more connected, more social, more violent

Today, Friday, is the 14th anniversary of the Al Qaeda attack on the US in New York on 9/11/2001.

The world today is more connected, more always-on, more accessible…and more hostile. There are threats from Islamic terror, identity theft, hacking for pay, custom spyware, mobile malware, money laundering and corporate espionage. For those of us working in the fields of risk management, security and privacy, these are all complex challenges in the task of defending a business.

The biggest challenge is the divide between IT and management. It’s similar to the events leading up to 9/11: the FBI investigated and the CIA analyzed, but the two sides never sat down to discuss the threat – and the potential damage – of Saudis learning to fly airplanes but not how to land them.

The importance of risk analysis for HIPAA compliance

A chain of risk analysis

The HIPAA Final Rule creates a chain of risk analysis and compliance from the hospital, downstream to the business associates who handle/process PHI for the hospital, and to the sub-contractors who handle/process PHI for the business associate.

And so on.

The first thing an organization needs to do is a risk analysis. How important is a risk analysis? Ask Cancer Care Group, which was just fined $750,000 for non-compliance with the Security Rule.

$750,000 HIPAA settlement emphasizes the importance of risk analysis and device and media control policies

Cancer Care Group, P.C. agreed to settle potential violations of the Health Insurance Portability and Accountability Act of 1996 (HIPAA) Privacy and Security Rules with the U.S. Department of Health and Human Services (HHS), Office for Civil Rights (OCR). Cancer Care paid $750,000 and will adopt a robust corrective action plan to correct deficiencies in its HIPAA compliance program. Cancer Care Group is a radiation oncology private physician practice, with 13 radiation oncologists serving hospitals and clinics throughout Indiana.

On August 29, 2012, OCR received notification from Cancer Care regarding a breach of unsecured electronic protected health information (ePHI) after a laptop bag was stolen from an employee’s car. The bag contained the employee’s computer and unencrypted backup media, which contained the names, addresses, dates of birth, Social Security numbers, insurance information and clinical information of approximately 55,000 current and former Cancer Care patients.

OCR’s subsequent investigation found that, prior to the breach, Cancer Care was in widespread non-compliance with the HIPAA Security Rule. It had not conducted an enterprise-wide risk analysis when the breach occurred in July 2012. Further, Cancer Care did not have a written policy specific to the removal of hardware and electronic media containing ePHI into and out of its facilities, even though this was common practice within the organization. For more information, see the HHS press release of September 2, 2015: $750,000 HIPAA settlement emphasizes the importance of risk analysis and device and media control policies

Risk analysis is the first step in meeting Security Rule requirements

I have written here, here, here and here about the importance of risk analysis as a process of understanding the value of your assets, the impact of your threats and the depth of your vulnerabilities in order to implement the best security countermeasures.

The HIPAA Security Rule begins with the risk analysis requirement in § 164.308(a)(1)(ii)(A). Conducting a risk analysis is the first step in identifying and implementing safeguards that comply with and carry out the standards and implementation specifications in the Security Rule. A risk analysis is therefore fundamental to the entire process of HIPAA Security Rule compliance, and must be understood in detail by the organization in order to select the safeguards and technologies that will best protect electronic health information. See Guidance on Risk Analysis Requirements under the HIPAA Security Rule. Neither the HHS guidance nor NIST specifies a methodology. We have been using the Practical Threat Analysis methodology for HIPAA Security risk analysis with Israeli medical device companies since 2009; it works smoothly and effectively, helping vendors comply and implement robust security at the lowest possible cost.

§ 164.308(a)(1)(ii)(A) Risk Analysis (R)¹: As part of the risk management process, the company performs an information security risk analysis for its services (see company procedure XYZ), analyzing software application security, data security and human-related vulnerabilities. Risk analysis is performed according to the Practical Threat Analysis methodology.

¹ A refers to addressable safeguards; R refers to required safeguards.

Do it. Run security like you run your business.

On Shoshin and Software Security

I am an independent software security consultant specializing in medical device security and HIPAA compliance in Israel. I use the state-of-the-art PTA (Practical Threat Analysis) tool to perform quantitative threat analysis and produce a bespoke, cost-effective security portfolio for my customers that fits their medical device technology.

There are over 700 medical device companies in Israel – all doing totally cool and innovative things, from My Dario (diabetes management) to Syneron (medical esthetics) to FDNA (facial dysmorphology novel analysis at your fingertips) to Intendu (brain rehabilitation).

This is a great niche for me because I get to do totally cool projects and work with a lot of really smart people at Israeli medical device vendors, helping them implement cost-effective security and privacy compliance. Plus, it’s fun learning all the time.

One thing I have learned is that there is very little connection between an FDA medical device risk assessment and a software security risk assessment. This is somewhat counter-intuitive for people who come from the QA and RA (regulatory assurance) areas.

Security is an adversarial environment very unlike FDA regulatory oversight.

FDA medical device regulatory oversight is about complying in a reliable way with standard operating procedures and software standards.

FDA believes that conformance with guidance documents, when combined with the general controls of the Act, will provide reasonable assurance of safety and effectiveness…

FDA recognizes several software consensus standards. A declaration of conformity to these standards, in part or whole, may be used to show the manufacturer has verified and validated pertinent specifications of the design controls. The consensus standards are:

  • ISO/IEC 12207:1995 Information Technology – Software Life Cycle Processes
  • IEEE/EIA 12207.0-1996 Industry Implementation of International Standard ISO/IEC 12207:1995 (ISO/IEC 12207) Standard for Information Technology – Software Life Cycle Processes

Barry Boehm succinctly expressed the difference between verification and validation:

Verification: Are we building the product right?

Validation: Are we building the right product?

Building the right product right is no more a guarantee of security than Apple guaranteeing that your MacBook Pro will not be stolen off an airport scanner.

Medical device security is about attackers and totally unpredictable behavior

Medical device security is about anticipating the weakest link in a system – the link that can be exploited by an attacker who will do totally unpredictable things that were inconceivable to other hackers last year, let alone to an ISO standards body 20 years ago.

You cannot manage unpredictable behavior (think of a 2-year-old), although you can develop the means to anticipate threats and respond quickly and in a focused way, even when sleep-deprived and caffeine-enriched.

The dark side of security is often hubris and FUD.

For security consultants, there is often an overwhelming temptation to show clients how dangerous their security vulnerabilities are and to use that as a lever to sell products and services. I’ve talked about hubris and FUD here and here and here and here and here. A good example of exploiting clients with security FUD is the specialty HIPAA-compliant hosting providers like Firehost, which are masters of providing expensive services to clients that may or may not really need them.

However, I believe that intimidation is not a strategy guaranteed to win valuable long-term business with clients.

Instead of saying, “That is a really bad idea; you will get hacked and destroy your reputation before your QA and RA departments get back from lunch,” it is better to take a more nuanced approach, like:

“I see that you are transferring credentials in plain text to your server in the cloud. What do you think about the implications of that?” Getting a client to think like an attacker is better than dazzling and intimidating them, which may result in the client doing nothing, hunkering down into their current systems or, if they have money, going off and spending it badly.

How did I reach this amazing (slow drum roll…) insight?

About 3 years ago I read a book called Search Inside Yourself and learned an idea: “Don’t take action, let action take you.” I try to apply this approach with clients as a way of helping them learn for themselves and of avoiding unnecessary conflict. The next step in my personal evolution was getting acquainted with a Zen Buddhist concept called Shoshin:

Shoshin (初心) means “beginner’s mind”. It refers to having an attitude of openness, eagerness, and lack of preconceptions when studying a subject, even when studying at an advanced level, just as a beginner in that subject would.

Shoshin means doing the exact OPPOSITE of what you (the high-powered, all-knowing, medical device security consultant) would normally do in the course of a security threat assessment:

  1. Let go of the need to add value – you do not have to provide novel security countermeasures all the time. Sometimes, doing the basics very well (like hashing and salting passwords – see the sketch after this list) is all the value the client needs.
  2. Let go of the need to win every argument – you do not have to show the client why their RA (regulatory assurance) manager is making fatal mistakes in database encryption after she took some bad advice from Dr. Google.
  3. Ask the client to tell you more – ask what led them to a particular design decision.  You may learn something about their system design alternatives and engineering constraints. This will help you design some neat security countermeasures for their medical device and save them some money.
  4. Assume you are an idiot – this is a corollary of not taking action. By assuming you are an idiot, you disable your ego for a few moments and get into a position of accepting new information, which in the end may help you anticipate threats and ultimately take your client out of a potentially dangerous adversarial threat scenario.
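To make the “basics done well” in item 1 concrete, here is a minimal sketch of salted password hashing using Python’s standard-library PBKDF2. The iteration count and salt size are illustrative assumptions; a production system might prefer a dedicated scheme such as bcrypt or argon2.

```python
import hashlib
import hmac
import os

def hash_password(password: str, iterations: int = 600_000) -> tuple[bytes, bytes]:
    """Derive a PBKDF2-HMAC-SHA256 hash with a random per-user salt."""
    salt = os.urandom(16)  # a unique salt per password defeats rainbow tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes,
                    iterations: int = 600_000) -> bool:
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, digest)
assert not verify_password("wrong guess", salt, digest)
```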

Thank you to James Clear for his insightful post – Shoshin: This Zen Concept Will Help You Stop Being a Slave to Old Behaviors and Beliefs

Dealing with DLP and privacy

It’s a long, hot summer here in the Middle East, and with 2/3 of the office out on vacation, you have some time to reflect on data security. Or on the humidity. Or on a cold beer.

Maybe you are working on building a business case for DLP technology like Websense, Symantec, Verdasys, McAfee or Fidelis in your organization. Or maybe you already purchased DLP technology and you’re embroiled in turf wars that have brought your DLP implementation to a standstill: one of your colleagues is claiming that there are employee privacy issues with DLP, and you’re trying to figure out how to get the project back on track once people return from their work-and-play vacations in Estonia, where they’ve been brushing up on their hacking skills.

Unlike firewall/IPS, DLP is content-centric. It is technology that drives straight to the core of business asset protection and business process.  This frequently generates opposition from people who own business assets and manage business process. They may have legitimate concerns regarding the cost-effectiveness of DLP as a data security countermeasure.

But – people who oppose DLP on grounds of potential employee privacy violations might be selling Sturm und Drang to further a political agenda. If you’re not sure about this, ask them what they’ve done recently to prevent cyber-stalking and sexual harassment in the workplace.

For sure, there are countries such as France and Germany where any network or endpoint monitoring that touches employees is verboten or interdit as the case may be; but if you are in Israel, the US or the UK, you will want to read on.

What is DLP and what are the privacy concerns?

DLP (data loss prevention) is a solution for monitoring and preventing sensitive outbound content – not activity – at an endpoint. This is the primary mission. DLP is often a misnomer, since it is more often than not DLD, data loss detection, but whatever… Network DLP solutions intercept content on the network, and endpoint DLP agents intercept content by hooking into Windows operating system events. Most DLP vendors offer an integrated network and endpoint DLP solution in order to control removable devices in addition to content leaving network egress points. A central command console analyzes the intercepted content, generates security events, visualizes them and stores forensics as part of generating actionable intelligence. Data that is not part of the DLP forensics package is discarded.

In other words, DLP is not about reading your employees’ email on their PCs. It’s about keeping the good stuff inside the company. If you want to mount surveillance on your users, you have plenty of other (far cheaper) options, like browser history capturers or key loggers. Your mileage will vary, and this blog does not provide legal guidance, but technically it’s not a problem.

DLP rules and policies are content-centric not user-centric.

A DLP implementation will involve writing custom content signatures (for example, to detect top-secret projects by keyword, IP or source code) or selecting canned content signatures from a library (for example, credit cards).

The signatures are then combined into a policy which maps to the company’s data governance policy – for example “Protect top-secret documents from leaking to the competition”. 

One often combines server endpoints and Web services to make a more specific policy, like “Alert if top-secret documents from SharePoint servers are not sent via encrypted channels to authorized server destinations.”
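Vendor consoles express these signatures and policies in their own rule languages; as a rough illustration of what a content-centric rule does under the hood, here is a toy Python sketch of the canned credit-card signature and a policy built on it. The function names and policy parameters are my own invention, not any vendor’s API.

```python
import re

CARD_RE = re.compile(r"\b(?:\d[ -]?){12,15}\d\b")  # 13-16 digits, optional separators

def luhn_ok(candidate: str) -> bool:
    """Checksum test that weeds out random digit strings."""
    digits = [int(d) for d in re.sub(r"\D", "", candidate)][::-1]
    total = sum(digits[0::2]) + sum(sum(divmod(2 * d, 10)) for d in digits[1::2])
    return total % 10 == 0

def credit_card_signature(content: str) -> bool:
    """A canned 'credit card' content signature: pattern match plus Luhn check."""
    return any(luhn_ok(m.group()) for m in CARD_RE.finditer(content))

def policy_violation(content: str, encrypted: bool, dest_authorized: bool) -> bool:
    """Alert when card data leaves over an unencrypted or unauthorized channel."""
    return credit_card_signature(content) and not (encrypted and dest_authorized)

print(policy_violation("order ref 4111 1111 1111 1111",
                       encrypted=False, dest_authorized=False))  # True -> security event
```

Note that the rule keys on what the content is, not on who sent it – which is exactly why policies rarely need to target a specific user endpoint.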

In 13 DLP installations in 3 countries, I never saw a policy that targeted a specific user endpoint. The reason is that it is far easier to use DLP content detection to pick up endpoint violations than to whitelist and blacklist endpoints, which in a large organization with lots of wireless and mobile devices is an exercise in futility.

We often hear privacy concerns from people who come from the traditional firewall/IPS world, but the firewall/IPS paradigm breaks down when you have a lot of rules and endpoint IP addresses – which is why none of the firewall vendors, Checkpoint included, ever succeeded in selling the internal firewall concept.

Since DLP is part of the company’s data governance enforcement, it is commonly used as a tool to reinforce policy, such as not posting company assets to Facebook.

It is important to emphasize again that DLP is an alert generation and management technology, not a general-purpose network traffic recording tool – something you can build for free using a Netoptics tap and Wireshark.

Any content interception technology can be abused when in the wrong hands – or in the right hands with the wrong mission. Witness the NSA.

Making your data governance policy work for your employees

Many companies (Israeli companies in particular) don’t have a data governance policy, but if they do, it should cover the entire space of protecting employees in the workplace from cyber-threats.

An example of using DLP to protect employees is the threat scenario of cyber-stalking, sexual harassment or drug trafficking in the workplace, where DLP can be used to quickly (as in real time) create very specific content rules, which can then be refined to include specific endpoints to catch forensics and offenders in real time. Just like in CSI: New York.

In summary:

There are 3 key use cases for DLP in the context of privacy:

  1. Privacy compliance (for example PCI, HIPAA, US state and EU privacy laws) can be a trigger for installing DLP. This requires appropriate content rules keyed to identifying PHI or PII.
  2. Enforcement of your corporate data governance and compliance policies, where privacy is an ancillary concern. This requires appropriate content rules for IP, suppliers and sensitive projects. So long as you do not target endpoints in your DLP rules, you will be generating security events and collecting forensics that do not infringe on employee privacy. In some countries, like France and Germany, this may still be an issue. Ask your lawyer.
  3. Employee workplace protection – DLP can be an outstanding tool for mitigating and investigating cyber threats in the workplace and at the very least a great tool for security awareness and education. Ask your lawyer.

If you liked this, or better yet hated it, contact me. I am a professional security analyst specializing in HIPAA compliance and medical device security. I’m based in Israel and always looking for interesting and challenging projects.

Idea for the post prompted by Ariel Evans.

What is PHI?

Software Associates specializes in HIPAA security and compliance for Israeli medical device companies – and 2 questions always come up: “What is PHI?” and “What is electronic protected health information?”

Of course, you will have already Googled this question and come to one conclusion or another by surfing sites like HIPAA Compliance Made Easy or the Wikipedia entry on HIPAA.

But you may ask – “Can I entrust my security and compliance implementation to Dr. Google?” And – the answer is no.

Most of the content on the Net on this topic is unclear and outdated, predating the implementation of the HIPAA Final Rule in October 2013, and many articles confuse privacy and security. Much of the content is blatantly self-serving marketing collateral for security products, like this plug for a firewall product and this pitch by Checkpoint to register on their web site.

Then there is a distinct American flavor to the Final Rule, which makes it even more confusing for non-American readers, who have to grasp why paying in cash is related to privacy. (Hint: in Europe, privacy is a fundamental human right, unrelated to money.)

When individuals pay by cash they can instruct their provider not to share information about their treatment with their health plan. The final omnibus rule sets new limits on how information is used and disclosed for marketing and fundraising purposes and prohibits the sale of an individual’s health information without their permission.

But – although Congress low-balled the cost of compliance to the American healthcare industry in order to get the bill approved – for all of the law’s American peculiarities, the HIPAA Final Rule is well thought out and a good example of how to use free-market forces to enforce security and compliance. That, however, will be a topic for another post.

For now, we want to find a precise answer to the questions “What is PHI?” and “What is EPHI?”

Careful reading of the law itself clearly shows 2 things:

A. PHI (protected health information) is health/clinical data mixed with PII (personally identifiable information – basically, enough information to steal someone’s identity in the US), stored or transmitted verbally or on paper.

B. EPHI (Electronic Protected health information) is PHI transmitted and/or stored electronically.

Simple indeed.

See HIPAA Administrative Simplification
Regulation Text 45 CFR Parts 160, 162, and 164
(Unofficial Version, as amended through March 26, 2013)

Notes – definitions of PHI

Electronic protected health information means information that comes within paragraphs (1)(i) or (1)(ii) of the definition of protected health information as specified in this section.

(1) Protected health information means individually identifiable health information that is:
(i) Transmitted by electronic media;
(ii) Maintained in electronic media; or
(iii) Transmitted or maintained in any other form or medium.

(2) Protected health information excludes individually identifiable health information:
(i) In education records covered by the Family Educational Rights and Privacy Act, as amended, 20 U.S.C. 1232g;
(ii) In records described at 20 U.S.C. 1232g(a)(4)(B)(iv);
(iii) In employment records held by a covered entity in its role as employer; and
(iv) Regarding a person who has been deceased for more than 50 years.

Individually identifiable health information is information that is a subset of health information, including demographic information collected from an individual, and:
(1) Is created or received by a health care provider, health plan, employer, or health care clearinghouse; and
(2) Relates to the past, present, or future physical or mental health or condition of an individual; the provision of health care to an individual; or the past, present, or future payment for the provision of health care to an individual; and
(i) That identifies the individual; or
(ii) With respect to which there is a reasonable basis to believe the information can be used to identify the individual.

10 ways to detect employees who are a threat to PHI

Software Associates specializes in software security and privacy compliance for medical device vendors in Israel.   One of the great things about working with Israeli medical device vendors is the level of innovation, drive and abundance of smart people.

It’s why I get up in the morning.

Most people who don’t work in security assume that the field is very technical, yet really it’s all about people. Data security breaches happen because people are greedy or careless. 100% of all software vulnerabilities are bugs, and most of those are design bugs that could have been avoided or mitigated by 2 or 3 people talking about the issues during the development process.

I’ve been talking to several of my colleagues for years about writing a book on “Security anti-design patterns” – and the time has come to start. So here we go:

Security anti-design pattern #1 – The lazy employee

Lazy employees are often misdiagnosed by security and compliance consultants as being stupid.

Before you flip the bozo bit on a customer’s employee as being stupid, consider that education and IQ are not reliable indicators of dangerous employees who are a threat to company assets.

Lazy employees may be quite smart but they’d rather rely on organizational constructs instead of actually thinking and executing and occasionally getting caught making a mistake.

I realized this while engaging with a client who has a very smart VP – he’s so smart he has succeeded in maintaining a perfect record of never actually executing anything of significant worth at his company.

As a matter of fact – the issue is not smarts but believing that organizational constructs are security countermeasures in disguise.

So – how do you detect the people (even the smart ones) who are threats to PHI, intellectual property and system availability?

  1. Their hair is better organized than their thinking.
  2. They walk around the office with a coffee cup in their hand and when they don’t, their office door is closed.
  3. They never talk to peers who challenge their thinking.   Instead they send emails with a NATO distribution list.
  4. They are strong on turf ownership. A good sign of turf ownership issues is when subordinates in the company have gotten into the habit of not challenging the coffee-cup-holding VP’s thinking.
  5. They are big thinkers.    They use a lot of buzz words.
  6. When an engineer challenges their regulatory/procedural/organizational constructs, the automatic answer is an angry retort: “That’s not your problem.”
  7. They use a lot of buzz-words like “I need a generic data structure for my device log”.
  8. When you remind them that they already have a generic data structure for their device log and they have a wealth of tools for data mining their logs – amazing free tools like Elasticsearch and R….they go back and whine a bit more about generic data structures for device logs.
  9. They seriously think that ISO 13485 is a security countermeasure.
  10. They’d rather schedule a corrective action session 3 weeks after a serious security event instead of fixing the issue the next day and documenting the root causes and changes.

If this post pisses you off (or if you like it), contact me. I’m always interested in challenging projects with people who challenge my thinking.

The top 5 things a medical device vendor should do for HIPAA compliance

We specialize in software security assessments, FDA cyber-security and HIPAA compliance for medical device vendors in Israel.

The first question that every medical device vendor CEO asks us is “What is the fastest and cheapest way for us to be HIPAA-compliant?”

So here are the top 5 things a medical device vendor should do in order to achieve HIPAA compliance:

1. Don’t store EPHI

If you can, do not store EPHI in your system at all. That way you can side-step the entire HIPAA compliance process. (This is not to say that you don’t have to satisfy FDA cyber-security requirements or have strong software security in general – but that is a separate issue.)

What is EPHI? EPHI (electronic protected health information) is any combination of PII (personally identifiable information) and clinical data. OK, you ask, so what is the definition of PII from the perspective of HIPAA? Basically, PII is any combination of data that can be used to steal someone’s identity. More formally, here is a list of PHI identifiers (a toy de-identification sketch follows the list):

  1. A name
  2. An address. The kind that FedEx or USPS understands
  3. Birth dates – age does not count.
  4. Phone numbers including (especially) mobile phone
  5. Email addresses
  6. Usernames of online services
  7. Social Security numbers
  8. Medical record numbers
  9. Health plan beneficiary number
  10. Account numbers
  11. Certificate/license numbers – any number that identifies the individual. A certificate on winning a spelling bee in Junior High doesn’t count.
  12. Vehicle identifiers and serial numbers, including license plate numbers;
  13. Device identifiers and serial numbers that can be tied back to a person
  14. URLs that can be tied back to a person using DNS lookups
  15. IP addresses – for example, the IP address of a home router that can be used to look up and identify a person
  16. Biometric identifiers, including finger and voice prints;
  17. Full face pictures
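As referenced above, here is a toy sketch of what “don’t store EPHI” can look like in code: strip the identifier fields before clinical data is persisted. The field names are hypothetical; a real schema will differ, and free-text fields need content inspection, not just key filtering.

```python
# Hypothetical field names for illustration; real schemas will differ.
PHI_IDENTIFIER_FIELDS = {
    "name", "address", "birth_date", "phone", "email", "username",
    "ssn", "medical_record_no", "beneficiary_no", "account_no",
    "certificate_no", "vehicle_id", "device_serial", "url",
    "ip_address", "biometric_id", "face_photo",
}

def strip_identifiers(record: dict) -> dict:
    """Drop identifier fields so what is stored is clinical data, not EPHI."""
    return {k: v for k, v in record.items() if k not in PHI_IDENTIFIER_FIELDS}

reading = {"device_serial": "A-1234", "glucose_mg_dl": 104, "name": "J. Doe"}
print(strip_identifiers(reading))  # {'glucose_mg_dl': 104}
```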

2. If you store EPHI do a threat analysis of your medical device

The HIPAA Security Rule and the FDA cyber security guidance are very clear on this point. You can learn more about threat modeling and analysis here, here and here. Regarding encryption and medical device security, read this.

3. Implement software configuration management and deployment tools

The best advice I can give a medical device vendor is to use Git. If you use Azure or are a Microsoft shop (our condolences – read here and here why Windows is a bad choice for medical devices), then TFS is a great solution that is integrated nicely into Azure. Note that Azure is a great cloud solution for Linux as well. Don’t get me wrong – Microsoft does a lot of things right. But using Windows for medical devices is a really bad idea.

4. Implement log monitoring

Monitoring your logs for peaks in CPU, memory or disk usage is a great way to know if you’re being attacked. But if you have medical device logs and no one is home to answer the phone, then it’s a waste of time.
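As a minimal sketch of that kind of monitoring, the snippet below samples CPU, memory and disk usage and prints an alert when a threshold is crossed. The thresholds are illustrative assumptions, psutil is a third-party package, and in practice the alert would go to a log manager or pager rather than stdout.

```python
import time

import psutil  # third-party: pip install psutil

THRESHOLDS = {"cpu": 90.0, "memory": 90.0, "disk": 85.0}  # illustrative limits (%)

def over_threshold() -> list[str]:
    """Sample host metrics and report anything above its threshold."""
    readings = {
        "cpu": psutil.cpu_percent(interval=1),
        "memory": psutil.virtual_memory().percent,
        "disk": psutil.disk_usage("/").percent,
    }
    return [f"{name} at {value:.0f}%"
            for name, value in readings.items() if value > THRESHOLDS[name]]

if __name__ == "__main__":
    while True:
        for alert in over_threshold():
            print("ALERT:", alert)  # in practice, page whoever owns security events
        time.sleep(60)
```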

5. Make sure the lights are on and someone is home

You’ve done a great job on your medical device software. You did verification and validation, you implemented threat modeling in your development process and you have logs. Just make sure someone knows that it’s their job to keep an eye on security events. If you get a notice from a customer, a ping from your log manager, or an email from your cloud provider that they’re gonna reboot your servers because of VENOM – just make sure the lights are on and someone is home.

In summary

Robust security for your medical device is not fortune telling, but neither is it an organizational construct. The best way to think about your medical device is to think about something you would give a child (or a soldier on the battlefield). It has to be totally reliable and safe for the patient, even under the most adverse conditions.

Shock therapy for medical device malware

Israel has over 700 medical device vendors. Sometimes it seems like half of them are attaching to the cloud and the other half are developing mobile apps for all kinds of crazy, innovative applications, like Healthy.io (Visual Input Turned Into Powerful Medical Insight – translation: an app that lets you do urine analysis with your smartphone).

But – let’s not forget that many medical devices, such as bedside monitors, MRI, nuclear medicine and catheterization devices, reside on today’s hospital enterprise network.

An enterprise hospital network is a dangerous place.

Medical devices based on Microsoft Windows can be extremely vulnerable to attack from hackers and malware that penetrate the hospital network and exploit typical vulnerabilities such as default passwords.

More importantly – medical devices that are attached to a hospital network are a significant threat to the hospital network itself since they may propagate malware back into the network.

While a thorough software security assessment of the medical device, plus appropriate hardening of the operating system and user-space code, is the best way to secure a medical device on a hostile hospital network, this is not usually an option for the hospital once the medical device is installed.

Taking a page out of side-channel attacks and using the technique to detect malware, University of Michigan researchers have developed WattsUpDoc, a system designed to detect malware on medical devices by noting small changes in their power consumption.

The researchers say the technology could give hospitals a quick way to identify medical devices with significant vulnerabilities.

The researchers tested WattsUpDoc on an industrial-control workstation and on a compounder, which is used to mix drugs.

The malware detector first learned the devices’ normal power-consumption patterns. Then it tested machines that had been intentionally infected with malware. The system was able to detect abnormal activity more than 94 percent of the time when it had been trained to recognize the malware, and up to 91 percent of the time with previously unknown malware. The researchers say the technology could alert hospital IT administrators that something is wrong, even if the exact virus is never identified.
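The researchers’ actual classifier is more sophisticated, but the core idea – learn a baseline, flag deviations – fits in a few lines. Here is a minimal sketch, assuming clean power samples in watts; the z-score threshold is an illustrative assumption.

```python
import statistics

class PowerAnomalyDetector:
    """Learn a device's normal power draw, then flag readings far from baseline."""

    def __init__(self, z_threshold: float = 4.0):
        self.z_threshold = z_threshold
        self.mean = 0.0
        self.stdev = 1.0

    def train(self, normal_watts: list[float]) -> None:
        self.mean = statistics.fmean(normal_watts)
        self.stdev = statistics.stdev(normal_watts)

    def is_anomalous(self, watts: float) -> bool:
        return abs(watts - self.mean) / self.stdev > self.z_threshold

detector = PowerAnomalyDetector()
detector.train([41.8, 42.1, 42.0, 41.9, 42.3, 42.0])  # idle-state training samples
print(detector.is_anomalous(42.1))  # False - looks like normal operation
print(detector.is_anomalous(55.0))  # True - sustained extra load, worth investigating
```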

For the full article see WattsUpDoc

Does anti-virus really protect your data?

Additional security controls do not necessarily reduce risk.

Installing more security products is never a free lunch and tends to increase the total system risk and cost of ownership, as a result of the interaction between the elements.

Like everything else in life, security is an exercise in alternatives.

But – do you choose the right one?

Many firms see the information security issue mainly as an exercise in permissions and identity management (IDM). However, it is clear from conversations with two of our large telecom customers that (a) IDM is worthless against threats from trusted insiders with appropriate privileges, and (b) since IDM systems require so much customization (as much as 90% in a large enterprise network), they actually contribute additional vulnerabilities instead of lowering overall system risk.

The result of providing inappropriate countermeasures to threats is that your cost of attacks and of ownership goes up, instead of your risk going down. This is as true for a personal workstation as it is for a large enterprise network.

The question, from the security perspective of an individual user, is pretty easy to answer: install a decent personal firewall (not Windows, and please stay away from Symantec) and be careful.

For a business, the question is harder to answer, because it is a rare company that has such deep pockets that it can afford to purchase and install every security product recommended by its integrator and implement and enforce all the best-practice controls recommended by its accountants.

An approach we like is taking a standards-based risk assessment and implementing controls that are a good fit for the business.

We use the quantitative threat analysis tool PTA, which enables any business to build a quantitative risk model and construct an economically-justified, cost-effective set of countermeasures that reduces risk in their and their customers’ business environment.
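To show the shape of such a model (not the PTA tool itself – a toy sketch with invented numbers), here is annualized-loss-expectancy arithmetic that ranks countermeasures by net benefit:

```python
from dataclasses import dataclass, field

@dataclass
class Threat:
    name: str
    annual_probability: float  # estimated chance of occurring in a year
    damage: float              # loss in dollars if it occurs

    @property
    def risk(self) -> float:
        """Annualized loss expectancy for this threat."""
        return self.annual_probability * self.damage

@dataclass
class Countermeasure:
    name: str
    cost: float  # annual cost of ownership
    mitigation: dict = field(default_factory=dict)  # threat name -> fraction of risk removed

    def net_benefit(self, threats: list) -> float:
        reduced = sum(t.risk * self.mitigation.get(t.name, 0.0) for t in threats)
        return reduced - self.cost

threats = [
    Threat("stolen laptop with unencrypted ePHI", 0.30, 750_000),
    Threat("malware via hospital network", 0.20, 100_000),
]
countermeasures = [
    Countermeasure("full-disk encryption", 20_000,
                   {"stolen laptop with unencrypted ePHI": 0.9}),
    Countermeasure("extra endpoint agent", 15_000,
                   {"malware via hospital network": 0.4}),
]

# Implement the measures with the best net risk reduction first.
for cm in sorted(countermeasures, key=lambda c: c.net_benefit(threats), reverse=True):
    print(f"{cm.name}: net annual benefit ${cm.net_benefit(threats):,.0f}")
```

In this invented example the encryption pays for itself many times over, while the extra agent costs more than the risk it removes – which is the point about additional controls not necessarily reducing risk.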

More importantly, a company can execute a “gentle” implementation plan of controls commensurate with its budget, instead of an all-or-nothing compliance checklist implementation that may cost mega-bucks.

And in this economy – fewer and fewer businesses have the big bucks to spend on security and compliance.

Software Associates specializes in helping medical device vendors achieve HIPAA compliance and improve the data and software security of their products in hospital and mobile environments in the best and most cost-effective way for your business and pocketbook.

It’s friends and family breaching patient privacy – not Estonian hackers.

A 2011 HIPAA patient privacy violation in Canada, where an imaging technician accessed the medical records of her ex-husband’s girlfriend is illustrative of unauthorized disclosure of patient information by authorized people.

Data leakage of ePHI (electronic protected health information) in hospitals is rampant, simply because (a) there is a lot of it floating around and (b) human nature is what it is.

Human beings are naturally curious, sometimes vindictive and always worried when it comes to the health of friends and family. Being human, they will bend rules to get information – and in the course of bending rules, breach patient privacy.

The right to patient privacy

The Health Insurance Portability and Accountability Act expresses a general federal policy favoring patients’ right to confidentiality and HIPAA’s Privacy Rule grants federal protections for patients’ personal health information held by covered entities and gives patients rights regarding that information.

What is ePHI?

The Department of Health and Human Services defines ePHI as a combination of personal identifiers and clinical data in order to protect patient privacy.

Electronic protected health information (ePHI) is any information in an electronic medical record (EMR) that can be used to identify an individual and that was created, used, or disclosed in the course of providing a health care service such as diagnosis or treatment. This includes names, geographical locations, dates of birth, phone numbers, email addresses, Social Security numbers, medical record numbers, license plate numbers, driver’s license numbers and biometrics.

Basically, any combination of personal identifiers that can be used to steal a person’s identity becomes ePHI when combined with EMR data.

HIPAA risk and compliance assessments that we’ve been involved with at hospitals in Israel, the US and Australia reveal that most patient privacy breaches are not perpetrated by hackers but by friends and family seeking information or insurance companies seeking to validate claims.

Social engineering methods are often employed with or without a “sweetener” and do not need to rely on exploiting software security vulnerabilities in order to breach patient privacy.

Courtesy of my friend Alan Norquist from Veriphyr

Information and Privacy Commissioner Ann Cavoukian ordered a Hospital in Ottawa to tighten rules on electronic personal health information (ePHI) due to the hospital’s failure to comply with the Personal Health Information Protection Act (PHIPA).

“The actions taken to prevent the unauthorized use and disclosure by employees in this hospital have not been effective.” – Information and Privacy Commissioner Ann Cavoukian

The problem began when one of the hospital’s diagnostic imaging technologists accessed the medical records of her ex-husband’s girlfriend. At the time of the snooping, the girlfriend was at the hospital being treated for a miscarriage.

Commissioner Cavoukian faulted the hospital for:

  • Failing to inform the victim of any disciplinary action against the perpetrator.
  • Not reporting the breach to the appropriate professional regulatory college.
  • Not following up with an investigation to determine if policy changes were required.

“The aggrieved individual has the right to a complete accounting of what has occurred. In many cases, the aggrieved parties will not find closure … unless all the details of the investigation have been disclosed.” – Information and Privacy Commissioner Ann Cavoukian

It was not the hospital but the victim who instigated an investigation. The hospital determined that the diagnostic imaging technologist had accessed the victim’s medical files six times over 10 months.

The information inappropriately accessed included “doctors’ and nurses’ notes and reports, diagnostic imaging, laboratory results, the health number of the complainant, contact details … and scheduled medical appointments.” – Information and Privacy Commissioner Report

Sources:
(a) Privacy czar orders Ottawa Hospital to tighten rules on personal information – Ottawa Citizen, January 2011