Tag Archives: HIPAA


Why HIPAA Policies and Procedures are not copy and paste

Compliance from Dr. Google is a very bad idea.

Searching for HIPAA Security Rule compliance yields about 1.8 million hits on Google. Some of the information is outdated and does not relate to the Final Rule, and a good deal of the rest is sponsored by service providers and technology companies selling silver bullets for HIPAA compliance.

The online dialog on HIPAA Security Rule compliance is dominated by discussions of requirements for health providers. There is very little information online for the downstream medtech and medical device vendors who increasingly use the cloud to store data and process transactions for their covered-entity customers.

If you are a small medtech or medical device company and you copy from a big healthcare provider, you will overpay and over-implement SOPs that may not be relevant to your business situation.

The risk analysis for a SaaS provider or medical device that stores PHI in the cloud is not even remotely similar to the risk analysis for a hospital.

If you copy and paste a risk analysis, you won't understand what you are doing or why you are doing it; and since HIPAA privacy infractions carry both criminal and civil penalties, you don't want to even attempt to comply via Google.

For example, if you are a mobile medical device vendor, you will need to take into account technology and privacy considerations such as mobile app software security, application activity monitoring, and mobile and cloud data encryption and key management, none of which may be relevant for a traditional hospital-run electronic health records system.
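To make "key management" concrete, here is one common pattern as a sketch, not a prescription: deriving a distinct data key per record from a master key, so a leaked record key does not expose the whole store. The record ids are invented; a production system would use a vetted KMS or the full HKDF construction (RFC 5869) rather than this stdlib-only HMAC derivation.

```python
import hashlib
import hmac
import secrets

# Master key held in a secure store; rotating it does not require
# re-touching every record if per-record keys are derived on demand.
master_key = secrets.token_bytes(32)

def data_key(master: bytes, record_id: str) -> bytes:
    """Derive a 256-bit per-record key; the record id is the derivation context."""
    return hmac.new(master, record_id.encode(), hashlib.sha256).digest()

k1 = data_key(master_key, "record-001")
k2 = data_key(master_key, "record-002")
assert k1 != k2 and len(k1) == 32  # distinct keys from one master
```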

What policies and procedures do I need for HIPAA compliance?

We provide clients with a bespoke package of SOPs required by HIPAA: Acceptable use, Incident response, Security and risk management process, Disaster recovery plan, and Security Officer job description (which is required by the Final Rule). This is in addition to the Risk Analysis / Security Assessment report (§ 164.308(a)(1)(ii)(A)).

6 reasons why HIPAA security policies and procedures are not copy and paste:

  1. It depends on the business situation and technology model. A biotechnology company doing drug development will not have the same threat surface as a mobile health company.
  2. Your security is worse than you think. When was the last time you looked? Or had an external audit of your encryption policies?
  3. It also depends on timing – if the life science company is doing clinical research, then informed consent may release the sponsor from HIPAA privacy rule compliance. But in clinical research, physician-investigators often stand in dual roles to the subject: as a treating physician (who has to comply with the HIPAA Privacy Rule) and as a researcher (who has to comply with both GCP and 21 CFR Parts 50 and 56 regarding informed consent with adults and children).  In my experience with medical device companies, they often do clinical trials in parallel to commercial betas and work closely with physician-investigators. This means that your HIPAA Security Rule compliance posture needs to be nailed down in order to eliminate holes of privacy leakage.
  4. Life science companies have quality management systems as part of their FDA submissions; the HIPAA Security Rule policies and procedures need to become part of your quality system. We work with you to make sure that your regulatory/QA/QC leads understand what it means to implement them, and help them integrate the procedures into their own internal formats, policies and training programs.
  5. Covered entities may also impose specific requirements in their BAA on the life science vendor; we then need to customize the policies and procedures to comply with their internal guidelines. Sometimes these are quite strange, like the story of the West Coast hospital that deliberately weakened the WiFi signal of its routers in the belief that this was a security countermeasure against hacking of its wireless access points.
  6. There are also situations of intersection with other privacy regulation, such as CA 1280.15 for data breach. California is sticky on this, and if you do business with U of C there will be additional things to cover.

Feel free to contact us  for a free quote – we’re always looking for interesting projects.

 

Tell your friends and colleagues about us. Thanks!
Share this

The chasm between FDA regulatory and cyber security

 

When a Risk Analysis is not a Risk analysis

Superficially at least, there is not a lot of difference between a threat analysis that is part of a software/hardware security assessment and a risk analysis (or hazard analysis) performed by a medical device company as part of its FDA submission. In the past two years, the FDA has added guidance on cybersecurity for medical devices, and the format and language of the guidance is fairly similar.

The problem is that hazard analysis talks about patient safety and treats the device itself as a potential attacker, whereas software security talks about confidentiality of patient data, integrity of the code, its data and credentials, and system availability, and treats the device as a black box.

So in fact – medical device risk analysis and cyber security assessments are 2 totally different animals.

Some of my best friends work in medical device regulatory affairs. I admire what they do and I like (almost) all of them on a personal basis.

But over the past year, I have been developing a feeling of deep angst that there is an impossible-to-cross chasm of misunderstanding and philosophy between medical device regulatory people and medical device security analysts like me.

But I can no longer deny that even at their best – regulatory affairs folks will never get security.

Why do I say this? First of all, the empirical data tells me so. I interface with several dozen regulatory professionals at our medical device security clients in Israel, and the best of them grasp at understanding security. The worst hang on to their document version controls for dear life and claim earnestly, even as they are being hacked, that they cannot modify their device description because their QA process needs several weeks to approve a 3-line bug fix. (This is a true story.)

We intend to implement encryption of EPHI in our database running on a cloud server since encryption uses check sums and check sums ensure integrity of the data and help protect patient safety.
True quote from a device description of a mobile medical device that stores data in the cloud…the names have been concealed to protect the innocent.  I did not make this up.
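That quote confuses three different properties. A checksum detects accidental corruption; an HMAC provides integrity against a deliberate attacker; encryption provides confidentiality. None substitutes for the others, and none of them is a patient-safety control. A stdlib-only sketch of the first two, with an invented record:

```python
import hashlib
import hmac
import secrets

record = b'{"patient_id": "P-1001", "glucose_mg_dl": 104}'

# A checksum detects accidental corruption, but anyone can recompute it,
# so it proves nothing against a deliberate attacker.
checksum = hashlib.sha256(record).hexdigest()

# An HMAC binds the integrity check to a secret key: only a key holder
# can produce a valid tag, which is what "integrity" means in a security
# (not safety) context. Neither one encrypts anything.
key = secrets.token_bytes(32)
tag = hmac.new(key, record, hashlib.sha256).hexdigest()

def verify(key: bytes, data: bytes, tag: str) -> bool:
    """Constant-time comparison to avoid timing side channels."""
    expected = hmac.new(key, data, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

assert verify(key, record, tag)
assert not verify(key, record + b" tampered", tag)
```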

The second reason I believe there is an impossible-to-cross chasm between FDA medical device regulatory people and medical device security is much more fundamental.

Systems like mobile medical devices live in the adversarial environment of the Internet and mobile devices. Unlike traditional engineering domains like bridge-building, which are well understood and deal with winds that obey the rules of physics and always blow sideways, security threats do not play by set rules. Imagine a wind that blows up and down and then digs underground to attack bridge pylons from 50m below the earth, and you begin to understand what we mean when we say "adversarial environment".

General classes of attacks include elevation of privilege, remote code execution, denial of service and exploits of vulnerable application interfaces and network ports. For example:

  • An attacker steals personal data from the cloud by exploiting operating system bugs that allow elevation of privilege – zero-days that were unknown when your QA people did V&V.
  • An attacker exploits vulnerabilities in directory services that could allow denial of service. What? We're using directory services???
  • An attacker tempts users to download malicious mobile code that exploits mobile APIs in order to steal personal data. What? This is just an app to measure blood sugar – why would a patient click on an ad that injects malicious code into our app while taking a picture of a strip to measure blood sugar?

We talk about an attacker in an abstract sense, but we don’t know what resources she has, when she will attack or what goals she has. Technology changes rapidly; a system released 2 years ago may have bugs that she will exploit today. Our only recourse is to be paranoid and think like an attacker.

Thinking like attackers not like regulators

By being extremely paranoid and thinking like attackers, medical device security requires us to systematically consider threat models of attacks, attack vectors, assets at risk, their vulnerabilities and appropriate security countermeasures for prevention, detection and response.

While this approach is not a “silver bullet” it lends structure to our paranoia and helps us do our job even as the rules change.
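To show what "structure to our paranoia" can look like in practice, here is a minimal threat register sketched in Python, with risk scored as likelihood × impact. The scenarios echo the attack classes above, and every number is invented for illustration; a real model is built per device, with the team arguing over each estimate:

```python
from dataclasses import dataclass

@dataclass
class Threat:
    name: str          # attack scenario
    asset: str         # what is at risk
    likelihood: float  # estimated probability over the planning horizon, 0..1
    impact: int        # estimated damage in arbitrary cost units

    @property
    def risk(self) -> float:
        return self.likelihood * self.impact

# Illustrative entries only.
threats = [
    Threat("elevation of privilege via OS zero-day", "cloud PHI store", 0.2, 900),
    Threat("denial of service on directory services", "service availability", 0.4, 300),
    Threat("malicious mobile code via app APIs", "patient data on device", 0.5, 600),
]

# Mitigate the highest-risk scenarios first; re-rank whenever the rules change.
for t in sorted(threats, key=lambda t: t.risk, reverse=True):
    print(f"{t.name}: risk={t.risk:.0f}")
```

The point is not the arithmetic but the discipline: every threat is tied to an asset, an estimate, and a ranking that is revisited as the adversary changes.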

And this is where the chasm begins. The rules change. There is no QA process. There are no rules. We are not using IEEE software engineering standards from 40 years ago; they are useless for us.

Dorothy – we are not in Kansas anymore. And we are definitely not in Washington DC in a regulatory consulting firm of high-priced lawyers.

 

 


Risk does not walk alone

Israeli biomed companies often ask us about the roles of audit and risk management in their HIPAA security and compliance activities. At the eHealth conference in Israel last week, a lawyer gave a presentation on HIPAA compliance and stated:

If you have to do one thing, make sure everything is documented – your policies and procedures, corrective action you took. Everything.  That is your best line of defense.

Security is not an exercise in paperwork.

With all due respect to lawyers – no. Your best line of defense is implementing real security countermeasures in a prioritized way and ensuring that you are doing the right things all the time by integrating your HIPAA Security Rule and compliance activities with your internal audit and risk management teams.

Risk does not walk alone

Risk is not an independent variable that can be managed on its own, and it is not an exercise in paperwork. Risk is a function of external and internal attackers who exploit weaknesses (vulnerabilities) in people, systems and processes in order to get something of value (assets). The HIPAA Security Rule prescribes, in a well-structured way, how to implement the right security countermeasures to protect EPHI – the key asset of your patient customers.

The importance of audit for HIPAA

While audit is not specifically mentioned in the HIPAA Security Rule (security review and risk management are key pieces), audit is crucial for staying on track over time.

According to the Institute of Internal Auditors, internal auditing is an “independent, objective assurance and consulting activity designed to add value and improve an organization’s operations.” Internal audits provide assurance and consulting services to management in an independent and objective manner. But what does that mean? It means that internal auditors can go into your business operation and determine if your HIPAA security and compliance is a story on paper or a story being acted out in real life.

Audit – necessary but not sufficient

However, internal audit is not a line of defense and neither is a corporate risk management function a line of defense.

HIPAA Security and Privacy Rule compliance is about investigating plausible threats, valuable assets, vulnerabilities, and the security countermeasures that mitigate asset vulnerabilities and reduce risk – the result of threats exploiting vulnerabilities to damage assets.

When we frame security defenses in terms of mitigating attacks – we immediately see that neither audit nor corporate risk management fall into the category of countermeasures.

So why are audit and risk management important?

Audit is crucial to assuring that the security portfolio is actually implemented at all levels. Yes – all levels, from the CEO's office to the last of the cleaning team. Audit's strength is also its weakness: auditors generally do not understand the technical side of security, so audit must work hand in glove with the operational and engineering functions in an organization.

Risk management is key to prioritizing implementation of security countermeasures – because – let’s face it – business and engineering operations functions are not qualified to evaluate asset value.

In summary

Your HIPAA and Security Rule compliance is not just about paper-work.  It’s about getting it right  – day in and day out.

 


How do you know that your personal health data is secure in the cloud?

Modern system architecture for medical devices is a triangle of Medical device, Mobile app and Cloud services (storing, processing and visualizing health data collected from the device).  This creates the need for verifying a chain of trust: patient, medical device, mobile app software, distributed interfaces, cloud service software, cloud service provider.

There is no get-out-of-jail-free card just because your cloud provider is HIPAA compliant.

We specialize in medical device security, and as I've written here, here and here, there is no silver marketing bullet.

Medical device vendors must implement robust software security in their device, app and cloud service layers, and implement regulatory compliance in people and technical operations. If you are a medical device vendor, you cannot rely on regulatory compliance alone, nor can you rely on your cloud provider being HIPAA compliant. I've written here and here how medical devices can be pivot points for attacking other systems, including your customers' and end users' devices.

Regulatory compliance is not security

There are two notable regulatory standards relating to medical devices and cloud services – the HIPAA Security Rule and the FDA Guidance for Management of cybersecurity in medical devices. This is in addition to European Data Protection requirements and local data security requirements  that a particular country such as France, Germany or New Zealand may enforce for protecting health data in the cloud.

The American security and compliance model is unique (and typically American in its flavor): it is based on market forces, not government coercion.

Complying with FDA Guidance is a requirement for marketing your medical device in the US.

Complying with the HIPAA Security Rule is a requirement for customers and covered-entity business associates to buy your medical device. You can have an FDA 510(k) for your medical device and still be subject to criminal charges if your cloud services are breached. HHS has announced in the Breach Notification Rule and here that it will investigate all breaches of 500 records or more. In addition, the FDA may enforce a device recall.

But compliance is not the same as actually operating secure systems.

Verifying the chain of trust

Medical device vendors that use cloud services will generally sign upstream and downstream business associate agreements (BAA) but hold on:

There is an elephant in the room: how do you know that the cloud services are secure? If you have a data breach, you will have to activate your cyber-security insurance policy, not your cloud provider's sales team.

Transparency of cloud provider security operations varies widely, with some providers being fairly opaque and others fairly transparent (Rackspace Cloud is excellent in its level of openness before and after the sale) in sharing data and incidents with customers.

When a cloud service provider exposes details of its own internal policy and technology, its customers (and your medical device users) will tend to trust the provider's security claims. I would also require transparency from cloud service providers regarding security management, privacy and security incident response.

One interesting and potentially extremely valuable initiative is the Cloud Trust Protocol.

The Cloud Trust Protocol (CTP) enables cloud service customers to request and receive data regarding the security of the services they use in the cloud, promoting transparency and trust.

The source code implements a CTP server that acts as a gateway between cloud customers and cloud providers:

  • A cloud provider can push security measurements to the CTP server.
  • A cloud customer can query the CTP server with the CTP API to access these measurements.

The source code is available here on Github.
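As a sketch of what a CTP exchange might look like from the customer side: the JSON shape below is an assumption for illustration only (consult the CTP specification for the real resource model), but it shows the core idea of machine-readable security measurements you can compare across providers instead of trusting marketing claims:

```python
import json

# Hypothetical CTP-style payload; attribute names, values and units are invented.
sample_response = json.dumps({
    "measurements": [
        {"attribute": "uptime", "value": "99.95", "unit": "percent"},
        {"attribute": "incident-response-time", "value": "4", "unit": "hours"},
    ]
})

def summarize(payload: str) -> dict:
    """Index measurements by attribute name for side-by-side provider comparison."""
    data = json.loads(payload)
    return {m["attribute"]: f'{m["value"]} {m["unit"]}' for m in data["measurements"]}

print(summarize(sample_response))
# {'uptime': '99.95 percent', 'incident-response-time': '4 hours'}
```

In a real deployment the customer would fetch this payload from the CTP server over the CTP API rather than from a local string.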

 

 


Refreshing your HIPAA Security Rule compliance

Clients frequently ask us questions like this.

Danny,

I have a quick question about our HIPAA compliance that we achieved back in early 2013. Since then  we have released a couple of new software versions and we are wondering to what extent we need to perform another security and compliance assessment.  Please let us know what sort of information you might require to evaluate whether or not a new HIPAA security rule assessment is required.

What about the upcoming changes in HIPAA in 2016?

Any software changes that increase the threat surface to attacks (new ports, new interfaces, new modules that use PHI) would be reason to take a look at your Security Rule compliance.
Re HIPAA 2016 – OCR is still making plans, but it is almost certain they will be doing audits. I believe that, due to the sheer size of the program, they will start with the biggest hospitals. I do not think that small medical device vendors will be on their radar, although the big guys that had serious adverse events (insulin pumps, implanted cardiac devices) will probably get audited.
In general, if you are developing medical software that connects to the Internet or the mobile Internet, you should not wait 3 years between security assessments. Make secure software development methodology part of the way you develop software, and audit once a year or on any major release.
Danny

 


Privacy, Security, HIPAA and you.

Medical devices, mobile apps, Web applications – storing data in the cloud, sharing with hospitals and doctors. How do you comply with HIPAA? What applies to you – the Security Rule, the Privacy Rule or both?

Consider a common use case these days: you're a medical device vendor and your device stores health information in the cloud. You have a web and/or mobile application that enables doctors and hospitals to access the data from your device as part of their healthcare services. If you operate in the United States, which HIPAA regulations apply? Do you need to comply with the Privacy Rule, the Security Rule or both?

There is a good deal of confusion regarding the HIPAA Privacy and Security Rules and how things work. In this article, we will examine the original content of the HIPAA regulation and explain who needs to do what.

What is the Privacy Rule?

The HIPAA Final Rule (enacted in Jan 2013) has 2 pieces – the Privacy Rule and the Security Rule.

The Privacy Rule establishes standards for the protection of health information. The Security Rule establishes security standards for protecting health information that is held or transferred in electronic form. The Privacy Rule broadly defines ‘‘protected health information’’ as individually identifiable health information maintained or transmitted by a covered entity in any form or medium. The Privacy Rule is located at 45 CFR Part 160 and Subparts A and E of Part 164.

Who needs to comply with the Privacy Rule?

By law, the HIPAA Privacy Rule applies only to covered entities – health plans, health care clearinghouses, and certain health care providers. However, most health care providers and health plans do not carry out all of their health care activities and functions by themselves. Instead, they often use the services of a variety of other persons or businesses, and transfer or exchange health information in electronic form to use these services. These "persons or businesses" are called "business associates", defined in 45 CFR 164.502(e), 164.504(e), 164.532(d) and (e), and in 45 CFR §§ 160.102 and 164.500.

What is the Security Rule?

The Security Rule operationalizes the Privacy Rule by addressing the technical and non-technical safeguards that the “covered entities” and their business associates must implement in order to secure individuals’ “electronic protected health information” (EPHI). The Security Rule is located at 45 CFR Part 160 and Subparts A and C of Part 164.

Who needs to comply with the Security Rule?

Since it is an operational requirement, the Security Rule (by law) applies to covered entities, business associates and their subcontractors. While the Privacy Rule applies to protected health information in all forms, the Security Rule applies only to electronic health information systems that maintain or transmit individually identifiable health information. Safeguards for protected health information in oral, written, or other non-electronic forms are unaffected by the Security Rule.

Business associate liability

Section 13404 of the HITECH Act creates direct liability for impermissible uses and disclosures of protected health information by a business associate of a covered entity “that obtains or creates” protected health information “pursuant to a written contract or other arrangement described in § 164.502(e)(2)” and for compliance with the other privacy provisions in the HITECH Act.

Section 13404 does not create direct liability for business associates with regard to compliance with all requirements under the Privacy Rule (i.e., does not treat them as covered entities). Therefore, under the final rule, a business associate is directly liable under the Privacy Rule for uses and disclosures of protected health information that are not in accord with its business associate agreement or the Privacy Rule.

Permitted use of EPHI by a business associate

While a business associate does not have health care operations, it is permitted by § 164.504(e)(2)(i)(A) to use and disclose protected health information as necessary for its own management and administration if the business associate agreement permits such activities, or to carry out its legal responsibilities. Other than the exceptions for the business associate’s management and administration and for data aggregation services relating to the health care operations of the covered entity, the business associate may not use or disclose protected health information in a manner that would not be permissible if done by the covered entity (even if such a use or disclosure is permitted by the business associate agreement).

Taken from the Federal Register

General Definitions

See § 160.103 for HIPAA general definitions used by the law – definitions of business associates, protected health information and more.

Summary

  • The Privacy Rule establishes standards for the protection of health information.
  • The Security Rule establishes operational security standards for protecting health information that is held or transferred in electronic form.
  • The Security Rule applies only to electronic health information systems that maintain or transmit individually identifiable health information. Safeguards for protected health information in oral, written, or other non-electronic forms are unaffected by the Security Rule.
  • Business associates do not have direct liability for compliance with all requirements under the Privacy Rule (i.e., the rule does not treat them as covered entities). A business associate is directly liable under the Privacy Rule for uses and disclosures of protected health information that are not in accord with its business associate agreement or the Privacy Rule.

 


Back to basics with HIPAA – 3 things you must do.

Before you start spending money on regulatory consultants, get back to basics. Do you or do you not need to comply with the HIPAA Security Rule? If you do, what is the very first thing you should do? In this post we get back to basics with 3 practical ways of complying and reducing your regulatory risk.

I specialize in cyber security and privacy compliance consulting for medical device companies in Israel. While this may sound like a niche, it is actually a very nice and not-so-small niche, with over 700 biomed vendors and a market growing 7-8% a year.

Israeli biomed startups are incredibly innovative and it’s fun working with smart people.  Here are 3 ways to improve your HIPAA risk analysis in just a few minutes:

Check # 1 – Maybe you are not a HIPAA Business associate

If you are a medical device vendor and you connect to a hospital network or share data with doctors, you are automatically a BA, a Business Associate according to the HIPAA Business Associate definition. If you are a BA, you need to comply with the HIPAA Security Rule. But maybe you are not a BA.

By law, the HIPAA Privacy Rule applies only to covered entities – health plans, health care clearinghouses, and certain health care providers. A “business associate” is a person or entity that performs certain functions or activities that involve the use or disclosure of protected health information on behalf of, or provides services to, a covered entity.  If an entity  does not meet the definition of a covered entity or business associate, it does not have to comply with the HIPAA Rules.  See definitions of “business associate” and “covered entity” at 45 CFR 160.103.

So if you developed a consumer mobile app that monitors stress, with a cool dashboard to visualize data stored in the cloud, enabling your users to reduce their stress and compare themselves with other people like them: so long as the software doesn't share data with a covered entity (like their doctor), you are not a BA. Check.

Check # 2 – Maybe you don't even need to comply with HIPAA

HIPAA applies to storage of EPHI (electronic protected health information). Simply put, EPHI is the combination of PII (personally identifiable information, such as a name, email or social security number) and healthcare/clinical data.

So, getting back to our hypothetical mobile medical app for personal stress tracking, let's suppose the stress data includes heart rate and respiratory rate, in addition to how many times a day the consumer called their girlfriend to make sure she still loves them (when they are freaking out with stress before a big exam). If your mobile medical app for stress management doesn't store personal information with the clinical data, then you don't have to comply with HIPAA, because you don't create, store or transmit EPHI in the cloud. Check.
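The "PII + clinical data" rule of thumb can be written down as a predicate. The field names below are purely illustrative; in practice the eighteen HIPAA identifiers at 45 CFR 164.514(b)(2) define what counts as identifying information:

```python
# Illustrative field sets only; real classification follows the
# eighteen identifiers enumerated in 45 CFR 164.514(b)(2).
IDENTIFIERS = {"name", "email", "ssn", "phone", "address", "device_serial"}
CLINICAL = {"heart_rate", "respiratory_rate", "stress_score", "diagnosis"}

def is_ephi(record: dict) -> bool:
    """A record is (roughly) EPHI when identifying and clinical fields
    travel together; either category alone is not EPHI."""
    fields = set(record)
    return bool(fields & IDENTIFIERS) and bool(fields & CLINICAL)

assert not is_ephi({"heart_rate": 72, "stress_score": 3})   # anonymous readings
assert not is_ephi({"email": "a@b.com"})                    # identity, no clinical data
assert is_ephi({"email": "a@b.com", "heart_rate": 72})      # identifiable health data
```

The useful design consequence: if your app keys clinical readings by an opaque token and keeps identity data out of the cloud store entirely, the stored records stay on the left side of that predicate.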

Check # 3 – Using contractors for software development? Vet them and oversee them

There is commonly-used expression in Hebrew – “launch and forget” (שגר ושכח).  I believe that this is a Hebrew translation of the American English term “Fire and forget” that refers to a type of missile guidance which does not require further guidance after launch.

When it comes to contractors you do not want to “launch and forget”.

Maybe you are a BA and you do have to comply with HIPAA. The HIPAA Security Rule does not have a standard safeguard for vetting contractors, but can you afford to gloss over this area?

This is a big one, boys and girls. If you use contractors for code development, make sure you thoroughly vet them before engaging with them on Upwork. I am not talking about quality of work – it is a given that you need someone highly competent in whatever problem domain you are trying to crack (your girlfriend's brother-in-law may not be your best fit). I am talking about the threat of a contractor stealing your code, dropping Android malware into your app, or enabling tap-jacking to steal personal data.

Even if they don’t steal your code, consider the threat of your contractor working for a competitor, leaking your IP or being hired by a business partner who click-jacks your entire business model.

It's tempting to work with software developers in Ukraine or China, but be careful. Consider the case of the Russian programmers who wrote code for U.S. military communications systems, or the DOJ accusing the firm that vetted Snowden of faking 665,000 background checks.

In the Federal space, it is sad but true that a huge Federal procurement machine enables these kinds of attacks because of organizational complexity (a nice way of saying that accountability falls between the cracks) and greed (a not-so-nice way of saying that big contracts are a challenge to morality and governance).

US Federal agency purchasing managers who sign purchase orders without appropriate contractor vetting and oversight are not a justification for a privately-held Israeli medical device company to sin…

The calls for more legislation are a knee-jerk reaction to horses that left the barn years ago, but when common sense and good business practices yield to greed, perhaps we do need a tail wind. Vetting software development contractors should be a standard requirement in core security guidance such as HIPAA, PCI and FDA cyber security; in the meantime, don't wait for the US Federal Government to tell you to vet your contractors and oversee their work. Put an accountability clause into your contracts.


The importance of risk analysis for HIPAA compliance

A chain of risk analysis

The HIPAA Final Rule creates a chain of risk analysis and compliance from the hospital, downstream to the business associates who handle / process PHI for the hospital and sub-contractors who handle / process PHI for the business associate.

And so on.

The first thing an organization needs to do is a risk analysis. How important is a risk analysis? Ask Cancer Care Group, which was just fined $750,000 for non-compliance with the Security Rule.

$750,000 HIPAA settlement emphasizes the importance of risk analysis and device and media control policies

Cancer Care Group, P.C. agreed to settle potential violations of the Health Insurance Portability and Accountability Act of 1996 (HIPAA) Privacy and Security Rules with the U.S. Department of Health and Human Services (HHS), Office for Civil Rights (OCR). Cancer Care paid $750,000 and will adopt a robust corrective action plan to correct deficiencies in its HIPAA compliance program. Cancer Care Group is a radiation oncology private physician practice, with 13 radiation oncologists serving hospitals and clinics throughout Indiana.

On August 29, 2012, OCR received notification from Cancer Care regarding a breach of unsecured electronic protected health information (ePHI) after a laptop bag was stolen from an employee’s car. The bag contained the employee’s computer and unencrypted backup media, which contained the names, addresses, dates of birth, Social Security numbers, insurance information and clinical information of approximately 55,000 current and former Cancer Care patients.

OCR's subsequent investigation found that, prior to the breach, Cancer Care was in widespread non-compliance with the HIPAA Security Rule. It had not conducted an enterprise-wide risk analysis when the breach occurred in July 2012. Further, Cancer Care did not have in place a written policy specific to the removal of hardware and electronic media containing ePHI into and out of its facilities, even though this was common practice within the organization. For more information, see the HHS press release from Sep 2, 2015: $750,000 HIPAA settlement emphasizes the importance of risk analysis and device and media control policies.

Risk analysis is the first step in meeting Security Rule requirements

I have written here many times about the importance of risk analysis as a process of understanding the value of your assets, the impact of your threats and the depth of your vulnerabilities, in order to implement the best security countermeasures.

The HIPAA Security Rule begins with the risk analysis requirement in § 164.308(a)(1)(ii)(A). Conducting a risk analysis is the first step in identifying and implementing safeguards that comply with and carry out the standards and implementation specifications in the Security Rule. A risk analysis is therefore fundamental to the entire process of HIPAA Security Rule compliance, and must be understood in detail by the organization in order to select the safeguards and technologies that will best protect electronic health information. See Guidance on Risk Analysis Requirements under the HIPAA Security Rule. Neither the HHS guidance nor NIST specifies a methodology – we have been using the Practical Threat Analysis methodology for HIPAA security risk analysis with Israeli medical device companies since 2009, and it works smoothly and effectively, helping vendors comply and implement robust security at the lowest possible cost.

§ 164.308(a)(1)(ii)(A) Risk Analysis (R1): As part of the risk management process, the company performs information security risk analysis for its services (see company procedure XYZ), analyzing software application security, data security and human-related vulnerabilities. Risk analysis is performed according to the Practical Threat Analysis methodology.
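The asset/threat/vulnerability model behind this kind of risk analysis can be sketched in a few lines. This is a minimal illustration in the spirit of Practical Threat Analysis, not the methodology itself – the asset values, probabilities and damage fractions below are invented for the example:

```python
# Minimal risk-analysis sketch: rank threats by expected annual loss.
# All figures below are invented for illustration only.

ASSETS = {
    "patient_database": 500_000,   # estimated asset value in USD
    "backup_media": 150_000,
}

# Each threat: (asset it damages, annual probability, fraction of asset value lost)
THREATS = {
    "stolen_unencrypted_laptop": ("backup_media", 0.30, 1.00),
    "sql_injection": ("patient_database", 0.10, 0.50),
}

def annual_loss_expectancy(threats, assets):
    """Rank threats by expected annual loss (probability x damage x asset value)."""
    ale = {}
    for name, (asset, probability, damage_fraction) in threats.items():
        ale[name] = probability * damage_fraction * assets[asset]
    return sorted(ale.items(), key=lambda kv: kv[1], reverse=True)

for threat, loss in annual_loss_expectancy(THREATS, ASSETS):
    print(f"{threat}: expected annual loss ${loss:,.0f}")
```

Ranking threats this way tells you where a countermeasure (say, encrypting backup media) buys the most risk reduction per dollar.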

1 “A” refers to addressable safeguards, and “R” refers to required safeguards.

Do it. Run security like you run your business.

Tell your friends and colleagues about us. Thanks!

Dealing with DLP and privacy

It’s a long hot summer here in the Middle East and, with 2/3 of the office out on vacation, you have some time to reflect on data security. Or on the humidity. Or on a cold beer.

Maybe you are working on building a business case for DLP technology like Websense, Symantec, Verdasys, McAfee or Fidelis in your organization. Or maybe you already purchased DLP technology and you’re embroiled in turf wars that have brought your DLP implementation to a standstill, because one of your colleagues claims that DLP raises employee privacy issues, and you’re trying to figure out how to get the project back on track once people return from their work-and-play vacations in Estonia, where they have been brushing up on their hacking skills.

Unlike firewall/IPS, DLP is content-centric. It is technology that drives straight to the core of business asset protection and business process.  This frequently generates opposition from people who own business assets and manage business process. They may have legitimate concerns regarding the cost-effectiveness of DLP as a data security countermeasure.

But – people who oppose DLP on grounds of potential employee privacy violations might be selling Sturm und Drang to further a political agenda. If you’re not sure about this, ask them what they’ve done recently to prevent cyber-stalking and sexual harassment in the workplace.

For sure, there are countries such as France and Germany where any network or endpoint monitoring that touches employees is verboten or interdit as the case may be; but if you are in Israel, the US or the UK, you will want to read on.

What is DLP and what are the privacy concerns?

DLP (data loss prevention) is a solution for monitoring and preventing sensitive outbound content, not activity at an endpoint. That is the primary mission. DLP is often a misnomer – more often than not it is really DLD, data loss detection. Network DLP solutions intercept content at network egress points, and endpoint DLP agents intercept content by hooking into Windows operating system events. Most DLP vendors offer an integrated network and endpoint DLP solution in order to control removable devices in addition to content leaving the network. A central command console analyzes the intercepted content, generates security events, visualizes them and stores forensics as part of generating actionable intelligence. Data that is not part of the DLP forensics package is discarded.
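Under the hood, the detection step is just content signatures applied to intercepted content. Here is a minimal sketch of that idea – the signature names and patterns are invented for illustration, and the credit-card regex is deliberately simplified compared to what a real DLP engine ships:

```python
import re

# Hypothetical content signatures applied to intercepted outbound content.
# Real DLP products use far richer detection (validation, fingerprinting, ML).
SIGNATURES = {
    "credit_card": re.compile(r"\b(?:\d{4}[- ]?){3}\d{4}\b"),
    "top_secret_project": re.compile(r"\bProject Eagle\b", re.IGNORECASE),
}

def scan_content(content):
    """Return the names of all signatures that fire on a piece of intercepted content."""
    return [name for name, pattern in SIGNATURES.items() if pattern.search(content)]

print(scan_content("Invoice for project eagle, card 4111-1111-1111-1111"))
# → ['credit_card', 'top_secret_project']
```

Note that the rule fires on what the content *is*, with no reference to which user or endpoint produced it.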

In other words, DLP is not about reading your employees’ email on their PCs. It’s about keeping the good stuff inside the company. If you want to mount surveillance on your users, you have plenty of other (far cheaper) options like browser history capture tools or keyloggers. Your mileage will vary and this blog does not provide legal guidance, but technically it’s not a problem.

DLP rules and policies are content-centric not user-centric.

A DLP implementation will involve writing custom content signatures (for example to detect top-secret projects by keyword, IP or source code) or selecting canned content signatures from a library (for example credit cards). 

The signatures are then combined into a policy which maps to the company’s data governance policy – for example “Protect top-secret documents from leaking to the competition”. 

One often combines server endpoints and Web services to make a more specific policy like “Alert if top-secret documents from Sharepoint servers are not sent via encrypted channels to authorized server destinations“. 
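A policy like that can be thought of as a predicate over a security event: signature hits plus source and channel conditions, with no reference to any particular user endpoint. A hedged sketch, with all field names and hostnames invented for illustration:

```python
# Sketch of a content-centric DLP policy: alert if top-secret content from
# SharePoint leaves unencrypted, or goes to an unauthorized destination.
# Field names and hostnames are illustrative, not from any specific product.

def evaluate_policy(event):
    """Return True if the intercepted event should raise a DLP alert."""
    AUTHORIZED_DESTINATIONS = {"backup.example.com", "dr-site.example.com"}
    return (
        "top_secret" in event["signatures"]
        and event["source"] == "sharepoint"
        and (not event["encrypted"]
             or event["destination"] not in AUTHORIZED_DESTINATIONS)
    )

event = {"signatures": ["top_secret"], "source": "sharepoint",
         "encrypted": False, "destination": "dropbox.example.com"}
print(evaluate_policy(event))  # → True
```

Notice the conditions are about content, source systems and channels – not about which employee’s machine generated the traffic.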

In 13 DLP installations in 3 countries, I never saw a policy that targeted a specific user endpoint. The reason is that it is far easier to pick up endpoint violations with DLP content detection than to whitelist and blacklist endpoints, which in a large organization with lots of wireless and mobile devices is an exercise in futility.

We often hear privacy concerns from people who come from the traditional firewall/IPS world, but the firewall/IPS paradigm breaks down when you have a lot of rules and endpoint IP addresses, which is why none of the firewall vendors, Check Point included, ever succeeded in selling the internal firewall concept.

Since DLP is part of the company data governance enforcement, it is commonly used as a tool to reinforce policy such as not posting company assets to Facebook. 

It is important to emphasize again, that DLP is an alert generation and management technology not a general purpose network traffic recording tool – which you can do for free using a Netoptics tap and  Wireshark.

Any content interception technology can be abused in the wrong hands, or in the right hands with the wrong mission. Witness the NSA.

Making your data governance policy work for your employees

Many companies (Israeli companies in particular) don’t have a data governance policy, but if they do, it should cover the entire space of protecting employees in the workplace from cyber-threats.

An example of using DLP to protect employees: in threat scenarios of cyber-stalking, sexual harassment or drug trafficking in the workplace, DLP can be used to quickly (as in real time) create very specific content rules, then refine them to include specific endpoints and catch forensics and offenders in real time. Just like in CSI: New York.

In summary:

There are 3 key use cases for DLP in the context of privacy:

  1. Privacy compliance (for example PCI, HIPAA, US state and EU privacy laws) can be a trigger for installing DLP. This requires appropriate content rules keyed to identifying PHI or PII.
  2. Enforcement of your corporate data governance and compliance policies where privacy is an ancillary concern. This requires appropriate content rules for IP, suppliers and sensitive projects. So long as you do not target endpoints in your DLP rules, you will be generating security events and collecting forensics that do not infringe on employee privacy. In some countries like France and Germany this may still be an issue. Ask your lawyer.
  3. Employee workplace protection – DLP can be an outstanding tool for mitigating and investigating cyber threats in the workplace, and at the very least a great tool for security awareness and education. Ask your lawyer.

If you liked this, or better yet hated it, contact me. I am a professional security analyst specializing in HIPAA compliance and medical device security, based in Israel and always looking for interesting and challenging projects.

Idea for the post prompted by Ariel Evans.


What is PHI?

Software Associates specializes in HIPAA security and compliance for Israeli medical device companies, and 2 questions always come up: “What is PHI?” and “What is electronic protected health information?”

Of course, you will have already Googled this problem and come to one conclusion or another by surfing sites like Hipaa Compliance Made Easy or the Wikipedia entry on HIPAA.

But you may ask – “Can I entrust my security and compliance implementation to Dr. Google?” And – the answer is no.


Most of the content on the Net on this topic is unclear and outdated, predating the implementation of the HIPAA Final Rule in October 2013, and many articles confuse privacy and security. Much of the content is blatantly self-serving marketing collateral for security products, like this plug for a firewall product and this pitch by Checkpoint to register on their web site.

Then, there is a distinct American flavor to the Final Rule which makes it even more confusing for non-American readers who have to try and grasp why payment in cash is related to privacy. (Hint – in Europe, privacy is a fundamental human right unrelated to money).

When individuals pay by cash they can instruct their provider not to share information about their treatment with their health plan. The final omnibus rule sets new limits on how information is used and disclosed for marketing and fundraising purposes and prohibits the sale of an individual’s health information without their permission.

But – although Congress low-balled the cost to the American healthcare industry for compliance in order to get the bill approved – for all of the law’s American peculiarities, the HIPAA Final Rule is well thought out and a good example of how to use free market forces to enforce security and compliance.   That however – will be a topic for another post.

For now we want to find a precise answer to the questions “What is PHI?” and “What is EPHI?”

Careful reading of the law itself clearly shows 2 things:

A. PHI (Protected health information) is health / clinical data mixed with PII (personally identifiable information, which is basically having enough information to steal someone’s identity in the US) stored and transmitted verbally or on paper.

B. EPHI (Electronic Protected health information) is PHI transmitted and/or stored electronically.

Simple indeed.

See HIPAA Administrative Simplification, Regulation Text, 45 CFR Parts 160, 162, and 164 (unofficial version, as amended through March 26, 2013).

Notes – definitions of PHI

Electronic protected health information means information that comes within paragraphs (1)(i) or (1)(ii) of the definition of protected health information as specified in this section.

(1) Protected health information means individually identifiable health information that is:
(i) Transmitted by electronic media;
(ii) Maintained in electronic media; or
(iii) Transmitted or maintained in any other form or medium.

(2) Protected health information excludes individually identifiable health information:
(i) In education records covered by the Family Educational Rights and Privacy Act, as amended, 20 U.S.C. 1232g;
(ii) In records described at 20 U.S.C. 1232g(a)(4)(B)(iv);
(iii) In employment records held by a covered entity in its role as employer; and
(iv) Regarding a person who has been deceased for more than 50 years.

Individually identifiable health information is information that is a subset of health information, including demographic information collected from an individual, and:
(1) Is created or received by a health care provider, health plan, employer, or health care clearinghouse; and
(2) Relates to the past, present, or future physical or mental health or condition of an individual; the provision of health care to an individual; or the past, present, or future payment for the provision of health care to an individual; and
(i) That identifies the individual; or
(ii) With respect to which there is a reasonable basis to believe the information can be used to identify the individual.
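The regulatory definition reduces to a simple decision rule: individually identifiable plus health-related makes PHI, and PHI stored or transmitted electronically makes EPHI. A toy sketch of that rule – the field names are illustrative, not drawn from the regulation:

```python
# Toy classifier for the PHI / EPHI definitions above.
# Field names are illustrative; a real analysis would use the full set of
# HIPAA identifiers, not this abbreviated list.

IDENTIFIERS = {"name", "address", "date_of_birth", "ssn"}
HEALTH_FIELDS = {"diagnosis", "treatment", "payment_for_care"}

def classify(record_fields, electronic):
    """Return 'EPHI', 'PHI' or 'not PHI' for a record described by its field names."""
    identifiable = bool(IDENTIFIERS & record_fields)
    health_related = bool(HEALTH_FIELDS & record_fields)
    if identifiable and health_related:
        return "EPHI" if electronic else "PHI"
    return "not PHI"

print(classify({"name", "diagnosis"}, electronic=True))   # → EPHI
print(classify({"diagnosis"}, electronic=True))           # → not PHI (no identifiers)
```

The second case illustrates why properly de-identified clinical data falls outside the definition: health information without identifiers is not PHI.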
