
The chasm between FDA regulatory and cyber security

 

When a risk analysis is not a risk analysis

Superficially at least, there is not a lot of difference between a threat analysis that is part of a software/hardware security assessment and a risk analysis (or hazard analysis) performed by a medical device company as part of its submission to the FDA. In the past 2 years, the FDA has added cybersecurity guidance for medical devices, and the format and language of that guidance is fairly similar.

The problem is that hazard analysis talks about patient safety and treats the device itself as the potential source of harm to the patient, whereas software security talks about confidentiality of patient data, integrity of the code, its data and credentials, and availability of the system, and treats the device as a black box under external attack.

So in fact – medical device risk analysis and cyber security assessments are 2 totally different animals.

Some of my best friends work in medical device regulatory affairs. I admire what they do and I like (almost) all of them on a personal basis.

But – over the past year, I have been developing a feeling of deep angst that there is an impossible-to-cross chasm of misunderstanding and philosophy between medical device regulatory people and medical device security analysts like me.

But I can no longer deny that even at their best – regulatory affairs folks will never get security.

Why do I say this?   First of all – the empirical data tells me so. I interface with several dozen regulatory professionals at our medical device security clients in Israel, and even the best of them only grasp at understanding security. The worst hang on to their document version controls for dear life and claim earnestly, even as they are being hacked, that they cannot modify their device description because their QA process needs several weeks to approve a 3-line bug fix. (This is a true story.)

We intend to implement encryption of EPHI in our database running on a cloud server since encryption uses check sums and check sums ensure integrity of the data and help protect patient safety.
True quote from a device description of a mobile medical device that stores data in the cloud…the names have been concealed to protect the innocent.  I did not make this up.
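The quote above confuses encryption and checksums with integrity protection. A minimal sketch (Python standard library only, with made-up record contents) of why a checksum does not protect data integrity against an attacker, while a keyed HMAC does:

```python
import hashlib
import hmac
import zlib

record = b'{"patient":"A123","glucose_mg_dl":95}'
tampered = b'{"patient":"A123","glucose_mg_dl":20}'

# A checksum detects accidental corruption only: an attacker who can
# modify the record can simply recompute the checksum to match.
assert zlib.crc32(tampered) != zlib.crc32(record)  # detects the change...
forged_checksum = zlib.crc32(tampered)             # ...but is trivially recomputed

# An HMAC requires a secret key, so the attacker cannot produce a
# valid tag for the tampered record.
key = b'server-side-secret'
tag = hmac.new(key, record, hashlib.sha256).digest()
ok = hmac.compare_digest(tag, hmac.new(key, record, hashlib.sha256).digest())
bad = hmac.compare_digest(tag, hmac.new(key, tampered, hashlib.sha256).digest())
print(ok, bad)  # True False
```

The point: encryption and checksums address different threats than integrity, and neither one, by itself, "protects patient safety".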

The chasm between FDA regulatory and cyber security

The second reason I believe there is an impossible-to-cross chasm between FDA medical device regulatory people and medical device security is much more fundamental.

Systems like mobile medical devices live in the adversarial environment of the Internet and mobile devices. Unlike traditional engineering domains like bridge-building, which are well understood and deal with winds that obey the rules of physics and always blow sideways, security threats do not play by set rules. Imagine a wind that blows up and down and then digs underground to attack bridge pylons from 50m below earth, and you begin to understand what we mean when we say “adversarial environment”.

General classes of attacks include elevation of privilege, remote code execution, denial of service and exploits of vulnerable application interfaces and network ports. For example:

  • An attacker steals personal data from the cloud by exploiting operating system bugs that allow elevation of privilege – zero-days that were unknown when your QA people did V&V.
  • An attacker exploits vulnerabilities in directory services to mount a denial of service. What? We’re using directory services???
  • An attacker tempts users into downloading malicious mobile code that exploits mobile APIs in order to steal personal data. What? This is just an app to measure blood sugar – why would a patient click on an ad that injects malicious code into ours while he is taking a picture of a strip to measure blood sugar??

We talk about an attacker in an abstract sense, but we don’t know what resources she has, when she will attack or what goals she has. Technology changes rapidly; a system released 2 years ago may have bugs that she will exploit today. Our only recourse is to be paranoid and think like an attacker.

Thinking like attackers, not like regulators

By being extremely paranoid and thinking like attackers, medical device security requires us to systematically consider threat models of attacks, attack vectors, assets at risk, their vulnerabilities and appropriate security countermeasures for prevention, detection and response.

While this approach is not a “silver bullet” it lends structure to our paranoia and helps us do our job even as the rules change.
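To make the structure concrete, here is an illustrative sketch of a threat model as a prioritized table. The threats, scores and countermeasures are invented for the example, not taken from a real assessment:

```python
from dataclasses import dataclass

@dataclass
class Threat:
    attack_vector: str    # how the attacker gets in
    asset: str            # what is at risk
    vulnerability: str    # weakness being exploited
    countermeasure: str   # prevention / detection / response
    likelihood: int       # 1 (rare) .. 5 (expected)
    impact: int           # 1 (negligible) .. 5 (catastrophic)

    @property
    def risk(self) -> int:
        return self.likelihood * self.impact

model = [
    Threat("zero-day privilege escalation on cloud host", "EPHI database",
           "unpatched OS", "patch management + host IDS", 2, 5),
    Threat("malicious mobile code via in-app ad", "patient credentials",
           "unvalidated WebView content", "no third-party ads, code signing", 3, 4),
    Threat("denial of service on directory service", "system availability",
           "exposed LDAP port", "firewall rules + rate limiting", 3, 3),
]

# Work the highest-risk items first, and revisit as the rules change.
for t in sorted(model, key=lambda t: t.risk, reverse=True):
    print(f"risk={t.risk:2d}  {t.attack_vector} -> {t.countermeasure}")
```

The scoring is deliberately coarse; the value is in forcing yourself to enumerate vectors, assets and countermeasures systematically, not in the arithmetic.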

And this is where the chasm begins.   The rules change.   There is no QA process. There are no rules.   We are not using IEEE software engineering standards from 40 years ago; they are useless for us.

Dorothy – we are not in Kansas anymore. And we are definitely not in Washington DC in a regulatory consulting firm of high-priced lawyers.

 

 

Tell your friends and colleagues about us. Thanks!
Share this

PCI DSS is a standard for the card associations not for your business

 

I recently saw a post on the corporate blog of a company called Cloud Compliance, entitled “Compliance is the New Security Standard”.

Cloud Compliance provides a SaaS-based identity and Access Assessment (IdAA) solution that helps identify and remediate access control and entitlement policy violations. We combine the economies of cloud computing with fundamental performance management principles to provide easy, low cost analysis of access rights to prevent audit findings (sic) and ensure compliance with regulations such as SOX, GLBA, PCI DSS, HIPAA and NERC.

The basic thesis of the blog post was that since companies have to spend money on compliance anyhow, they might as well spend the money once and rename the effort “security”.   This is an interesting notion – although perhaps “placebo security” might be a cheaper approach.

Compliance is not equivalent to security for several fundamental reasons. Let’s examine this curious notion, using PCI DSS 1.2 as a generic example of a regulatory compliance standard used to protect payment card numbers:

A. Filling out a form or having an auditor check off a list is not logically equivalent to installing and validating security countermeasures. A threat modeling exercise is stronger than filling out a form or auditing controls – it’s significant that threat modeling is not even mentioned by PCI DSS, despite the ROI in think time.

B. Although PCI DSS 1.2 is better than previous versions – it still lags the curve of typical data security threats – which means that even if a business implements all the controls – they are probably still vulnerable.

C. PCI DSS was designed by the card associations – there is no way that any blanket standard will fit the needs of a particular business – any more than a size 38 regular suit will fit a 5′ 7″ man who weighs 120 kg and wrestles professionally.

D. PCI DSS talks about controls with absolutely no context of value at risk. A retailer selling diamond rings online may self-comply as a Level 4 merchant but in fact have more value at risk than the payment processor service provider he uses. (See my previous post on small merchants at risk from fraudulent transactions.)

E. PCI DSS strives to ensure continued compliance to their (albeit flawed) standard with quarterly (for Level 1) and yearly (for everyone else) audits.   The only problem is that a lot can happen in 3 months (and certainly in a year).   The automated scanning that many Level 2-4 merchants do is essentially worthless, but more importantly – threat scenarios shift quickly these days, especially when you take into account employees and contractors, who as people are, by definition, unpredictable.

F. Finally – PCI DSS is a standard for whom? It’s a standard to help the card associations protect their supply chain.   It is not a policy used by the management of a company in order to improve customer service and grow sales volume.

G. PCI DSS 1.2 mandates security controls for untrusted networks and external attacks.   The phrases “trusted insider” and “business partner” are not mentioned once in the standard. This is absurd, since a significant percentage of the customer data breaches in the past few years involved trusted insiders and business partners. A card processor can be 100 percent compliant, but with a Mafia sleeper working in IT it could be regularly leaking credit card numbers. This is not a theoretical threat.

To summarize:

  • PCI DSS is a standard for the card associations not for your business nor for your customers.
  • As a security standard it is better than none at all, but leaves much to be desired because it is not oriented towards the business and consumer protection.

Why the Clinton data leaks matter

In the middle of a US Presidential election that will certainly become more contrast-focused (as politically correct Americans like to call mud-slinging), the Clinton data leaks are interesting and also worth investigating for their longer-term impact on the US economy.

Shaky ethics versus data protection

A friend who is a political science professor told me that Hillary was no different from other US politicians who walk the wrong side of the line of data protection.

But the Hillary Clinton private mail server, her flagrant disregard for protecting sensitive government communications and her dubious personal ethics regarding US State Department data security policies are much, much more than a peculiarly American political issue that is news today and gone tomorrow.

Back in October 2015, the EU High Court struck down the Safe Harbor agreement – a trans-Atlantic pact used by thousands of companies to transfer Europeans’ personal information to the U.S. – throwing into jeopardy data traffic that underpins the world’s largest trading relationship.

The Safe Harbor executive decision allowed companies to self-certify that they provide “adequate protection” for the data of European users, in compliance with the European data protection directive and with fundamental European rights such as the right to privacy (under Article 8 of the European Convention for the Protection of Human Rights).

The Americans are just slow or maybe they don’t care about privacy

The Commission issued 13 recommendations for improving Safe Harbor in November 2013 (that is, 2 years before the ECJ ruling), but negotiations to rework the framework are still ongoing.

The ECJ’s judgement is the culmination of a 2013 legal challenge by European privacy campaigner Max Schrems who filed complaints against several U.S. Internet giants — including Facebook — in the Irish courts for alleged collaboration with the NSA’s Prism program. The Irish courts dismissed the complaint.

Why it matters to the rest of the world

A widely quoted number (4,700) of US companies rely on Safe Harbor to operate businesses in the region. The ruling also affects companies that outsource data processing of E.U. users’ data to the U.S.

However – many more than 4,700 US companies are affected by the Safe Harbor dismissal.    Any company with a US corporate presence will also be impacted.    We saw this recently with an Israeli biotech company with offices in Boston that was asked by a Danish hospital to provide alternate assurances for data protection.   This is a curious case where it is actually better to be Israeli rather than American.

The EU has recognized that the State of Israel provides an adequate level of protection for personal data as referred to in Directive 95/46/EC with regard to automated international transfers of personal data from the European Union to the State of Israel or, where those transfers are not automated, they are subject to further automated processing in the State of Israel.  See this EU ruling on Israeli data protection

You can see the full list of countries (not the US) that provide adequate data protection here.

Long term impact to US economy?

With Snowden, Prism, the contrast-focused US Presidential election, the Hillary Clinton data leaks and the attempts by the FBI to establish a dangerous anti-privacy precedent under the guise that they cannot hack an Apple iPhone – I would not expect resolution of Safe Harbor anytime soon.

The long term impact will be innovative technology / cloud / SaaS companies like our Biotech customer with Boston offices, taking their business out of the US to safer harbor places like Tel Aviv.

Which has better weather than Boston anyhow.


Why audit and risk management do not mitigate risk – part II

In my previous post Risk does not walk alone – I noted both the importance and often ignored lack of relevance of internal audit and corporate risk management to the business of cyber security.

Audit and risk management are central to the financial services industry

Just because audit and risk management are central to the financial services industry does not make them cyber security countermeasures. Imagine not having a firewall but having an extensive internal audit and risk management activity – the organization and all of its paper, policies and procedures would be pillaged in minutes by attackers.

Risk management and audit are “meta activities”

In the financial industry you have risk controls, which are the elements audited by internal audit and managed by risk management teams. The risk controls are the defenses – not the bureaucracy created by highly regulated industries. So a risk control can accept risk (deciding not to deploy endpoint security and accepting the risk of data loss from employee workstations), mitigate it (installing endpoint DLP agents) or prevent it (removing USB ports and denying Internet access). This is analogous to a bank accepting risk (giving small loans to young families), mitigating it (requiring young families to supply 80% collateral) or preventing it (deciding not to give loans to young families).
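The accept/mitigate/prevent choices can be sketched as a tiny risk register. This is illustrative only, mirroring the endpoint-security example; the risk names are invented:

```python
from enum import Enum

class Treatment(Enum):
    ACCEPT = "accept the risk"
    MITIGATE = "reduce likelihood or impact"
    PREVENT = "remove the attack surface"

# Risk register entries mirroring the endpoint-security example.
register = {
    "data loss from employee workstations": Treatment.ACCEPT,     # no endpoint security deployed
    "data exfiltration from endpoints": Treatment.MITIGATE,       # install DLP agents
    "data copied to USB / uploaded to web": Treatment.PREVENT,    # remove USB ports, deny Internet
}

# Audit then verifies that the recorded treatment matches what is
# actually deployed -- the "meta activity" described in the text.
for risk, treatment in register.items():
    print(f"{risk}: {treatment.value}")
```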

The important part is to understand that risk management and audit are “meta activities” and not defenses in their own right.

Why risk management often fails in cyber security operations

We note that attempts to apply quantitative risk management to cyber generally do not work because the risk management professionals do not understand cyber threats and equate people and process with mitigation.

Conversely – cyber-security/IT professionals do not have the tools to estimate asset value.  Without taking into account asset value, it is impossible to prioritize controls as every car owner knows: you don’t insure a 10 year old Fiat 500 like you insure a late model Lexus RC F.

Unfortunately for the lawyers and regulatory technocrats – while they are performing cross-functional exercises in business alignment of people and processes – the bad guys are stealing 50 million credit cards from their database servers, having hacked their way in through the air conditioning system.

Why cyber, regulatory and governance need to be integrated

Risk management prioritizes application of controls/cyber countermeasures according to control cost, asset value and mitigation effectiveness and internal audit ensures compliance with the company’s cyber, regulatory  and corporate governance policies.

Because these 3 areas (cyber, regulatory and governance) are increasingly entangled (you can’t comply with HIPAA without dealing with all 3), it becomes supremely important to integrate them, because A) it’s expensive not to and B) not integrating creates considerable exposure by leaving “cracks” in compliance.    Witness Target.

At a major Scandinavian telco a few years ago, we counted over 25 separate functions for security, compliance and governance – and it was clear that this number needed to converge to two: a combined risk-and-cyber function and an independent audit unit. Whether or not they succeeded is another story.


Risk does not walk alone

Israeli biomed companies often ask us about the roles of audit and risk management in their HIPAA security and compliance activities.  At the eHealth conference in Israel last week – a lawyer gave a presentation on HIPAA compliance and stated:

If you have to do one thing, make sure everything is documented – your policies and procedures, corrective action you took. Everything.  That is your best line of defense.

Security is not an exercise in paperwork.

With all due respect to lawyers – no.   Your best line of defense is implementing real security countermeasures in a prioritized way and ensuring that you are doing the right things all the time by integrating your HIPAA Security Rule and compliance activities with your internal audit and risk management teams.

Risk does not walk alone

Risk is not an independent variable that can be managed on its own. It is not an exercise in paperwork. Risk is a function of external and internal attackers that exploit weaknesses (vulnerabilities) in people, systems and processes in order to get something of value (assets).   The HIPAA Security Rule prescribes, in a well-structured way, how to implement the right security countermeasures to protect EPHI – the key assets of your patient customers.

The importance of audit for HIPAA

While audit is not specifically mentioned in the HIPAA Security Rule (security review and risk management are the key pieces that are), audit is crucial for staying on track over time.

According to the Institute of Internal Auditors, internal auditing is an “independent, objective assurance and consulting activity designed to add value and improve an organization’s operations.” Internal audits provide assurance and consulting services to management in an independent and objective manner. But what does that mean? It means that internal auditors can go into your business operation and determine if your HIPAA security and compliance is a story on paper or a story being acted out in real life.

Audit – necessary but not sufficient

However, internal audit is not a line of defense and neither is a corporate risk management function a line of defense.

HIPAA Security and Privacy Rule compliance involves investigating plausible threats, valuable assets, vulnerabilities and the security countermeasures that mitigate asset vulnerabilities and reduce risk – the result of threats exploiting vulnerabilities to damage assets.

When we frame security defenses in terms of mitigating attacks – we immediately see that neither audit nor corporate risk management fall into the category of countermeasures.

So why are audit and risk management important?

Audit is crucial to assuring that the security portfolio is actually implemented at all levels. Yes – all levels, from the CEO’s office to the last of the cleaning team. Audit’s strength is also its weakness: auditors generally do not understand the technical side of security, and therefore audit must work hand in glove with the operational and engineering functions in an organization.

Risk management is key to prioritizing implementation of security countermeasures – because – let’s face it – business and engineering operations functions are not qualified to evaluate asset value.

In summary

Your HIPAA and Security Rule compliance is not just about paper-work.  It’s about getting it right  – day in and day out.

 


How do you know that your personal health data is secure in the cloud?

Modern system architecture for medical devices is a triangle of Medical device, Mobile app and Cloud services (storing, processing and visualizing health data collected from the device).  This creates the need for verifying a chain of trust: patient, medical device, mobile app software, distributed interfaces, cloud service software, cloud service provider.

No get-out-of-jail-free card if your cloud provider is HIPAA compliant

We specialize in medical device security and, as I’ve written here and here and here, there is no silver marketing bullet.

Medical device vendors must implement robust software security in their device, app and cloud service layers, and implement regulatory compliance in people and technical operations. If you are a medical device vendor, you cannot rely on regulatory compliance alone, nor can you rely on your cloud provider being HIPAA compliant.  I’ve written here and here how medical devices can be pivot points for attacking other systems, including your customers’ and end users’ devices.

Regulatory compliance is not security

There are two notable regulatory standards relating to medical devices and cloud services – the HIPAA Security Rule and the FDA Guidance for Management of cybersecurity in medical devices. This is in addition to European Data Protection requirements and local data security requirements  that a particular country such as France, Germany or New Zealand may enforce for protecting health data in the cloud.

The American security and compliance model is unique (and it is typically American in its flavor) – it is based on market forces – not government coercion.

Complying with FDA Guidance is a requirement for marketing your medical device in the US.

Complying with the HIPAA Security Rule is a requirement for customers and covered-entity business associates to buy your medical device.   You can have an FDA 510(K) for your medical device and still be subject to criminal charges if your cloud services are breached.   HHS has announced in the Breach Notification Rule and here that they will investigate all breaches of 500 records or more. In addition, the FDA may enforce a device recall.

But – compliance is not the same as actually running secure systems.

Verifying the chain of trust

Medical device vendors that use cloud services will generally sign upstream and downstream business associate agreements (BAA) but hold on:

There is an elephant in the room: how do you know that the cloud services are secure?  If you have a data breach, you will have to activate your cyber-security insurance policy, not your cloud provider’s sales team.

Transparency of cloud provider security operations varies widely, with some providers being fairly opaque and others being fairly transparent (Rackspace Cloud is excellent in its level of openness before and after the sale) in sharing data and incidents with customers.

When a cloud service provider exposes details of its own internal policy and technology, its customers (and your medical device users) will tend to trust the provider’s security claims. I would also require transparency from cloud service providers regarding security management, privacy and security incident response.

One interesting and potentially extremely valuable initiative is the Cloud Trust Protocol.

The Cloud Trust Protocol (CTP) enables cloud service customers to request and receive data regarding the security of the services they use in the cloud, promoting transparency and trust.

The source code implements a CTP server that acts as a gateway between cloud customers and cloud providers:

  • A cloud provider can push security measurements to the CTP server.
  • A cloud customer can query the CTP server with the CTP API to access these measurements.

The source code is available here on Github.
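As a sketch of the customer side of this exchange, the snippet below parses a CTP-style measurement response. The field names and values are illustrative assumptions for the example, not the normative CTP schema:

```python
import json

# A CTP-style measurement response a provider might push to the CTP
# server (field names here are illustrative, not the normative schema).
sample = json.dumps({
    "serviceId": "ehr-cloud-storage",
    "attribute": "incident-response-time",
    "measurements": [
        {"timestamp": "2016-03-01T10:00:00Z", "value": 4.0, "unit": "hours"},
        {"timestamp": "2016-03-02T10:00:00Z", "value": 2.5, "unit": "hours"},
    ],
})

def worst_case(response_text: str) -> float:
    """Return the worst (largest) reported value for the attribute."""
    doc = json.loads(response_text)
    return max(m["value"] for m in doc["measurements"])

print(worst_case(sample))  # 4.0
```

In a real deployment the customer would fetch this document from the CTP server over the CTP API rather than holding it inline, and compare the measurements against the security levels promised in the BAA.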

 

 


3 things a medical device vendor must do for security incident response

You are VP R&D or CEO or regulatory and compliance officer at a medical device company.

Your medical devices measure something (blood sugar, urine analysis, facial anomalies, you name it…). The medical device interfaces to a mobile app that provides a User Interface and transfers patient data to a cloud application using RESTful services over HTTPS.

Sound familiar?

The Medical device-Mobile app-Cloud storage triad is a common architecture today for many diagnostic, personal well-being and remote patient monitoring indications.

We have numerous clients with the Medical device-Mobile app-Cloud storage system architecture, and we help them address 4 key security issues:

  1. How to ensure that personal data and user authentication data is not stolen from the mobile medical app,
  2. How to ensure that the mobile medical app is not used as an attack pivot to attack other medical device users and cloud servers,
  3. How to comply with the HIPAA Security Rule and ensure that health data transferred to the cloud is not breached by attackers who are more than interested in trafficking in your users’ personal health data,
  4. How to execute effective security incident response and remediation – it’s a HIPAA standard, but above all a basic tenet of information security management.

How effective is your security incident response?

The recent SANS Survey on Security Incident Response covers the challenges faced by incident response teams today—the types of attacks they detect, what security countermeasures they’ve deployed, and their perceived effectiveness and obstacles to incident handling.

Perceived effectiveness is a good way of putting it – because the SANS Survey on Security Incident Response report has some weaknesses.

First – the survey is dominated by large companies: over 50% of the respondents work for companies with more than 5,000 employees, and fully 26% work for companies with more than 20,000 employees.    Small companies with fewer than 100 employees – which cover almost all medical device companies – are underrepresented in the data.

Second – the SANS survey attempts, unsuccessfully, to reconcile reports by the companies they interviewed that they respond to and remediate incidents within 24 hours (!) with reports by the PCI (Payment Card Industry) DSS (Data Security Standard) association that retail merchants take over 6 months to respond. This gap is difficult to understand – although it suggests considerable variance in the way companies define incident response, and perhaps a good deal of wishful thinking, back-patting and CYA.

Since most medical device companies have fewer than 100 employees, it is unclear if the SANS findings (which are skewed to large IT security and compliance organizations) are relevant at all to a medical device industry that is moving rapidly to the medical device-App-Cloud paradigm.

3 things a medical device vendor must have for effective incident response

  1. Establish an IRT (incident response team). (Contact us and we will be happy to help you set up an IRT and train it on effective procedures and tools.)  Make sure that the IRT trains and conducts simulations every 3-6 months, and above all make sure that someone is home to answer the call when it comes.
  2. Lead from the front. Ensure that the head of the IRT reports to the CEO.   In security incident response, management needs to be up front, not leading from behind.
  3. Detect in real time. Our key concern is cloud server security.    Our recommendation is to install OSSEC on your cloud servers. OSSEC sends alerts to a central server where analysis and notification can occur even if the medical device cloud server goes down or is compromised.
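As a sketch of the kind of real-time triage OSSEC enables, the snippet below filters OSSEC-style JSON alerts by level. OSSEC alert levels run 0-15; the alert structure and threshold here are simplified for illustration:

```python
import json

# Two OSSEC-style alerts (structure simplified for illustration).
raw_alerts = [
    '{"rule": {"level": 3, "description": "Login session opened."}}',
    '{"rule": {"level": 12, "description": "Multiple authentication failures."}}',
]

ESCALATE_AT = 10  # tune the threshold to your environment

def triage(lines):
    """Return the descriptions of alerts that should page the IRT."""
    page = []
    for line in lines:
        alert = json.loads(line)
        if alert["rule"]["level"] >= ESCALATE_AT:
            page.append(alert["rule"]["description"])
    return page

print(triage(raw_alerts))  # ['Multiple authentication failures.']
```

The central-server design matters here: because the analysis runs off-host, the page goes out even when the compromised cloud server can no longer be trusted to report on itself.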

Refreshing your HIPAA Security Rule compliance

Clients frequently ask us questions like this.

Danny,

I have a quick question about our HIPAA compliance that we achieved back in early 2013. Since then  we have released a couple of new software versions and we are wondering to what extent we need to perform another security and compliance assessment.  Please let us know what sort of information you might require to evaluate whether or not a new HIPAA security rule assessment is required.

What about the upcoming changes in HIPAA in 2016?

Any software changes that increase the attack surface (new ports, new interfaces, new modules that use PHI) would be reason to take another look at your Security Rule compliance.
Re HIPAA 2016 – OCR is still making plans, but it is almost certain they will be doing audits.    I believe that due to the sheer size of the program they will start with the biggest hospitals – I do not think that small medical device vendors will be on their radar, although the big guys that had serious adverse events will probably get audited (insulin pumps, implanted cardiac devices).
In general, if you are developing medical software that connects to the Internet or the mobile Internet, you should not wait 3 years between security assessments.  Make secure software development methodology part of the way you develop software, and audit once a year or on any major release.
Danny

 


Privacy, Security, HIPAA and you.

Medical devices, mobile apps, Web applications – storing data in the cloud, sharing with hospitals and doctors. How do I comply with HIPAA? What applies to me – the Security Rule, the Privacy Rule or both?

Consider a common use case these days – you’re a medical device vendor and your device stores health information in the cloud. You have a web and/or mobile application that enables doctors/hospitals to access the data from your device as part of their healthcare services. If you operate in the United States, what HIPAA regulations apply? Do you need to comply with the Privacy Rule, the Security Rule or both?

There is a good deal of confusion regarding the HIPAA Privacy and Security Rules and how things work. In this article, we will examine the original content of the HIPAA regulation and explain who needs to do what.

What is the Privacy Rule?

The HIPAA Final Rule (enacted in Jan 2013) has 2 pieces – the Privacy Rule and the Security Rule.

The Privacy Rule establishes standards for the protection of health information. The Security Rule establishes security standards for protecting health information that is held or transferred in electronic form. The Privacy Rule broadly defines ‘‘protected health information’’ as individually identifiable health information maintained or transmitted by a covered entity in any form or medium. The Privacy Rule is located at 45 CFR Part 160 and Subparts A and E of Part 164.

Who needs to comply with the Privacy Rule?

By law, the HIPAA Privacy Rule applies only to covered entities – health plans, health care clearinghouses, and certain health care providers. However, most health care providers and health plans do not carry out all of their health care activities and functions by themselves. Instead, they often use the services of a variety of other persons or businesses – and transfer/exchange health information in electronic form to use these services. These “persons or businesses” are called “business associates”; defined in 45 CFR 164.502(e), 164.504(e), 164.532(d) and (e) 45 CFR § 160.102, 164.500.

What is the Security Rule?

The Security Rule operationalizes the Privacy Rule by addressing the technical and non-technical safeguards that the “covered entities” and their business associates must implement in order to secure individuals’ “electronic protected health information” (EPHI). The Security Rule is located at 45 CFR Part 160 and Subparts A and C of Part 164.

Who needs to comply with the Security Rule?

Since it is an operational requirement, the Security Rule applies (by law) to covered entities, business associates and their subcontractors. While the Privacy Rule applies to protected health information in all forms, the Security Rule applies only to electronic health information systems that maintain or transmit individually identifiable health information. Safeguards for protected health information in oral, written, or other non-electronic forms are unaffected by the Security Rule.

Business associate liability

Section 13404 of the HITECH Act creates direct liability for impermissible uses and disclosures of protected health information by a business associate of a covered entity “that obtains or creates” protected health information “pursuant to a written contract or other arrangement described in § 164.502(e)(2)” and for compliance with the other privacy provisions in the HITECH Act.

Section 13404 does not create direct liability for business associates with regard to compliance with all requirements under the Privacy Rule (i.e., does not treat them as covered entities). Therefore, under the final rule, a business associate is directly liable under the Privacy Rule for uses and disclosures of protected health information that are not in accord with its business associate agreement or the Privacy Rule.

Permitted use of EPHI by a business associate

While a business associate does not have health care operations, it is permitted by § 164.504(e)(2)(i)(A) to use and disclose protected health information as necessary for its own management and administration if the business associate agreement permits such activities, or to carry out its legal responsibilities. Other than the exceptions for the business associate’s management and administration and for data aggregation services relating to the health care operations of the covered entity, the business associate may not use or disclose protected health information in a manner that would not be permissible if done by the covered entity (even if such a use or disclosure is permitted by the business associate agreement).

Taken from the Federal Register

General Definitions

See § 160.103 for HIPAA general definitions used by the law – definitions of business associates, protected health information and more.

Summary

  • The Privacy Rule establishes standards for the protection of health information.
  • The Security Rule establishes operational security standards for protecting health information that is held or transferred in electronic form.
  • The Security Rule applies only to electronic health information systems that maintain or transmit individually identifiable health information. Safeguards for protected health information in oral, written, or other non-electronic forms are unaffected by the Security Rule.
  • Business associates do not have direct liability with regard to compliance with all requirements under the Privacy Rule (i.e., the Privacy Rule does not treat them as covered entities). A business associate is directly liable under the Privacy Rule for uses and disclosures of protected health information that are not in accord with its business associate agreement or the Privacy Rule.

 

Tell your friends and colleagues about us. Thanks!

Why your security is worse than you think

Thoughts for Yom Kippur – the Jewish day of atonement – coming up next Wed.

Security on modern operating systems (Windows, OS/X, iOS, Android, Linux) is getting better all the time – but Android's use of SELinux and MAC (mandatory access control) doesn't make for catchy, social-media-sticky news items.

A client (a good one) once told me that people never remember your successes, only your failures. (He also believed that all software developers are innately incapable of telling the truth but that’s another story).

The corollary to this notion of failure-skew in the business (and security) world is media reporting. Consider the media's emphasis on reporting violent and/or negative events. It's not a hot news item to say that 39% of Israeli Arabs are proud to be Israeli, nor is it newsworthy to report that 29% are very proud. The world (Middle East included) is actually a much better place than it seems when not viewed through the lens of social media news reporting and re-purposing (I'm not sure what the correct term for the Huffington Post is so I'll just use the word repurpose).

FB and Twitter create discussion threads, not examination-of-empirical-data threads. Discussion is easier, more fun and cheaper than collecting data and examining its quality.

In addition, radical voices are far more interesting than statistics. Who cares that according to World Bank statistics, in 1990 there were 1.91 billion people who lived on less than $1.25 a day, and in 2011 it was just one billion? Radical voices (amusingly adopted by the US President) will continue to blame poverty for the rise in Islamic and Iranian terror even though it emanates from the wealthiest countries in the world.

Jews around the world are up to bat this coming Wed on Yom Kippur. We can bemoan how bad things are, what a terrible President or PM we all have and how our society is falling apart, or we can take a little piece of our own life and fix it. Send thank you notes to people. Patch your systems once/week. That's a good start. And pretty easy to do.
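If you want the once-a-week patching to actually happen, schedule it rather than remember it. A minimal sketch for a Debian/Ubuntu host (the path, schedule and log file are my assumptions – adjust for your distribution and package manager):

```shell
# /etc/cron.d/weekly-patch
# Apply pending package upgrades every Sunday at 03:00, logging the run.
0 3 * * 0  root  apt-get update -qq && apt-get -y upgrade >> /var/log/weekly-patch.log 2>&1
```

For fleets of machines you would push the equivalent through your configuration-management tool instead of per-host cron entries, but the principle is the same: make patching routine, not newsworthy.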

Now what does this have to do with software security you ask?

Everything.

Our clients read social media.  They read about zero-days and they get all excited and then do nothing.

Yet another serious Android security issue was publicized this week, with the latest exploit rendering devices “lifeless,” and said to affect more than half of units currently on the market.  Latest Android security exploit could leave more than half of current devices ‘dead’ & unusable

Now let’s check out that URL – it’s from Apple Insider. Hmm – somebody has an ax to grind, I bet.

So this year – I mean this Wednesday – don’t wring your hands. Do a security assessment on your systems and prioritize one thing: find that one weakest link in your system and harden it up.

 
