Tag Archives: Security engineering

Debugging security

There is an interesting analogy between debugging software and debugging the security of your systems.

As Brian W. Kernighan and Rob Pike wrote in “The Practice of Programming”:

As personal choice, we tend not to use debuggers beyond getting a stack trace or the value of a variable or two. One reason is that it is easy to get lost in details of complicated data structures and control flow; we find stepping through a program less productive than thinking harder and adding output statements and self-checking code at critical places. Clicking over statements takes longer than scanning the output of judiciously-placed displays. It takes less time to decide where to put print statements than to single-step to the critical section of code, even assuming we know where that is. More important, debugging statements stay with the program; debugging sessions are transient.

In programming, it is faster to examine the contents of a couple of variables than to single-step through entire sections of code.
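Kernighan and Pike's advice can be sketched in a few lines. This is an illustrative example, not from the book: a binary search instrumented with a self-check and a judiciously placed output statement at the critical point, instead of single-stepping through it in a debugger.

```python
def binary_search(items, target, debug=False):
    """Return the index of target in the sorted list items, or -1."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        # Self-checking code at a critical place: the loop invariant
        # must hold on every iteration, or we fail loudly right here.
        assert 0 <= lo and hi < len(items), f"bounds broken: lo={lo}, hi={hi}"
        mid = (lo + hi) // 2
        if debug:
            # A judiciously placed display; it stays with the program,
            # unlike a transient debugging session.
            print(f"probe: lo={lo} mid={mid} hi={hi} value={items[mid]}")
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1
```

Scanning the `probe:` lines from one run usually locates a bug faster than clicking through the same iterations statement by statement.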

Collecting security logs is key to information security management, not only for understanding what happened and why, but also to prove compliance with regulations such as the HIPAA Security Rule. The business requirement is that security logs be both relevant and effective:

  1. Relevant content of audit controls: For example, providing a detailed trace of an application whenever it elevates privilege in order to execute a system-level function.
  2. Effective audit reduction and report generation: Given the large amount of data that must be analyzed in security logs, it's crucial that critical events are separated from normal traffic, and that concise reports can be produced in real time to help understand what happened, why it happened, how it was mediated and how to mitigate similar risks in the future.
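Audit reduction, item 2 above, can be sketched as a simple filter-and-summarize pass. The event shape, field names and the set of critical event types below are illustrative assumptions, not a real product's schema:

```python
from collections import Counter

# Hypothetical set of event types that must surface in real time;
# a real deployment would derive this from its audit policy.
CRITICAL = {"PRIVILEGE_ELEVATION", "AUTH_FAILURE", "POLICY_VIOLATION"}

def reduce_audit_log(events):
    """Separate critical events from normal traffic.

    Returns the critical events verbatim (for a real-time report)
    and a Counter summarizing routine noise by event type.
    """
    critical, summary = [], Counter()
    for event in events:
        if event["type"] in CRITICAL:
            critical.append(event)        # keep full detail for the analyst
        else:
            summary[event["type"]] += 1   # fold routine traffic into counts
    return critical, summary

events = [
    {"type": "LOGIN", "user": "alice"},
    {"type": "PRIVILEGE_ELEVATION", "user": "bob"},
    {"type": "LOGIN", "user": "carol"},
]
critical, summary = reduce_audit_log(events)
```

The analyst reads one privilege-elevation record instead of scanning three raw log lines; at terabyte scale that ratio is what makes the report usable.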

In security log analysis, it is faster and more effective for a security analyst to examine the contents of a few real-time events than to process gigabytes or terabytes of security logs (the equivalent of single-stepping through, or placing watchpoints in, sub-modules with hundreds or thousands of lines of code).

When you analyze security logs, it is easy to get lost in the details of complicated data and event flows, and to find yourself drifting off in all kinds of directions, even as the bells go off in the back of your mind warning that you are chasing ghosts in a futile, time-consuming exercise of investigation and security event debugging.

In order to understand this better, consider another analogy, this time from the world of search engines.

Precision and recall are key to effective security log analysis and effective software debugging.

In pattern recognition and information retrieval, precision is the fraction of retrieved instances that are relevant, while recall is the fraction of relevant instances that are retrieved. Both precision and recall are therefore based on an understanding and measure of relevance. When a program for recognizing the dogs in a scene correctly identifies four of the nine dogs but mistakes three cats for dogs, its precision is 4/7 while its recall is 4/9. When a search engine returns 30 pages, only 20 of which are relevant, while failing to return 40 additional relevant pages, its precision is 20/30 = 2/3 while its recall is 20/60 = 1/3. See Precision and recall on Wikipedia.
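The two definitions reduce to a few lines of set arithmetic. Here is a minimal sketch, reproducing the dog-recognizer numbers from the paragraph above (the item identifiers are invented for illustration):

```python
def precision_recall(retrieved, relevant):
    """Compute precision and recall from sets of item identifiers."""
    true_positives = len(retrieved & relevant)
    precision = true_positives / len(retrieved)  # fraction of retrieved that are relevant
    recall = true_positives / len(relevant)      # fraction of relevant that were retrieved
    return precision, recall

# Four of nine dogs correctly identified, plus three cats mistaken for dogs.
retrieved = {"dog1", "dog2", "dog3", "dog4", "cat1", "cat2", "cat3"}
relevant = {f"dog{i}" for i in range(1, 10)}
p, r = precision_recall(retrieved, relevant)  # p = 4/7, r = 4/9
```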

In other words – it doesn’t really matter if you have to analyze a program with 100,000 lines of code or a log file with a terabyte of data – if you have good precision and good recall.

The problem, however, is that the more data you have, the more difficult it is to achieve high precision and recall, and that is why real-time events (or debugging statements) are more effective in day-to-day security operations.


Tell your friends and colleagues about us. Thanks!

Why Microsoft Windows is a bad idea for medical devices

I’m getting some push back on LinkedIn on my articles on banning Microsoft Windows from medical devices that are installed in hospitals – read more about why Windows is a bad idea for medical devices here and here.

Scott Caldwell tells us that the FDA doesn’t rule “out” or “in” any particular technology, including Windows Embedded.

Having said that, Microsoft has very clear language in their EULA regarding the use of Windows Embedded products:

“The Products are not fault-tolerant and are not designed, manufactured or intended for any use requiring fail-safe performance in which the failure of a Product could lead to death, serious personal injury, severe physical or environmental damage (“High Risk Activities”).”

Medical device vendors that use Windows operating systems for less critical devices, or for the user interface, are actually increasing the threat surface of a hospital, since any Windows host can be a carrier of malware that can take down the entire hospital network, regardless of its primary mission, be it a user-friendly UI at a nursing station or an intensive care monitor at the bedside.

Medical device vendors that use Microsoft IT systems management “best practices” often take the approach of “bolting on” third-party solutions for anti-virus and software distribution instead of developing robust, secure software “from the ground up” with a secure design, threat analysis, software security assessment and secure software implementation.

Installing third-party security solutions that need to be updated in the field may be inapplicable to an embedded medical device, as the MDA (Medical Device Amendments of 1976) clearly states:

These devices may enter the market only if the FDA reviews their design, labeling, and manufacturing specifications and determines that those specifications provide a reasonable assurance of safety and effectiveness. Manufacturers may not make changes to such devices that would affect safety or effectiveness unless they first seek and obtain permission from the FDA.

It’s common knowledge that medical device technicians use USB flash drives and notebook computers to update medical devices in the hospital. Given that USB devices and Windows computers are notoriously vulnerable to viruses and malware, there is a reasonable threat that a field update may infect a Windows-based medical device. If the medical device is isolated from the rest of the hospital network, then the damage is localized; but if the medical device is networked to an entire segment, then all other Windows-based computers on that segment may be infected as well, propagating to the rest of the hospital in a cascade attack.

It’s better to get the software security right than to try to bolt on security after the implementation. Imagine that you had to buy the brakes for a new car and install them yourself after you got that bright new Lexus.

It is not unusual for medical device vendors to fall victim to the same Microsoft marketing messages used with enterprise IT customers – “lower development costs, and faster time to market” – when in fact Windows is so complex and vulnerable that the smallest issue may take a vendor months to solve. For example, try to get Windows XP to load the wireless driver without the shell. Things that may take months to research and resolve in Windows are often solved in Linux with some expertise and a few days’ work. That’s why there are professional medical device software security specialists like Software Associates.

With Windows, you get an application up and running quickly, but it is never as reliable and secure as you need.

With Linux, you need expertise to get up and running, and once it works, it will be as reliable and secure as you want.

Yves Rutschle says that outlawing Microsoft Windows from medical devices in hospitals sounds “too vendor-dependant to be healthy” (sic). (Seems to me that this would make the medical device industry LESS vendor-dependent, not more, considering the number of embedded Linux options out there.)

Yves suggests that instead, the FDA should create a “proper medical device certification cycle. If you lack of inspiration, ask the FAA how they do it, and maybe make the manufacturers financially responsible for any software failure impact, including death of a patient”. (The FDA does not certify medical devices; it grants pre-market approval.)

I like a free market approach but consider this:

(Held) The MDA’s pre-emption clause bars common-law claims challenging the safety or effectiveness of a medical device marketed in a form that received premarket approval from the FDA. Pp. 8–17.

Maybe the FDA should learn from the FAA, but in the meantime it seems to me that if the FDA pre-market validation process included an item requiring a suitable operating system EULA, that would pretty much solve the problem.
