Category Archives: Software security

Apps vs. the Web, enemy or friend?

Saw this item on Gigaom.

George Colony, the chairman and CEO of Forrester Research, re-ignited a minor firestorm recently, with a presentation at the LeWeb conference in which he argued that the web is dead, and being replaced by the app economy — with mobile and smartphone apps that leverage the cloud or other services rather than the open web.

I have written here and here about the close correlation between Web application security and Web performance.

I know that Mr. Colony has sparked some strong sentiment in the community, in particular from Dave Winer:

If I can’t link in and out of your world, it’s not even close to a replacement for the web. It would be as silly as saying that you don’t need oceans because you have a bathtub. How nice your bathtub is. Try building a continent around it.

Of course, that is neither true nor relevant.

Many apps are indeed well connected, and the apps that are not wired in don’t have to be; the app is simply doing something useful for the individual consumer (like iAnnotate displaying a PDF file of music on an iPad or Android tablet).

iAnnotate turns your iPad into a world-class productivity tool for reading, annotating, organizing, and sending PDF files. Join the 100,000s of users who turn to iAnnotate for their PDF annotating needs. We designed iAnnotate to suit your individual workflow.

I became even more cognizant that apps may overtake the open Web over the past two weeks, when Google Apps was going through some rough spots and it was almost impossible to read email to software.co.il or access our calendars… except from our Android tablets and Nexus S smartphones. Chrome and Google Apps were almost useless, but the Android devices just chugged on.

There is a good reason why apps are overtaking the open browser-based web.

They are simply more accessible, easier to use and faster.

This is no surprise as I noted last year:

The current rich Web 2.0 application development and execution model is broken.

Consider that a Web 2.0 application has to serve browsers and smartphones. It’s based on a heterogeneous server stack with 5-7 layers (database, database connectors, middleware, scripting languages like PHP, Java and C#, application servers, web servers, caching servers and proxy servers). On the client side there is an additional heterogeneous stack of HTML, XML, Javascript, CSS and Flash.

On the server side, we have

  • 2-5 languages (PHP, SQL, tcsh, Java, C/C++, PL/SQL)
  • Lots of interface methods (hidden fields, query strings, JSON)
  • Server-side database management (MySQL, MS SQL Server, Oracle, PostgreSQL)

On the client side, we have

  • 2-5 languages (Javascript, XML, HTML, CSS, Java, ActionScript)
  • Lots of interface methods (hidden fields, query strings, JSON)
  • Local data storage – often duplicating session and application data stored on the server data tier.

A minimum of 2 languages on the server side (PHP, SQL) and 3 on the client side (Javascript, HTML, CSS) turns developers into frequent searchers for answers on the Internet (many of which are incorrect), driving up the frequency of software defects relative to a single-language development platform, where the development team has a better chance of attaining maturity and proficiency. More bugs mean more security vulnerabilities.
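To make this concrete, here is a minimal sketch of the kind of defect that lives at the boundary between two of those languages – application code and SQL. Python and SQLite stand in for the PHP/MySQL stack purely for illustration; the table and data are invented:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

    def find_user_unsafe(name):
        # The cross-language bug: SQL is built by string formatting in the
        # host language, so input like "' OR '1'='1" rewrites the query
        # and returns every row - a classic SQL injection.
        query = "SELECT * FROM users WHERE name = '%s'" % name
        return conn.execute(query).fetchall()

    def find_user_safe(name):
        # The fix: a bound parameter lets the database driver do the
        # quoting, so input can never change the shape of the query.
        return conn.execute(
            "SELECT * FROM users WHERE name = ?", (name,)).fetchall()

    print(find_user_unsafe("' OR '1'='1"))  # leaks the whole table
    print(find_user_safe("' OR '1'='1"))    # returns []

A developer fluent in one language but googling his way through the other is exactly the person who writes the first version.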

More bugs in this complex, broken execution stack mean more things will go wrong, and since devices and apps are almost universally accessible now, customers like you and me will not tolerate two weeks of downtime from a Web 2.0 service provider. If we have the alternative of using an app on a tablet device, we will take that alternative and not look back.


Build management and governance

Don’t break the build.

There is absolutely no question that the build process is a pivot in the software quality process. Build every day, don’t break the build and do a smoke test before releasing the latest version.

This morning, I installed the latest build of an extremely complex network security product from one of our customers and lo and behold, one of the most basic functions did not work (and apparently has not worked for about 3 revisions now). I wrote a love letter to the customer service and QA managers and chided them for sloppy QA.

An article I saw recently talks about the “confluence of compliance and governance” and the direct link to software quality. If you read Jim McCarthy’s classic, “Dynamics of Software Development”, you will remember the chapter called “Don’t break the build”.

You may be using Linux make, Microsoft nmake or Apache Ant, but in all cases the build expertise of the person running the build is more important than the tool itself. The development team runs a daily build, with a build-meister personally responsible for running the construction of a working system from all the components. If the build breaks, he doesn’t go home.
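For illustration, here is a minimal sketch of a daily build driver a build-meister might run – Python here, and the make targets and smoke-test script name are hypothetical:

    import subprocess
    import sys

    def stage(description, cmd):
        # Run one build stage; any non-zero exit code means the build is broken.
        print("=== %s: %s" % (description, " ".join(cmd)))
        if subprocess.run(cmd).returncode != 0:
            print("BUILD BROKEN at stage: %s" % description)
            sys.exit(1)  # fail loudly; the build-meister doesn't go home

    # Targets and script names are illustrative; substitute your own.
    stage("full rebuild", ["make", "clean", "all"])
    stage("smoke test", ["./smoke_test.sh"])
    print("Build OK - safe to hand to QA")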

It is better to have a non-programmer do the smoke test before the final release to manufacturing. A person outside the engineering team does not have the blinders or the personal interest that lead to overlooking basic functionality that gets broken (not to mention having the motivation to one-up the engineers).

Anyhow, maybe there is still hope if the compliance gurus have discovered software quality.


The top 10 mistakes made by Linux developers

My colleague, Dr. Joel Isaacson, talks about the top 10 mistakes made by Linux developers. It’s a great read from one of the top embedded Linux programmers in the world.

The Little Engine That Could

Copyright 2004 Joel Isaacson. This work is licensed under the Creative Commons Attribution License.

I try to explain the top 10 mistakes made by Linux developers as I see them. I’m aware that one person’s mistake is another person’s best practice. My comments are therefore subjective.

I will use the WRT54GS, a wireless router, as an illustration of an embedded Linux device. An interesting article about this device can be found at http://www.pbs.org/cringely/pulpit/pulpit20040527.html.

“The Little Engine That Could”: How Linux is Inadvertently Poised to Remake the Telephone and Internet Markets – by Robert X. Cringely

So what are the top 10 mistakes made by Linux developers?

10 – Pick a vendor.
9 – Then pick a platform.
8 – We are not in Kansas anymore.

Support Issues

10 – Pick a Vendor

  • In my experience, picking a large foreign company for support is not the best way to go, for various reasons.
  • More about this later.



Digital content protection

A customer case study – Digital content protection for VOD on a TCP unicast network

One of our most interesting recent projects was a digital content protection and secure content distribution software development project in the field of IPTV and video on demand.

We were called in at a critical stage in project delivery to help manage the development and design the encryption for the digital content protection.

Read more about the VOD IPTV solution


Will security turn into a B2B industry?

Information security is very much product driven, and very much network perimeter security driven at that: firewalls, IPS, DLP, anti-virus, database firewalls, application firewalls, security information management systems and more.

It is convenient for a customer to buy a product and feel “secure” but, as businesses become more and more interconnected, as cloud services percolate deeper and deeper into organizations, and as government compliance regulation becomes more complex and pervasive, the security “problem” becomes more difficult to solve and even harder to sell.

I believe that there are 3 reasons why it’s hard to sell security:

The first is that it’s complex stuff – hard to explain, and even harder to build a cost-justified security countermeasure plan around and measure security ROI for. The nonsense propagated by security vendors like Symantec and Websense does little to improve the situation, and only exacerbates the low level of credibility of security product effectiveness, with pseudo-science and ROI calculations written by wet-behind-the-ears English majors and marcom people who freelance for security vendors – as I’ve noted in previous posts here, here, here and here.

The second is related to prospect theory. A CEO is risk-hungry for high-impact, low-probability events (like an attack on his message queuing transaction processing systems, or theft of IP by a competitor) and risk-averse to low-impact, high-probability events like malware and garden-variety dictionary attacks on every ssh service on the Net.

The third is related to psychology.   Why is it a good idea to cold call a CIO and tell him that the multi-million dollar application his business developed is highly vulnerable?    Admitting that his software is vulnerable and going to the board to ask for big bucks to fix the problem is tantamount to admitting that he didn’t do his job and that someone else should pay the price.  Very bad idea.

This is why cloud services are a hit.

Security is baked into the service. You pay for the computing/storage/messaging resource like you buy electricity. The security is “someone else’s problem” and, let’s face it, the security professionals at Rackspace or Amazon or Google App Engine are better at security than we are. It’s part of their core business.

The next step after cloud services is the security industry evolving into a B2B industry like the automotive or energy industries. You don’t buy brakes from McAfee and a car from Checkpoint – you buy a car from GM and the brakes are part of the system.

That’s where we need to go – building the security into the product instead of bolting it on as an after-sale extra.


Practical security management for startups

We normally associate the term “small business” or SME (small to medium sized enterprise) with commercial operations that buy and sell, manufacture products or provide services – lawyers, plumbers, accountants, web developers etc.

However, there is an important class of small business operations that is often overlooked when it comes to information security: the technology startup. A high tech startup is an SME by all definitions – usually fewer than 50 employees – but it doesn’t buy and sell, and neither does it provide professional services. Unlike other small businesses, a high tech startup is almost purely focused on product research and development. Almost all startups have a very high percentage of software development; even if the startup develops hardware, there is still a strong software development focus.

Intuitively, one would say that a primary concern for a startup is IP (intellectual property) protection, and that starts with protecting source code.

Counter-intuitively, this is not true. There are two basic reasons why source code leakage is not necessarily a major threat to a startup:

1) If the startup uses FOSS (free open source software), there is nothing to hide. This is not, strictly speaking, correct, since the actual application developed using FOSS has immense value to the startup and may often involve proprietary closed-source code as well.

2) A more significant reason that source code leakage is of secondary importance is that a startup’s IP is invariably based on a combination of three components: domain expertise, implementation know-how and the implementation itself (the software source code). The first two factors – domain expertise and implementation know-how – are crucial to successful execution.

The question of how to protect IP still remains on the table, but it is now reshaped into a more specific question: how best to prioritize security countermeasures to protect the startup’s domain expertise and implementation know-how. Prioritization is of crucial importance here, since startups by definition do not generate revenue and have little money to spend on luxuries like data loss prevention (DLP) technologies.

Software Associates works exclusively with technology and medical device developers, and I’d like to suggest a few simple guidelines for getting the most security for your money:

The startup management needs to know how much their information security measures will cost and how it helps them run the business. Business Threat Modeling (TM) is a practical way for a manager to assess the operational risk for the startup in dollars and cents. The advantages of the business threat modeling methodology are:

  • Threat modeling places the focus on asset management and Value at Risk reduction before the acquisition of information security technologies.
  • Threat modeling helps select the right countermeasures, often prioritizing monitoring before active data loss prevention (for example).
  • Threat modeling, when done right, quantifies risk in dollar terms. This is particularly important when reporting back to the investors on exposure to data loss of IP.
  • Threat modeling helps justify investments in security, compliance and risk management to the management board – simply because it puts everything into financial values: the value at risk and the cost of the security portfolio.

These are similar objectives to GRC (Governance, risk and compliance) systems.

The problem with most GRC (governance, risk and compliance) and ERM (enterprise risk management) systems is that they don’t calculate risk, they make you work hard and they’re not that easy to use.

I think that we can all agree that the last thing that a hi-tech startup needs is a system to manage GRC activities when they’re working to make the next investor milestone.

Startup management needs a simple security management approach that they can deploy themselves, perhaps assisted with some professional consulting to help them get started and get a good feel for their exposure to security and compliance issues.

How does a practical security management methodology like this work? Well, it works by using the common language of threat modeling.

You own assets – for example, expensive diamond jewelry stored at home. These assets have a dollar value.

Your asset has vulnerabilities – since you live on the ground floor and your friendly German Shepherd knows where the bedroom is and will happily show anyone around the house.

The key threat to the asset is that an attacker may break in through the ground floor windows.

The countermeasures are bars for the windows, an alarm system and training your dog to be a bit less friendly around strangers with ski-masks.

Using countermeasure costs, asset value, threat probability of occurrence and damage levels, we calculate Value at Risk in financial terms and propose a prioritized, cost-effective risk mitigation plan.
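As a hedged sketch of that arithmetic, using invented numbers for the jewelry example above: annual Value at Risk is asset value times damage level times annual threat probability, and countermeasures are ranked by risk reduced per dollar spent.

    # All values are hypothetical, for the jewelry example above.
    asset_value = 50_000        # dollar value of the asset
    damage_level = 1.0          # fraction of the asset lost if the threat occurs
    annual_probability = 0.05   # estimated chance of a break-in per year

    value_at_risk = asset_value * damage_level * annual_probability
    print("Annual Value at Risk: $%.0f" % value_at_risk)  # $2500

    # Candidate countermeasures: (name, annual cost, estimated reduction
    # in threat probability). Numbers are invented.
    countermeasures = [
        ("window bars",  300, 0.60),
        ("alarm system", 500, 0.70),
        ("dog training", 150, 0.20),
    ]

    # Rank by risk reduced per dollar of countermeasure cost.
    ranked = sorted(countermeasures,
                    key=lambda c: value_at_risk * c[2] / c[1],
                    reverse=True)
    for name, cost, reduction in ranked:
        saved = value_at_risk * reduction
        verdict = "worth it" if saved > cost else "skip"
        print("%-12s reduces VaR by $%.0f at $%d/year -> %s"
              % (name, saved, cost, verdict))

The point is not the precision of the numbers; it is that the conversation with the board happens in dollars.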

That’s it – adopt a language with 4 words (assets, vulnerabilities, threats, countermeasures) and you’re off to a good start with practical security management for your high tech startup.


The Microsoft monoculture as a threat to national security

This is probably a topic for a much longer essay, but after two design reviews this week with medical device vendor clients on software security issues, I decided to put some thoughts in a blog post.

Almost 8 years ago, Dan Geer, Rebecca Bace, Peter Gutmann, Perry Metzger, Charles Pfleeger, John Quarterman and Bruce Schneier wrote a report titled “CyberInsecurity: The Cost of Monopoly – How the Dominance of Microsoft’s Products Poses a Risk to Security”.

The report from a stellar cast of information security experts and thought leaders shows that the complexity and dominance of Microsoft’s Windows operating system in US Federal agencies makes the US government prone to cyber attack – a national security threat.

This was in September 2003.

Now fast-forward to a congressional hearing on May 25, 2011 by the Committee on Oversight and Government Reform on “Cybersecurity: Assessing the Immediate Threat to the United States”. Listen to the YouTube video and you will note the concern about potential damage to citizens from viruses infecting government PCs and breaching personal information.

So the US government is still running Microsoft Windows and is still vulnerable to data security breaches. It seems that the Microsoft lobbying machine has been “successful” over the past 8 years on the Beltway, if you call threats to national security a success.

One of the canards commonly used by Microsoft monoculture groupies is that all operating systems have vulnerabilities and Windows is no better or worse than Linux or OS/X – if “you” patch properly, everything will be hunky-dory. There are a number of reasons why this is fallacious; to quote the report:

  • Microsoft is a near-monopoly controlling the overwhelming majority of systems. This means that the attack surface is big, on a US national level.
  • Microsoft has a high level of user-level lock-in; there are strong disincentives to switching operating systems.
  • This inability of consumers to find alternatives to Microsoft products is exacerbated by tight integration between applications and operating systems, and that integration is a long-standing practice.
  • Microsoft’s operating systems are notable for their incredible complexity and complexity is the first enemy of security.
  • The near universal deployment of Microsoft operating systems is highly conducive to cascade failure; these cascades have already been shown to disable critical infrastructure.
  • After a threshold of complexity is exceeded, fixing one flaw will tend to create new flaws; Microsoft has crossed that threshold.
  • Even non-Microsoft systems can and do suffer when Microsoft systems are infected.
  • Security has become a strategic concern at Microsoft but security must not be permitted to become a tool of further monopolization.

As a  medical device security and compliance expert, I am deeply concerned about medical devices that use Windows. If Windows is a threat to national security because it’s used in Federal government offices, Windows is really a bad idea when used in medical devices in hospitals.

I’m concerned about the devices themselves (the FDA also classifies Web applications as medical devices if the indications are medical-related) and about the information management systems: the customer support, data collection, analysis and management applications that are ubiquitous to networked medical devices.

There are two reasons why the FDA should outlaw Windows in medical devices and their information management systems.

Reason number 1 to ban Windows from medical devices is complexity. We know that the first of the 7 deadly sins of software development is making the software complex. Complexity is the enemy of security because with complex software there are more design flaws, more software defects and more interfaces where vulnerabilities can arise.

Similar to the history of data security breaches of retail systems, the medical device software industry is (or may soon be) facing a steeply increasing curve of data security and patient safety events due to the Microsoft monoculture. We are not in Kansas anymore – not credit cards being breached, but entire hospital networks infected by Microsoft Windows viruses and patient monitoring devices that stop working because they got blue screens of death. Since 300 million credit cards have been breached, it is a reasonable assumption that your card and mine are out there. The damage from your credit card being breached is minimal. But if your child was on a patient monitor that went offline due to a Microsoft Windows virus and a critical condition was not detected in time, it’s the difference between life and death.

The complexity and vulnerabilities of Windows technologies are simply not appropriate in the medical device space when you look at the complexity and weight of the components, the SQL injection vulnerabilities provided courtesy of naive ASP.NET programmers and the ever-present threat of Windows viruses and malware propagated by USB sticks and technician notebooks.

The Microsoft monoculture breeds a generation of programmers who are scared of the command line, unable to comprehend what happens behind the GUI and lured by the visual beauty of the development tools. When a programmer uses a component without knowing how it works (see Visual Studio) and shleps around a shitload of piping in his project, the energy goes into implementing a cute GUI instead of thinking about code threats.

This is, on a grander scale, a rerun of Microsoft PowerPoint, where you spend 80% of your time in the application’s GUI instead of thinking about and then just stating your message.

Reason number 2 to ban Microsoft Windows from medical devices is more subtle and related to systems management. The Microsoft monoculture has bred a particular kind of thinking and system management best practices based on Windows servers and Windows PCs running in the office. This IT system management strategy assumes that PCs are just personal devices that someone has to patch, and that they will eventually get infected and/or breached and/or get a BSOD.

Unlike an office, a hospital is a highly heterogeneous and hostile environment. The system management strategy for network medical devices must be different.

Medical device vendors need to assess their software security with the design objective being a device that runs forever and serves the mission of the doctors and patients.

Medical devices are real-time embedded systems living on a hospital network. They should be fail-safe, not vulnerable to viruses, and should not have to be rebooted every few days.

Yes – it’s a tall order, and a lot of people will have to learn how to write code in embedded Linux.

But, there is no alternative, if we want to prevent the medical device industry from suffering the ignominy of the credit card industry.

 


The importance of data collection in a risk assessment

A risk assessment of a business always starts with data collection. The end objective is identifying and then implementing a corrective action plan that will improve data security in a cost-effective way that is the right fit for the business.

The question in any risk assessment is how you get from point A (current state) to point B (cost-effective security that is the right fit for your business).

The key to cost-effective security is data collection. Let’s recall that compliance regulations like PCI DSS 2.0 and the certifiable information security management standard ISO 27001 are based on fixed control frameworks. It’s easy to turn the risk analysis exercise into a check-this/check-that exercise, which, by definition, is not guaranteed to get you to point B, since the standard was never designed for your business. This is where we see the difference between ISO 27001 and ISO 27002.

ISO/IEC 27002 is an advisory standard meant to be applied to any type and size of business according to the particular security risks they face.

ISO/IEC 27001 (Information technology – Security techniques – Information security management systems – Requirements) is a certifiable standard. ISO/IEC 27001 specifies a number of firm requirements for establishing, implementing, maintaining and improving an ISMS (information security management system), and specifies a set of 133 information security controls. These controls are derived from and aligned with ISO/IEC 27002 – this enables a business to implement the security controls that fit their business, and helps them prepare for formal certification to ISO 27001.

Let me explain the importance of data collection by telling a story.

After reading this article in the NY Times, An Annual Report on One Man’s Life, I was reminded of a story I read about Rabbi Joseph Horowitz (the “Alter from Novardok”) (1849–1919), relating his practice of writing a daily report on his life.

One of the things I learned from the musical director of the JP Big Band, Eli Benacot, is the importance of knowing where you are really holding in terms of your musical capabilities. Many musicians, it turns out, have the wrong self-perception of their capabilities. Sometimes one sees a professional musician who is convinced of his proficiency, yet even within an ensemble he (or she) is incapable of really hearing how poorly they actually play.

Many times we feel secure but are not, or don’t feel secure when we really are. For example – a company may feel secure behind a well-maintained firewall but if employees are bringing smart phones and flash drives to work, this is an attack vector which may result in a high level of data loss risk. On the other hand – some people are afraid of flying and would prefer to drive, when in fact, flying is much safer than driving.

After we collect the data and organize it in a clear way, we then have the ability to understand where we are really holding.  That is the first step to building the correct security portfolio.

So, let’s return to Rabbi Joseph Horowitz, who wrote a daily and annual report on his life. Here is his insight into implementing change – certainly a startling approach for information technology professionals who are used to incremental, controlled change:

“Imagine this scenario: A person decides that he wants to kasher his kitchen. But he claims, ‘Changing my dishes all at once involves throwing out an entire set and buying a brand new one. That’s quite an expense at one time. I’ll go about the kashering step by step. Today I’ll throw out one plate and replace it with a new one, tomorrow with a second and the next day with a third.’

“Of course, once a new plate is mixed with the old ones, it becomes treife like the rest. To kasher a kitchen, one must throw out all of his old dishes at once.

“The same holds true in respect to changing one’s character traits or way of life. One must change them in an instant because there is no guarantee that the anxieties and pressures that deter him on any given day will not deter him the following day, too, since anxieties and pressures are never ending. ”

(Madreigat Ha’adam, Rav Yosef Yoizel Horowitz).

 


10 guidelines for a security audit

What exactly is the role of an information security auditor? In some cases, such as PCI DSS 2.0 for Level 1 and 2 merchants, an external audit is a condition of compliance. In the case of ISO 27001, the audit process is key to achieving ISO 27001 certification (unlike PCI and HIPAA, ISO regards certification, not compliance, as the goal).

There is a gap between what the public expects from an auditor and how auditors understand their role.

Auditors look at transactions and controls. They’re not the business owner and the more billable hours, the better.

The “reasonable person” assumes that the role of the security auditor is to uncover vulnerabilities, point out ways to improve security and produce a report that will enable the client to comply with relevant compliance regulation. The “reasonable person” might add an additional requirement of a “get out of jail free” card, namely that the auditor should produce a report that will stand up to legal scrutiny in the event of a data security breach.

Auditors don’t give out “get out of jail” cards, and audit is not generally part of business risk management.

The “reasonable person” is a legal fiction of the common law representing an objective standard against which any individual’s conduct can be measured. As noted in the wikipedia article on the reasonable person:

This standard performs a crucial role in determining negligence in both criminal law—that is, criminal negligence—and tort law. The standard also has a presence in contract law, though its use there is substantially different.

Enron and the resulting Sarbanes-Oxley legislation resulted in significant changes in accounting firms’ behavior, but judging from the 2009 financial crisis, from Morgan Stanley to AIG, the regulation has done little to improve our confidence in our auditors. The number of data security breaches is an indication that the situation is similar in corporate information security. We can all have “get out of jail” cards, but data security audits do not seem to be mitigating new risk from tablet devices and mobile apps. Neither am I aware of a PCI DSS certified auditor being detained or sued for negligence in data breaches at PCI DSS compliant organizations such as Health Net, where 9 data servers that contained sensitive health information went missing from Health Net’s data center in Rancho Cordova, California. The servers contained the personal information of 1.9 million current and former policyholders, compromising their names, addresses, health information, Social Security numbers and financial information.

The security auditor expectation gap has sometimes been depicted by auditor organizations as an issue to be addressed by educating users about the audit process. This is a response not unlike the notion that security awareness programs are effective data security countermeasures for employees who willfully steal data or bring their personal devices to work.

Convenience and greed tend to trump awareness and education in corporate workplaces.

Here are 10 guidelines that I would suggest for client and auditor alike when planning and executing a data security audit engagement:

1. Use an engagement letter every time. Although the SAS 83 standard makes it clear that an engagement letter must be used, the practical reason is that an engagement letter sets mutual expectations, reduces the risk of litigation and, by putting mutual requirements on the table, improves the client-auditor relationship.

2. Plan. Plan carefully who needs to be involved and what data needs to be collected, and require input from everyone, from C-level executives to group leaders to the people who provide customer service and manufacture the product.

3. Make sure the auditor understands the client and the business. Aside from the wasted time, most of the famous frauds happened where the auditors didn’t really understand the business. Understanding the business will lead to better-quality audit engagements and enable the auditor and audit manager to be peers in the boardroom, not peons in the hallway.

4. Speak to your predecessor. Make sure the auditor talks to the people who came before him. Speak with the people in your organization who did the last data security audit. Even if they’ve left the company, it is important to understand what they did and what they thought could have been improved.

5. Don’t tread water. It’s not uncommon to spend a lot of time collecting data, auditing procedures and logs, and then run out of time and billable hours, missing the big picture: how badly the client organization could be damaged if it had a major data security breach. Looking at the big picture often leads to audit directions that can prevent disasters and subsequent litigation.

6. Don’t repeat what you did last year. Renewing a 2,000-hour audit engagement that regurgitates last year’s security checklist will not reduce your threat surface. The objective is not to work hard; the objective is to reduce your value at risk, comply and… get your “get out of jail” card.

7. Train the client to fish for himself. This is a win-win for the auditor and client. Beyond reducing the amount of work onsite, training client staff to be more self-sufficient in the data collection and risk analysis process enables the auditor to better assess the client’s security and risk staff (one of the requirements of a security audit) and improves the quality of the data collected, since client employees are closer to actual vulnerabilities and non-compliance areas than any auditor.

As I learned with security audits at telecom service providers and credit card issuers, the customer service teams know where the bodies are buried, not a wet-behind-the-ears auditor from KPMG.

8. Follow up on incomplete or unsatisfactory information. After a data security breach, there will be litigation. During litigation, you can always find expert testimony that agrees with your interpretation of the information, but the problem is not interpreting the data – it is acting on unusual or missing data. If your ears start twitching, don’t ignore your instincts. Start unraveling the evidence.

9. Document the work you do.  Plan the audit and document the process.  If there is a peer review, you will have the documentation showing the procedures that were done.  Documentation will help you improve the next audit.

10. Spend some time evaluating your client/auditor. At the end of the engagement, take a few minutes, interview your auditor/client and ask performance-review kinds of questions: What do you think your strengths are? What are your weaknesses? What was successful in this audit? What do you consider a failure? How would you grade yourself on a scale of 1 to 10?

Perhaps the biggest mistake we all make is not carefully evaluating the potential we have to meet our goals as audit, risk and security professionals.

A post-audit performance review will help us do it better next time.


Medical device security trends

Hot spots for medical device software security

I think that 2011 is going to be an exciting year for medical device security as the FDA gets more involved in the approval and clearance process with software-intensive medical device vendors. Considering how much data is exchanged between medical devices and customer service centers/care givers/primary clinical care teams and how vulnerable this data really is, there is a huge amount of work to be done to ensure patient safety, patient privacy and delivery of the best medical devices to patients and their care givers.

On top of a wave of new mobile devices and more compliance, some serious change is in the wings in Web services as well.

The Web application execution model is going to go through an inflection point in the next two years, transitioning from stateless HTTP, heterogeneous stacks on clients and servers, and message passing in the user interface (HTTP query strings) to WebSocket and HTML5, with the application running natively on the endpoint appliance rather than via a browser communicating with a Web server.
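As a rough sketch of what that inflection looks like on the server side (assuming a recent version of the third-party Python websockets package; the port and message format are invented for illustration), the connection is persistent and stateful rather than a sequence of stateless requests:

    import asyncio
    import websockets  # third-party package: pip install websockets

    async def handler(ws):
        # One long-lived connection per client; the server can hold
        # per-client state for the life of the socket instead of passing
        # it back and forth in query strings on every request.
        count = 0
        async for message in ws:
            count += 1
            await ws.send("message %d: %s" % (count, message))

    async def main():
        async with websockets.serve(handler, "localhost", 8765):
            await asyncio.Future()  # run until cancelled

    if __name__ == "__main__":
        asyncio.run(main())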

That’s why I believe we are in for interesting times.

Drivers
There are 4 key drivers for improving the software security of medical devices, some exogenous, like security, others product-oriented, like ease of use and speed of operation. Note that end-user concerns for data security don’t seem to be a real market driver.

  1. Medical device quality (robustness, reliability, usability, ease of installation, speed of user interaction)
  2. Medical device safety (will the device kill the patient if the software fails, or be a contributing factor to damaging the patient?)
  3. Medical device availability (will the device become unavailable to the user because of software bugs or security vulnerabilities that enable denial of service attacks?)
  4. Patient privacy (HIPAA – aka data security: does the device store ePHI, and can this ePHI be disclosed as a result of malicious attacks on the device by insiders and hackers?)

Against the backdrop of these 4 drivers, I see 4 key verticals: embedded devices, mobile applications, implanted devices and Web applications.

Verticals

Embedded devices (Device connected to patient)

  1. Operating systems, Windows vs. Linux
  2. Connectivity and integration into enterprise hospital networks: guidelines?
  3. Hardening the application versus bolting on security with anti-virus and network segmentation

Medical applications on mobile consumer devices (Device held in patient hand)

  1. iPhone and Android – for example, Epocrates for Android
  2. Software vulnerabilities that might endanger patient health
  3. Are the Apple App Store and Android Market a back door for medical device software with vulnerabilities?
  4. Application Protocols/message passing methods
  5. Use of secure tokens for data exchange
  6. Use of distributed databases like CouchDB to store synchronized data at a head-end data provider and on the mobile device (see the replication sketch after this list). The vulnerability here is primarily patient privacy, since a distributed setup like this probably increases total system reliability rather than decreasing it. For the sake of discussion, CouchDB is already installed on 10 million devices worldwide, and it is a given that data will be pushed out and stored at the endpoint handheld application.
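For the CouchDB case, synchronization is configured through CouchDB’s standard HTTP replication API. A minimal sketch (the server URLs and database names are invented for illustration):

    import json
    import urllib.request

    # Hypothetical head-end and handheld CouchDB instances.
    replication = {
        "source": "http://headend.example.com:5984/patients",
        "target": "http://device.example.com:5984/patients",
        "continuous": True,   # keep the device copy synchronized
    }

    req = urllib.request.Request(
        "http://headend.example.com:5984/_replicate",
        data=json.dumps(replication).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        print(resp.read().decode())

Once that one request succeeds, ePHI lives on the handheld endpoint, which is exactly why the exposure here is patient privacy rather than reliability.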

Implanted devices (Device inside patient)

  1. For example, ICDs (implanted cardiac defibrillators)
  2. Software bugs that result in vulnerabilities that might endanger patient health
  3. Design flaws (software, hardware, software+hardware) that might endanger patient health
  4. Vulnerability to denial of service attacks and remote control attacks when the ICD is connected for remote programming using GSM connectivity

Web applications  (Patient interacting with remote Web application using a browser)

  1. Software vulnerabilities that might endanger patient health because of a wrong diagnosis
  2. Application Protocols/message passing methods
  3. Use of secure tokens for data exchange
  4. Use of cloud computing as a service delivery model

In addition, there are several “horizontal” areas of concern where I believe the FDA may be involved or getting involved:

  1. Software security assessment standards
  2. Penetration testing
  3. Security audit
  4. Security metrics
  5. UI standards
  6. Message passing standards between remote processes