Tag Archives: data breach

Free risk assessment of your web site

With all the news about credit card breaches, there are probably a lot of people scurrying about trying to figure out the cheapest and fastest way to reduce the risk of some Saudi hacker stealing credit cards or mounting a DDOS attack on their web site.

I have written here, here and here about how to reduce the risk of a data breach of a web site.

Not to rain on the media party, but the actual cost to an online marketer of a hacker breaching or defacing the web site could be very low: card-holders are covered by the credit card issuers, and as long as the online commerce site continues operation, a temporary revenue dip might be offset by the additional visits generated by the publicity.

Then again, the cost of a data breach to your operation could be very high, especially if you scrimp on security.

So – what is the right answer?

The right answer is the right security for your web site at the right cost to your pocket, not what Symantec says or what Microsoft says but what your risk assessment says.

In order to implement the most cost-effective security for your web site, you need to do a risk assessment that takes into consideration the value of your assets, the probability of attacks, the current vulnerabilities of your web site and operation (don’t forget that trusted insiders may be the most significant vulnerability in your operation) and the possible countermeasures, including the cost of those countermeasures.
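To make the arithmetic concrete, here is a minimal sketch in Python of the kind of calculation a risk assessment boils down to: annualized loss expectancy per threat scenario versus the annual cost of a candidate countermeasure. The threat names, asset values and costs below are hypothetical examples for illustration only; this is not the PTA methodology itself.

```python
# Minimal risk assessment sketch: annualized loss expectancy (ALE) per threat,
# compared with the annual cost of a candidate countermeasure.
# All names and numbers below are hypothetical examples, not recommendations.

threats = [
    # (name, asset value $, exposure factor 0..1, attacks/year, countermeasure $/yr, risk reduction 0..1)
    ("SQL injection steals card data", 200_000, 0.8, 0.5, 15_000, 0.9),
    ("Insider leaks customer list",    100_000, 0.5, 0.2,  8_000, 0.6),
    ("DDoS takes the site offline",     20_000, 1.0, 1.0, 12_000, 0.7),
]

for name, value, exposure, rate, cm_cost, reduction in threats:
    ale_before = value * exposure * rate       # expected annual loss with no countermeasure
    ale_after = ale_before * (1 - reduction)   # residual risk with the countermeasure in place
    net_benefit = ale_before - ale_after - cm_cost
    verdict = "worth it" if net_benefit > 0 else "not worth it"
    print(f"{name}: ALE ${ale_before:,.0f} -> ${ale_after:,.0f}, "
          f"countermeasure ${cm_cost:,}/yr, net ${net_benefit:,.0f} ({verdict})")
```

The point is not the precision of the numbers – it is that the countermeasure decision falls out of the comparison, not out of a vendor brochure.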

Sounds complex, right?

Actually – performing a threat analysis of  your web site can be a fairly straightforward exercise using the free risk assessment software provided by PTA Technologies.

You can download the free risk assessment software here and start improving your security today.

Any questions – feel free to reach out to the professional software security consultants in Israel at Software Associates.

Tell your friends and colleagues about us. Thanks!

Insecurity by compliance

If a little compliance creates a false sense of security, then a lot of compliance regulation creates an atmosphere of feeling secure while most businesses and Web services are in fact very insecure.

Is a free market democracy doomed to suffer from privacy breaches – by definition?

My father, a retired PhD in system science from UCLA, worked for many years in the defense industry in Israel and California. At age 89 he is sharp, curious and wired, with an iPad, and more connected and easily accessible on the Net than most people are on their phones.

He sent me this item, which turned out to be yet another piece of Internet spam and urban legend that has apparently been circulating the Net for over 10 years and has resurfaced just in time for the US Presidential elections.

A democracy is always temporary in nature; it simply cannot exist as a permanent form of government… The average age of the world’s greatest civilizations from the beginning of history has been about 200 years. During those 200 years, these nations always progressed through the following sequence:

From bondage to spiritual faith;
From spiritual faith to great courage;
From courage to liberty;
From liberty to abundance;
From abundance to complacency;
From complacency to apathy;
From apathy to dependence;
From dependence back into bondage

I told my Dad that it looks and smells like spam. A quick read shows that it is a generalization from a sample of one. The Roman Empire lasted about 500 years. The Ottoman Empire lasted over 700 years. The British Empire lasted about 200 years, from 1783 to 1997 (the handover of Hong Kong). The Russian Empire lasted about 200 years and the Soviets lasted less than 80. The Byzantine Empire lasted over 1,000 years, and so on. See http://listverse.com/2010/06/22/top-10-greatest-empires-in-history/.

Rumors of the downfall of American democracy are premature, even though the US is more of a service economy and less of a manufacturing economy today than it was 200 years ago.

The US has shifted over the past 40 years from manufacturing and technology innovation to technology innovation, retail, outsourcing and financial services. An obvious example is Apple, with most of its manufacturing jobs outside the US, a net worth comparable to that of a not-so-small country and perhaps the most outstanding consumer technology innovator in the world. Another, and more significant, example is Intel, one of the world’s technology leaders with a global operation from Santa Clara to Penang to China to Haifa and Jerusalem. World class companies like Intel and Apple are a tribute to US strengths and vitality, not weaknesses. In comparison, excluding Germany, Poland and a handful of other European countries, the EU is on the edge of bankruptcy.

In this period of time, has the US improved its information security in the face of rapidly increasing connectivity, mobile devices and apps, and emerging threats such as APT (advanced persistent threats)?

Apparently not.

In the sphere of privacy and information security, the US leads in data security breaches while the EU leads in data security and privacy. The EU has strong, uniform data security regulation, whereas the US has a quilt-work of hundreds of privacy and security directives where each government agency has its own system for data security compliance and each state has its own legislation (albeit generally modeled after California) for privacy compliance.

The sheer volume and fragmented state of US data security and privacy regulation is practically a guarantee that most of the regulation will not be properly enforced.

On the other hand, the unified nature of EU data security directives makes it easier to enforce since everyone is on the same page.

We would argue that a free market, American-style economy results in more technology innovation and economic vitality, but also creates a chaotic regulatory environment where the breach of 300 million US credit cards in less than 10 years is an accepted norm. The increase in compliance regulation by the Obama administration does not impress me as a positive step in improving security.

As my colleague, John P. Pironti, president of risk and information security consulting firm IP Architects, said in an interview:

The number-one thing that scares me isn’t the latest attack, or the smartest guy in the street, it’s security by compliance, for example with PCI DSS 2.0

Security by compliance, he said, doesn’t do a company any favors, especially because attackers can reverse-engineer the minimum security requirements dictated by a standard to look for holes in a company’s defense.

In that case, if a little compliance creates a false sense of security, then a lot of compliance regulation will create an atmosphere of feeling secure while most businesses and Web services are in fact very insecure.

Tell your friends and colleagues about us. Thanks!

The root cause of credit card data breaches in Israel

In my previous post – “The Israeli credit card breach” – I noted that there are 5 fundamental reasons why credit cards are stolen in Israel. None have to do with terror; 4 reasons are cultural and the 5th is everyone’s problem: confusing compliance with security.

After reading the excellent article  by Sarah Leibowitz-Dar in the Maariv weekend edition, I realized that there is 1 constraint in Israel for improving data security:

Boaz Gutman, founder of the computer crimes unit of the Israel Police:

“There are good computer investigators in the police today who know how to read and write English.”

Gutman, a former Israeli police officer who started the computer crimes unit, adds: “If we had 30 instead of 20, we would be able to handle the case load.”

That one constraint for improving data security in Israel and preventing credit card breaches is quite simply that most Israelis, including members of Knesset, the Police and the Army, do not understand English.

English, after all, is not Israelis’ native tongue. Israelis use the Hebrew interfaces on their cell phones, use the Hebrew interface in Microsoft Office and send messages to each other on Facebook in Hebrew.

If Israelis spoke English fluently or at least understood English fluently they would be aware that there is a whole wide world out there where credit cards are stolen and Web sites need to be protected.

But no, we are like a small group of Jews living in a Russian shtetl and we do not know that there is an America out there.

Here we have Ms. Leibowitz and a bunch of  other Israeli journalists getting worked up over a fairly elementary hacking event resulting in the leakage of 14,000 credit cards from Israeli  Web sites.

If they read English, they would know that in the past 6 years over 300 million credit cards have leaked in America.

In other words, your credit card is already out there. And life just goes on.

Tell your friends and colleagues about us. Thanks!

How to reduce risk of a data breach

Historical data in log files has little intrinsic value in the here-and-now process of event response and remediation, and compliance check lists have little direct value in protecting customers.

Software Associates specializes in helping medical device and healthcare vendors achieve HIPAA compliance and improve the data and software security of their products in hospital and mobile environments.

The first question any customer asks us regarding HIPAA compliance is how little he can spend. Not how much he should spend. This means we need simple and practical strategies to reduce the risk of data breaches.

There are 2 simple strategies to reduce the risk of data breach, one is technical, one is management:

  1. Use real-time detection of security events to directly protect your customers
  2. Build your security portfolio around specific threat scenarios (e.g. a malicious employee stealing IP, a business partner obtaining access to confidential commercial information, a software update exposing PHI, etc.) and use the threat scenarios to drive your service and product acquisition process.

Use real-time detection to directly protect your customers

Systems like ERM, SIM and Enterprise information protection are enterprise software applications that serve the back-office business of security delivery; things like log analysis and saving on regulatory documentation. Most of these systems excel at gathering and searching large volumes of data while providing little evidence as to the value of the data or feedback into improving the effectiveness of the current security portfolio.

Enterprise IT security capabilities do not have  a direct relationship with improving customer security and privacy even if they do make the security management process more effective.

This is not a technology challenge but a conceptual challenge: it is impossible to achieve a meaningful machine analysis of security event data in order to improve customer security and privacy when the data was uncertain to begin with, and was not collected and validated using standardized evidence-based methods.

Instead of log analysis, we recommend real-time detection of events. Historical data in log files has little intrinsic value in the here-and-now process of event response and remediation.

  1. Use DLP (data loss prevention) and monitor key digital assets such as credit cards and PHI for unauthorized outbound transfer. In plain language – if you detect credit cards or PHI in plain text traversing your network perimeter or removable devices, then you have just detected a data breach in real time, far cheaper and faster than combing through your log files after discovering 3 months later that a Saudi hacker stole 14,000 credit cards from an unpatched server (see the sketch after this list).
  2. Use your customers as early warning sensors for exploits. Provide a human 24×7 hotline that answers on the 3rd ring for any customer who thinks they have been phished or had their credit card or medical data breached. Don’t put this service in the general message queue and never close the service. Most security breaches become known to a customer when they are not at work.
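To illustrate the first point, here is a minimal sketch of what real-time detection of credit card numbers in outbound text might look like: a candidate regular expression plus a Luhn checksum over whatever text your perimeter tap hands you. The payload and function names are hypothetical, and a production DLP product obviously does far more than this.

```python
import re

# Rough pattern for 13-19 digit sequences with optional spaces/hyphens
CARD_CANDIDATE = re.compile(r"\b(?:\d[ -]?){13,19}\b")

def luhn_ok(digits: str) -> bool:
    """Return True if the digit string passes the Luhn checksum used by card numbers."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_card_numbers(text: str) -> list[str]:
    """Scan outbound text for strings that look like valid card numbers."""
    hits = []
    for match in CARD_CANDIDATE.finditer(text):
        digits = re.sub(r"[ -]", "", match.group())
        if 13 <= len(digits) <= 19 and luhn_ok(digits):
            hits.append(digits)
    return hits

# Hypothetical outbound payload captured at the network perimeter
payload = "order ref 12345, card 4111 1111 1111 1111, exp 12/26"
if find_card_numbers(payload):
    print("ALERT: possible card number leaving the network in plain text")
```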

Build your security portfolio around specific threat scenarios

Building your security portfolio around most likely threat scenarios makes sense.

Nonetheless, current best practices are built around compliance checklists (PCI DSS 2.0, HIPAA security rule, NIST 800 etc…) instead of most likely threat scenarios.

PCI DSS 2.0 has an obsessive preoccupation with anti-virus. It does not matter if you have a 16 quad-core Linux database server that is not attached to the Internet, with no removable devices and no Windows connectivity. PCI DSS 2.0 wants you to install ClamAV and open the server up to the Internet for the daily anti-virus signature updates. This is an example of a compliance control item that is not rooted in a probable threat scenario.

When we audit a customer for HIPAA compliance or perform a software security assessment of an innovative medical device, we think in terms of “threat scenarios”, and the result of that thinking manifests itself in planning, penetration testing, security countermeasures, and follow-up for compliance.

In current regulatory compliance based systems like PCI DSS or HIPAA, when an auditor records an encounter with the customer, he records the planning, penetration testing, controls, and follow-up, not under a threat scenario, but under a control item (like access control). The next auditor that reviews the  compliance posture of the business  needs to read about the planning, testing, controls, and follow-up and then reverse-engineer the process to arrive at which threats are exploiting which vulnerabilities.

Other actors such as government agencies (DHS for example) and security researchers go through the same process. They all have their own methods of churning through the planning, test results, controls, and follow-up, to reverse-engineer the data in order to arrive at which threats are exploiting which vulnerabilities.
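To make the contrast concrete, here is a minimal sketch of a threat-scenario-centric record, where the controls hang off the scenario instead of the scenario being reverse-engineered from the controls. The field names and numbers are hypothetical; this is not the schema of PTA or any specific compliance tool.

```python
from dataclasses import dataclass, field

@dataclass
class Countermeasure:
    name: str
    annual_cost: float        # what it costs to operate per year
    risk_reduction: float     # fraction of the scenario's risk it mitigates, 0..1

@dataclass
class ThreatScenario:
    """A threat scenario ties an attacker, a vulnerability and an asset together,
    with the controls attached to the scenario - no reverse-engineering needed."""
    threat: str               # who or what attacks
    vulnerability: str        # what they exploit
    asset: str                # what is damaged or stolen
    expected_loss: float      # estimated damage per year if unmitigated
    countermeasures: list[Countermeasure] = field(default_factory=list)
    status: str = "open"      # e.g. "open", "mitigated", "archived"

    def residual_loss(self) -> float:
        remaining = self.expected_loss
        for cm in self.countermeasures:
            remaining *= (1 - cm.risk_reduction)
        return remaining

# Hypothetical example record
scenario = ThreatScenario(
    threat="malicious insider",
    vulnerability="unmonitored removable media",
    asset="PHI database",
    expected_loss=250_000,
    countermeasures=[Countermeasure("DLP on endpoints", 20_000, 0.7)],
)
print(f"Residual annual loss: ${scenario.residual_loss():,.0f}")
```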

This ongoing process of “reverse-engineering” is the root cause for a series of additional problems:

  • Lack of overview of the security threats and vulnerabilities that really count
  • Insufficient connection to best practice security controls, and no indication of which controls to follow or which have been followed
  • No connection between controls and security events, except circumstantial
  • No ability to detect and warn for negative interactions between countermeasures (for example – configuring a firewall that blocks Internet access but also blocks operating system updates and enables malicious insiders or outsiders to back-door into the systems from inside the network and compromise  firewalled services).
  • No archiving or demoting of less important and solved threat scenarios (since the data models are control based)
  • Lack of overview of security status of a particular business, only a series of historical observations disclosed or not disclosed.  Is Bank of America getting better at data security or worse?
  • An excess of event data that cannot possibly be read by the security and risk analyst at every encounter
  • Confidentiality and privacy borders are hard to define, since the borders are defined in terms of networks, systems and applications, not confidentiality and privacy.
Tell your friends and colleagues about us. Thanks!

What is the best way for a business to prevent data breaches?

Let’s start with the short version of the answer – use your common sense before reading vendor collateral. P.T. Barnum supposedly said “There is a sucker born every minute” during the famous Cardiff Giant hoax (although some say it was actually his competitor, Mr. George Hull).

Kachina Dunn wrote about how Microsoft got security right: No Joke, Microsoft Got This Security Question Right.

The gist of the post is that the Microsoft UAC (User Account Control) feature in Windows Vista was deliberately designed to annoy users and increase security awareness, which is a good thing. The post got me thinking about the role of security vendors in mitigating data breach events.

Ms. Dunn quotes Carl Weinschenk from an online interview with a security vendor (Mr. Weinschenk is a professional journalist colleague of Ms. Dunn on the staff of IT Business Edge):

“Positive Networks surveyed IT security pros at small companies and enterprises, 20 percent had experienced a personal data breach — and 20 percent had also experienced a data breach in their companies. The consensus among those IT pros was that stronger security, specifically two-factor, was necessary but not present within their IT departments. And the breaches just keep happening.”

Data breaches just keep on happening

Of course data breaches keep on happening because data vulnerabilities continue to be unmitigated.

Most security breaches are attacks by insiders, and most attackers are trusted people who exploit software system vulnerabilities (bugs, weak passwords, default configurations etc.). Neither security awareness nor UAC is an effective security countermeasure for trusted insider attacks that exploit system vulnerabilities – premeditated or not.

Two-factor authentication is necessary

As a matter of fact, two-factor authentication is not an effective security countermeasure for internally launched attacks on data performed by authenticated users (employees, outsourcing contractors and authorized agents of the company). It is understandable that vendors want to promote their products – Positive Networks and RSA are both vendors of two-factor authentication products and both have vested interests in attempting to link their products to customer data security breach pain.

Unfortunately for the rest of us, the economics of the current security product market are inverse to the needs of customer organizations. Security vendors like Positive Networks and RSA have no economic incentive to reduce data breaches and mitigate vulnerabilities, since that would reduce their product and service revenue.

Actually, in real life, the best marketing strategy for companies like RSA, Positive Networks and Symantec is to stimulate market demand with threat indicators and place the burden of proving the effectiveness of their security countermeasures on the end-user customers. If the customers don’t buy – it’s their fault; and if they do buy but remain vulnerable, we can always blame overseas hackers.

“White listing applications is an effective tactic”

At this year’s RSA conference, Microsoft officials spoke of layering “old-school (but effective) offensive tactics like white-listing applications”.  White-listing a vulnerable application doesn’t mitigate the risk of an authorized user using the application to steal data or abuse access rights.

One would certainly white list the Oracle Discoverer application, since Oracle is a trusted software vendor. Users with privileges can use Oracle Discoverer to access the database and steal data. And since Oracle Discoverer generally transmits the password in clear text on the network, we have an additional vulnerability in the application.

Application/database firewalls like Imperva do not have the technical capability to detect or mitigate this exploit and therefore are not an effective security countermeasure.

None of the vendor marketing collateral and FUD riding the wave of compliance and Facebook, nor the IT security franchises built around standards like PCI DSS, are a replacement for a practical threat analysis of your business.

Your business, any business – be it small, medium or global enterprise – needs to perform a practical threat analysis of its vulnerabilities (human, technical and software) and the threats to its most sensitive assets, and then choose the right, cost-effective countermeasures within its economic constraints.

Tell your friends and colleagues about us. Thanks!

Message queuing insecurity

I met with Maryellen Ariel Evans last week. She was in Israel on vacation and we had coffee on the Bat Yam boardwalk.   Maryellen is a serial entrepreneur; her latest venture is a security product for IBM Websphere MQ Series. She’s passionate about message queue security and I confess to buying into the vision.

She has correctly put her finger on a huge, unmitigated threat surface: transactions that are transported inside the business and between business units using message queuing technology. Message queuing is a cornerstone of B2B commerce, and in a highly interconnected system there are lots of entry points, all using similar or the same technology – MQ Series or the TIB.

While organizations are busy optimizing their firewalls and load balancers, attackers can tap in, steal the data on the message bus and use it as a springboard to launch new attacks. It is conceivable that well placed attacks on message queues in an intermediary player (for example a payment clearing house) could not only result in the inability of the processor to clear transactions but also serve as an entry point into upstream and downstream systems. A highly connected system of networked message queues is a convenient and vulnerable entry point from which to launch attacks; these attacks can and do cascade.

If these attacks cascade, the entire financial system could crash.

Although most customers are still fixated on perimeter security, I believe that Maryellen has a powerful value proposition for message queuing customers in the supply chains of key industries that rely on message interchange: banking, credit cards, health care and energy.

 

 

Tell your friends and colleagues about us. Thanks!

Microsoft gives source code to Chinese government

Sold down the river. A phrase meaning to be betrayed by another, originating during the slave trade in America: selling a slave “down the river” would uproot the slave from their spouses, children, parents, siblings and friends. For example:

“I can’t believe that Microsoft gave their source code to the Chinese in a pathetic attempt to get them to buy more MS Office licenses.  Boy-were we sold down the river!”

In the euphemistically worded press release Microsoft and China Announce Government Security Program Agreement, we learn that China joins over 30 other countries as recipients of  access to Windows operating system source code. I bet all that yummy, ecumenical, international  cooperation gave someone at the BSA warm and fuzzy feelings. Either that or Ballmer told them to keep quiet.

Hold on.  That announcement was in 2003.

Fast forward to 2011. Searching on Google for “chinese attacks on US” yields 57 million hits. After the RSA breach, China was linked to attacks on US defense contractors, and a US Congresswoman condemned the attack on change.org.

In 2011, Steve Ballmer said that Microsoft’s revenue in China is 5 percent of what it should be because of pirated software. See the article Microsoft’s Chinese revenue 5% of what it could be.

The BSA (Business Software Alliance), an industry lobby group, has some interesting figures to fuel Ballmer’s comments:

  • Four of five software programs installed on PCs are pirated
  • This amounts to “commercial theft” of close to $8 billion a year
  • Piracy in 2010 cost the software industry $59 billion in revenue

I would not take BSA numbers at face value. The BSA estimates are guesses multiplied several times without providing any independent empirical data. They start off by assuming that each unit of copied software represents a direct loss of sale for Microsoft, a false assertion.

If it were true, then the demand for software would be independent of price and perfectly inelastic.

A drop in price usually results in an increase in the quantity demanded by consumers. That’s called price elasticity of demand. The demand for a product becomes inelastic when the demand doesn’t change with price. A product with no competing alternative is generally inelastic. Demand for a unique antibiotic, for example, is highly inelastic. A patient will pay any price to buy the only drug that will kill their infection.
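A back-of-the-envelope sketch of the elasticity arithmetic, with made-up numbers purely for illustration:

```python
# Price elasticity of demand = (% change in quantity) / (% change in price).
# Hypothetical example: price drops 20%, unit sales rise 50%.
old_price, new_price = 100.0, 80.0
old_qty, new_qty = 1_000, 1_500

pct_price_change = (new_price - old_price) / old_price   # -0.20
pct_qty_change = (new_qty - old_qty) / old_qty            # +0.50
elasticity = pct_qty_change / pct_price_change            # -2.5 -> elastic demand

print(f"elasticity = {elasticity:.2f}")
# Perfectly inelastic demand would mean elasticity == 0: quantity never changes
# with price - which is what the "every pirated copy is a lost sale" claim assumes.
```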

If software demand were perfectly inelastic, then everyone would pay in order to avoid the BSA enforcement tax and the rate of software piracy would be zero. Since the piracy rate is non-zero, the original assertion is false. (Argument courtesy of the Wikipedia article on price elasticity of demand.)

See my essay on the economics of software piracy.

Back to Microsoft and their highly ineffective strategy to sell more licenses in China.

Clearly, Microsoft’s strategy to induce the Chinese to buy more Microsoft software licenses by sharing Windows source code has not gotten any traction in the past 8 years.

Au contraire, from a software engineering perspective, it is a fair assumption that having access to Windows source code has made it easier for Chinese cyber attackers to write attack code to penetrate and compromise US defense contractors, critical infrastructure and activist groups like change.org – who all still use  highly vulnerable Windows monoculture products.

This is where we need to explain, to the people who drink the Microsoft Kool-Aid, the difference between “controlled access” to source code granted to countries that are potential enemies, and the notion of open source – where everyone and anyone can look at the source code, and lots of eyeballs help the developers make the operating system more robust.

From a security perspective, the number of eyeballs looking at Linux make it more secure than Windows.

But more significantly, from a commercial perspective, note how abortive Microsoft’s strategy really is in this case study from the Harvard Business School on Red Flag Software.

In 2005, just five years after its formal launch, Beijing-based Red Flag Software was the world’s second-largest distributor of the Linux operating system and was expecting its first annual profit. On a unit basis, Red Flag led the world in desktops (PCs) shipped with Linux and was No. 4 in installed servers. On a revenue basis, Red Flag was fourth overall. Within China, Red Flag held just over half of the Linux market and ran key applications for the postal system, large state-owned enterprises, and more than a million PCs. The Chinese government supported Linux as an alternative to Microsoft’s Windows operating system to avoid royalty payments to foreign firms and dependence on foreign technology.

Since the Chinese government has been open about its support of Linux for years, it certainly makes the release of Windows source code look like a very bad idea. I would hope that this does not go unnoticed in the US Congress.

Tell your friends and colleagues about us. Thanks!

HIPAA and cloud security

In almost every software security assessment that we do of a medical device, the question of HIPAA compliance and data security arises. The conversation often starts with a client asking the question – “I hear that Amazon AWS is HIPAA compliant. Isn’t that all I need?”

Well – not exactly. Actually, probably not.

As Craig Balding pointed out in his blog post Is Amazon AWS Really HIPAA Compliant Today?, there are some basic issues with AWS itself.

There is no customer accessible AWS API call audit log
In other words, you have no way to know if, when and from where (source IP) your AWS key was used to make API calls that may affect the security posture of your AWS resources (an exception is S3, but only if you turn on logging (off by default)).
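For example, turning on S3 access logging is something you have to do explicitly. Here is a minimal sketch using the boto3 SDK (which post-dates the AWS issues described here); the bucket names are hypothetical.

```python
import boto3

s3 = boto3.client("s3")

# Check whether access logging is already enabled for the bucket holding PHI
status = s3.get_bucket_logging(Bucket="example-phi-bucket")
if "LoggingEnabled" not in status:
    # Off by default - turn it on, writing access logs to a separate bucket.
    # (The target bucket must also grant S3 log delivery permissions - omitted here.)
    s3.put_bucket_logging(
        Bucket="example-phi-bucket",
        BucketLoggingStatus={
            "LoggingEnabled": {
                "TargetBucket": "example-audit-logs",
                "TargetPrefix": "phi-bucket-access/",
            }
        },
    )
    print("S3 access logging enabled")
```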

There is no way to restrict the source IP address from which the AWS API key can be used.
The AWS API interface can be used from any source IP at any time (and, as above, you have no audit trail for EC2 API calls). This is equivalent to exposing your compute and storage management API to the entire planet.

Each AWS account is limited to a single key – unauthorized disclosure of the key results in a total breakdown of security.

It only gets worse. Web services and storage are just a small part of data security.

Even if Amazon AWS were perfect in terms of its data security countermeasures – there would still be plenty of opportunity for a data breach of PHI.

There are multiple attack vectors from the perspective of HIPAA compliance and PHI data security. The following schematic (inspired by my colleague Michel Godet) gives you an idea of how an attacker can use any combination of no less than 15 attack vectors to abuse and steal PHI:

There are potential data security vulnerabilities in the client layer, transmission layer, platform layer (Operating system) and cloud services (Amazon AWS in our example).

Note that the vulnerabilities that enable a PHI data breach can arise not only inside any single layer but, in particular, in the system interfaces between layers.

Let’s take a specific example.

Consider a remote medical diagnostic service that collects information and transmits it over secure channels (HTTPS for the sake of argument) to a centralized facility for processing and diagnosis. The entire transmission stream can be secure, but if the processing and diagnosis facility uses Microsoft IIS as an interface, it is possible to attack the IIS Web server, create denial of service and exploit IIS7 and Windows operating system vulnerabilities in order to gain access to the machine itself and the data in motion, and possibly compromise the internal network.

A discussion of HIPAA compliance needs to include a comprehensive threat analysis of the entire supply chain of data processing and not just limit itself to the cloud services that store electronic medical records.

For further reading, see the below resources on HIPAA compliance with Amazon Web services and work that Software Associates has done on threat modeling.

 

Tell your friends and colleagues about us. Thanks!

The Microsoft monoculture as a threat to national security

This is probably a topic for a much longer essay, but after two design reviews this week with medical device vendor clients on software security issues, I decided to put some thoughts in a blog post.

Almost 8 years ago, Dan Geer, Rebecca Bace, Peter Gutmann, Perry Metzger, Charles Pfleeger, John Quarterman and Bruce Schneier wrote a report titled CyberInsecurity: The Cost of Monopoly – How the Dominance of Microsoft’s Products Poses a Risk to Security.

The report from a stellar cast of information security experts and thought leaders shows that the complexity and dominance of Microsoft’s Windows operating system in US Federal agencies makes the US government prone to cyber attack – a national security threat.

This was in September 2003.

Now fast forward to a congressional hearing on May 25, 2011 by the Committee on Oversight and Government Reform on “Cybersecurity: Assessing the Immediate Threat to the United States”. Listen to the YouTube video – you will note the concern about potential damage to citizens from viruses infecting government PCs and breaching personal information.

So the US government is still running Microsoft Windows and is still vulnerable to data security breaches. It seems that the Microsoft lobbying machine has been “successful” over the past 8 years on the Beltway, if you call threats to national security a success.

One of the canards commonly used by Microsoft monoculture groupies is that all operating systems have vulnerabilities and Windows is no better nor worse than Linux or OS X. If “you” patch properly, everything will be hunky-dory. There are a number of reasons why this is fallacious; to quote the report:

  • Microsoft is a near-monopoly controlling the overwhelming majority of systems. This means that the attack surface is big, on a US national  level.
  • Microsoft has a high level of user-level lock-in; there are strong disincentives to switching operating systems.
  • This inability of consumers to find alternatives to Microsoft products is exacerbated by tight integration between applications and operating systems, and that integration is a long-standing practice.
  • Microsoft’s operating systems are notable for their incredible complexity and complexity is the first enemy of security.
  • The near universal deployment of Microsoft operating systems is highly conducive to cascade failure; these cascades have already been shown to disable critical infrastructure.
  • After a threshold of complexity is exceeded, fixing one flaw will tend to create new flaws; Microsoft has crossed that threshold.
  • Even non-Microsoft systems can and do suffer when Microsoft systems are infected.
  • Security has become a strategic concern at Microsoft but security must not be permitted to become a tool of further monopolization.

As a  medical device security and compliance expert, I am deeply concerned about medical devices that use Windows. If Windows is a threat to national security because it’s used in Federal government offices, Windows is really a bad idea when used in medical devices in hospitals.

I’m concerned about the devices themselves (the FDA also classifies Web applications as medical devices if the indications for use are medical) and about the information management systems: the customer support, data collection and analysis management applications that are ubiquitous to networked medical devices.

There are two reasons why the FDA should outlaw Windows in medical devices and their information management systems.

Reason number 1 to ban Windows from medical devices is complexity. We know that the first sin of the 7 deadly sins of software development is making the software complex.  Complexity is the enemy of security because with complex software, there are more design flaws, more software defects and more interfaces where vulnerabilities can arise.

Similar to the history of data security breaches of retail systems, the medical device software industry is (or may soon be) facing a steeply increasing curve of data security and patient safety events due to the Microsoft monoculture. We are not in Kansas anymore – not credit cards being breached, but entire hospital networks infected by Microsoft Windows viruses and patient monitoring devices that stop working because they got blue screens of death. Since 300 million credit cards have been breached, it is a reasonable assumption that your card and mine are out there. The damage from your credit card being breached is minimal. But if your child was on a patient monitor that went offline due to a Microsoft Windows virus and a critical condition was not detected in time, it’s the difference between life and death.

The complexity and vulnerabilities of Windows technologies are simply not appropriate in the medical device space when you look at the complexity and weight of the components, the SQL injection vulnerabilities provided courtesy of naive ASP.NET programmers and the ever-present threat of Windows viruses and malware propagated by USB sticks and technician notebooks.
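To make the SQL injection point concrete, here is a minimal sketch of the classic mistake and its fix. It is shown in Python with sqlite3 for brevity, but the same pattern applies to naive ASP.NET data access code; the table and input below are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE patients (id INTEGER, name TEXT, diagnosis TEXT)")
conn.execute("INSERT INTO patients VALUES (1, 'Alice', 'hypertension')")

user_input = "Alice' OR '1'='1"   # hostile input a naive UI passes straight through

# Vulnerable: the query is built by string concatenation, so the attacker's
# quote characters become part of the SQL and the WHERE clause always matches.
vulnerable = f"SELECT * FROM patients WHERE name = '{user_input}'"
print(conn.execute(vulnerable).fetchall())   # leaks every row

# Safe: a parameterized query treats the input strictly as data, never as SQL.
safe = "SELECT * FROM patients WHERE name = ?"
print(conn.execute(safe, (user_input,)).fetchall())   # returns nothing
```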

The Microsoft monoculture breeds a generation of programmers who are scared of the command line, unable to comprehend what happens behind the GUI and lured by the visual beauty of the development tools. When a programmer uses a component and doesn’t know how it works (see Visual Studio) and shleps around a shitload of piping in his project, then the energy goes into implementing a cute GUI instead of thinking about code threats.

This is, on a grander scale, a rerun of Microsoft PowerPoint, where you spend 80% of your time in the application’s GUI instead of thinking about and then simply stating your message.

Reason number 2 to ban Microsoft Windows from medical devices is more subtle and related to systems management. The Microsoft monoculture has bred a particular kind of thinking and system management best practices based on Windows servers and Windows PCs running in the office. This IT system management strategy assumes that PCs are just personal devices that someone has to patch and that they will eventually get infected and/or breached and/or get a BSOD.

Unlike an office, a hospital is a highly heterogeneous and hostile environment. The system management strategy for network medical devices must be different.

Medical device vendors need to assess their software security with the design objective being a device that runs forever and serves the mission of the doctors and patients.

Medical devices are real-time embedded systems living on a hospital network. They should be fail-safe, not vulnerable to viruses, and should not have to be rebooted every few days.

Yes – it’s a tall order, and a lot of people will have to learn how to write code in embedded Linux.

But, there is no alternative, if we want to prevent the medical device industry from suffering the ignominy of the credit card industry.

 

Tell your friends and colleagues about us. Thanks!

Why Microsoft shops have to worry about security

I am putting together a semester-long, hands-on security training course for a local college. The college that asked me for the program showed me a proposal they got from a professional IT training company for a 120-hour information security course. They were trying to figure out how to decide, so they sent me the competing proposal and, lo and behold, 92 out of the 120 hours are about certifying people for Checkpoint firewalls and Microsoft ISA server. Here is what I told the college:

This course focuses on two Checkpoint courses, CCSA and CCSE, which account for 80 hours out of a total of 120. Then they spend another 12 hours on Microsoft ISA server. The course only spends 8 hours on information security management and 8 hours on application security. From a marketing perspective, the course brochure looks slick. But not more than that.

Because of courses like this, companies have so many data breaches. After the course, the students will know a few buzz words and how to click through the Checkpoint UI, but they won’t understand anything about hacking software.

If you want to understand data security you have to get down into the dirt and roll up your sleeves instead of learning how to click through the Checkpoint user interface. Microsoft system administrators in particular need to understand security and how to think about threat response and mitigation, because their thought processes have been seriously weakened by the Microsoft monoculture. They need to think about network, data security and software security threats and how to tie it all together with a practical threat analysis and information security management approach. They can always train on Checkpoint afterwards.

This reminds me of what Paul Graham writes in his article Beating the averages

The first thing I would do… was look at their job listings… I could tell which companies to worry about and which not to. The more of an IT flavor the job descriptions had, the less dangerous the company was. The safest kind were the ones that wanted Oracle experience. You never had to worry about those. You were also safe if they said they wanted C++ or Java developers. If they wanted Perl or Python programmers, that would be a bit frightening– that’s starting to sound like a company where the technical side, at least, is run by real hackers. If I had ever seen a job posting looking for Lisp hackers, I would have been really worried.

So – if you are a real hacker, look for companies whose security administrators are certified for Microsoft ISA server and you will have nothing to worry about. But if your target’s security administrators are facile with Wireshark, Ratproxy, Fiddler and Metasploit, then you should be really worried.

Tell your friends and colleagues about us. Thanks!