Category Archives: Risk management

[Image: skin-mounted medical devices]

Bionic M2M: Are Skin-mounted M2M devices – the future of eHealth?

In the popular American TV series that aired on ABC in the 1970s, Steve Austin is the “Six Million Dollar Man”, a former astronaut rebuilt with bionic implants. The show and its spinoff, The Bionic Woman (with Lindsay Wagner playing a former tennis player rebuilt with similar bionic parts after a parachuting accident), were hugely successful.

Modern M2M communication has expanded beyond a one-to-one connection and changed into a system of networks that transmits data to personal appliances using wireless data networks.

M2M networks are much more than remote meter reading.

The fastest-growing M2M segment in Germany, with average annual growth of 47 percent, will be consumer electronics, with over 5 million M2M SIM cards. The main growth driver is “tracking and tracing” (research by E-Plus).

What would happen if the personal appliance was part of the person?

Physiological measurement and stimulation techniques that exploit interfaces to the skin have been of interest for over 80 years, beginning in 1929 with electroencephalography from the scalp.

A new class of electronics, based on a transparent, flexible, 50-micron silicon film, laminates onto the skin with conformal contact and adhesion based on van der Waals interactions. See: “Epidermal Electronics”, John Rogers et al., Science, 2011.

This new class of device is mechanically invisible to the user, is accurate compared to traditional electrodes and has RF connectivity. The thin 50-micron film serves as a temporary support for manual mounting of these systems on the skin, in an overall construct that is directly analogous to a temporary transfer tattoo, as can be seen in the above picture.

Film-mounted devices can provide high-quality signals with information on all phases of the heartbeat, EMG (muscle activity) and EEG (brain activity). Using silicon RF diodes, devices can provide short-range RF transmission at 2 GHz. Note the antenna on the device.

After mounting it onto the skin, one can wash away the PVA (polyvinyl alcohol) backing and peel the device back with a pair of tweezers. When completely removed, the system collapses on itself because of its extreme deformability and skin-like physical properties.

Due to their inherently transparent, unguarded, low-cost and mass-deployed nature, epidermal-mounted medical devices invite new threats that are not mitigated by current security and wireless technologies.

Skin-mounted devices might also become attack vectors themselves, allowing a malicious attacker to apply a device to the spine, and deliver low-power stimuli to the spinal cord.

How do we secure epidermal electronics devices on people?

Let’s start with some definitions:

  • Verification means checking whether the device is built/configured for its intended use (for example, measuring EMG activity and communicating the data to an NFC (near field communication) device).
  • Validation means the ability to assess the security state of the device, i.e. whether or not it has been compromised.
  • RIMs (Reference Integrity Measurements) enable vendors and healthcare providers to define the desired target configurations of devices, for example, whether a device is configured for RF communications.

There are 3 key threats when it comes to epidermal electronics:

  1. Physical attacks: reflashing before application to the skin in order to modify intended use.
  2. Compromise of credentials: brute-force attacks as well as malicious cloning of credentials.
  3. Protocol attacks against the device: MITM on first network access, DoS, remote reprogramming.

What are the security countermeasures against these threats?  We can consider a traditional IT security model and a trusted computing model.

Traditional IT security model?

Very large numbers of low-cost, distributed devices render an access-control security model inappropriate. How would a firewall on an epidermal electronics device enforce intended use and manage access-control policies? What kind of policies would you want to manage? How would you enforce installation of the internal firewall during the manufacturing process?

Trusted computing model?

A “trusted computing model” may be considered as an alternative security countermeasure to access control and policy management.

An entity can be “trusted” if it predictably and observably behaves in the expected manner for its intended use. But what does “intended use” mean in the case of epidermal electronics that are used for EKG, EEG and EMG measurements on people?

Can the traditional, layered, trusted computing models used in the telecommunications world be used to effectively secure cheap, low-cost, epidermal electronics devices?

In an M2M trusted computing model there are 3 methods: autonomous validation, remote validation and semi-autonomous validation. We will examine each and try to determine how effective each model is as a security countermeasure against the key threats to epidermal electronics. See: “Security and Trust for M2M Communications” – Inhyok Cha, Yogendra Shah, Andreas U. Schmidt, Andreas Leicher, Mike Meyerstein.

Autonomous validation

This is essentially the trust model used for smart cards, where the result of local verification is true or false.

Autonomous validation does not depend on the patient herself or the healthcare provider. Local verification is assumed to have occurred before the skin-mounted device attempts communication or performs a measurement operation.
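To make “local verification” concrete, here is a minimal sketch (my own illustration, not taken from the Rogers paper or the Cha et al. reference) of what an autonomous check might look like: the device hashes its own firmware image, compares it to a reference integrity measurement provisioned at manufacture, and keeps the pass/fail result entirely to itself.

    import hashlib

    # Hypothetical RIM (reference integrity measurement) provisioned at manufacture.
    EXPECTED_FIRMWARE_SHA256 = "0f3a..."  # placeholder value

    def local_verification(firmware_image: bytes) -> bool:
        """Autonomous validation: the device checks itself and keeps the result local.
        Nothing is reported to the patient or to the healthcare provider."""
        measured = hashlib.sha256(firmware_image).hexdigest()
        return measured == EXPECTED_FIRMWARE_SHA256

    def start_measurement(firmware_image: bytes) -> None:
        # The trust model assumes verification happens *before* any RF communication
        # or measurement operation, and that a failing device really does stop here.
        if not local_verification(firmware_image):
            raise RuntimeError("self-verification failed: refusing to operate")
        # ... proceed to EMG/EEG/ECG measurement and RF transmission ...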

Autonomous validation makes 3 fundamental assumptions – all 3 are wrong in the case of epidermal electronics:

  1. The local verification process is assumed to be perfectly secure since the results are not shared with anyone else, neither the patient nor the healthcare provider.
  2. We assume that the device itself is completely trusted in order to enforce security policies.
  3. We assume that a device failing self-verification cannot deviate from its “intended use”.

Device-based security can be broken, and cheap autonomous skin-mounted devices can be manipulated – probably much more easily than cell phones since, for now at least, they are much simpler. Wait until 2015 when we have dual-core processors on a film.

In addition, autonomous validation does not mitigate partial compromise attacks (for example – the device continues to measure EMG activity but also delivers mild shocks to the spine).

Remote validation

Remote validation has connectivity, scalability and availability issues. It is probably a very bad idea to rely on network availability in order to remotely validate a skin-mounted epidermal electronics device.

In addition to the network and server infrastructure required to support remote validation, there would also be a huge database of RIMs to enable vendors and healthcare providers to define the target configurations of devices.

Run-time verification is meaningless if it is not directly followed by validation, which requires frequent handshaking with central service providers, which in turn increases traffic and creates additional vulnerabilities, such as side-channel attacks.

Remote validation of personally-mounted devices compromises privacy, since the configuration may be virtually unique to a particular person, and interception of validation messages could reveal the identity based on location even without decrypting payloads.

Discrimination by vendors also becomes possible, as manipulation and control of the RIM databases could lock out other applications/vendors.

Semi-Autonomous Validation

Semi-autonomous validation divides verification and enforcement between the device and the healthcare provider.

In semi-autonomous validation, the device verifies itself locally and then sends the results in a network message to the healthcare provider, who can decide whether to notify the user/patient if the device has been compromised or does not match its intended use.

Such a system needs to ensure authentication, integrity, and confidentiality of messages sent from epidermal electronics devices to the healthcare provider.
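As a minimal sketch (my own illustration, not a protocol from the Cha et al. reference), the device could sign its locally computed verification result with a key provisioned by the healthcare provider, giving the provider authenticated, integrity-protected evidence to act on; confidentiality would still require an encrypted transport such as TLS on top of this.

    import hashlib
    import hmac
    import json
    import time

    # Hypothetical per-device key and ID provisioned by the healthcare provider.
    DEVICE_KEY = b"provisioned-shared-secret"
    DEVICE_ID = "epidermal-emg-0042"

    def build_validation_report(verification_passed: bool, measured_hash: str) -> bytes:
        """Semi-autonomous validation: verify locally, send the signed result upstream."""
        report = {
            "device_id": DEVICE_ID,
            "timestamp": int(time.time()),
            "verification_passed": verification_passed,
            "measured_firmware_hash": measured_hash,  # compared against a RIM on the server side
        }
        payload = json.dumps(report, sort_keys=True).encode()
        tag = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
        return json.dumps({"report": report, "hmac": tag}).encode()

    def provider_verify(message: bytes) -> dict:
        """The healthcare provider checks integrity and authenticity before deciding
        whether to notify the patient of a compromised or misconfigured device."""
        envelope = json.loads(message)
        payload = json.dumps(envelope["report"], sort_keys=True).encode()
        expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, envelope["hmac"]):
            raise ValueError("validation report failed integrity check")
        return envelope["report"]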

RIM certificates are a key part of semi-autonomous validation and would be signed by a trusted third party/CA.

Semi-autonomous validation also allows for more granular delegation of control to the device itself or the healthcare provider – depending on the functionality.

Summary

Epidermal electronics devices are probably going to play a big part in the future of healthcare, monitoring vital signs in a simple, cheap and non-invasive way. These are medical devices mounted directly on the skin, used today primarily for measuring vital signs; they are not a Windows PC or an Android smartphone that can be rebooted if there is a problem.

As their computing capabilities develop, current trusted computing/security models will be inadequate for epidermal electronics devices, and attention needs to be devoted as soon as possible to building a security model (probably semi-autonomous) that will mitigate threats from malicious attackers.

References

  1. “Security and Trust for M2M Communications” – Inhyok Cha, Yogendra Shah, Andreas U. Schmidt, Andreas Leicher, Mike Meyerstein
  2. “Epidermal Electronics” – John Rogers et al., Science, 2011

Debugging security

There is an interesting analogy between debugging software and debugging the security of your systems.

As Brian W. Kernighan and Rob Pike wrote in “The Practice of Programming”:

As personal choice, we tend not to use debuggers beyond getting a stack trace or the value of a variable or two. One reason is that it is easy to get lost in details of complicated data structures and control flow; we find stepping through a program less productive than thinking harder and adding output statements and self-checking code at critical places. Clicking over statements takes longer than scanning the output of judiciously-placed displays. It takes less time to decide where to put print statements than to single-step to the critical section of code, even assuming we know where that is. More important, debugging statements stay with the program; debugging sessions are transient.

In programming, it is faster to examine the contents of a couple of variables than to single-step through entire sections of code.

Collecting security logs is key to information security management, not only for understanding what happened and why, but also to prove compliance with regulations such as the HIPAA Security Rule. The business requirement is that security logs should be both relevant and effective:

  1. Relevant content of audit controls: for example, providing a detailed trace of an application whenever it elevates privilege in order to execute a system-level function.
  2. Effective audit reduction and report generation: given the large amount of data that must be analyzed in security logs, it's crucial that critical events are separated from normal traffic and that concise reports can be produced in real time to help understand what happened, why it happened, how it was remediated and how to mitigate similar risks in the future (see the sketch after this list).
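As a sketch of what audit reduction can mean in practice (a simplified illustration, not a product recommendation), a small filter can separate the few critical events, such as privilege elevation, from normal traffic as they arrive, instead of mining terabytes of logs after the fact. The event types below are hypothetical.

    import json
    import sys

    # Hypothetical critical event types; in practice these rules would be derived
    # from your threat scenarios, not hard-coded.
    CRITICAL_EVENT_TYPES = {"privilege_elevation", "phi_export", "failed_admin_login"}

    def critical_events(stream):
        """Yield only the events worth waking an analyst for, as they arrive."""
        for line in stream:
            try:
                event = json.loads(line)
            except json.JSONDecodeError:
                continue  # malformed lines are noise, not evidence
            if event.get("type") in CRITICAL_EVENT_TYPES:
                yield event

    if __name__ == "__main__":
        # Example usage: tail -F app.log | python audit_reduce.py
        for event in critical_events(sys.stdin):
            print(f"ALERT {event.get('timestamp')} {event.get('type')} user={event.get('user')}")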

In security log analysis, it is faster and definitely more effective for a security analyst to examine the contents of a few real-time events than to process gigabytes or terabytes of security logs (the equivalent of stepping through, or placing watchpoints in, sub-modules with hundreds or thousands of lines of code).

When you have to analyze security logs, it is easy to get lost in the details of complicated data and flows of events and find yourself drifting off in all kinds of directions, even as bells go off in the back of your mind telling you that you are chasing ghosts in a futile and time-consuming exercise of investigation and security event debugging.

In order to understand this better, consider another analogy, this time from the world of search engines.

Precision and recall are key to effective security log analysis and effective software debugging.

In pattern recognition and information retrieval, precision is the fraction of retrieved instances that are relevant, while recall is the fraction of relevant instances that are retrieved. Both precision and recall are therefore based on an understanding and measure of relevance. When a program for recognizing the dogs in a scene correctly identifies four of the nine dogs but mistakes three cats for dogs, its precision is 4/7 while its recall is 4/9. When a search engine returns 30 pages, only 20 of which were relevant, while failing to return 40 additional relevant pages, its precision is 20/30 = 2/3 while its recall is 20/60 = 1/3. See Precision and recall on Wikipedia.
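The arithmetic is simple enough to check in a few lines; the numbers below are the dog-recognition and search-engine examples quoted above.

    def precision(relevant_retrieved: int, retrieved: int) -> float:
        return relevant_retrieved / retrieved

    def recall(relevant_retrieved: int, relevant_total: int) -> float:
        return relevant_retrieved / relevant_total

    # Dog recognition: 4 of 9 dogs found, plus 3 cats mistaken for dogs.
    print(precision(4, 4 + 3), recall(4, 9))       # 0.571..., 0.444...

    # Search engine: 30 pages returned, 20 relevant, 40 relevant pages missed.
    print(precision(20, 30), recall(20, 20 + 40))  # 0.666..., 0.333...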

In other words – it doesn’t really matter if you have to analyze a program with 100,000 lines of code or a log file with a terabyte of data – if you have good precision and good recall.

The problem, however, is that the more data you have, the more difficult it is to achieve high precision and recall, and that is why real-time events (or debugging statements) are more effective in day-to-day security operations.

 


Treat passwords like cash

How much personal technology do you carry around when you travel?  Do you use one of those carry-on bags with your notebook computer on top of the carry-on?

A friend who is a commercial pilot had his bag swiped literally behind his back while waiting in line to check in to a 4-star Paris hotel. The hotel security cameras show the thief moving quickly behind his back, quietly taking the bag and calmly walking off.

Is your user password 123456?

The Wharton School at UPenn recently posted an article – is your password 123456?

As the article notes – “Hack attacks have recently hit government agencies, news sites and retailers ranging from the U.S. Justice Department and Gawker to Sony and Lockheed Martin, as hackers become more sophisticated in their ability to steal customers’ identities and personal information.”

But, you don’t need sophisticated hack attacks to know that many people use simple minded passwords like 123456 and thieves use simple techniques like grab and run.

So – why don’t we all use strong passwords?

Every Web site and business application you use has a different algorithm and password policy. For users who need to maintain strong passwords under 25 different policies on 25 different systems and web sites, it's impossible to do so without making some compromises.

The biggest vulnerability is using your corporate password on an online porn site, since adult sites are routinely subject to attack, and the cheesier, more marginal adult sites (mind you, we're not talking Penthouse.com or Playboy.com, perish the thought) are frequently unwitting malware distribution platforms.

Here are 5 rules for safe password management:

  1. Use technical aids to manage your passwords. Consider using the KeePass password manager.
  2. Match password strength to asset value. In other words, use a complex combination of letters and numbers for online banking and a simple, easy-to-remember password for Super Bowl news (see the sketch after this list).
  3. Don't reuse. Don't use the same strong password on more than one site.
  4. Make passwords easy to remember but hard to guess. Adopt mnemonics that you can remember – like 4Tshun KukZ.
  5. Maintain physical security of your passwords. Treat your passwords like you treat the cash in your wallet. If you have to write passwords down, put them on a piece of paper in your wallet, treat that piece of paper like a $100 bill, and make sure you don't lose that wallet.
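As a rough sketch of rule 2 (a back-of-the-envelope illustration, not a substitute for a password manager), you can estimate a password's entropy and compare it against a threshold that matches the value of the asset it protects; the thresholds below are assumptions for the sake of the example.

    import math
    import string

    def estimated_entropy_bits(password: str) -> float:
        """Crude entropy estimate from alphabet size and length.
        Real crackers exploit patterns, so treat this as an upper bound."""
        alphabet = 0
        if any(c.islower() for c in password):
            alphabet += 26
        if any(c.isupper() for c in password):
            alphabet += 26
        if any(c.isdigit() for c in password):
            alphabet += 10
        if any(c in string.punctuation or c == " " for c in password):
            alphabet += 33
        return len(password) * math.log2(alphabet) if alphabet else 0.0

    # Hypothetical thresholds: match password strength to asset value.
    REQUIRED_BITS = {"online_banking": 60, "corporate_vpn": 60, "sports_news": 28}

    def strong_enough(password: str, asset: str) -> bool:
        return estimated_entropy_bits(password) >= REQUIRED_BITS[asset]

    print(strong_enough("123456", "online_banking"))       # False
    print(strong_enough("4Tshun KukZ", "online_banking"))  # True, by this crude estimate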

 


Clinical trials in the cloud

Ben Baumann, of Akaza and OpenClinica fame, recently blogged about clinical trials in the cloud. Ben is pitching a relatively new offering from Akaza called OpenClinica Optimized Hosting, which offers quick startup using validated OpenClinica instances and resources on demand on a SAS 70-compliant platform.

As Ben noted, in the clinical research field putting together such an offering is not trivial. OpenClinica is the world's fastest-growing clinical trials software, with an interesting Open Source business model of community-supported Open Source and revenue from enterprise licensing, cloud services and training.

Software Associates specializes in helping medical device vendors achieve HIPAA compliance and improve the data and software security of their products in hospital and mobile environments. We have been working with a regulatory affairs consulting client for over 3 years now, using the OpenClinica application to manage large multi-center, international clinical trials on Rackspace hosting and, more recently, on Rackspace Cloud.

I can attest that running multi-center clinical trials in the cloud is neither for the faint of heart nor the weak of stomach. Beyond the security, compliance and regulatory issues, there is also the issue of performance.

Although resources are instantly scalable on-demand in the cloud, resources are not a substitute for secure software that runs fast.

As I noted in a previous essay, “The connection between application performance and security in the cloud”, slow applications require more hardware, more database replication, more load-balancing and more firewalls. More is not always better, and more layers of infrastructure increase the threat surface of the application, with more attack points on the interfaces and more things that can go wrong during software updates and system maintenance.

If there is a design or implementation flaw in a cloud application for clinical trials management that results in the front-end Web server making 10,000 round trips to the back-end database server to render a matrix of 100 subjects, then throwing more hardware at the application will be a fruitless exercise.

If we do a threat analysis on the system, we can see that our No. 1 attacker is the software itself.

In that case, the application software designers have to go back to the drawing board and redesign the software and get that number down to 1 or 2 round trips.
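The classic culprit is the “N+1 query” pattern. Here is a hedged sketch of the difference between thousands of round trips and one (the table and column names are hypothetical, not OpenClinica's actual schema):

    import sqlite3  # stands in for the real clinical-trials database

    def subject_matrix_slow(conn: sqlite3.Connection, subject_ids: list):
        """One round trip per subject: this is what melts cloud servers."""
        matrix = {}
        for sid in subject_ids:
            rows = conn.execute(
                "SELECT visit, status FROM subject_events WHERE subject_id = ?", (sid,)
            ).fetchall()
            matrix[sid] = dict(rows)
        return matrix

    def subject_matrix_fast(conn: sqlite3.Connection, subject_ids: list):
        """One round trip for the whole matrix."""
        placeholders = ",".join("?" * len(subject_ids))
        query = ("SELECT subject_id, visit, status FROM subject_events "
                 "WHERE subject_id IN ({})").format(placeholders)
        rows = conn.execute(query, subject_ids).fetchall()
        matrix = {sid: {} for sid in subject_ids}
        for sid, visit, status in rows:
            matrix[sid][visit] = status
        return matrix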

The effort will be well worth it in your next bill from your cloud service provider.

 


Insecurity by compliance

If a little compliance creates a false sense of security, then a lot of compliance regulation creates an atmosphere of feeling secure, while in fact most businesses and Web services are very insecure.

Is a free market democracy doomed to suffer from privacy breaches – by definition?

My father is a retired PhD in system science from UCLA who worked for many years in the defense industry in Israel and California.  At age 89 he is sharp, curious and wired, with an iPad and more connected and easily accessible on the Net than most people are on their phone.

He sent me this item which turned out to be yet another piece of Internet spam and urban legend that has been apparently circulating the Net for over 10 years and has resurfaced just in time for the US Presidential elections.

A democracy is always temporary in nature; it simply cannot exist as a permanent form of government… The average age of the world's greatest civilizations from the beginning of history has been about 200 years. During those 200 years, these nations always progressed through the following sequence:
From bondage to spiritual faith;
From spiritual faith to great courage;
From courage to liberty;
From liberty to abundance;
From abundance to complacency;
From complacency to apathy;
From apathy to dependence;
From dependence back into bondage

I told my Dad that it looks and smells like spam. A quick read shows that it is a generalization from a sample of one. The Roman Empire lasted about 500 years. The Ottoman Empire lasted over 700 years. The British Empire lasted about 200 years, from 1783 to 1997 (the handover of Hong Kong). The Russian Empire lasted 200 years and the Soviets lasted less than 80. The Byzantine Empire lasted over 1,000, and so on. See http://listverse.com/2010/06/22/top-10-greatest-empires-in-history/.

Rumors of the downfall of American democracy are premature, even though the US is more of a service economy than a manufacturing economy today than it was 200 years ago.

The US has shifted over the past 40 years from manufacturing and technology innovation to technology innovation, retail, outsourcing and financial services. An obvious example is Apple, with most of its manufacturing jobs outside the US, a net worth of a not-so-small country and, perhaps, the most outstanding consumer technology innovator in the world. Another, and more significant, example is Intel, one of the world's technology leaders, with a global operation from Santa Clara to Penang to China to Haifa and Jerusalem. World-class companies like Intel and Apple are a tribute to US strengths and vitality, not weaknesses. In comparison, excluding Germany, Poland and a handful of other European countries, the EU is on the edge of bankruptcy.

In this period of time, has the US improved its information security in the face of rapidly increasing connectivity, mobile devices and apps, and emerging threats such as APTs (advanced persistent threats)?

Apparently not.

In the sphere of privacy and information security, the US leads in data security breaches while the EU leads in data security and privacy. The EU has strong, uniform data security regulation, whereas the US has a quilt-work of hundreds of privacy and security directives, where each government agency has its own system for data security compliance and each state has its own legislation (albeit generally modeled after California) for privacy compliance.

The sheer volume and fragmented state of US data security and privacy regulation is practically a guarantee that most of the regulation will not be properly enforced.

On the other hand, the unified nature of EU data security directives makes it easier to enforce since everyone is on the same page.

We would argue that a free-market, American-style economy results in more technology innovation and economic vitality, but also creates a chaotic regulatory environment where the breach of 300 million US credit cards in less than 10 years is an accepted norm. The increase in compliance regulation by the Obama administration does not impress me as a positive step in improving security.

As my colleague, John P. Pironti, president of risk and information security consulting firm IP Architects, said in an interview:

The number-one thing that scares me isn’t the latest attack, or the smartest guy in the street, it’s security by compliance, for example with PCI DSS 2.0

Security by compliance, he said, doesn’t do a company any favors, especially because attackers can reverse-engineer the minimum security requirements dictated by a standard to look for holes in a company’s defense.

In that case, if a little compliance creates a false sense of security, then a lot of compliance regulation will create an atmosphere of feeling secure, while in fact most businesses and Web services are very insecure.


How to reduce risk of a data breach

Historical data in log files has little intrinsic value in the here-and-now process of event response and remediation, and compliance checklists have little direct value in protecting customers.

Software Associates specializes in helping medical device and healthcare vendors achieve HIPAA compliance and improve the data and software security of their products in hospital and mobile environments.

The first question any customer asks us regarding HIPAA compliance is how little he can spend. Not how much he should spend. This means we need simple and practical strategies to reduce the risk of data breaches.

There are 2 simple strategies to reduce the risk of a data breach – one technical, one managerial:

  1. Use real-time detection of security events to directly protect your customers.
  2. Build your security portfolio around specific threat scenarios (e.g. a malicious employee stealing IP, a business partner obtaining access to confidential commercial information, a software update exposing PHI, etc.) and use the threat scenarios to drive your service and product acquisition process.

Use real-time detection to directly protect your customers

Systems like ERM, SIM and Enterprise information protection are enterprise software applications that serve the back-office business of security delivery; things like log analysis and saving on regulatory documentation. Most of these systems excel at gathering and searching large volumes of data while providing little evidence as to the value of the data or feedback into improving the effectiveness of the current security portfolio.

Enterprise IT security capabilities do not have a direct relationship with improving customer security and privacy, even if they do make the security management process more effective.

This is not a technology challenge but a conceptual challenge: it is impossible to achieve a meaningful machine analysis of security event data in order to improve customer security and privacy, using data that was uncertain to begin with and was not collected and validated using standardized, evidence-based methods.

Instead of log analysis, we recommend real-time detection of events. Historical data in log files has little intrinsic value in the here-and-now process of event response and remediation.

  1. Use DLP (data loss prevention) and monitor key digital assets such as credit cards and PHI for unauthorized outbound transfer. In plain language – if you detect credit cards or PHI in plain text traversing your network perimeter or removable devices, then you have just detected a data breach in real time, far cheaper and faster than mulling through your log files after discovering 3 months later that a Saudi hacker stole 14,000 credit cards from an unpatched server (see the sketch after this list).
  2. Use your customers as early-warning sensors for exploits. Provide a human 24×7 hotline that answers on the 3rd ring for any customer who thinks they have been phished or had their credit card or medical data breached. Don't put this service in the general message queue and never close the service. Most security breaches become known to a customer when they are not at work.
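A minimal sketch of the first point (an illustration only; real DLP products do far more, including PHI dictionaries and SSL inspection): scan outbound payloads for strings that look like card numbers and pass a Luhn check, and raise an alert in real time rather than three months later.

    import re

    CARD_CANDIDATE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

    def luhn_valid(digits: str) -> bool:
        total, double = 0, False
        for d in reversed(digits):
            n = int(d)
            if double:
                n *= 2
                if n > 9:
                    n -= 9
            total += n
            double = not double
        return total % 10 == 0

    def scan_outbound(payload: str) -> list:
        """Return masked card-like numbers found in clear text in an outbound payload."""
        hits = []
        for match in CARD_CANDIDATE.finditer(payload):
            digits = re.sub(r"[ -]", "", match.group())
            if 13 <= len(digits) <= 16 and luhn_valid(digits):
                hits.append(digits[-4:].rjust(len(digits), "*"))  # log masked, never in the clear
        return hits

    print(scan_outbound("order ref 4111 1111 1111 1111 shipped"))  # ['************1111']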

Build your security portfolio around specific threat scenarios

Building your security portfolio around most likely threat scenarios makes sense.

Nonetheless, current best practices are built around compliance checklists (PCI DSS 2.0, HIPAA security rule, NIST 800 etc…) instead of most likely threat scenarios.

PCI DSS 2.0 has an obsessive preoccupation with anti-virus. It does not matter if you have a 16 quad-core Linux database server that is not attached to the Internet, with no removable devices and no Windows connectivity. PCI DSS 2.0 wants you to install ClamAV and open the server up to the Internet for the daily anti-virus signature updates. This is an example of a compliance control item that is not rooted in a probable threat scenario.

When we audit a customer for HIPAA compliance or perform a software security assessment of an innovative medical device, we think in terms of “threat scenarios”, and the result of that thinking manifests itself in planning, penetration testing, security countermeasures, and follow-up for compliance.

In current regulatory compliance-based systems like PCI DSS or HIPAA, when an auditor records an encounter with the customer, he records the planning, penetration testing, controls, and follow-up, not under a threat scenario, but under a control item (like access control). The next auditor that reviews the compliance posture of the business needs to read about the planning, testing, controls, and follow-up and then reverse-engineer the process to arrive at which threats are exploiting which vulnerabilities.

Other actors, such as government agencies (DHS for example) and security researchers, go through the same process. They all have their own methods of churning through the planning, test results, controls, and follow-up to reverse-engineer the data in order to arrive at which threats are exploiting which vulnerabilities.

This ongoing process of “reverse-engineering” is the root cause for a series of additional problems:

  • Lack of overview of the security threats and vulnerabilities that really count
  • Insufficient connection to best-practice security controls; no indication of which controls to follow or which have been followed
  • No connection between controls and security events, except circumstantial
  • No ability to detect and warn of negative interactions between countermeasures (for example, configuring a firewall that blocks Internet access but also blocks operating system updates, enabling malicious insiders or outsiders to back-door into the systems from inside the network and compromise firewalled services)
  • No archiving or demoting of less important and solved threat scenarios (since the data models are control-based)
  • Lack of overview of the security status of a particular business, only a series of historical observations, disclosed or not disclosed. Is Bank of America getting better at data security or worse?
  • An excess of event data that cannot possibly be read by the security and risk analyst at every encounter
  • Confidentiality and privacy borders that are hard to define, since the border definitions are networks, systems and applications, not confidentiality and privacy.
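To make the contrast with control-item records concrete, here is a hypothetical sketch of what a threat-scenario-centric record could look like (the field names are illustrative, not taken from any product):

    from dataclasses import dataclass, field

    @dataclass
    class ThreatScenario:
        """Plan, test, control and follow up under the scenario itself, so nobody
        has to reverse-engineer which threat is exploiting which vulnerability."""
        name: str                    # e.g. "malicious employee steals IP"
        threat: str
        vulnerability: str
        assets: list
        countermeasures: list = field(default_factory=list)
        related_events: list = field(default_factory=list)  # real-time detections, not raw logs
        status: str = "active"       # can be demoted or archived once mitigated

    scenario = ThreatScenario(
        name="software update exposes PHI",
        threat="a faulty update pushes PHI to a public endpoint",
        vulnerability="no staging review of update packages",
        assets=["PHI database"],
        countermeasures=["signed update packages", "DLP alert on outbound PHI"],
    )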

The political power of social media

Clay Shirky writes in Foreign Affairs this week:

Arguing for the right of people to use the Internet freely is an appropriate policy for the United States, both because it aligns with the strategic goal of strengthening civil society worldwide and because it resonates with American beliefs about freedom of expression

By switching from an instrumental to an environmental view of the effects of social media on the public sphere, the United States will be able to take advantage of the long-term benefits these tools promise.

Oooh – I just love this stuff: “resonates with American beliefs” and “environmental view of the effects of social media on the public sphere”.

“Some ideas are so stupid only intellectuals believe them.”
George Orwell

Twitter and Facebook are communication tools. Not values.

It is the height of foolishness to assert that a communications tool like Facebook and Twitter is a substitute for values. Sure it makes it easier for 80,000 people to attend demonstrations someone else is funding, but don’t forget the agendas of the people funding the demonstrations.

The US will not be able to “to take advantage of the long-term benefits these tools promise” unless it takes a moral and value position, clearly delineating the basic dos ( for starters – honor your parents, honor freedom of religion) and don’ts (not killing your citizens, not raping your women, not chopping off hands of thieves, not funding Muslim terrorists, not holding the world at gun-point over the price of oil).

There is no evidence that social media changes government policy

Look at Egypt. Look at Israel. Look at Wall Street.

Social media hype is escapism from dealing with fundamental issues

Let’s assume that the US has an agenda and responsibility to make the world a better place.

Green / clean energy.  Healthy people.

I think we can all agree these are good things for the world. Did social media play any kind of role at all in the blunders of the Obama administration in their energy or healthcare initiatives? Does the administration have a good record or a bad record with these initiatives?

Solyndra is an illustration of how a major Obama contributor took half a billion in loan guarantees and walked away without exposure. The factory employed about 150 people and stimulated the pockets of a small number of wealthy people. And, do not forget, Solyndra is kids' stuff compared to the $80 billion in real money that the US government squandered on Afghan electrification projects with no oversight of the cost-plus contractors that delivered zip to Afghanistan.

Mr. Obama and his yea-sayers like Clay Shirky need the highfalutin talk about the importance of social media and free speech to deflect voter attention from the rewards to their campaign contributors, financial service institutions, government contractors and Beltway insiders – and to win the next Presidential election.

Is the objective improving the health of Americans or is the objective giving gifts of $44,000 to US doctors so that they can go out and buy some software from one of the 705 companies that have certified to HHS requirements for e-prescribing? WTF does e-prescription software have to do with treating chronic patients?

Even giving President Obama credit for having some good ideas – once you have a big, centralized, I’ll run everything, decide everything, make everyone comply kind of government – you get all kinds of nonsense like Solyndra, Afghan electrification projects, health care software subsidies and … Bar Lev lines,  multi-billion sheqel security fence projects and the funneling of funds from the PA to Israeli businessmen allied to Israeli ex-generals who sell gasoline to Palestinian terror organizations and security services to Palestinian banks.

In the Middle East – even while vilifying Bush – the Obama administration continues the Bush doctrine of not going after the real bad guys who fund terror (the Saudis), while wasting thousands of American lives (in Iraq and Afghanistan) and blowing over 80 billion dollars in taxpayer money on boondoggles like the Iraqi and Afghan electrification projects.

Obama's praise for the Arab Spring is chilling in its double-talk about democracy (just last month in Tunisia), as Libya, Egypt and their neighbors transition into Islamic fundamentalist rule amidst blatantly undemocratic violence.

In Israel, I would not blame any US President for problems of our own doing, any more than I would credit Facebook with the 2011 Summer of Love on Rothschild, which was no more than an exercise in mass manipulation by professional political lobbyists and people like Daphni Leef who were too busy with their liberal agendas to serve their country.

Israeli leaders have been on a slippery downhill slope of declining morals since Sabra and Shatila in 1982.

And for that we cannot blame any single President or Prime Minister, any more than we can credit Facebook with remembering friends' birthdays – we can only blame ourselves for putting up with the lack of values and morals of our leaders.


Disaster recovery planning

This article describes a plan and implementation process for disaster recovery planning. The secret to success in our experience is to involve the local response team from the outset of the project.

Copyright 2006 D.Lieberman. This work is licensed under the Creative Commons Attribution License

The disaster recovery plan is designed to assist companies in responding quickly and effectively to a disaster in a local office and restore business as quickly as possible. In our experience, participation in the planning and implementation process is more important than the process itself and helps ensure that the local response teams understand what they need to do and that resources they need will be available.

Keywords

  • DRP – disaster recovery plan
  • BIT – business impact timeline
  • ERT – emergency response team
  • BIA – business impact assessment
  • Countermeasures – physical or procedural measures we take in order to mitigate a threat
  • PRT – primary response time; how long it takes (or should take) to respond (not resolve)
  • RRP – recovery and restore plan; recovery from the disaster and restoring to the original state

DR planning is not about writing a procedure, getting people to sign up and then filing it away somewhere. In the BIT (business impact timeline) we see a continuum of actions before and after an incident. In the pre-incident phase, the teams are built, plans are written, and preparedness is maintained with training and audit. After an incident, the team responds, recovers, restores service and assesses effectiveness of the plan.

[Figure drp_1.gif – business impact timeline]

T=ZERO is the time an incident happens. Even though one hopes that disaster will never strike, refresher training should be conducted every 6 months (because of employee turnover and system changes) and self-audits should be conducted by the ERT every 3 months.

Building the DR plan

Build the ERT

Assign a 2-person team in each major office (in small offices with one or two people, the employee will do it himself) to be the ERT. The people on the ERT need to have both the technical and the social skills to handle the job. Technical skills means being able to call an IT vendor and help the vendor diagnose a major issue such as an unrecoverable hard disk crash on an office file and print server. Social skills means staying cool under pressure and following procedure in major events such as fire, flooding or a terror attack.

In addition to an ERT in each office, one person will be designated as “response manager”. The response manager is a more senior person (with a backup) who will command the local teams during a crisis, maintain the DRP documentation and provide escalation.

The local response team becomes involved and committed to the DRP by planning their responses to incidents and documenting locations of resources they need in order to respond and restore service.

DR Planning Pre-incident activities

Kickoff call

The purpose of the call is to introduce the DRP process and set expectations for the local ERT. Two days before the call, the local team will receive a PowerPoint presentation describing DRP, the implementation process and the BIA worksheet. At the end of the call, the team commits to filling out the worksheet and preparing for a review session on the phone one week later.

Business Impact Assessment (BIA)

In the BIA, the team lists possible incidents that might happen and assesses the impact of a disaster on the business. For example, there are no monsoons in Las Vegas, but there might be an earthquake (Vegas is surrounded by tectonic faults and is number 3 in the US for seismic activity), and an earthquake could put a customer service center in Vegas out of business for several days at least.

Recover and Restore

Recovery is about the ERT having detailed and accessible information about backups – data, servers, people and alternative office space. Within 30 days after a disaster, full service should be restored by the ERT working with local vendors and the response manager.
It may also be useful to use http://www.connected.com for backup of data on distributed PCs and notebooks.

DR Plan Review

The purpose of the call is to allow each team to present their worksheet and discuss appropriate responses with the global response manager. Two days before the call, the teams will send in their BIA worksheet. The day after the call the revised DRP will be posted.

Filling out the DRP worksheets

There are two worksheets: the BIA worksheet (which turns into the primary response checklist) and the RRP (recover and restore plan) worksheet, which contains a detailed list of how to recover backup resources and restore service.

Filling out the BIA worksheet.

In the BIA worksheet, the team lists possible incidents and assesses the impact of a disaster on the business. In order to assess the impact of a disaster on the business we grade incidents using a tic-tac-toe matrix.

[Figure drp_2.gif – risk grading (“tic-tac-toe”) matrix]

The team will mark the probability and impact rating for an incident going across a row of the matrix. A risk might have probability 2 and impact 5 making it a 7, while another risk might have probability 1 and impact 3 making it a 4. Countermeasures would be implemented for the 7 risk before being implemented for the 4 risk.

BIA worksheet step by step
  • Add, delete and modify incidents to fit your business.
  • Grade business impact using the “tic-tac-toe” matrix for each incident.
  • Set a primary response time (how quickly the ERT should respond, not resolve).
  • Establish an escalation path – escalate to local service providers and the response manager within a time that matches the business impact. Escalate to the local vendor immediately and escalate to the response manager according to the following guidelines (see the sketch after this list):
    • Risk > 6: within 15 minutes
    • Risk <= 6 and >= 4: within 60 minutes
    • Risk < 4: within 2 hours
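A small sketch of the grading and escalation logic described above; the times for the two lower bands are assumed to be minutes, in line with the 2-hour ceiling for the lowest band.

    def risk_score(probability: int, impact: int) -> int:
        """Grade from the tic-tac-toe matrix: probability plus impact."""
        return probability + impact

    def escalation_deadline_minutes(score: int) -> int:
        """Minutes within which to escalate to the response manager."""
        if score > 6:
            return 15
        if score >= 4:    # 4 <= score <= 6
            return 60
        return 120        # score < 4: within 2 hours

    print(risk_score(2, 5), escalation_deadline_minutes(risk_score(2, 5)))  # 7 -> 15
    print(risk_score(1, 3), escalation_deadline_minutes(risk_score(1, 3)))  # 4 -> 60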

[Figure drp_3.gif – BIA worksheet]

Filling out the RRP worksheet.

In the RRP worksheet, the team documents in detail how to locate and restore backups and how to access servers (in the network and physically).

[Figure drp_4.gif – RRP worksheet]

Maintaining the DR plan

DR exercises

Once every 6 months, the response manager will run an unannounced exercise, simulating an emergency. In a typical DR exercise the local ERT will be required to:

  • Respond to a single emergency (for example earthquake)
  • Verify contents of RRP check list
  • Physically locate backups

 

Self-Audit

After completion of the DR plan, the local response team needs to perform periodic self-audits. A member of the local ERT will schedule an audit once every 3 months and notify the response manager by email regarding the date.

  • The audit should take about 1 hour and will check documentation and backup readiness
  • Documentation readiness
    • Make sure telephone numbers of critical suppliers are posted at the entrance to the office. Make sure the numbers are current by calling them.
    • Read the primary response sheet
    • Wallet-sized cards with emergency phone numbers and procedures, to be carried by all employees
    • Onboard list – who is in the office today and who is traveling or on vacation
  • Backup readiness
    • Local backup files/tapes

Security and the theory of constraints

Security management is tricky.  It’s not only about technical controls and good software development practice. It’s also about management responsibility.

If you remember TOC (Theory of Constraints, invented by Dr. Eli Goldratt about 40 years ago), there is only 1 key constraint that limits system (or company) performance in achieving its goal.

So – what is that 1 key constraint for achieving FDA Premarket Notification (510(k)) and/or HIPAA compliance success for your medical device on a tight schedule and budget?

That’s right, boys and girls – it’s the business unit manager.

Consider 3 cases of companies that are developing medical devices and need to achieve FDA Premarket Notification (510(k)) and/or HIPAA compliance for their product. We will see that there are 3 generic “scenarios” that threaten the project.

A key developer leaves and the management waits until the last minute

In this scenario, the person responsible for the software security and compliance quits. The business unit manager waits until the last minute to replace him and in the end realizes that they need a contractor. External consultants (like us) start wading through reams of documentation, interviewing people and reconstructing an understanding of the systems and scope before we even start our first piece of threat analysis and write our first piece of code.

The mushroom theory of management

In this scenario, there are gobs of unknowns because the executive staff did not, could not or would not reveal all their cards in a particularly risky and complex development project that is not reaching a critical milestone.  The business unit manager calls in an outsider to evaluate and/or take over. After 6 weeks – you may sort of think you have most of the cards on the table. But – then again, maybe not. You might get lucky and achieve great progress because the engineers are ignoring the product manager and doing a great job. Miracles sometimes happen but don’t bet on it.

We’re in transition

In scenario 3, a new CEO is brought in after a putsch in the board and things come to a standstill as the executive staff start getting used to the new boss, the line staff start getting used to new directives and the programmers start wondering whether they will still have a job.

Truth be told – only the first scenario is really avoidable.  If your executive staff runs things by the mushroom theory of management or you get into management transition mode – basically, anything can happen.  And that’s why consultants like us are busy.


The Tao of GRC

I have heard of military operations that were clumsy but swift, but I have never seen one that was skillful and lasted a long time. Master Sun (Chapter 2 – Doing Battle, the Art of War).

The GRC (governance, risk and compliance) market is driven by three factors: government regulation such as Sarbanes-Oxley, industry compliance such as PCI DSS 1.2, and growing numbers of data security breaches and Internet acceptable-usage violations in the workplace. $14BN a year is spent in the US alone on corporate-governance-related IT.

It’s a space that’s hard to ignore.

Are large, internally-focused GRC systems the solution for improving risk and compliance? Or should we go outside the organization to look for risks we’ve never thought about and discover new links and interdependencies?

This article introduces a practical approach that will help the CISOs/CSOs in any sized business unit successfully improve compliance and reduce information value at risk. We call this approach “GRC 2.0” and base it on 3 principles.

  1. Adopt a standard language of GRC
  2. Learn to speak the language fluently
  3. Go green – recycle your risk and compliance

GRC 1.0

The term GRC (Governance, Risk and Compliance) was first coined by Michael Rasmussen. GRC products like Oracle GRC Suite and Sword Achiever cost in the high six figures and enable large enterprises to automate the workflow and documentation management associated with costly and complex GRC activities.

GRC – an opportunity to improve business process

GRC regulation comes in 3 flavors: government legislation, industry regulation and vendor-neutral security standards. Government legislation such as SOX, GLBA, HIPAA and EU privacy laws was enacted to protect the consumer by requiring better governance and a top-down risk analysis process. PCI DSS 2.0, a prominent example of industry regulation, was written to protect the card associations by requiring merchants and processors to use a set of security controls for the credit card number, with no risk analysis. The vendor-neutral standard ISO 27001 helps protect information assets using a comprehensive set of people, process and technical controls, with an audit focus.

The COSO view is that GRC is an opportunity to improve the operation:

“If the internal control system is implemented only to prevent fraud and comply with laws and regulations, then an important opportunity is missed…the same internal controls can also be used to systematically improve businesses, particularly in regard to effectiveness and efficiency.”

GRC 2.0

The COSO position makes sense, but in practice it’s difficult to attain process improvement through enterprise GRC management.

Unlike ERP, GRC lacks generally accepted principles and metrics. Where finance managers routinely use VaR (value at risk) calculations, information security managers are uncomfortable with assessing risk in financial measures. The finance department has quarterly close but information security staffers fight a battle that ebbs and flows and never ends. This creates silos – IT governance for the IT staff and consultants and a fraud committee for the finance staff and auditors.

GRC 1.0 assumes a fixed structure of systems and controls. The problem is that, in reducing the organization to passive executors of defense rules in their procedures and firewalls, we ignore the extreme ways in which attack patterns change over time. Any control policy that is presumed optimal today is likely to be obsolete tomorrow. Learning about changes must be at the heart of day-to-day GRC management.

A fixed control model of GRC is flawed because it disregards a key feature of security and fraud attacks – namely that both attackers and defenders have imperfect knowledge in making their decisions. Recognizing that our knowledge is imperfect is the key to solving this problem. The goal of the CSO/CISO should be to develop a more insightful approach to GRC management.

The first step is to get everyone speaking the same language.

Adopt a standard language of GRC – the threat analysis base class

We formalize this language using a threat analysis base class which, like any other class, has attributes and methods. Attributes have two sub-types – threat entities and people entities.

Threat entities

Assets have value, fixed or variable, in dollars, euros, rupees, etc. Examples of assets are the employees and the intellectual property contained in an office.

Vulnerabilities are weaknesses or gaps in the business. For example – a wood office building with a weak foundation built in an earthquake zone.

Threats exploit vulnerabilities to cause damage to assets. For example – an earthquake is a threat to the employees and intellectual property stored on servers in the building.

Countermeasures have a cost, fixed or variable, and mitigate the vulnerability. For example – relocating the building and using a private cloud service to store the IP.

People entities

Business decision makers encounter vulnerabilities and threats that damage company assets in their business unit. In a process of continuous interaction and discovery, risk is part of the cost of doing business.

Attackers create threats and exploit vulnerabilities to damage the business unit. Some do it for the notoriety, some for the money and some do it for the sales channel.

Consultants assess risk and recommend countermeasures. It’s all about the billable hours.

Vendors provide security countermeasures. The effectiveness of vendor technologies is poorly understood and often masked with marketing rhetoric and pseudo-science.

Methods

The threat analysis base class prescribes 4 methods (a minimal sketch follows the list):

  • SetThreatProbability – estimated annual rate of occurrence of the threat
  • SetThreatDamageToAsset – estimated damage to asset value, as a percentage
  • SetCountermeasureEffectiveness – estimated effectiveness of the countermeasure, as a percentage
  • GetValueAtRisk – returns the financial value at risk for the threat
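Here is a minimal sketch of the base class in code (my own illustration of the 8-word language; the PTA tool mentioned below implements a richer version of this model, and the way the attributes are combined in get_value_at_risk is one simple assumption, not the only possible formula):

    from dataclasses import dataclass

    @dataclass
    class ThreatModel:
        """An instance of the threat analysis base class: asset, vulnerability,
        threat and countermeasure expressed in plain business terms."""
        asset_value: float                  # e.g. value of the IP stored in the office
        threat_probability: float = 0.0     # estimated annual rate of occurrence
        damage_to_asset: float = 0.0        # fraction of asset value destroyed (0-1)
        countermeasure_effectiveness: float = 0.0  # fraction of damage mitigated (0-1)
        countermeasure_cost: float = 0.0

        def set_threat_probability(self, p: float) -> None:
            self.threat_probability = p

        def set_threat_damage_to_asset(self, d: float) -> None:
            self.damage_to_asset = d

        def set_countermeasure_effectiveness(self, e: float) -> None:
            self.countermeasure_effectiveness = e

        def get_value_at_risk(self) -> float:
            # Expected annual damage, reduced by whatever the countermeasure mitigates.
            return (self.asset_value * self.threat_probability * self.damage_to_asset
                    * (1.0 - self.countermeasure_effectiveness))

    # The earthquake threat to the IP in the wooden office building from the example above.
    quake = ThreatModel(asset_value=2_000_000)
    quake.set_threat_probability(0.05)
    quake.set_threat_damage_to_asset(0.6)
    quake.set_countermeasure_effectiveness(0.8)  # relocate and move the IP to a private cloud
    print(quake.get_value_at_risk())             # 12000.0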

Speak the language fluently

A language with 8 words is not hard to learn, and it’s easily accepted by the CFO, CIO and CISO since these are familiar business terms.

The application of our 8 word language is also straightforward.

Instances of the threat analysis base class are “threat models”, and they can be used in the entire gamut of GRC activities: Sarbanes-Oxley, which requires a top-down risk analysis of controls; ISO 27001, where controls are countermeasures that map nicely to vulnerabilities and threats (you bring the assets); and PCI DSS 1.2, where the PAN is an asset, the threats are criminals who collude with employees to steal cards, and the countermeasures are specified by the standard.

You can document the threat models in your GRC system (if you have one and it supports the 8 attributes). If you don’t have a GRC system, there is an excellent free piece of software to do threat modeling – available at http://www.ptatechnologies.com

Go green – recycle your threat models

Leading up to the Al Qaida attack on the US on 9/11, the FBI investigated and the CIA analyzed, but no one bothered to discuss the impact of Saudis learning to fly but not land airplanes.

This sort of GRC disconnect in organizations is easily resolved between silos, by the common, politically neutral language of the threat analysis base class.

Summary

Effective GRC management requires neither better mathematical models nor complex enterprise software.  It does require us to explore new threat models and go outside the organization to look for risks we’ve never thought about and discover new links and interdependencies that may threaten our business.  If you follow the Tao of GRC 2.0 – it will be more than a fulfillment exercise.
