Tag Archives: security management

Debugging security

There is an interesting analogy between debugging software and debugging the security of your systems.

As Brian W. Kernighan and Rob Pike wrote in “The Practice of Programming”:

As personal choice, we tend not to use debuggers beyond getting a stack trace or the value of a variable or two. One reason is that it is easy to get lost in details of complicated data structures and control flow; we find stepping through a program less productive than thinking harder and adding output statements and self-checking code at critical places. Clicking over statements takes longer than scanning the output of judiciously-placed displays. It takes less time to decide where to put print statements than to single-step to the critical section of code, even assuming we know where that is. More important, debugging statements stay with the program; debugging sessions are transient.

In programming, it is faster to examine the contents of a couple of variables than to single-step through entire sections of code.
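Kernighan and Pike’s point about self-checking code and judiciously placed output statements can be sketched in a few lines (the transfer function below is purely illustrative, not from their book):

```python
def transfer(accounts, src, dst, amount):
    # Self-checking code at a critical place: fail loudly on an
    # impossible state instead of single-stepping to find it later.
    assert amount > 0, f"non-positive transfer amount: {amount}"
    assert accounts[src] >= amount, f"overdraft on {src}"

    accounts[src] -= amount
    accounts[dst] += amount

    # The debugging statement stays with the program;
    # a debugger session is transient.
    print(f"transfer {amount} {src}->{dst}; balances now {accounts}")

accounts = {"alice": 100, "bob": 50}
transfer(accounts, "alice", "bob", 30)
```

Scanning the output of a few such displays is usually faster than clicking through statements in a debugger.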

Collecting security logs is key to information security management, not only for understanding what happened and why, but also to prove compliance with regulations such as the HIPAA Security Rule. The business requirement is that security logs be both relevant and effective.

  1. Relevant content of audit controls: for example, providing a detailed trace of an application whenever it elevates privilege to execute a system-level function.
  2. Effective audit reduction and report generation: given the large volume of data that must be analyzed in security logs, it’s crucial that critical events are separated from normal traffic and that concise reports can be produced in real time to help understand what happened, why it happened, how it was mediated and how to mitigate similar risks in the future.
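A minimal sketch of audit reduction, assuming a simple dictionary event format and illustrative severity labels (both are assumptions, not a standard):

```python
from collections import Counter

# Event types treated as critical; the labels are illustrative.
CRITICAL = {"PRIV_ESCALATION", "AUTH_FAILURE", "POLICY_VIOLATION"}

def reduce_audit(events):
    """Separate critical events from normal traffic and
    summarize occurrences by type for a concise report."""
    critical = [e for e in events if e["type"] in CRITICAL]
    summary = Counter(e["type"] for e in critical)
    return critical, summary

events = [
    {"type": "LOGIN", "user": "alice"},
    {"type": "PRIV_ESCALATION", "user": "bob"},
    {"type": "LOGIN", "user": "carol"},
    {"type": "AUTH_FAILURE", "user": "mallory"},
    {"type": "AUTH_FAILURE", "user": "mallory"},
]

critical, summary = reduce_audit(events)
print(dict(summary))  # concise report: counts of critical event types
```

The same idea scales up: the analyst reads the short summary, not the gigabytes of raw traffic behind it.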

In security log analysis, it is faster and decidedly more effective for a security analyst to examine the contents of a few real-time events than to process gigabytes or terabytes of security logs (the equivalent of stepping through, or placing watchpoints in, a sub-module with hundreds or thousands of lines of code).

When you have to analyze security logs, it is easy to get lost in the details of complicated data and event flows and find yourself drifting off in all kinds of directions, even as alarm bells go off in the back of your mind telling you that you are chasing ghosts in a futile, time-consuming exercise of investigation and security event debugging.

In order to understand this better, consider another analogy, this time from the world of search engines.

Precision and recall are key to effective security log analysis and effective software debugging.

In pattern recognition and information retrieval, precision is the fraction of retrieved instances that are relevant, while recall is the fraction of relevant instances that are retrieved. Both precision and recall are therefore based on an understanding and measure of relevance. When a program for recognizing the dogs in a scene correctly identifies four of the nine dogs but mistakes three cats for dogs, its precision is 4/7 while its recall is 4/9. When a search engine returns 30 pages only 20 of which were relevant while failing to return 40 additional relevant pages, its precision is 20/30 = 2/3 while its recall is 20/60 = 1/3. See Precision and recall on Wikipedia.
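The arithmetic above can be checked in a couple of lines, using the search-engine example from the text (the page-ID sets are an arbitrary encoding of “30 returned, 20 relevant, 40 relevant pages missed”):

```python
def precision_recall(retrieved, relevant):
    """Precision = relevant retrieved / all retrieved;
    recall = relevant retrieved / all relevant."""
    true_pos = len(retrieved & relevant)
    return true_pos / len(retrieved), true_pos / len(relevant)

retrieved = set(range(30))      # 30 pages returned
relevant = set(range(10, 70))   # 60 relevant pages; only 20 were returned
p, r = precision_relevant = precision_recall(retrieved, relevant)
print(p, r)  # 2/3 and 1/3, as in the example
```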

In other words – it doesn’t really matter whether you have to analyze a program with 100,000 lines of code or a log file with a terabyte of data – as long as you have good precision and good recall.

The problem, however, is that the more data you have, the harder it is to achieve high precision and recall – which is why real-time events (or debugging statements) are more effective in day-to-day security operations.



Security and the theory of constraints

Security management is tricky.  It’s not only about technical controls and good software development practice. It’s also about management responsibility.

If you remember TOC (the Theory of Constraints, developed by Dr. Eli Goldratt in the 1980s), there is only one key constraint that limits a system’s (or company’s) performance in achieving its goal.

So – what is that one key constraint for achieving FDA Premarket Notification (510k) and/or HIPAA compliance success for your medical device on a tight schedule and budget?

That’s right, boys and girls – it’s the business unit manager.

Consider three cases of companies that are developing medical devices and need to achieve FDA Premarket Notification (510k) and/or HIPAA compliance for their product. We will see that there are three generic “scenarios” that threaten the project.

A key developer leaves and the management waits until the last minute

In this scenario, the person responsible for software security and compliance quits. The business unit manager waits until the last minute to replace him and in the end realizes that a contractor is needed. External consultants (like us) start wading through reams of documentation, interviewing people and reconstructing an understanding of the systems and scope before we even start our first piece of threat analysis or write our first piece of code.

The mushroom theory of management

In this scenario, there are gobs of unknowns because the executive staff did not, could not or would not reveal all their cards in a particularly risky and complex development project that is not reaching a critical milestone. The business unit manager calls in an outsider to evaluate and/or take over. After six weeks you may sort of think you have most of the cards on the table – but then again, maybe not. You might get lucky and make great progress because the engineers are ignoring the product manager and doing a great job. Miracles sometimes happen, but don’t bet on it.

We’re in transition

In scenario 3, a new CEO is brought in after a putsch on the board, and things come to a standstill while the executive staff gets used to the new boss, the line staff get used to new directives and the programmers wonder whether they will still have a job.

Truth be told – only the first scenario is really avoidable. If your executive staff runs things by the mushroom theory of management, or you are in management-transition mode, basically anything can happen. And that’s why consultants like us are busy.


Moving your data to the cloud – sense and sensibility

Data governance is a sine qua non for protecting your data in the cloud. It is of particular importance for the cloud service delivery model, which is philosophically different from the traditional IT product delivery model.

In a product delivery model, it is difficult for a corporate IT group to quantify asset value and data security value at risk over time, due to changes in staff, business conditions, IT infrastructure, network connectivity and software applications.

In a service delivery model, payment is made for services consumed on a variable basis as a function of volume of transactions, storage or compute cycles. The data security and compliance requirements can be negotiated into the cloud service provider’s service level agreement. This makes quantifying the costs of security countermeasures relatively straightforward, since security is built into the service, and renders practical threat analysis models more accessible than ever.

However – this leaves the critical question of data asset value and data governance. We believe that data governance is a primary requirement for moving your data to the cloud and a central data security countermeasure in the security and compliance portfolio of a cloud customer.

With increasing numbers of low-priced, high-performance SaaS, PaaS and IaaS cloud service offerings, it is vital that organizations start formalizing their approach to data governance. Data governance means defining data ownership, data access controls, data traceability and regulatory compliance, for example for PHI (protected health information, as defined by HIPAA).

To build an effective data governance strategy for the cloud, start by asking and answering 10 questions – striking the right balance between common sense and  data security requirements:

  1. What is your most valuable data?
  2. How is that data currently stored – file servers, database servers, document management systems?
  3. How should that data  be maintained and secured?
  4. Who should have access to that data?
  5. Who really has access to that data?
  6. When was the last time you examined your data security/encryption polices?
  7. What do your programmers know about data security in the cloud?
  8. Who can manipulate your data? (include business partners and contractors)
  9. If the data leaked to unauthorized parties, how much would the damage cost the business?
  10. If you had a data breach – how long would it take you to detect the data loss event?
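The ten questions above lend themselves to a simple per-asset checklist. The sketch below is purely illustrative – the field names are invented here, not part of any standard – but it shows how answering the questions per data asset exposes governance gaps:

```python
# Hypothetical encoding of the 10 questions as boolean checklist fields.
QUESTIONS = [
    "value_classified", "storage_known", "controls_defined",
    "access_defined", "access_verified", "policies_reviewed",
    "developers_trained", "manipulators_known",
    "breach_cost_estimated", "detection_time_known",
]

def governance_gaps(asset):
    """Return the questions still unanswered for a data asset."""
    return [q for q in QUESTIONS if not asset.get(q, False)]

# An asset for which only the first two questions have been answered.
asset = {"value_classified": True, "storage_known": True}
gaps = governance_gaps(asset)
print(len(gaps), "open questions:", gaps)
```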

A frequent question from clients regarding data governance strategy in the cloud is “what kind of data should be retained in local IT infrastructure?”

A stock response is that obviously sensitive data should remain in local storage. But consider instead the cost/benefit of storing the data with an infrastructure cloud service provider and not disclosing those sensitive data assets to trusted insiders, contractors and business partners.

Using a cloud service provider for storing sensitive data may actually reduce the threat surface instead of increasing it and give you more control by centralizing and standardizing data storage as part of your overall data governance strategy.

You can RFP/negotiate robust data security controls in a commercial contract with cloud service providers – something you cannot easily do with employees.

A second frequently asked question regarding data governance in the cloud is “How can we protect our unstructured data from a data breach?”

The answer is that it depends on your business and your application software.

Although analysts like Gartner have asserted that over 80% of enterprise data is stored in unstructured files like Microsoft Office documents, this clearly depends heavily on the kind of business you’re in. Arguably, none of the big data breaches happened because someone stole Excel files.

If anything, the database threat surface is growing rapidly. Telecom/cellular service providers have far more data (CDRs, customer service records etc.) in structured databases than in Office documents, and with more smart phones, Android tablets and Chrome OS devices this will grow even more. As hospitals move to EMR (electronic medical records), this will also soon be the case in the entire health care system, where almost all sensitive data is stored in structured databases like Oracle, Microsoft SQL Server, MySQL or PostgreSQL.

Then there is the rapidly growing use of MapReduce/JSON database technology used by Facebook and Digg: CouchDB (with 10 million installations) and MongoDB, which connect directly to Web applications. These NoSQL databases may be vulnerable to some of the traditional injection attacks that involve string concatenation. Developers are well advised to use native APIs for building safe queries and to patch frequently, since the technology is developing rapidly and, with large numbers of eyeballs, vulnerabilities are quickly being discovered and patched. Note the proactive approach the Apache Foundation is taking towards CouchDB security and a recent (Feb 1, 2011) version release for a CouchDB cross-site scripting vulnerability.
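The string-concatenation risk can be illustrated with a generic sketch (this is not a real driver API – the query shapes here are invented for illustration). Splicing user input into query text lets the input change the query’s structure; passing it as a value inside a structured document does not:

```python
import json

def unsafe_query(username):
    # Attacker-controlled input is concatenated into the query text,
    # so it can alter the query's structure.
    return '{"selector": {"user": "' + username + '"}}'

def safe_query(username):
    # Input stays a plain value inside a structured query document,
    # the way a native API would serialize it.
    return json.dumps({"selector": {"user": username}})

evil = 'admin", "role": "superuser'
print(unsafe_query(evil))  # input has injected an extra "role" field
print(safe_query(evil))    # input remains a harmless string value
```

The same principle applies whether the query language is SQL, a JSON selector or a map function: build queries structurally, never by pasting strings together.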

So – consider these issues when building your data governance strategy for the cloud and start by asking and answering the 10 key questions for cloud data security.
