The Tao of GRC

I have heard of military operations that were clumsy but swift, but I have never seen one that was skillful and lasted a long time. Master Sun (Chapter 2 – Doing Battle, the Art of War).

The GRC (governance, risk and compliance) market is driven by three factors: government regulation such as Sarbanes-Oxley, industry compliance such as PCI DSS 2.0, and growing numbers of data security breaches and Internet acceptable-usage violations in the workplace. Some $14BN a year is spent in the US alone on corporate-governance-related IT.

It’s a space that’s hard to ignore.

Are large internally-focused GRC systems the solution for improving risk and compliance? Or should we go outside the organization to look for risks we’ve never thought about and discover new links and interdependencies?

This article introduces a practical approach that will help CISOs/CSOs in a business unit of any size successfully improve compliance and reduce the information value at risk. We call this approach “GRC 2.0” and base it on three principles:

1. Adopt a standard language of GRC
2. Learn to speak the language fluently
3. Go green – recycle your risk and compliance

GRC 1.0

The term GRC (Governance, Risk and Compliance) was first coined by Michael Rasmussen. GRC products like Oracle GRC Suite and Sword Achiever cost in the high six figures and enable large enterprises to automate the workflow and documentation management associated with costly and complex GRC activities.

GRC – an opportunity to improve business process

GRC regulation comes in three flavors: government legislation, industry regulation and vendor-neutral security standards. Government legislation such as SOX, GLBA, HIPAA and EU privacy laws was enacted to protect the consumer by requiring better governance and a top-down risk analysis process. PCI DSS 2.0, a prominent example of industry regulation, was written to protect the card associations by requiring merchants and processors to use a set of security controls for the credit card number, with no risk analysis. The vendor-neutral standard ISO 27001 helps protect information assets using a comprehensive set of people, process and technical controls, with an audit focus.

The COSO view is that GRC is an opportunity to improve the operation:

“If the internal control system is implemented only to prevent fraud and comply with laws and regulations, then an important opportunity is missed…the same internal controls can also be used to systematically improve businesses, particularly in regard to effectiveness and efficiency.”

GRC 2.0

The COSO position makes sense, but in practice it’s difficult to attain process improvement through enterprise GRC management.

Unlike ERP, GRC lacks generally accepted principles and metrics. Where finance managers routinely use VaR (value at risk) calculations, information security managers are uncomfortable with assessing risk in financial measures. The finance department has quarterly close but information security staffers fight a battle that ebbs and flows and never ends. This creates silos – IT governance for the IT staff and consultants and a fraud committee for the finance staff and auditors.

GRC 1.0 assumes a fixed structure of systems and controls. The problem is that, in reducing the organization to a passive executor of defense rules in its procedures and firewalls, we ignore the extreme ways in which attack patterns change over time. Any control policy that is presumed optimal today is likely to be obsolete tomorrow. Learning about changes must be at the heart of day-to-day GRC management.

A fixed control model of GRC is flawed because it disregards a key feature of security and fraud attacks – namely that both attackers and defenders have imperfect knowledge in making their decisions. Recognizing that our knowledge is imperfect is the key to solving this problem. The goal of the CSO/CISO should be to develop a more insightful approach to GRC management.

The first step is to get everyone speaking the same language.

Adopt a standard language of GRC – the threat analysis base class

We formalize this language using a threat analysis base class which, like any other class, has attributes and methods. Attributes come in two sub-types – threat entities and people entities.

Threat entities

Assets have value, fixed or variable, in dollars, euros, rupees, etc. Examples of assets are the employees and intellectual property contained in an office.

Vulnerabilities are weaknesses or gaps in the business. For example – a wooden office building with a weak foundation, built in an earthquake zone.

Threats exploit vulnerabilities to cause damage to assets. For example – an earthquake is a threat to the employees and intellectual property stored on servers in the building.

Countermeasures have a cost, fixed or variable, and mitigate the vulnerability. For example – relocating the building and using a private cloud service to store the IP.

People entities

Business decision makers encounter vulnerabilities and threats that damage company assets in their business unit. In a process of continuous interaction and discovery, risk is part of the cost of doing business.

Attackers create threats and exploit vulnerabilities to damage the business unit. Some do it for the notoriety, some for the money and some do it for the sales channel.

Consultants assess risk and recommend countermeasures. It’s all about the billable hours.

Vendors provide security countermeasures. The effectiveness of vendor technologies is poorly understood and often masked with marketing rhetoric and pseudo-science.

Methods

The threat analysis base class prescribes 4 methods:

  • SetThreatProbability – estimated annual rate of occurrence of the threat
  • SetThreatDamageToAsset – estimated damage to the asset value, as a percentage
  • SetCountermeasureEffectiveness – estimated effectiveness of the countermeasure, as a percentage
  • GetValueAtRisk – the resulting annual value at risk, in currency
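As a minimal sketch (hypothetical Python; the class shape and the annualized loss expectancy formula – value at risk = asset value × threat probability × damage × (1 − countermeasure effectiveness) – are my assumptions, not part of any standard), the base class might look like this:

```python
class ThreatAnalysis:
    """A minimal threat analysis base class: four attributes, four methods."""

    def __init__(self, asset_value):
        self.asset_value = asset_value           # in currency units
        self.threat_probability = 0.0            # annual rate of occurrence
        self.damage_to_asset = 0.0               # fraction of asset value lost
        self.countermeasure_effectiveness = 0.0  # fraction of damage mitigated

    def set_threat_probability(self, annual_rate):
        self.threat_probability = annual_rate

    def set_threat_damage_to_asset(self, pct):
        self.damage_to_asset = pct

    def set_countermeasure_effectiveness(self, pct):
        self.countermeasure_effectiveness = pct

    def get_value_at_risk(self):
        # Annualized loss expectancy, net of the countermeasure
        return (self.asset_value
                * self.threat_probability
                * self.damage_to_asset
                * (1 - self.countermeasure_effectiveness))
```

For example, a $1,000,000 office with a 2% annual earthquake probability, 50% damage to the asset and a 60%-effective countermeasure carries $1,000,000 × 0.02 × 0.5 × 0.4 = $4,000 of annual value at risk.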

Speak the language fluently

A language with 8 words is not hard to learn, and it’s easily accepted by the CFO, CIO and CISO since these are familiar business terms.

The application of our 8 word language is also straightforward.

Instances of the threat analysis base class are “threat models” – and can be used across the entire gamut of GRC activities: Sarbanes-Oxley, which requires a top-down risk analysis of controls; ISO 27001, where controls are countermeasures that map nicely to vulnerabilities and threats (you bring the assets); and PCI DSS 2.0, where the PAN is an asset, the threats are criminals who collude with employees to steal cards and the countermeasures are specified by the standard.
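For illustration only (hypothetical Python, all figures invented), a PCI DSS threat model collapses to the four attributes and a single value-at-risk calculation:

```python
# A hypothetical PCI DSS threat model instance; the commented names
# correspond to the methods of the threat analysis base class.
asset_value = 2_000_000    # value of the stored PANs, in dollars
threat_probability = 0.4   # SetThreatProbability: annual rate of occurrence
damage_to_asset = 0.25     # SetThreatDamageToAsset: fraction of value lost
countermeasure_eff = 0.80  # SetCountermeasureEffectiveness: e.g. encryption + monitoring

# GetValueAtRisk: annualized loss expectancy, net of the countermeasure
value_at_risk = asset_value * threat_probability * damage_to_asset * (1 - countermeasure_eff)
print(f"${value_at_risk:,.0f}")  # → $40,000
```

The same four numbers, re-estimated per business unit, give silos a common currency for arguing about risk.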

You can document the threat models in your GRC system (if you have one and it supports the 8 attributes). If you don’t have a GRC system, there is an excellent free piece of software to do threat modeling – available at http://www.ptatechnologies.com

Go green – recycle your threat models

Leading up to the Al Qaida attacks on the US on 9/11, the FBI investigated and the CIA analyzed, but no one bothered to discuss the significance of Saudis learning to fly but not land airplanes.

This sort of GRC disconnect between silos in organizations is easily resolved by the common, politically neutral language of the threat analysis base class.

Summary

Effective GRC management requires neither better mathematical models nor complex enterprise software.  It does require us to explore new threat models and go outside the organization to look for risks we’ve never thought about and discover new links and interdependencies that may threaten our business.  If you follow the Tao of GRC 2.0 – it will be more than a fulfillment exercise.


Why less log data is better

It’s been a couple of weeks since I blogged – I’ve had my head down on a few medical device projects and a big PCI DSS audit where I’m helping the client improve his IT infrastructure and balance the demands of the PCI auditors.

Last year I gave a talk on quantitative methods for estimating the operational risk of information systems at the annual European GRC meeting in Lisbon – you can see the presentation below.

As I noted in my talk, one of the crucial phases in estimating operational risk is data collection: understanding what threats and vulnerabilities you have, and understanding not only what assets you have (digital, human, physical, reputational) but also how much they’re worth in dollars.

Many technology people interpret data collection as some automatic process that reads/scans/sniffs/profiles/processes/analyzes/compresses log files, learning and analyzing the data using automated algorithms like ANNs (artificial neural networks).

The automated log-profiling tool will then automagically tell you where you have vulnerabilities and, using “an industry best practice database of security countermeasures”, build you a risk remediation plan. Just throw in a dash of pie charts and you’re good to go with the CFO.

This was in fashion about 10 years ago (Google “automated audit log analysis” and you’ll see what I mean – for example, this reference on automated audit trail analysis). Automated tools are good for getting a quick indication of trends, but they tend to suffer from poor precision and recall, which improve rapidly when combined with human eyeballs.
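To put numbers on that precision/recall claim, here is a toy calculation (hypothetical Python; all counts invented for illustration):

```python
# Precision and recall for an imaginary automated log analyzer,
# before and after a human analyst triages the flagged events.

def precision_recall(true_pos, false_pos, false_neg):
    precision = true_pos / (true_pos + false_pos)  # flagged events that are real
    recall = true_pos / (true_pos + false_neg)     # real incidents that get flagged
    return precision, recall

# Tool alone: 200 events flagged, 40 of them real, 10 real incidents missed
print(precision_recall(40, 160, 10))  # (0.2, 0.8) – four in five alerts are noise
# Tool plus human eyeballs: the analyst discards most of the false alarms
print(precision_recall(40, 10, 10))   # (0.8, 0.8)
```

Recall barely moves, but precision – the thing that decides whether anyone keeps reading the reports – jumps once a human is in the loop.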

The PCI DSS council in Europe (private communication) says that over 80% of the merchants/payment processors with data breaches discovered their breach 3 months or more after the event. Yikes.

So why does maintaining a year of log files make sense? Quoting from PCI DSS 2.0:

10.7 Retain audit trail history for at least one year, with a minimum of three months immediately available for analysis (for example, online, archived, or restorable from back-up).

10.7.a Obtain and examine security policies and procedures and verify that they include audit log retention policies and require audit log retention for at least one year.

10.7.b Verify that audit logs are available for at least one year and processes are in place to immediately restore at least the last three months’ logs for analysis.

Wouldn’t it be a lot smarter to say –

10.1 Maintain a 4-week revolving log with real-time exception reports, as measured by no more than 5 exceptional events/day.

10.2 Estimate the financial damage of the 5 exceptional events in a weekly half-hour meeting between the IT manager, finance manager and security officer.

10.3 Mitigate the most severe threat, as measured by implementing 1 new security countermeasure/month (including the DLP and SIEM systems you bought last year but haven’t implemented yet).
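A sketch of what the 4-week revolving log and the weekly exception review might look like (hypothetical Python; the 28-day rotation and the 5-events/day threshold come from the proposed requirements above):

```python
import logging
from collections import Counter
from logging.handlers import TimedRotatingFileHandler

# A 4-week (28-day) revolving audit log, plus a weekly review that
# flags any day with more than 5 exceptional events.

EXCEPTIONAL_EVENTS_PER_DAY = 5

logger = logging.getLogger("audit")
logger.setLevel(logging.INFO)
# Rotate at midnight; keep only the last 28 days of log files
logger.addHandler(TimedRotatingFileHandler("audit.log", when="midnight", backupCount=28))

def review(events):
    """events: (day, severity) tuples parsed from the revolving log."""
    per_day = Counter(day for day, severity in events if severity == "EXCEPTIONAL")
    return {day: n for day, n in per_day.items() if n > EXCEPTIONAL_EVENTS_PER_DAY}

# Two days of parsed events: one noisy, one quiet
events = [("2011-05-02", "EXCEPTIONAL")] * 7 + [("2011-05-03", "INFO")] * 20
print(review(events))  # → {'2011-05-02': 7}
```

The point is not the code but the shift in emphasis: a short retention window plus a mandatory human review, instead of a year of logs nobody reads.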


I’m a great fan of technology, but the human eye and brain do it best.


10 guidelines for a security audit

What exactly is the role of an information security auditor? In some cases, such as PCI DSS 2.0 compliance by Level 1 and 2 merchants, an external audit is a condition of compliance. In the case of ISO 27001, the audit process is key to achieving ISO 27001 certification (unlike PCI and HIPAA, ISO regards certification, not compliance, as the goal).

There is a gap between what the public expects from an auditor and how auditors understand their role.

Auditors look at transactions and controls. They’re not the business owner and the more billable hours, the better.

The “reasonable person” assumes that the role of the security auditor is to uncover vulnerabilities, point out ways to improve security and produce a report that will enable the client to comply with relevant compliance regulation. The “reasonable person” might add an additional requirement of a “get out of jail free card”, namely that the auditor should produce a report that will stand up to legal scrutiny in times of a data security breach.

Auditors don’t give out “get out of jail” cards, and audit is not generally part of business risk management.

The “reasonable person” is a legal fiction of the common law representing an objective standard against which any individual’s conduct can be measured. As noted in the wikipedia article on the reasonable person:

This standard performs a crucial role in determining negligence in both criminal law—that is, criminal negligence—and tort law. The standard also has a presence in contract law, though its use there is substantially different.

Enron, and the resulting Sarbanes-Oxley legislation, resulted in significant changes in accounting firms’ behavior, but judging from the 2009 financial crisis, from Morgan Stanley to AIG, the regulation has done little to improve our confidence in our auditors. The number of data security breaches is an indication that the situation is similar in corporate information security. We can all have “get out of jail” cards, but data security audits do not seem to be mitigating new risks from tablet devices and mobile apps. Nor am I aware of a PCI DSS certified auditor being detained or sued for negligence over data breaches at PCI DSS compliant organizations such as Health Net, where 9 data servers that contained sensitive health information went missing from Health Net’s data center in Rancho Cordova, California. The servers contained the personal information of 1.9 million current and former policyholders, compromising their names, addresses, health information, Social Security numbers and financial information.

The security auditor expectation gap has sometimes been depicted by auditor organizations as an issue to be addressed by educating users about the audit process. This is a response not unlike the notion that security awareness programs are effective data security countermeasures for employees who willfully steal data or bring their personal devices to work.

Convenience and greed tend to trump awareness and education in corporate workplaces.

Here are 10 guidelines that I would suggest for client and auditor alike when planning and executing a data security audit engagement:

1. Use an engagement letter every time. Although SAS 83 makes it clear that an engagement letter must be used, the practical reason is that an engagement letter sets mutual expectations, reduces the risk of litigation and, by putting mutual requirements on the table, improves the client-auditor relationship.

2. Plan. Plan carefully who needs to be involved and what data needs to be collected, and require input from C-level executives down to group leaders and the people who provide customer service and manufacture the product.

3. Make sure the auditor understands the client and the business. Aside from wasted time, most of the famous frauds happened where the auditors didn’t really understand the business. Understanding the business will lead to better-quality audit engagements and enable the auditor and audit manager to be peers in the boardroom, not peons in the hallway.

4. Speak to your predecessor. Make sure the auditor talks to the people who came before him. Speak with the people in your organization who did the last data security audit. Even if they’ve left the company, it is important to understand what they did and what they thought could have been improved.

5. Don’t tread water. It’s not uncommon to spend a lot of time collecting data and auditing procedures and logs, then run out of time and billable hours, missing the big picture: how badly the client organization could be damaged if it had a major data security breach. Looking at the big picture often leads to audit directions that can prevent disasters and subsequent litigation.

6. Don’t repeat what you did last year. Renewing a 2,000-hour audit engagement that regurgitates last year’s security checklist will not reduce your threat surface. The objective is not to work hard; the objective is to reduce your value at risk, comply and… get your “get out of jail” card.

7. Train the client to fish for himself. This is a win-win for the auditor and client. Beyond reducing the amount of work onsite, training client staff to be more self-sufficient in the data collection and risk analysis process enables the auditor to better assess the client’s security and risk staff (one of the requirements of a security audit) and improves the quality of the data collected, since client employees are closer to the actual vulnerabilities and non-compliance areas than any auditor.

As I learned with security audits at telecom service providers and credit card issuers, the customer service teams know where the bodies are buried, not a wet-behind-the-ears auditor from KPMG.

8. Follow up on incomplete or unsatisfactory information. After a data security breach, there will be litigation. During litigation, you can always find expert testimony that agrees with your interpretation of information. The problem is not interpreting the data but acting on unusual or missing data. If your ears start twitching, don’t ignore your instincts. Start unraveling the evidence.

9. Document the work you do. Plan the audit and document the process. If there is a peer review, you will have the documentation showing the procedures that were done. Documentation will also help you improve the next audit.

10. Spend some time evaluating your client/auditor. At the end of the engagement, take a few minutes to interview your auditor/client and ask performance-review kinds of questions: What do you think your strengths are? What are your weaknesses? What was successful in this audit? What do you consider a failure? How would you grade yourself on a scale of 10?

Perhaps the biggest mistake we all make is not carefully evaluating the potential we have to meet our goals as audit, risk and security professionals.

A post-audit performance review will help us do it better next time.


Credit card security in the cloud

While the latest version of the Payment Card Industry (PCI) Data Security Standard (DSS), 2.0, is an improvement, the scope of system component connectivity is not well-defined:

A “system component” is part of the cardholder data environment (CDE) if one of two conditions is met:

  1. The system component stores, processes, or transmits cardholder data, or
  2. The system component is “connected” to another system component that meets condition 1
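Read literally, the two conditions reduce to a simple set computation (a hypothetical Python illustration of the rule as quoted, not part of the standard):

```python
# A component is in the CDE if it stores/processes/transmits cardholder
# data (condition 1), or is directly connected to a component that does
# (condition 2).

def cde_scope(handles_chd, connections):
    """handles_chd: set of components meeting condition 1.
    connections: dict mapping a component to the components it connects to."""
    in_scope = set(handles_chd)
    for component in handles_chd:
        in_scope |= connections.get(component, set())
    return in_scope

# Toy topology: the web app talks to the card database; a reporting
# server talks only to the web app.
connections = {"card_db": {"web_app"}, "web_app": {"card_db", "reporting"}}
print(sorted(cde_scope({"card_db"}, connections)))  # → ['card_db', 'web_app']
```

Note that under this literal reading the reporting server stays out of scope even though it touches a component inside the CDE – exactly the sort of ambiguity an undefined notion of “connected” leaves open.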

PCI DSS 2.0, however, does not explicitly define what system application “connectivity” means. This is a curious oversight, since the PCI DSS and PA DSS standards are otherwise so detailed. Connectivity is the root vulnerability of credit card theft – without connectivity to the systems that store the credit card data, there would never be a data security breach. PCI DSS 2.0 does go into a detailed explanation of what a system component means, in the section “Scope of Assessment for Compliance with PCI DSS Requirements”:

“System components” are defined as any network component, server, or application that is included in or connected to the cardholder data environment. “System components” also include any virtualization components such as virtual machines, virtual switches/routers, virtual appliances, virtual applications/desktops, and hypervisors. The cardholder data environment is comprised of people, processes and technology that store, process or transmit cardholder data or sensitive authentication data. Network components include but are not limited to firewalls, switches, routers, wireless access points, network appliances, and other security appliances. Server types include, but are not limited to the following: web, application, database, authentication, mail, proxy, network time protocol (NTP), and domain name server (DNS).

Now that we understand what a system component is, what kind of connectivity needs to be addressed in the credit card data security requirements? Obviously, the standard was written by system administrators and not programmers, because the notion of interprocess communication is ignored. Once we are running online transaction applications in the cloud, the notion of a public network becomes antiquated.

I submit that application process connectivity must be more rigorously defined in order to reduce data security vulnerabilities in the cloud. I propose testing five conditions of Layer 7 application process connectivity, regardless of Layer 3 network connectivity (be it customer-premise LAN, VLAN, WiFi network, public Internet, X.25, VPN or whatever).

I believe that the appropriate place for these conditions would be in the PA DSS (Payment Application Data Security Standard) that is used as a guide for software security assessments of payment processing applications.

  1. SaaS Web applications that transmit credit card information via Web services (REST or SOAP, JSON or any other form of serialization) over the HTTPS protocol, regardless of port number
  2. SaaS application processes that exchange credit card information using remote messaging such as RPC or TCP/IP sockets
  3. End-point client processes that receive credit card information when communicating with a remote server using RDP (remote desktop protocol)
  4. Any process that receives or transmits data to a virtualized process in the cloud – i.e. software that processes credit card data running on a virtual machine
  5. All messages exchanged between two application processes will be encrypted using strong cryptography
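For condition 5, here is a sketch of what “strong cryptography” between two application processes might mean in practice (hypothetical Python; TLS 1.2+ with mandatory certificate validation is my working definition of “strong”, not language from the standard):

```python
import socket
import ssl

# Build a TLS context that refuses weak protocol versions and
# unauthenticated peers, then wrap an ordinary inter-process TCP
# connection in it.

def make_tls_context() -> ssl.SSLContext:
    context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
    context.minimum_version = ssl.TLSVersion.TLSv1_2  # no SSLv3 / TLS 1.0 / 1.1
    return context  # verify_mode is CERT_REQUIRED by default

def open_encrypted_channel(host: str, port: int) -> ssl.SSLSocket:
    raw = socket.create_connection((host, port))
    return make_tls_context().wrap_socket(raw, server_hostname=host)
```

The design point is that the encryption requirement attaches to the application process pair, not to whatever Layer 3 network happens to be underneath.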

Compliance, security and Wikileaks

This is an essay I wrote in 2004. There is nothing here that doesn’t still ring true, especially with the latest round of Wikileaks disclosures. I wrote then, and I still hold, that compliance and data security technology cannot protect an organization from a data breach. The best security countermeasures for protecting a company’s digital assets and individuals’ private information are uncompromising ethics and honest management.

On security and compliance

It’s impossible to ignore the fact that compliance (like it or not) is a driver for companies to invest in improving their software and data security beyond running firewalls and anti-virus. While compliance drives companies into taking action, do compliance activities actually result in implementing and sustaining strong data security management and technology countermeasures? We will see that the answer is generally no.

There is a plethora of compliance regulations: for privacy (HIPAA/HHS), for children (the Children’s Online Privacy Protection Act, COPPA), for credit card holders (FCRA), for merchants (PCI DSS), for public entities (Sarbanes-Oxley), for insurance (state laws), for securities trading (SEC), for telecom (New York State Public Service Commission rulings) and many, many more.

Looking at the wide variety of regulations and standards we can see that compliance really comes in only 3 flavors:

  1. Governance regulation such as HIPAA and SOX. Government compliance regulation is focused on customer protection and requires a top-down risk analysis process.
  2. Industry compliance regulation such as PCI DSS, which focuses on protecting the card association supply chain, doesn’t require risk analysis and mandates a fixed control set (if you think that best-practice security control sets are a good idea, stop and consider the abysmal failure of the Maginot Line in WWII and the Bar Lev Line in the Yom Kippur War in 1973).
  3. Vendor-neutral standards such as ISO 27001, which focus on data and system protection and don’t require risk analysis or consider asset values, although ISO 27001 arguably provides the most comprehensive set of controls.

Well-meaning as the regulators may be, there are two fundamental flaws in the security-by-compliance model:

  1. You can comply without being secure and use compliance as a fig-leaf for lack of data security
  2. You can invest in software and data security without being compliant

…We don’t invest in data loss prevention technology because it’s a criminal offense when one of our employees breaches critical filings. We feel the legal deterrent is sufficient.
– IT Manager, Securities and Exchange Commission in a Middle East country

Privacy regulation trends in the US and Europe

Government-regulated privacy-protection of information is a natural response rooted in the field of telecommunications, since countries either own the telecom business outright or tightly regulate their industry. This has largely led to a view of electronic privacy as an issue of citizen rights versus state legislation and monopoly.

In the information age, privacy has two dimensions – intrusion and data breach:

  • Protection against intrusion by unwanted information or criminals; similar to the constitutional protection to be secure in one’s home.
  • Protection against data breach by controlling information flows about an individual’s or a business’s activities; for example preventing identify theft or protecting a company’s trade secrets.

Regulation has moved in two major directions – centralized general protection and decentralized ad-hoc protection. The EEC (European Economic Community) has pursued the former and passed comprehensive data protection laws with coordination on information collection and data flows. The United States, in contrast, has dealt with issues on a case-by-case basis (health care, credit cards, corporate governance, etc.), resulting in a variety of ad hoc federal and state legislation.

A synthesis of the European and American approaches is to formulate a set of broad rules for a vertical industry. This was the direction taken by the New York Public Service Commission on the issue of telecommunications privacy. However, U.S. privacy legislation remains considerably less strict than European law in the regulation of private databases. Two Representatives on the House Select Committee on Homeland Security are calling for a Privacy Czar, who would be responsible for privacy policies throughout the federal government as well as for ensuring private technology does not erode public privacy.

“Right now, there’s no one at home at the White House when it comes to privacy. There’s no political official in the White House who has privacy in their title or as part of their job description. Congress should take the lead here because this administration has not,” says Peter Swire, an Ohio State University law professor and former chief privacy officer in the Clinton administration, in an interview with Wired back in 2006. Has anything changed in the Obama administration?
(http://www.wired.com/news/privacy/0,1848,63542,00.html )

Horizontal applications

Sarbanes Oxley: enforcing corporate governance

The Sarbanes-Oxley Act (SOX) has had a major impact on US corporate governance. SOX was a response to the accounting scandals and senior management excesses at some public companies in recent years. It requires compliance with a comprehensive reform of accounting procedures for public corporations to promote and improve the quality and transparency of financial reporting by both internal and external independent auditors. SOX is enforced by the Public Company Accounting Oversight Board (“the Board”).

SOX Section 404 – “Management Assessment of Internal Controls” – is indirectly relevant to data breaches. It requires an “internal control” report in the annual report which states management’s responsibility and assesses the effectiveness of internal controls. Companies are also required to disclose whether they have adopted a code of ethics for senior financial officers, and the contents of that code.

SOX Section 409 – “Real Time Disclosure” – implies that a significant data breach event must be disclosed on “a rapid and current basis”. SOX also increases the penalties for mail and wire fraud from 5 to 10 years and makes it a crime to tamper with a record or otherwise impede any official proceeding.

HHS/HIPAA: enforcing patient privacy

Each time a patient sees a doctor, is admitted to a hospital, goes to a pharmacist or sends a claim to a health plan, a record is made of their confidential health information. The Health Insurance Portability and Accountability Act of 1996 (HIPAA) gave Congress 3 years to pass health privacy legislation. In May 2003, the HHS (Department of Health and Human Services) implemented federal protections for the privacy of individual health information under the Privacy Rule, pursuant to HIPAA. Because of the limitations of HIPAA, the rule is far from seamless and will require a lot more work in the US Congress by both parties to ensure the privacy of personal health information.

My conclusion on all of this is:

  • SOX has been a strong driver for sales of IT products and services, but it’s totally unclear whether the billions spent by corporate America on compliance have actually done much to improve customer protection.

Vertical Industries

Securities: Did we leave the cat guarding the cream?

Annette L. Nazareth, market regulation director at the U.S. Securities and Exchange Commission, outlined proposals at a securities industry conference in New York on May 21 calling for stock exchanges, as the Associated Press put it, “to abide by most of the requirements they set for companies they list.”
(http://www.sec.gov./news/speech/spch052104aln.htm )

Wow.

Insurance Industry: Federal versus free market

In October 2003, witnesses before the Senate Commerce Committee testified regarding insurance industry regulations. The committee analyzed the current US system, which relies on state law, and examined proposals for improving industry regulation. One of the central issues was whether or not the federal government should play a larger role in insurance industry regulation. Also discussed was the need to protect consumers without forcing unnecessary regulations on insurance companies. Some senators expressed concerns about high insurance rates.

Conclusion

If you’re a vendor of IT products and services, it has become increasingly difficult to sell security, with the rising complexity of attacks and countermeasures and decision makers who find it difficult to understand what works and what doesn’t.

What will happen to the B2C security industry is hard to say. Perhaps the Intel-McAfee acquisition is a sign of things to come, where security becomes a B2B industry like safety manufacturing for the aerospace and automotive industries.

Until security becomes built into the cloud, my best suggestion for a business is: don’t leave your ethics at home, and don’t wait for the government to tell you what you learned from your parents at age 5 – put your toys away and don’t steal from the other kids.


Will smart phones replace credit cards?

A recent post, “Can smartphones replace credit cards,” wonders whether or not consumers are ready to trade in their plastic for their cell phones.

Mobile payment technology has been around for about 10 years and has not really taken off in a big way, although there are niche applications. In Tel Aviv, for example, you can buy drinks from vending machines and pay for parking with your cell phone.

Clearly it’s not a technology barrier to entry but a cultural one.



The top 2 responses to data security threats

How does your company mitigate the risk of data security threats?

Is your company management adopting a policy of “It’s other peoples money”?

In a recent thread on LinkedIn, Jody Keyser shared some quotes from David Vose’s book on risk, reliability and computerized risk modeling: Risk Analysis: A Quantitative Guide.

The responses to correctly identified and evaluated risks are many but generally fall into one of the following categories:

– Cancel the project
– Eliminate (do it another way)
– Transfer (insure, back-to-back contract)
– Share (with a partner or contractor)
– Reduce (take a less risky approach)
– Add a contingency (increase budget, deadline, etc. to allow for the possibility of risk)
– Collect more data to better understand the risk
– Do nothing (cost is just too dang high)
– Increase (maybe the plan is too cautious)

In my experience, when it comes to data security and data loss prevention (DLP) projects, the top 2 responses to data security threats are “accept the risk”, followed by “cancel the project” in a close second place.

The other alternatives are almost all non-starters. The question is – why?

Eliminating risk by changing the business process is often not an option, or is too much trouble for employees. Consider, for example, the process of transferring documents to external contractors. Even though it’s trivial to encrypt documents inside a Zip file and share the password, most companies don’t make this part of their security procedure – and those that do require encryption of documents sent to external business partners don’t deploy DLP monitoring to ensure compliance with the encryption policy.

There are multiple reasons why business managers accept data security risk. Most are related to cost, complexity, changing business requirements and a tacit disbelief in the effectiveness of technology in preventing data theft and fraud.

The reasons for accepting data security risk come down to the difference between being secure and feeling secure. Since most companies don’t monitor data flows, they don’t know how many sensitive digital assets are leaking to the competition – so they don’t have the empirical data to analyze their data security threats and measure data security risk in terms of the dollar threat to the business. Such data would enable a business to deploy data security countermeasures and be secure at an acceptable cost. It would also enable them to measure the cost-effectiveness of their data security technology and challenge their innate beliefs and skepticism.
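To make the idea of measuring data security risk in dollar terms concrete, here is a minimal sketch using the standard annualized loss expectancy formula (ALE = SLE × ARO). The asset values, exposure factors and incident rates below are purely illustrative assumptions, not figures from any real business.

```python
# Minimal sketch: expressing data security risk in dollar terms via
# annualized loss expectancy (ALE = SLE * ARO). All numbers are
# illustrative assumptions.

def single_loss_expectancy(asset_value, exposure_factor):
    """Dollar loss from one incident: asset value times the fraction exposed."""
    return asset_value * exposure_factor

def annualized_loss_expectancy(sle, annual_rate_of_occurrence):
    """Expected yearly loss: per-incident loss times incidents per year."""
    return sle * annual_rate_of_occurrence

# Hypothetical asset: a customer database worth $2,000,000, where a leak
# exposes 25% of its value and is estimated to occur 0.5 times a year.
sle = single_loss_expectancy(2_000_000, 0.25)   # $500,000 per incident
ale = annualized_loss_expectancy(sle, 0.5)      # $250,000 per year

# A countermeasure (say, DLP monitoring) is worth deploying if its yearly
# cost is below the risk reduction it buys.
countermeasure_cost = 80_000
risk_reduction = ale * 0.6  # assume it mitigates 60% of the expected loss
print(sle, ale, risk_reduction > countermeasure_cost)
```

With empirical data from monitoring actual data flows, the estimated rates and exposure factors can be replaced by measured ones – which is exactly the feedback loop most companies are missing.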

However, company management already feels secure: they have delegated that part of the business to the information security folks, and reading the papers tells them that customers (not the business management) pay the cost of a data security breach.

As a kid growing up in South Jersey, whenever there was the occasional report of an urban boondoggle or million-dollar NASA toilets, my Dad (who worked for RCA on defense projects and knew about these things) would use the expression “other people’s money” – or, if it was closer to home, “Pa’s rich and Ma don’t care”… which is really close to home this year for Americans, as President Obama takes the US to an unprecedented $1.35 trillion budget deficit in 2010.


Choosing endpoint DLP agents

There is a lot to be said for preventing data loss at the point of use, but if you are considering endpoint DLP (data loss prevention), I recommend against buying and deploying an integrated DLP/anti-virus endpoint security agent, for four reasons:
  • Bloatware/system resource consumption – if you’re concerned about anti-virus system resource usage, imagine layering on another 100MB of software, another 20MB of data security rules and loads of management network traffic, just for the luxury of getting a good deal from Symantec on a piece of integrated software that IT doesn’t know how to manage anyhow.
  • Software vulnerabilities – if you have issues with the anti-virus, you don’t want them affecting your data flows via the DLP agent. Imagine a user uninstalling the anti-virus and impacting the DLP agent.
  • Diversity – the strong anti-virus products have weak DLP agents, which means the advantage of a single management platform is spurious. Having strong anti-virus software on your Windows PCs from a vendor like McAfee complements having strong data loss prevention from a company like Verdasys.
  • Not a good fit for the organization – IT manages the anti-virus, Security manages the data security, and never the twain shall meet.

Learning about change and changing your security

Reading through the trade press, DLP vendor marketing collateral and various information security forums, the conventional wisdom is that the key threat to an organization is trusted insiders. This is arguable, since it depends on your organization, the size of the business and the type of operation. However, it is certainly true at the national security level, where trusted insiders who committed espionage have caused considerable damage (see MITRE Corporation – Detecting Insider Threat Behavior).

There are three core, interrelated problems in modern data security:

  1. Systems are focused on rule-breaking (IDS, DLP, firewalls, procedures) – yet a malicious insider can engage in data theft and espionage without breaking a single IDS/IPS/DLP rule.
  2. The rules are static (standards such as ISO 27001 or PCI DSS 1.x) or slow-moving at best (a yearly IT governance audit).
  3. They ignore collusion between insiders and malicious outsiders, whether for espionage purposes (a handler who manipulates an employee) or for criminal purposes (stealing customer data for resale).

You may say – fine, let’s spend more time observing employee behavior and train supervisors to spot the tell-tale signs of change that may indicate impending involvement in a crime.

However, malicious outsiders (criminals, competitors, terrorists…) who may exploit employees in order to obtain confidential data are just another vulnerability in a long line of business vulnerabilities. Any vulnerability must be considered within the context of a threat model: the organization has assets that are damaged by threats, which exploit vulnerabilities, which are mitigated by countermeasures. The organization needs to think outside the box and at least attempt to identify new threats and vulnerabilities.

The issue is not that employees can be bought or manipulated; the issue is that government and other hierarchical organizations use a fixed system of security controls. By reducing the organization’s security to passive execution of defense rules in procedures and firewalls, we ignore the extreme ways in which attack patterns change over time. Any control policy that is presumed optimal today is likely to be obsolete tomorrow. It is a fair assumption that an organization that doesn’t change its data security procedures frequently will provide an insider with enough means, opportunity and social connectivity to game the system – and once he or she has motivation, you have a crime.
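As a contrast to fixed rules, here is a minimal sketch of what adaptive, per-user monitoring might look like: instead of one static threshold, flag activity that deviates sharply from a user’s own recent baseline. The threshold choice and sample data are illustrative assumptions, not a description of any particular DLP product.

```python
# Minimal sketch of adaptive monitoring: flag outbound data volume that
# deviates sharply from a user's own recent baseline, rather than testing
# it against a single fixed rule. Thresholds and data are illustrative.

from statistics import mean, stdev

def is_anomalous(history_mb, today_mb, sigmas=3.0):
    """Flag today's volume if it exceeds mean + sigmas * stdev of the
    user's recent history (a simple z-score style test)."""
    if len(history_mb) < 2:
        return False  # not enough history to form a baseline yet
    mu, sd = mean(history_mb), stdev(history_mb)
    if sd == 0:
        return today_mb > mu  # perfectly flat history: any increase stands out
    return today_mb > mu + sigmas * sd

# A static rule like "block transfers over 1 GB" misses this insider, who
# normally moves ~50 MB a day and suddenly moves 400 MB.
history = [48, 52, 50, 47, 53, 49, 51]
print(is_anomalous(history, 400))  # far outside the user's baseline
print(is_anomalous(history, 55))   # within normal day-to-day variation
```

Because the baseline is recomputed as behavior changes, the control policy adapts over time instead of remaining the “presumed optimal” rule it was on the day it was written.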

Learning about change and changing your security systems must be at the heart of day-to-day security management.
