Tag Archives: Websense

How can we convince our VP that a network-based DLP makes sense?

My colleague Michel Godet sent me a link to an article that Mike Rothman recently wrote.

Michel (rightly) thinks that it supports the approach we have been pushing in Europe for over a year now: justifying data security technology investments with Value at Risk calculations.

Mike's article on building a business case for security is good. I agree with most of what he writes. (I would have commented, but searchsecurity doesn't allow commenting on their Ask The Security Expert: Questions & Answers articles.)

So I will use my own blog to post a couple of my comments. (I should probably ping Mogull on this too, but I lost his email.)

1) I agree that if you can't get past the first energy barrier of concern with information protection, then you are a non-starter for DLP (or any data security technology, for that matter). It must fit the business needs; otherwise it's like trying to sell a trombone to a violinist. Total waste of time.

However, once you get past the first roadblock, the business problem for security investment is:

What is your value at risk, what are the right security countermeasures, and are they cost-effective? Not: what are the vendors selling this quarter?

There is no reason in the world why data security should be any different than any other IT investment.

2) I totally disagree that looking only at a network-based DLP product is inherently limiting. Just because a few vendors like Websense and Symantec have integrated endpoint and gateway products doesn't make them cost-effective data security countermeasures, ensure the success of the project or prevent the next data breach.

Let me submit three counter-examples:

A) Suppose all your sensitive data is in the cloud – then maybe network DLP is a good fit

B) Suppose all your endpoints are in the cloud – then maybe endpoint DLP is a good fit

C) Suppose all your sensitive data is on notebooks – then maybe encryption is the right countermeasure to data loss.

The answer is that you have to measure stuff: measure your people, process and system vulnerabilities and where your assets are headed. After that you need to estimate your VaR, and only THEN start thinking about the people, process and technology countermeasures.
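
To make the VaR arithmetic concrete, here is a minimal sketch in Python. The asset values, threat frequencies and exposure factors are hypothetical placeholders, not real client numbers.

# Minimal Value at Risk sketch: sum of annualized loss expectancies.
# All asset values, rates and exposure factors below are hypothetical.
assets = {
    "customer_database": 2_000_000,  # estimated asset value in dollars
    "design_documents":  1_500_000,
}

# (threat, target asset, annual rate of occurrence, fraction of value lost)
threats = [
    ("insider_data_theft", "customer_database", 0.1, 0.5),
    ("lost_laptop",        "design_documents",  0.4, 0.2),
]

def value_at_risk(assets, threats):
    """ALE per threat = asset value * annual rate * exposure; VaR = sum."""
    return sum(assets[asset] * rate * exposure
               for _, asset, rate, exposure in threats)

print(f"Annualized value at risk: ${value_at_risk(assets, threats):,.0f}")
# Countermeasures are then judged against the VaR they actually reduce.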

BTW – I’ve been saying this for years

October 28, 2004 –  A guide to buying extrusion prevention products

March 17, 2005 – How to justify Information security spending

Now if only we could find a way to monetize being right.


Free agent DLP from Sophos


Sophos has announced that they will soon include endpoint data loss prevention functionality in their anti-virus software. Since the technology was developed in-house, Sophos will have an independent offering – unlike Websense, RSA, Symantec, Trend Micro and McAfee, who all purchased DLP technology and have integrated it into their product lines with varying levels of success (or not).

The Sophos move to include agent DLP functionality for free is a breath of fresh air in a data security industry long known for long-winded, heavy-handed, clumsy and frequently amateurish attempts at turning the wave of data breaches into a franchise that would drive sales of products purchased from visionary DLP startups.

Sophos is known to be independent and may not be inclined to partner with other pure-play data security vendors like the network DLP company Fidelis Security Systems. They may not have to partner if the play works well.

Beyond strategic speculation, the Sophos move should give customers a very good reason to ask why they should spend $80-150 for a Verdasys Digital Guardian agent, or $40-80 for  McAfee agent DLP software.

If Sophos can do a solid job on detecting and preventing loss of digital assets such as credit cards or sensitive Microsoft Office files at the point of use, then free looks like an awfully good value proposition.

Consider the recent deal that Trend Micro did at Israel Railroads for almost free ($10/seat for 2500 seats) – Trend can't be making money on that transaction. But free or almost-free is not a bad penetration strategy if it gets your agent on every desktop in the enterprise and you get footprint and recurring service revenue for anti-virus.

I know I will be taking a close look when the software is released.


The Americanization of IT Research

The Burton Group has released the results of their research, which concludes that Symantec (Vontu), RSA (Tablus) and Websense (Port Authority) are the leading DLP vendors.

Burton's choice is indicative of the Americanization of the information security space, where government compliance regulation and the marketing agendas of large security vendors appear to drive US customer security decisions. (Note that compliance is not equivalent to security, for several fundamental reasons, as I noted in my post Compliance is the new security standard.)

Outside the US, the story is a bit different.

We hardly encounter RSA in EMEA as a DLP solution. RSA Security has the largest development group dedicated to data loss prevention, and that counted for a lot in the Burton study; I'm not sure why. Great software today is usually written by small teams, and I would not equate the number of programmers with the quality of the software.

I recently met Bill Nagel from Forrester, and he told me that at a seminar Forrester ran recently (September 09) in Holland, none of the CISOs at the seminar were planning a DLP implementation this year, and only 20% were considering a DLP implementation in 2010.

Clients I speak with in EMEA are less interested in enterprise information protection (although the advantages are patently clear, the technology is patently not there yet…) and more interested in exploring tactical solutions like DLP “Lite” – monitoring SMTP and HTTP channels for data security violations and using that information to enforce business process and improve employee behavior.



DLP – a Disturbing Lack of Process?

Ted Ritter has suggested that we rename DLP a Disturbing Lack of Process.

Indeed, DLP is not a well-defined term, since so many vendors (Kaspersky anti-virus, McAfee anti-virus, Symantec anti-virus, Trend Micro Provilla, CA Backup... you name it) have labeled their products "data loss prevention" products in an attempt to turn the tide of data breaches into a franchise that will help them grow sales volume.

I disagree, however, that DLP should be renamed a "Disturbing Lack of Process". Not even as a joke.

I do not think that lack of business process is the issue. Any company still afloat today has business processes designed to help it take orders, add value and make money. Such companies understand on their own that they must protect their intellectual property from theft and abuse.

The question is not lack of process but whether or not security is being used to help enforce business process in the relevant areas of product safety, customer service, employee workplace security and information protection in business-to-business relationships.

In a profitable company, the business processes are aligned with company strategy to one degree or another. Good companies like Intel are strong on business strategy, process and execution while government organizations tend to be strong on strategy (President Obama) and regulation (FISMA) and short on execution (Obama Nobel Peace Prize).  This is true in most countries, maybe Germany, Singapore and Japan do a better job than most.

I think we are doing most businesses an injustice by asserting that they have a “disturbing lack of process”- instead we should focus on the question of where and how security fits into the business strategy and how it can help enforce relevant processes in the areas of customer protection and privacy, customer service, employee security and privacy and information protection with business partners.

An approach that uses data security for process enforcement automatically aligns data security with company strategy (assuming that the business processes support the company strategy, we may assume a transitive relationship).

Using data security for process enforcement also simplifies DLP implementations since the number of business processes and their data models is far smaller than the number of data types and data records in the organization. Easier to enumerate is easier to protect.

It is indeed immensely easier to describe a 7-step customer service process and use DLP to enforce it than to try to perform e-Discovery on 10 terabytes of customer data contained in databases and workstations.
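
To illustrate "easier to enumerate is easier to protect", here is a minimal sketch; the process steps and channel names are invented for illustration, not taken from any product. An enumerated business process reduces enforcement to a small whitelist of permitted flows.

# Hypothetical sketch: enforce an enumerated customer service process
# instead of fingerprinting every record. Steps and channels are invented.
ALLOWED_FLOWS = {
    # (process step, channel) pairs the business process permits
    ("send_quote",     "smtp_to_customer"),
    ("send_policy",    "smtp_to_customer"),
    ("escalate_claim", "internal_crm"),
}

def check_event(step: str, channel: str) -> bool:
    """Return True if this data flow is part of the enumerated process."""
    allowed = (step, channel) in ALLOWED_FLOWS
    if not allowed:
        print(f"ALERT: '{step}' over '{channel}' is outside the defined process")
    return allowed

check_event("send_quote", "smtp_to_customer")  # permitted, no alert
check_event("send_policy", "ftp_to_unknown")   # flagged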

The 3 basic tenets of information security are data confidentiality, integrity and availability. DLP addresses the confidentiality requirement, leaving integrity and availability to other technologies and procedures that are deployed in the enterprise.

The key  to effective enterprise information protection is making information security part of enterprise business processes – for example:

  • Confidentiality: not losing secret chemical formulas to the competition. (Note that credit card numbers on their own are not confidential information according to any of the US state privacy laws. A single credit card number without additional PII is neither secret nor of much use.)
  • Integrity: not enabling traders to manipulate forex pricing for personal advantage.
  • Availability: protecting servers from DDOS attacks.

DLP is having an uphill battle because (in the US at least), DLP technologies are point solutions deployed for privacy compliance rather than for business process enforcement and enterprise information protection.

DLP technology is best used as a process enforcement tool, not as a compliance trade-off – unlike PCI DSS 1.2 section 6.6, which mandates a Web application firewall or a software security assessment of your web applications. It is easier (but perhaps more expensive) to buy a piece of technology and check off Section 6.6 than to fix the bugs in your software – or enforce your business processes.


Data security for SMB

Yesterday, I gave a talk at our Thursday security Webinar about data security for SMB (small to mid-sized businesses).

I've been thinking about DLP solutions for SMB for a couple of years now; the market didn't seem mature, or perhaps SMB customer awareness was low, but with the continued wave of data security breaches, everyone is aware now. DLP vendors like Verdasys, Fidelis and Vontu (now Symantec) have traditionally focused on Global 1000 companies, but Infowatch is now preparing a product specifically tailored to the business requirements and pocket of the SMB market. There are about 10 million SMBs in the world, so this would appear to be a fertile market for both attackers and defenders.



Is PCI DSS a failure?

A recent Ponemon survey found that 71% of companies don't consider PCI strategic, even though 79% had experienced a breach. Are these companies assuming that a data security breach is cheaper than the security?

How should we understand the Ponemon survey? Is PCI DSS a failure in the eyes of US companies?

Let’s put aside the technical weaknesses, political connotations and commercial aspects of the PCI DSS certification franchise for a second.

Consider two central principles of security: the cost of damage, and the goodness of fit of countermeasures.

a) The cost of a data security breach versus the cost of the security countermeasures IS a bona-fide business question. If the cost of PCI certification is going to be $1M for your business and your current Value at Risk is only $100k, then PCI certification is not only not strategic, it is a bad business decision.

b) Common sense says that your security countermeasures should fit your business, not a third-party checklist designed by a committee and obsolete by the time it was published.

The fact that the Ponemon study shows 71% of businesses surveyed don't see PCI as strategic is an indication that 71% have this modicum of common sense. The other 29% are either naive, ignorant or work for a security product vendor.

Common sense is a necessary but not sufficient condition
If you want to satisfy the two principles, you have to prove two hypotheses:

1. Data loss is currently happening.

  • What data types and volumes of data leave the network?
  • Who is sending sensitive information out of the company?
  • Where is the data going?
  • What network protocols have the most events?
  • What are the current violations of company AUP?

2. A cost-effective solution exists that reduces risk to acceptable levels.

  • What keeps you awake at night?
  • Value of information assets on PCs, servers & mobile devices?
  • What is the value at risk?
  • Are security controls supporting the information behavior you want (sensitive assets stay inside, public assets flow freely, controlled assets flow quickly)?
  • How much do your current security controls cost?
  • How do you compare with other companies in your industry?
  • How would risk change if you added, modified or dropped security controls?
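
The last question lends itself to a simple what-if calculation. A minimal sketch, with invented numbers:

# What-if sketch: how does value at risk change if we add a control?
# All numbers are invented for illustration.
baseline_var   = 220_000  # annualized VaR with current controls (dollars)
control_cost   =  50_000  # annual cost of the proposed control
risk_reduction =    0.60  # fraction of the VaR the control should remove

residual_var = baseline_var * (1 - risk_reduction)
net_benefit  = (baseline_var - residual_var) - control_cost
print(f"Residual VaR: ${residual_var:,.0f}")
print(f"Net annual benefit of the control: ${net_benefit:,.0f}")
# A negative net benefit means the control costs more than the risk it removes.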

If PCI is a failure, it is not because it doesn't prevent credit card theft (there is no such animal as a perfect set of countermeasures). PCI is a failure because it does not force a business to use its common sense and ask these practical business questions.

Danny Lieberman
Join me every Thursday for an online discussion of best practices – Register now

Tell your friends and colleagues about us. Thanks!
Share this

Trusted insider threats, fact and fiction


Richard Stiennon is a well-known and respected IT analyst; he has a blog called IT Harvest.

A recent post of his had to do with trusted insider threats. Despite the length of the article, I believe it has a number of fundamental flaws:

  • Overestimating  the value of identity and access management in mitigating trusted insider threats
  • Lacking  empirical data to support the claim that “the insider threat actually outweighs the threats from cyber criminals, hackers and the malware”
  • Missing a basic management issue of accountability

The role of identity and access management in preventing trusted insider security violations

Stiennon writes that IAM (Identity and access management) "is the single most valuable defense you have against the insider threat." I beg to disagree, and I will attempt to explain by using the model of a crime.

Like any other crime, in order to steal or disclose assets, a person needs a combination of means, opportunity, and intent

IAM provides the means for the trusted insider. Companies issue users legitimate accounts with the rights to access certain data, applications, databases and file services. Insiders have knowledge of how the system works, the business processes, the company culture and how people interact. They know who manages the rights management systems and who grants system permissions. With the right knowledge and social connections, means can be obtained even if they were not originally granted by design in the IAM system.

A trusted insider is an employee who is motivated by self-interest and influenced by personal preferences, social context, corporate culture and her aversion to risk-taking, weighed against the premium gained by stealing data. There is little in the traditional access control model to mitigate any of these threats once access has been granted.

In 100 percent of the cases we investigated in our data security practice – the client’s permissions systems were working properly, the trusted insiders involved all had been granted appropriate rights, they did not perform any elevation of privilege exploits – they took data that they had appropriate access to. Directors of new product development, system managers, sales managers – each and every one that took and/or abused data did so with appropriate permissions.

Lacking empirical data

“While often overlooked, the insider threat actually outweighs the threats from cyber criminals, hackers and the random malware that most organizations concentrate on”

Stiennon doesn't bring any evidence for this populist statement. As a research analyst, I would expect some independent numbers behind it. Au contraire, Richard: according to our data security practice of over 5 years in Europe and the Middle East (and according to the Verizon Business reports of the past 2 years), insider events are rare, high-impact events that involve a complex interplay of agents (criminals, competitors, business partners) and vulnerabilities (human and application software).

Missing a basic management issue of accountability
Stiennon talks about HR and IT. The truth is that there is a fundamental management disconnect between the two: HR hires but has no accountability when an employee is involved in a security breach and gets fired, while IT has some of the data and almost never shares it with HR. I suggest higher levels of HR accountability and involvement in data security, together with their audit, IT and information security management colleagues.

I wrote about the great IT-management divide last year in my post on the 7th anniversary of the Al Qaeda attack on the US.


Sharing security information

I think fragmentation of knowledge is a root cause of data breaches.

It’s almost a cliche to say that the  security and compliance industry has done a poor job in preventing data breaches of over 245 million personal records in the past 5 years.

It is apparent that government regulation is  ineffective in preventing identity theft and major data loss events.

Given: direct data security countermeasures go a long way;  data loss prevention and network surveillance work well inside a  feedback loop to improve security of systems, increase employee awareness and support management accountability.

However: I believe that even if every business deployed the Fidelis XPS Extrusion Prevention system, Verdasys Digital Guardian or the Websense Data Security suite, we would still have major data loss events.

This is because a major data loss event has three characteristics:

1. It appears as a complete surprise to the organization.
2. It has a major impact, to the point of maiming or destroying the company.
3. After it has appeared, it is 'explained' by human hindsight.

The root cause of the surprise is, in most cases, a lack of knowledge: not knowing the current range of data security threat scenarios in the wild, or not even knowing the top 10 in your type of business.

The root cause of the lack of knowledge is fragmentation of knowledge.

Every business from SME to Global 2000 deals with security issues and amasses its own best practices and knowledge base of how to protect its information. But the knowledge is fragmented, since business organizations don't share their loss data, and the dozens or maybe hundreds of vendor web sites that do disclose and categorize attacks don't provide the business context of a loss event.

Fragmentation leads to waste and duplication, as well as frustrating, expensive and sometimes dangerous experiences for companies facing a data loss event.

So what’s the solution?

With our clients, we see growing evidence that the more organized a company is with their security operation – having a single security organization responsible for digital assets, physical security, permissions management and compliance – the better security they deliver. What’s more, they may be able to reduce value at risk at lower costs due to higher levels of competence, knowledge and economy of scale.

The concept of sharing best practices and aggregating support so that companies of all sizes can access knowledge and support resources is not new; it's a common theme in the industrial safety and Free and Open Source worlds, to name two. I imagine that there are a few more examples I am not familiar with.

But what's in it for security professionals? In addition to the satisfaction and prestige of helping colleagues, how about learning from the biggest and best practitioners in the world, having access to resources to improve your own systems and procedures, and being able to analyze the history of a data loss event from disclosure to analysis to remediation? How about having peers with a common goal of providing the best security for customers?

It's time for policymakers and large commercial organizations to support organized security knowledge-sharing systems, starting with compensation to employees and independent consultants that rewards high-quality, coordinated, customer-centric security across the full continuum of security, not just point technology solutions or professional regulatory services. And it's time for firms to recognize that the benefits of sharing some data may be worth the risk to them and their customers.

That’s my opinion. I’m Danny Lieberman.


Detecting structured data loss

Loss of large numbers of credit cards is no longer news. DLP (data loss prevention) technologies are an excellent way of obtaining real-time monitoring capability without changing your network and enterprise application systems.

Typically, when companies are considering a DLP solution, they start by looking at the offerings from security vendors like Fidelis Security, Verdasys, McAfee, Symantec, Infowatch or Websense.

As tempting as it may seem to lean back, listen to vendor pitches and learn from them (since, after all, it is their specialty), I've found that when this happens you become preoccupied with evaluating security technology instead of evaluating business information value.

By starting an evaluation of security countermeasures with an assessment of asset value, and focusing on mitigation of threats to the highest-value assets in the business process, we dramatically reduce the number of data loss signals we need to detect and process.

By focusing on a small number of important signals (for example, file transfer of over 500 credit card records over FTP channels), we reduce the number of signals that the security team needs to process and dramatically improve the signal-to-noise ratio.
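
As a minimal sketch of such a signal (an illustration of the idea, not how any particular DLP product implements it), a bulk credit card transfer can be approximated with a digit pattern, a Luhn checksum and a count threshold:

import re

CARD_RE = re.compile(r"\b\d{13,16}\b")  # candidate card numbers (simplified)

def luhn_ok(number: str) -> bool:
    """Luhn checksum filters random digit strings from real card numbers."""
    digits = [int(d) for d in number][::-1]
    total = sum(digits[0::2]) + sum(sum(divmod(2 * d, 10)) for d in digits[1::2])
    return total % 10 == 0

def bulk_card_signal(payload: str, threshold: int = 500) -> bool:
    """Fire only on bulk transfers, not on a single card in a support email."""
    hits = sum(1 for m in CARD_RE.findall(payload) if luhn_ok(m))
    return hits >= threshold

print(bulk_card_signal("4111111111111111"))  # False: one card is below threshold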

With fewer data loss signals to process – the data security team can focus on continuous improvement and refinement of the DLP signatures and the Data loss incident response process.

As we will see later in this post, it's important to select appropriate methods for data loss signal detection in order to obtain a high signal-to-noise ratio.

A common data security use case is protecting Microsoft Office documents on personal workstations from being leaked to competitors. In 2003, Gartner estimated that business users spend 30 to 40 percent of their time managing documents. In a related vein, Merrill Lynch estimated that over 85 percent of all business information exists as unstructured data.

The key question for enterprise information protection is value – not quantity.

Ask yourself – what is your most valuable asset and where is it stored?

For a company developing automated vision algorithms, the most valuable assets would be inside unstructured files stored in engineers’ workstations – working design documents and software code. For a customer service business the most valuable assets are in structured datasets stored in database servers and data warehouses.

The key asset for a customer service business (retail, e-Commerce sites, insurance companies, banks, cellular providers, telecommunications service providers  and government agencies) is customer data.

Customer data stored in large structured databases includes  billing information, customer contract information, CDRs (call detail records), payment transactions and more.   Customer data stored in operational databases is vulnerable due to the large numbers of users who access and handle the data – users who are not only salaried employees but also contractors and business partners.

Due to the high levels of external network connectivity to agents and customers using online insurance portals, one of the most important requirements for an insurance company is the ability to protect customer data in different formats and on multiple inbound/outbound network channels.

This is important both from a privacy compliance perspective (complying with EU and American privacy regulation) and from a business security perspective (protecting the data from being stolen by competitors).

Fidelis XPS Smart Identity Profiling provides a powerful way to automatically identify and protect policy holders' information without having to scan databases and files in order to generate fingerprints.

Fidelis XPS operates on real-time network traffic (up to 2.5 gigabit traffic) and implements multiple layers of content interception and decoding that "peel off" common compression, aggregation, file formats and encoding schemes, and extract the actual content in a form suitable for detection and prevention of data leakage.
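
The "peeling" idea is simple to sketch (this is an illustration of the general technique, not Fidelis code): recursively unwrap known encodings until raw content is exposed, then hand it to the detection engine.

import gzip, io, zipfile

def peel(payload: bytes) -> list[bytes]:
    """Recursively unwrap common encodings to expose the actual content."""
    if payload[:2] == b"\x1f\x8b":  # gzip magic number
        return peel(gzip.decompress(payload))
    if payload[:2] == b"PK":        # zip archive magic number
        out = []
        with zipfile.ZipFile(io.BytesIO(payload)) as z:
            for name in z.namelist():
                out.extend(peel(z.read(name)))
        return out
    return [payload]                # raw content, ready for detection

wrapped = gzip.compress(b"PolicyNo 40000001, Name John Smith")
print(peel(wrapped))  # [b'PolicyNo 40000001, Name John Smith']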

Smart Identity Profiling

Unlike keyword scanning and digital fingerprinting, Smart Identity Profiling can capture the essential characteristics of a document or a structured data set while tolerating the significant variance that is common in database updates and over a document's lifetime: editing, branching into several independent versions, sets of similar documents, etc. It can be considered the successor to both keyword scanning and fingerprinting, combining the power of both techniques.

Keyword Scanning is a simple, relatively effective and user-friendly method of document classification. It is based on a set of very specific words, matched literally in the text. Dictionaries used for scanning include words inappropriate in communication, code words for confidential projects, products, or processes, and other words that can raise the suspicion independently of the context of their use. Matching can be performed by a single-pass matcher based on a setwise string matching algorithm.
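
A minimal sketch of such a single-pass setwise matcher, using a compiled regex alternation as a stand-in for a proper Aho-Corasick automaton (the dictionary is invented):

import re

# Hypothetical dictionary: code words for confidential projects
DICTIONARY = ["project-neptune", "formula-x17", "codename-blackbird"]

# One compiled alternation scans the text in a single pass; a production
# matcher would use an Aho-Corasick automaton for large dictionaries.
SCANNER = re.compile("|".join(re.escape(w) for w in DICTIONARY), re.IGNORECASE)

def scan(text: str) -> list[str]:
    """Return every dictionary keyword found in the text."""
    return SCANNER.findall(text)

print(scan("Attaching the Formula-X17 test results per our call."))
# ['Formula-X17']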

As anybody familiar with Google can attest, the signal-to-noise ratio of keyword searches varies from good to unacceptable, depending on the uniqueness of the keywords themselves and the exactness of the mapping between the keywords and concepts they are supposed to capture.

Digital Fingerprinting (DF) is a technique designed to pinpoint the exact replica of a certain document or data file with a rate of false positives approaching zero. The method used is calculation of message digests by a secure hash algorithm (SHA-1 and MD5 are popular choices). Websense uses PreciseID, a sliding-hash variation on the DF technique, which is more robust than DF for unstructured data but still requires frequent updates of the signatures. It is unsuitable for protecting information in very large customer databases due to the amount of computation required and the need to access customer data and store the signatures, which creates an additional data security vulnerability.
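
The difference between the two approaches is easy to demonstrate (a sketch of the general techniques, not of PreciseID): a whole-file digest misses any edited copy, while chunked digests still match the unchanged regions.

import hashlib

def exact_fingerprint(data: bytes) -> str:
    """Whole-file digest: one changed byte and the match is lost."""
    return hashlib.sha1(data).hexdigest()

def chunk_fingerprints(data: bytes, window: int = 64) -> set[str]:
    """Crude sliding-window digests: unchanged regions still match."""
    return {hashlib.sha1(data[i:i + window]).hexdigest()
            for i in range(0, max(len(data) - window, 1), window // 2)}

original = b"Policyholder: John Smith, Policy 40000001, Phone 555-0100. " * 4
edited   = original.replace(b"John", b"Jane", 1)  # one small edit

print(exact_fingerprint(original) == exact_fingerprint(edited))  # False
shared = chunk_fingerprints(original) & chunk_fingerprints(edited)
print(len(shared) > 0)  # True: chunks away from the edit still match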

Here is an example of a Fidelis XPS Smart Identity Profile that illustrates the simplicity and power of XPS.

# MCP3.0 Profile
# name: InsurancePolicyHolders
# comments: Policy Holders
# threshold: 0
# "pattern" lines define named field patterns (regular expressions)
pattern:    MemoNo    P[A-Z][A-Z]
pattern:    BusinessUnitName    PZUInternational
pattern:    ControlNo    \d{9}
pattern:    PolicyNo    4\d{7}
# "use" lines define the tuples of fields to detect, and their sensitivity
use:    DateOfPolicy(PolicyNo,Date,Name,Phone,e_mail):Medium
use:    Medication(PolicyNo,Drug_Name,Name,Phone):Medium
use:    NamePhonePolicyNo(BusinessUnitName,PolicyNo,Name,Phone):Medium
------------------------------
# "prob" lines set per-field weights for each tuple (weights sum to ~1)
prob: DateOfPolicy 0.200 0.200 0.200 0.200 0.200
prob: Medication 0.201 0.398 0.201 0.201
prob: NamePhonePolicyNo 0.000 0.333 0.333 0.333

As you can see in the above example, Smart Identity Profiling uses tuples of data fields – for example, the DateOfPolicy tuple, which contains 5 fields: PolicyNo, Date, Name, Phone and e_mail address. Although the probability of missing a single field might be fairly high, the probability of missing an entire tuple of 5 fields is the product of the 5 individual miss probabilities. For example, if the miss probability of a single field is 70%, then the probability of missing the entire tuple is only 0.7^5 = 16.8%.
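
A quick check of the arithmetic, assuming the fields are missed independently:

field_miss = 0.70
tuple_miss = field_miss ** 5
print(f"Tuple miss probability: {tuple_miss:.1%}")           # 16.8%
print(f"Tuple detection probability: {1 - tuple_miss:.1%}")  # 83.2%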

SIP (Smart Identity Profiling) is used successfully in Fidelis XPS appliances at gigabit deployments at large insurance companies like PBGC and telecommunication service providers like 013 and Netia.
