Free agent DLP from Sophos

Sophos has announced that it will soon include endpoint data loss prevention (DLP) functionality in its anti-virus software. Because the technology was developed in-house, Sophos will have an independent offering – unlike Websense, RSA, Symantec, Trend Micro and McAfee, who all purchased DLP technology and integrated it into their product lines with varying degrees of success (or not).

The Sophos move to include agent DLP functionality for free is a breath of fresh air in a data security industry long known for long-winded, heavy-handed, clumsy and frequently amateurish attempts to exploit the waves of data breaches as a franchise to drive sales of products acquired from visionary DLP startups.

Sophos is known to be independent and may not be inclined to partner with pure-play data security vendors such as the network DLP company Fidelis Security Systems. If the play works well, it may not have to.

Beyond strategic speculation, the Sophos move should give customers a very good reason to ask why they should spend $80-150 for a Verdasys Digital Guardian agent, or $40-80 for McAfee agent DLP software.

If Sophos can do a solid job on detecting and preventing loss of digital assets such as credit cards or sensitive Microsoft Office files at the point of use, then free looks like an awfully good value proposition.

Consider the recent deal Trend Micro did at Israel Railroads: 2,500 seats for almost free ($10/seat) – Trend can’t be making money on that transaction. But free or almost-free is not a bad penetration strategy if it gets your agent on every desktop in the enterprise and you gain footprint and recurring service revenue from anti-virus.

I know I will be taking a close look when the software is released.

The Americanization of IT Research

The Burton Group has released research concluding that Symantec (Vontu), RSA (Tablus) and Websense (Port Authority) are the leading DLP vendors.

Burton’s choice is indicative of the Americanization of the information security space, where government compliance regulation and large security vendor marketing agendas appear to drive US customer security decisions. (Note that compliance is not equivalent to security, for several fundamental reasons, as I noted in my post Compliance is the new security standard.)

Outside the US, the story is a bit different.

We hardly encounter RSA as a DLP solution in EMEA. RSA Security has the largest development group dedicated to data loss prevention, and that counted for a lot in the Burton study – I’m not sure why. Great software today is usually written by small teams; I would not equate the number of programmers with the quality of the software.

I recently met Bill Nagel of Forrester, who told me that at a seminar Forrester ran in Holland in September 2009, none of the CISOs present were planning a DLP implementation this year, and only 20% were considering one in 2010.

Clients I speak with in EMEA are less interested in enterprise information protection (although the advantages are patently clear, the technology is patently not there yet…) and more interested in exploring tactical solutions like DLP “Lite” – monitoring SMTP and HTTP channels for data security violations and using that information to enforce business processes and improve employee behavior.

Data security for SMB

Yesterday, I gave a talk at our Thursday security webinar about data security for SMB (small to mid-sized businesses).

I’ve been thinking about DLP solutions for SMB for a couple of years now; the market didn’t seem mature, or perhaps SMB customer awareness was low, but with the continued wave of data security breaches, everyone is aware. DLP vendors like Verdasys, Fidelis and Vontu (now Symantec) have traditionally focused on Global 1000 companies, but Infowatch is now preparing a product specifically tailored to SMB business requirements and budgets. There are about 10 million SMBs in the world, so this would appear to be a fertile market for both attackers and defenders.

Is PCI DSS a failure?

A recent Ponemon survey found that 71% of companies don’t consider PCI strategic, even though 79% had experienced a breach. Are these companies assuming that a data security breach is cheaper than the security?

How should we understand the Ponemon survey? Is PCI DSS a failure in the eyes of US companies?

Let’s put aside the technical weaknesses, political connotations and commercial aspects of the PCI DSS certification franchise for a second.

Consider two central principles of security: cost of damage and goodness of fit of countermeasures.

a) The cost of a data security breach versus the cost of the security countermeasures IS a bona-fide business question. If the cost of PCI certification is going to be $1M for your business and your current value at risk is only $100k, then PCI certification is not only not strategic, it is a bad business decision (see the sketch after these two principles).

b) Common sense says that your security countermeasures should fit your business, not a third-party checklist designed by a committee and obsolete by the time it was published.
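
To make the arithmetic in principle (a) concrete, here is a minimal sketch in Python; the figures are the hypothetical ones from the example above, not survey data:

# Hypothetical figures from the example above
cost_of_certification = 1_000_000   # annual cost of PCI certification
value_at_risk = 100_000             # expected annual loss from a breach

# A countermeasure that costs more than the risk it mitigates is a bad business decision
if cost_of_certification > value_at_risk:
    print("Not strategic: the control costs more than the value at risk")
else:
    print("Worth considering: the control costs less than the value at risk")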

The fact that the Ponemon study shows that 71% of businesses surveyed don’t see PCI as strategic is an indication that 71% have this modicum of common sense. The other 29% are either naive or ignorant, or work for a security product vendor.

Common sense is a necessary but not sufficient condition
If you want to satisfy these two principles, you have to prove two hypotheses:

Hypothesis 1: Data loss is currently happening.

  • What data types and volumes of data leave the network?
  • Who is sending sensitive information out of the company?
  • Where is the data going?
  • What network protocols have the most events?
  • What are the current violations of company AUP?

Hypothesis 2: A cost-effective solution exists that reduces risk to acceptable levels.

  • What keeps you awake at night?
  • Value of information assets on PCs, servers & mobile devices?
  • What is the value at risk?
  • Are security controls supporting the information behavior you want (sensitive assets stay inside, public assets flow freely, controlled assets flow quickly)?
  • How much do your current security controls cost?
  • How do you compare with other companies in your industry?
  • How would risk change if you added, modified or dropped security controls?

If PCI is a failure, it is not because it doesn’t prevent credit card theft (there is no such animal as a perfect set of countermeasures); it is a failure because it does not force a business to use its common sense and ask these practical business questions.

Danny Lieberman
Join me every Thursday for an online discussion of best practices – Register now

Trusted insider threats, fact and fiction

Richard Stiennon is a well known and respected IT analyst – he has a blog called IT Harvest.

A recent post dealt with trusted insider threats. Despite its length, I believe the article has a number of fundamental flaws:

  • Overestimating the value of identity and access management in mitigating trusted insider threats
  • Lacking empirical data to support the claim that “the insider threat actually outweighs the threats from cyber criminals, hackers and the malware”
  • Missing a basic management issue of accountability

The role of identity and access management in preventing trusted insider security violations

Stiennon writes that IAM (identity and access management) “is the single most valuable defense you have against the insider threat.” I beg to disagree – and I will attempt to explain by using the model of a crime.

Like any other crime, in order to steal or disclose assets a person needs a combination of means, opportunity and intent.

IAM provides the means for the trusted insider. Companies issue legitimate user accounts with rights to access certain data, applications, databases and file services. Insiders have knowledge of how the system works, the business processes, the company culture and how people interact. They know who manages the rights management systems and who grants system permissions. With the right knowledge and social connections, means can be obtained even if they were not originally granted by design in the IAM system.

A trusted insider is an employee who is motivated by self-interest and influenced by personal preferences, social context, corporate culture and her aversion to risk relative to the premium to be gained by stealing data. There is little in the traditional access control model to mitigate any of these threats once access has been granted.

In 100 percent of the cases we investigated in our data security practice, the clients’ permissions systems were working properly: the trusted insiders involved had all been granted appropriate rights, and they did not perform any elevation-of-privilege exploits – they took data they had legitimate access to. Directors of new product development, system managers, sales managers – each and every one who took and/or abused data did so with appropriate permissions.

Lacking empirical data

“While often overlooked, the insider threat actually outweighs the threats from cyber criminals, hackers and the random malware that most organizations concentrate on”

Stiennon doesn’t bring any evidence for this populist statement. As a research analyst, I would expect some independent numbers behind it. Au contraire, Richard – according to our data security practice of over 5 years in Europe and the Middle East (and according to the Verizon Business reports of the past 2 years), insider events are rare, high-impact events that involve a complex interplay of agents (criminals, competitors, business partners) and vulnerabilities (human and application software).

Missing a basic management issue of accountability
Stiennon talks about HR and IT. The truth is that there is a fundamental management disconnect between the two: HR hires, but has no accountability when an employee is involved in a security breach and gets fired; IT has some of the data and almost never shares it with HR. I suggest higher levels of HR accountability and involvement in data security, together with their audit, IT and information security management colleagues.

I wrote about the great IT-management divide last year in my post on the 7th anniversary of the Al Qaeda attack on the US.

Sharing security information

I think fragmentation of knowledge is a root cause of data breaches.

It’s almost a cliché to say that the security and compliance industry has done a poor job in preventing data breaches of over 245 million personal records in the past 5 years.

It is apparent that government regulation is ineffective in preventing identity theft and major data loss events.

Given: direct data security countermeasures go a long way; data loss prevention and network surveillance work well inside a feedback loop to improve the security of systems, increase employee awareness and support management accountability.

However: I believe that even if every business deployed the Fidelis XPS Extrusion Prevention System, Verdasys Digital Guardian or the Websense Data Security suite, we would still have major data loss events.

This is because a major data loss event has three characteristics:

1. It appears as a complete surprise to the organization.
2. It has a major impact, to the point of maiming or destroying the company.
3. After it has appeared, it is ‘explained’ by human hindsight.

The root cause of the surprise is, in most cases, a lack of knowledge – not knowing the current range of data security threat scenarios in the wild, or not even knowing the top 10 in your type of business.

The root cause of the lack of knowledge is fragmentation of knowledge.

Every business, from SME to Global 2000, deals with security issues and amasses its own best practices and knowledge base of how to protect its information. But the knowledge is fragmented: business organizations don’t share their loss data, and the dozens or maybe hundreds of vendor web sites that do disclose and categorize attacks don’t provide the business context of a loss event.

Fragmentation leads to waste and duplication, as well as frustrating, expensive and sometimes dangerous experiences for companies facing a data loss event.

So what’s the solution?

With our clients, we see growing evidence that the more organized a company is with its security operation – having a single security organization responsible for digital assets, physical security, permissions management and compliance – the better security it delivers. What’s more, it may be able to reduce value at risk at lower cost due to higher levels of competence, knowledge and economies of scale.

The concept of sharing best practices and aggregating support so that companies of all sizes can access knowledge and support resources is not new; it’s a common theme in the industrial safety and Free Open Source worlds, to name two. I imagine there are a few more examples I am not familiar with.

But what’s in it for security professionals? In addition to the satisfaction and prestige of helping colleagues, how about learning from the biggest and best practitioners in the world, having access to resources to improve your own systems and procedures, and being able to analyze the history of a data loss event from disclosure to analysis to remediation? How about having peers with a common goal of providing the best security for customers?

It’s time for policymakers and large commercial organizations to support organized security knowledge sharing systems, starting with compensation that rewards employees and independent consultants for high-quality, coordinated, customer-centric security across the full continuum of security, not just point technology solutions or professional regulatory services. And it’s time for firms to recognize that sharing some data may be worth the benefits to them and their customers.

That’s my opinion. I’m Danny Lieberman.

Is data loss prevention possible?

I recently saw an article on Computerweekly that asks – “Is data loss prevention possible?”

I think that a more relevant question is “Is information protection possible?”

The author correctly identifies that it’s easier to access data (and leak it) than to modify or delete it. However, the notion that data is out of control in the corporate world is an over-reaction and does an injustice to most businesses.

“Data is out of control in the corporate world… I think… the only way that we can have influence on the likelihood of (data loss) occurring is through a couple of fundamental controls, namely:”

1. Reduce and limit access to data

2. Control the “copyability” of data

Companies already manage access and control “copyability”. This is not new, nor is it effective against the threat of a major data loss event.

Organizations from SME up to Global 2000 use Microsoft networks based on Active Directory, with planned (if not always well executed) group policies and permissions management. Controlling access and copyability in the service of business objectives is precisely what these systems are designed to do.

If you need finer-grained copy protection, there are dozens of endpoint security products – from Checkpoint, McAfee and Symantec to Controlguard.

If you need finer-grained rights management, there are products like Microsoft DRM and Oracle IRM. Personally, I don’t think DRM is effective for enterprise information protection: DRM changes the user experience and depends on user behavior; it can be broken or bypassed; and DRM systems are difficult to deploy on a large scale because of these constraints.

However, permissions and rights management – and lately, removable device management – have not prevented major data loss events like Heartland or Hannaford. The reason is that once rights are granted, the user is trusted and can move the data anywhere he or she wants.

We need information protection, not copy protection – delivered in a way, and at a cost, that is a good fit for the business.

Information protection is possible by taking a value-based approach that integrates with the business operation. Analyze your business requirements and threat scenarios – and only then consider data loss prevention solutions like enterprise information protection from Verdasys, agent DLP from McAfee or a gateway DLP solution from Fidelis Security.

Preventing document leaks

Pharmaceutical manufacturer Mylan has recently sued the Pittsburgh Post-Gazette over a series of stories describing safety issues at its Morgantown, W.Va., plant. The basis for the stories was documents leaked by workers at the plant – and although information on the background to the leak is sparse, an FDA inspection has confirmed that the plant complies with FDA quality and regulatory requirements. The interesting aspect of this case is that Mylan has not succeeded in discovering who leaked the documents.

It sounds like an internal vendetta that has spilled over into the media. Mylan CEO Robert Coury has personal money at stake – about 40% of his total compensation package is in Mylan stock – which in itself is a good thing, as it provides a significant performance incentive.

Data leakage of safety- and compliance-related documents is a commonly overlooked use case in the enterprise information protection space. As Mylan security staff are discovering, it is next to impossible to detect data leakage after the fact unless you are using a network DLP system like Fidelis XPS or an agent DLP system like Verdasys Digital Guardian or McAfee agent DLP.

My guess is that the Mylan CIO is getting a lot of sales calls from DLP vendors this week, offering to help monitor unauthorized network transfers of internal, confidential documents.

Having said that, there is no indication that the documents were not simply printed and handed to the reporters. In that case, the only applicable data loss prevention solution is agent DLP like Verdasys or McAfee agent DLP.

Then again, sometimes the best and cheapest data security countermeasures are low-tech: checking the bags of employees leaving the plant.

Detecting structured data loss

Loss of large numbers of credit cards is no longer news. DLP (data loss prevention) technologies are an excellent way to obtain real-time monitoring capability without changing your network or enterprise application systems.

Typically, when companies are considering a DLP solution, they start by looking at the offerings from security vendors like Fidelis Security, Verdasys, McAfee, Symantec, Infowatch or Websense.

As tempting as it may seem to lean back, listen to vendor pitches and learn from them (since, after all, it is their specialty), I’ve found that when this happens you become preoccupied with evaluating security technology instead of evaluating business information value.

By starting an evaluation of security countermeasures with an assessment of asset value, and focusing on mitigating threats to the highest-value assets in the business process, we dramatically reduce the number of data loss signals we need to detect and process.

By focusing on a small number of important signals (for example, file transfers of over 500 credit card records over FTP channels) we reduce the number of signals that the security team needs to process and dramatically improve the signal-to-noise ratio.
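
To illustrate, here is a minimal sketch in Python of such a high-value, low-noise signal – a hypothetical detector, not vendor code – that counts Luhn-valid card numbers in an intercepted payload and fires only above the 500-record threshold:

import re

CARD_PATTERN = re.compile(r"\b\d{13,16}\b")   # candidate card numbers: 13-16 digits

def luhn_valid(number: str) -> bool:
    # standard Luhn checksum - filters out most random digit strings
    digits = [int(d) for d in number][::-1]
    total = sum(digits[0::2]) + sum(sum(divmod(2 * d, 10)) for d in digits[1::2])
    return total % 10 == 0

def data_loss_signal(payload: str, threshold: int = 500) -> bool:
    # fire only when the transfer carries a large structured record set
    cards = {c for c in CARD_PATTERN.findall(payload) if luhn_valid(c)}
    return len(cards) > threshold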

With fewer data loss signals to process, the data security team can focus on continuous improvement and refinement of the DLP signatures and the data loss incident response process.

As we will see later in this post, it’s important to select appropriate methods for data loss signal detection in order to obtain a high signal-to-noise ratio.

A common data security use case is protecting Microsoft Office documents on personal workstations from being leaked to competitors. In 2003, Gartner estimated that business users spend 30 to 40 percent of their time managing documents. In a related vein, Merrill Lynch estimated that over 85 percent of all business information exists as unstructured data.

The key question for enterprise information protection is value – not quantity.

Ask yourself – what is your most valuable asset and where is it stored?

For a company developing automated vision algorithms, the most valuable assets would be inside unstructured files stored in engineers’ workstations – working design documents and software code. For a customer service business the most valuable assets are in structured datasets stored in database servers and data warehouses.

The key asset for a customer service business (retail, e-Commerce sites, insurance companies, banks, cellular providers, telecommunications service providers  and government agencies) is customer data.

Customer data stored in large structured databases includes billing information, customer contract information, CDRs (call detail records), payment transactions and more. Customer data stored in operational databases is vulnerable due to the large numbers of users who access and handle the data – users who are not only salaried employees but also contractors and business partners.

Due to the high level of external network connectivity to agents and customers using on-line insurance portals, one of the most important requirements for an insurance company is the ability to protect customer data in different formats and across multiple inbound/outbound network channels.

This is important both from a privacy compliance perspective (complying with EU and American privacy regulation) and from a business security perspective (protecting the data from being stolen by competitors).

Fidelis XPS Smart Identity Profiling provides a powerful way to automatically identify and protect policy holders’ information without having to scan databases and files in order to generate fingerprints.

Fidelis XPS operates on real-time network traffic (up to 2.5 gigabit traffic) and implements multiple layers of content interception and decoding that “peel off” common compression, aggregation, file formats and encoding schemes, and extract the actual content in a form suitable for detection and prevention of data leakage.
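
The idea of peeling off layered encodings can be sketched as follows (Python, purely illustrative – the actual XPS decoder handles far more formats than these two):

import gzip
import io
import zipfile

def peel(blob: bytes, depth: int = 0, max_depth: int = 8) -> list:
    # recursively strip common compression/aggregation layers
    if depth >= max_depth:
        return [blob]
    if blob[:2] == b"\x1f\x8b":          # gzip layer
        return peel(gzip.decompress(blob), depth + 1)
    if blob[:4] == b"PK\x03\x04":        # zip archive - recurse into each member
        members = []
        with zipfile.ZipFile(io.BytesIO(blob)) as zf:
            for name in zf.namelist():
                members.extend(peel(zf.read(name), depth + 1))
        return members
    return [blob]                        # raw content, ready for inspection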

Smart Identity Profiling

Unlike keyword scanning and digital fingerprinting, Smart Identity Profiling can capture the essential characteristics of a document or a structured data set while tolerating the significant variance that is common in database updates and over a document’s lifetime: editing, branching into several independent versions, sets of similar documents, etc. It can be considered the successor to both keyword scanning and fingerprinting, combining the power of both techniques.

Keyword scanning is a simple, relatively effective and user-friendly method of document classification. It is based on a set of very specific words, matched literally in the text. Dictionaries used for scanning include words inappropriate in communication, code words for confidential projects, products or processes, and other words that raise suspicion independently of the context of their use. Matching can be performed by a single-pass matcher based on a setwise string matching algorithm.
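
A naive version of such a matcher might look like this in Python – a single pass over the token stream with constant-time set lookups; production engines use setwise automata such as Aho-Corasick, and the dictionary here is made up:

import re

DICTIONARY = {"aurora", "skunkworks", "confidential"}   # made-up code words

def keyword_scan(text: str, dictionary: set) -> set:
    # one pass over the tokens; each lookup is O(1) against the dictionary
    hits = set()
    for token in re.findall(r"[\w']+", text.lower()):
        if token in dictionary:
            hits.add(token)
    return hits

# keyword_scan("Project Aurora specs attached", DICTIONARY) -> {"aurora"}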

As anybody familiar with Google can attest, the signal-to-noise ratio of keyword searches varies from good to unacceptable, depending on the uniqueness of the keywords themselves and the exactness of the mapping between the keywords and concepts they are supposed to capture.

Digital Fingerprinting (DF) is a technique designed to pinpoint the exact replica of a certain document or data file, with a rate of false positives approaching zero. The method is to calculate message digests using a secure hash algorithm (SHA-1 and MD5 are popular choices). Websense uses PreciseID, a sliding-hash variation on the DF technique which is more robust than plain DF for unstructured data, but it still requires frequent signature updates and is unsuitable for protecting information in very large customer databases, due to the amount of computation required and the need to access customer data and store the signatures – which itself creates an additional data security vulnerability.
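
In its basic form, digital fingerprinting is just a registry of digests. Here is a sketch in Python (the document bytes are invented for illustration), which also shows why exact fingerprints are brittle:

import hashlib

def fingerprint(blob: bytes) -> str:
    # exact-replica digest: changing a single byte changes the fingerprint entirely
    return hashlib.sha1(blob).hexdigest()

registry = {fingerprint(b"Q3 board minutes - draft")}         # protected content
print(fingerprint(b"Q3 board minutes - draft") in registry)   # True: exact copy detected
print(fingerprint(b"Q3 board minutes - Draft") in registry)   # False: one edited byte evades it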

Here is an example of a Fidelis XPS Smart Identity Profile that illustrates the simplicity and power of XPS.

# MCP3.0 Profile
# name: InsurancePolicyHolders
# comments: Policy Holders
# threshold: 0
pattern:    MemoNo    P[A-Z][A-Z]
pattern:    BusinessUnitName    PZUInternational
pattern:    ControlNo    \d{9}
pattern:    PolicyNo    4\d{7}
use:    DateOfPolicy(PolicyNo,Date,Name,Phone,e_mail):Medium
use:    Medication(PolicyNo,Drug_Name,Name,Phone):Medium
use:    NamePhonePolicyNo(BusinessUnitName,PolicyNo,Name,Phone):Medium
------------------------------
prob: DateOfPolicy 0.200 0.200 0.200 0.200 0.200
prob: Medication 0.201 0.398 0.201 0.201
prob: NamePhonePolicyNo 0.000 0.333 0.333 0.333

As you can see in the above example, Smart Identity Profiling uses tuples of data fields – for example, the DateOfPolicy tuple, which contains 5 fields: PolicyNo, Date, Name, Phone and e_mail address. Although the probability of not detecting a single field might be fairly high, the probability of not detecting a given tuple of 5 fields is the product of the 5 individual probabilities – for example, if the miss probability of a single field is 70%, then the probability of missing the entire tuple is only 16.8%.
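
The arithmetic is easy to verify (assuming the per-field miss probabilities are independent):

# miss probability for the whole tuple = product of the per-field miss probabilities
p_miss_field = 0.70
p_miss_tuple = p_miss_field ** 5   # 5 fields in the DateOfPolicy tuple
print(round(p_miss_tuple, 3))      # 0.168 -> a 16.8% chance of missing the whole tuple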

SIP (Smart Identity Profiling) is used successfully in Fidelis XPS appliances in gigabit deployments at large insurance companies like PBGC and telecommunications service providers like 013 and Netia.

I want data loss reasons, not numbers

Media reporting of data breach events like the UK NHS, Heartland, Hannaford and Bank of America has overwhelmingly focused on the raw numbers of customer data records that were breached.

Little information is available regarding the root causes – how attackers exploited the system and people vulnerabilities to get the data.

Although US legislation requires disclosure of a data loss event, it does not require disclosure of the root causes of the event.
