Tag Archives: ethics

On data retention – when not to backup data?

It is often assumed that data retention is about how to back up data and then restore it quickly and accurately after a security event or system crash.

But there are important cases where the best data retention strategy is not to back up the data at all.

The backup process is fairly well understood today, and there are technologies for backing up data at every scale, from personal backups on flash drives and Internet backup services to robotic tape libraries that back up terabytes of data.

Restoring data from backup is also nominally a straightforward exercise, although I suspect that most businesses with well-oiled backup procedures never actually test their backup media to see whether they can restore the data.

But there is another dimension to data retention besides backup and restore, and that is minimizing the threat surface of sensitive data: PII (personally identifiable information) and ePHI (protected health information stored in an electronic format).

Let’s take the case of a typical business that has customer data, commercial information and intellectual property related to a development and/or manufacturing process. What is more important in our data retention strategy: backup and restore of customer data, of contracts, or of source code? The only way to answer this question is to understand how much these assets are worth to the company and how much damage would be incurred if there were a data breach.

For the purpose of asset valuation, we distinguish between customer data without PII and customer data that may have PII. Let’s consider four key assets of a company that designs and manufactures widgets and sells them over the Internet.

1. Customer data that may have some personal identifiers. The company may not deliberately accept and process customer data with attributes that would enable a third party to identify end users, but such data may be collected in the course of marketing campaigns or pilot programs and stored on company computers. At the end of the marketing campaign, was the data removed? Probably not. In the case of a data breach of PII, it does not matter what the original intent was; the liability is there. The company will pay the cost of the disclosure, from investigative audit through possible litigation.

2. Customer data with no personal identifiers. Best practice is not to store data with PII at all. If the business needs numerical data for statistics, price analysis, trend analysis of sales or simulations for new products, the analysis can be done on raw data without any PII. The best security control for PCI DSS and HIPAA is not to store PII at all.

3. Company reputation.  If there was a data breach, chances are company reputation may be tarnished for a while but notoriety is a form of publicity that can always be spun to the company’s advantage.

4. Intellectual property – for example, chemical recipes, algorithms, software engineering and domain expertise. The damage from IP data loss can be sizable for a business, especially for an SME. Here, the data retention strategy should focus on highly reliable backup and restore, with data loss prevention to block leakage of sensitive digital assets. There is an ethical component to protecting IP: making sure that your employees and contractors understand the importance of protecting the business IP.
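Point 2 above – doing the analysis on data stripped of personal identifiers – can be sketched in a few lines. This is a minimal illustration, assuming a simple flat record; the field names and the salt are hypothetical, and real schemas and pseudonymization schemes will differ:

```python
import hashlib

# Hypothetical PII field names for illustration; real schemas differ.
PII_FIELDS = {"name", "email", "phone", "address", "ssn"}

def strip_pii(record):
    """Return a copy of the record with PII fields removed; replace the
    direct identifier with a salted one-way pseudonym so records can
    still be grouped per customer for trend analysis."""
    clean = {k: v for k, v in record.items() if k not in PII_FIELDS}
    if "email" in record:
        digest = hashlib.sha256(("demo-salt:" + record["email"]).encode())
        clean["customer_ref"] = digest.hexdigest()[:16]
    return clean

order = {"name": "Jane Doe", "email": "jane@example.com",
         "sku": "W-1001", "price": 19.95}
clean = strip_pii(order)
```

The analytics pipeline then only ever sees `clean` – sku, price and a pseudonym – and there is nothing PII-bearing to back up, breach or disclose.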

Note that in the life cycle of a customer data breach, damage first accrues from attacks on the data assets followed by reputational damage as the company gets drawn deeper into damage control, investigation and litigation.

But what about the customer data?

How do you minimize the customer data security threat surface?

In 3 words, your data retention strategy is very simple:

Don’t store PII.

Decide now that sensitive data will be removed from servers and workstations. Make sure that customer data with PII is not backed up.
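One way to help enforce "PII is not backed up" is to screen files before they enter the backup set. The sketch below is illustrative only, assuming plain-text files; the two patterns are deliberately simplistic, and production scanners use far richer rule sets and content fingerprinting:

```python
import re
from pathlib import Path

# Illustrative patterns only; real scanners use much richer rules.
PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),         # US SSN-like number
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),  # email address
]

def looks_like_pii(text):
    """True if any PII pattern appears in the text."""
    return any(p.search(text) for p in PII_PATTERNS)

def backup_candidates(root):
    """Yield only the files under root that pass the PII screen;
    anything flagged is excluded from the backup set for review."""
    for path in Path(root).rglob("*"):
        if path.is_file() and not looks_like_pii(
                path.read_text(errors="ignore")):
            yield path
```

A backup job would then iterate `backup_candidates(...)` instead of the raw directory tree, so flagged files are remediated rather than silently retained on backup media.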

Tell your friends and colleagues about us. Thanks!
Share this

The importance of data collection in a risk assessment

A risk assessment of a business always starts with data collection. The end objective is identifying and then implementing a corrective action plan that will improve data security in a cost-effective way that is the right fit for the business.

The question in any risk assessment is how do you get from point A (current state) to point B (cost effective security that is the right fit for your business).

The key to cost-effective security is data collection. Let’s recall that compliance regulations like PCI DSS 2.0 and the certifiable information security management standard ISO 27001 are based on fixed control frameworks. It’s easy to turn the risk analysis exercise into a check-this/check-that exercise, which, by definition, is not guaranteed to get you to point B, since the standard was never designed for your business. This is where we see the difference between ISO 27001 and ISO 27002.

ISO/IEC 27002 is an advisory standard meant to be applied to any type and size of business according to the particular security risks they face.

ISO/IEC 27001 (Information technology – Security techniques – Information security management systems – Requirements) is a certifiable standard. ISO/IEC 27001 specifies a number of firm requirements for establishing, implementing, maintaining and improving an ISMS (information security management system), and specifies a set of 133 information security controls. These controls are derived from and aligned with ISO/IEC 27002; this enables a business to implement the security controls that fit their business, and helps them prepare for formal certification to ISO 27001.

Let me explain the importance of data collection by telling a story.

After reading the NY Times article “An Annual Report on One Man’s Life,” I was reminded of a story I read about Rabbi Joseph Horowitz (the “Alter from Novardok,” 1849–1919), relating his practice of writing a daily report on his life.

One of the things I learned from Eli Benacot, the musical director of the JP Big Band, is the importance of knowing where you are really holding in terms of your musical capabilities. Many musicians, it turns out, have a wrong self-perception of their capabilities. Sometimes one sees a professional musician who is convinced of his proficiency, yet even within an ensemble he (or she) is incapable of really hearing how poorly they actually play.

Many times we feel secure but are not, or don’t feel secure when we really are. For example – a company may feel secure behind a well-maintained firewall but if employees are bringing smart phones and flash drives to work, this is an attack vector which may result in a high level of data loss risk. On the other hand – some people are afraid of flying and would prefer to drive, when in fact, flying is much safer than driving.

After we collect the data and organize it in a clear way, we then have the ability to understand where we are really holding.  That is the first step to building the correct security portfolio.

So, let’s return to Rabbi Joseph Horowitz, who wrote a daily and annual report on his life. Here is his insight into implementing change – certainly a startling approach for information technology professionals who are used to incremental, controlled change:

“Imagine this scenario: A person decides that he wants to kasher his kitchen. But he claims, ‘Changing my dishes all at once involves throwing out an entire set and buying a brand new one. That’s quite an expense at one time. I’ll go about the kashering step by step. Today I’ll throw out one plate and replace it with a new one, tomorrow with a second and the next day with a third.’

“Of course, once a new plate is mixed with the old ones, it becomes treife like the rest. To kasher a kitchen, one must throw out all of his old dishes at once.

“The same holds true in respect to changing one’s character traits or way of life. One must change them in an instant because there is no guarantee that the anxieties and pressures that deter him on any given day will not deter him the following day, too, since anxieties and pressures are never ending. ”

(Madreigat Ha’adam, Rav Yosef Yoizel Horowitz).

 


Defining the insider threat

One of the biggest problems facing organizations is the lack of rigorous definitions for trusted insider threats, for data loss and for how to estimate potential damage from a data loss event. Without rigorous definitions, it’s hard to benchmark against other companies and difficult to select a good set of data security countermeasures.

Referring to work done by Bishop in “Defining the trusted insider threat”:

An insider can be defined with regard to two primitive actions:

  1. Violation of a security policy using legitimate access, and
  2. Violation of an access control policy by obtaining unauthorized access.

Bishop bases his definition on the notion  “...that a security policy is represented by the access control rules employed by an organization.”

It is enough to take a glancing view at the ISO 27001 information security management standard to realize that a security policy is much more than a set of access control rules. Security policy includes people policies and procedures, good hiring practices, and acceptable usage policies, backed up by top management commitment to data governance, audit, robust outbound data security monitoring (or what is often called “DLP Light”) and incident response. Information security management is based on asset valuation, measuring performance with security metrics and implementing the right, cost-effective portfolio of security countermeasures.

A definition of trusted insider threats that is based on access control is therefore necessarily limited.

I would offer a more general definition of a trusted insider threat:

Any attack launched from inside the network by an employee, contractor or visitor that damages or leaks valuable assets by exploiting means (multiple accounts) and opportunity (multiple channels).

Using this definition, we can see that the trusted insider threat is a matter of asset value and threat surface – not just access control:

  • For example, employees in an organization that crunches weather statistics have nothing to gain by leaking the crunched data, since the assets have no intrinsic value.
  • For example, employees’ tendency to click on Microsoft Office documents can turn them into trusted insider threats regardless of the access controls the organization deploys – as RSA learned recently.

RSA was hacked in the beginning of March 2011 when an employee was spear phished and opened an infected spreadsheet. As soon as the spreadsheet was opened, an advanced persistent threat (APT) — a backdoor Trojan — called Poison Ivy was installed. The attackers then gained free access into RSA’s internal network, with the objective of disclosing data related to RSA’s two-factor authenticators.

RSA is a big company with a big threat surface, lots of assets to attack and lots of employees to exploit.

The attack is similar to the APTs used in the China vs. Google attacks of last year. Uri Rivner, the head of new technologies at RSA, is quick to point out that other big companies are being attacked, too:

“The number of enterprises hit by APTs grows by the month; and the range of APT targets includes just about every industry. Unofficial tallies number dozens of mega corporations attacked […] These companies deploy any imaginable combination of state-of-the-art perimeter and end-point security controls, and use all imaginable combinations of security operations and security controls. Yet still the determined attackers find their way in.”

Mitigating the trusted insider threat requires first of all determining whether or not there IS a threat and, if so, finding the right security countermeasures to mitigate the risk. One wonders whether RSA eats its own dog food and had deployed a data loss prevention system. Apparently not.


Threats on personal health information

A recent privacy violation in Canada, where an imaging technician accessed the medical records of her ex-husband’s girlfriend, comes as no surprise to me. Data leakage of ePHI in hospitals is rampant simply because a) there is a lot of it floating around and b) of human nature. Humans, being naturally curious, sometimes vindictive and always worried when it comes to the health condition of friends and family, will bend the rules to get information. HIPAA risk and compliance assessments that we’ve been involved with at hospitals in Israel, the US and Australia consistently show that the number one attack vector on PHI is friends and family, not hackers.

Courtesy of my friend Alan Norquist from Veriphyr

Information and Privacy Commissioner Ann Cavoukian ordered a Hospital in Ottawa to tighten rules on electronic personal health information (ePHI) due to the hospital’s failure to comply with the Personal Health Information Protection Act (PHIPA).

“The actions taken to prevent the unauthorized use and disclosure by employees in this hospital have not been effective.” – Information and Privacy Commissioner Ann Cavoukian

The problem began when one of the hospital’s diagnostic imaging technologists accessed the medical records of her ex-husband’s girlfriend. At the time of the snooping, the girlfriend was at the hospital being treated for a miscarriage.

Commissioner Cavoukian faulted the hospital for:

  • Failing to inform the victim of any disciplinary action against the perpetrator.
  • Not reporting the breach to the appropriate professional regulatory college.
  • Not following up with an investigation to determine if policy changes were required.

“The aggrieved individual has the right to a complete accounting of what has occurred. In many cases, the aggrieved parties will not find closure … unless all the details of the investigation have been disclosed.” – Information and Privacy Commissioner Ann Cavoukian

It was not the hospital but the victim who instigated an investigation. The hospital determined that the diagnostic imaging technologist had accessed the victim’s medical files six times over 10 months.

The information inappropriately accessed included “doctors’ and nurses’ notes and reports, diagnostic imaging, laboratory results, the health number of the complainant, contact details … and scheduled medical appointments.” – Information and Privacy Commissioner Report

Sources: 
(a) Privacy czar orders Ottawa Hospital to tighten rules on personal information – Ottawa Citizen, January, 2011

 


10 guidelines for a security audit

What exactly is the role of an information security auditor? In some cases, such as Level 1 and 2 merchants, an external audit is a condition of PCI DSS 2.0 compliance. In the case of ISO 27001, the audit process is key to achieving ISO 27001 certification (unlike PCI and HIPAA, ISO regards certification, not compliance, as the goal).

There is a gap between what the public expects from an auditor and how auditors understand their role.

Auditors look at transactions and controls. They’re not the business owner and the more billable hours, the better.

The “reasonable person” assumes that the role of the security auditor is to uncover vulnerabilities, point out ways to improve security and produce a report that will enable the client to comply with relevant compliance regulation. The “reasonable person” might add an additional requirement of a “get out of jail free card”, namely that the auditor should produce a report that will stand up to legal scrutiny in times of a data security breach.

Auditors don’t give out “get out of jail” cards, and audit is not generally part of business risk management.

The “reasonable person” is a legal fiction of the common law representing an objective standard against which any individual’s conduct can be measured. As noted in the wikipedia article on the reasonable person:

This standard performs a crucial role in determining negligence in both criminal law—that is, criminal negligence—and tort law. The standard also has a presence in contract law, though its use there is substantially different.

Enron, and the resulting Sarbanes-Oxley legislation, brought significant changes in accounting firms’ behavior, but judging from the 2009 financial crisis, from Morgan Stanley to AIG, the regulation has done little to improve our confidence in our auditors. The number of data security breaches indicates that the situation is similar in corporate information security. We can all have “get out of jail” cards, but data security audits do not seem to be mitigating new risks from tablet devices and mobile apps. Neither am I aware of a PCI DSS certified auditor being detained or sued for negligence after data breaches at PCI DSS compliant organizations such as Health Net, where nine data servers containing sensitive health information went missing from Health Net’s data center in Rancho Cordova, California. The servers contained the personal information of 1.9 million current and former policyholders, compromising their names, addresses, health information, Social Security numbers and financial information.

The security auditor expectation gap has sometimes been depicted by auditor organizations as an issue to be addressed by educating users about the audit process. This response is not unlike the notion that security awareness programs are effective data security countermeasures against employees who willfully steal data or bring their personal devices to work.

Convenience and greed tend to trump awareness and education in corporate workplaces.

Here are 10 guidelines that I would suggest for client and auditor alike when planning and executing a data security audit engagement:

1. Use an engagement letter every time. Although SAS 83 makes it clear that an engagement letter must be used, the practical reason is that an engagement letter sets mutual expectations, reduces the risk of litigation and, by putting mutual requirements on the table, improves the client-auditor relationship.

2. Plan. Plan carefully who needs to be involved and what data needs to be collected, and require input from everyone from C-level executives to group leaders to the people who provide customer service and manufacture the product.

3. Make sure the auditor understands the client and the business. Aside from the wasted time, most of the famous frauds happened where the auditors didn’t really understand the business. Understanding the business will lead to better quality audit engagements and enable the auditor and audit manager to be peers in the boardroom, not peons in the hallway.

4. Speak to your predecessor. Make sure the auditor talks to the people who came before him, and speak with the people in your organization who did the last data security audit. Even if they’ve left the company, it is important to understand what they did and what they thought could have been improved.

5. Don’t tread water. It’s not uncommon to spend a lot of time collecting data and auditing procedures and logs, and then run out of time and billable hours, missing the big picture: how badly the client organization could be damaged if it had a major data security breach. Looking at the big picture often leads to audit directions that can prevent disasters and subsequent litigation.

6. Don’t repeat what you did last year. Renewing a 2,000 hour audit engagement that regurgitates last year’s security checklist will not reduce your threat surface. The objective is not to work hard; the objective is to reduce your value at risk, comply and … get your “get out of jail” card.

7. Train the client to fish for himself. This is a win-win for the auditor and client. Beyond reducing the amount of work onsite, training client staff to be more self-sufficient in the data collection and risk analysis process enables the auditor to better assess the client’s security and risk staff (one of the requirements of a security audit) and improves the quality of the data collected, since client employees are closer to the actual vulnerabilities and non-compliance areas than any auditor.

As I learned with security audits at telecom service providers and credit card issuers, the customer service teams know where the bodies are buried, not a wet-behind-the-ears auditor from KPMG.

8. Follow up on incomplete or unsatisfactory information. After a data security breach, there will be litigation. During litigation, you can always find expert testimony that agrees with your interpretation of the information. But the problem is not interpreting the data; it is acting on unusual or missing data. If your ears start twitching, don’t ignore your instincts. Start unraveling the evidence.

9. Document the work you do.  Plan the audit and document the process.  If there is a peer review, you will have the documentation showing the procedures that were done.  Documentation will help you improve the next audit.

10. Spend some time evaluating your client/auditor. At the end of the engagement, take a few minutes to interview your auditor/client and ask performance-review kinds of questions: What do you think your strengths are? What are your weaknesses? What was successful in this audit? What do you consider a failure? How would you grade yourself on a scale of 10?

Perhaps the biggest mistake we all make is not carefully evaluating the potential we have to meet our goals as audit, risk and security professionals.

A post-audit performance review will help us do it better next time.


3GPP Long Term Evolution – new threats or not?

3GPP Long Term Evolution (LTE), is the latest standard in the mobile network technology tree that produced the GSM/EDGE and UMTS/HSPA network technologies. It is a project of the 3rd Generation Partnership Project (3GPP), operating under a name trademarked by one of the associations within the partnership, the European Telecommunications Standards Institute.

The question is, what will be the data security impact of LTE deployments? As LTE is IP based and IPv6 becomes more common in the marketplace, will the security requirements of mobile devices become similar to those of traditional networked devices? There is already a huge BYOD (Bring Your Own Device) trend, which certainly causes a lot of headaches for information security staff. Will the higher bandwidth and flat IP networks of LTE increase the threat surface for corporate IT?

Other than higher performance, LTE features a flat IP network, but I don’t see how that increases the threat surface in any particular way. The security requirements for mobile networked devices are similar to those of traditional wired devices, but the vulnerabilities are different, namely the potential of an unmanaged BYOD tablet/smartphone to be an attack vector back into the enterprise network and a channel for data leakage. The introduction of Facebook smart phones is far more interesting as a new vulnerability to corporate networks than smart phones with the 100 Mbps download and 20 Mbps upload afforded by LTE.

I am not optimistic about a company’s ability to manage employee-owned mobile devices centrally or to rein in smartphones and tablets with awareness programs. Instead of attempting the impossible or the dubious, I submit that enterprises that are serious about mobile data security must take 3 basic steps, after accepting that BYOD is a fact of life and that security awareness has limited utility as a security countermeasure.

  1. Reorganize physical, telephony and information security into a single group with one manager. This group must handle all data, software, IT, physical (facilities) and communications issues with a single threat model, driven by the business and updated quarterly. There is no point in pretending that the only phones used by employees are phones installed and operated by the company’s telecom and facilities group. That went out the door 10 years ago.
  2. Develop a threat model for the business – this is key to keeping up with the rapidly growing threats posed by BYOD. Update that model quarterly, not yearly.
  3. The CEO must take an uncompromising stance on data leaks and ethical employee behavior. It should be part of the company’s objectives, measurable in monetary terms, just like increasing sales by 10%.

 


Wikileaks and data theft

A colleague of mine, Bill Munroe, is VP Marketing at Verdasys, the first of the agent-based DLP vendors and the most established of the independent pure-play DLP technology companies. (No, I do not have a business relationship with Verdasys.) Bill has written a paper entitled “Protecting against Wikileaks events and the trusted insider threat”. The paper offers a number of important insights regarding the massive data breach of State Department cables and why Wikileaks is different.

Wikileaks gives a leaker immediate visibility for his or her message. Once Wikileaks publishes the data, it is highly visible due to the tremendous conventional media interest in Wikileaks. I doubt that PFC Manning, had he run a blog somewhere in the long tail of the Internet, would have made such an immediate impact.

Unlike Wikileaks, data theft of intellectual property or credit card data is motivated by economic gain. In the case of Wikileaks, the motivation is social or political. With cheap removable storage devices, smart phones, tablets, Dropbox and wireless network connectivity, “employees with personal agendas will be more likely to jeopardize their careers in order to make a passionate statement“.

Network DLP is a poor security countermeasure against the Wikileaks class of data breach. Network DLP can intercept network traffic but cannot analyze obfuscated data (encryption, embedded screenshots, steganography), and it is blind to removable media and smart phones. The best technical countermeasure against a leak must be at the point of data use. As first described in the 1983 DOD study “Trusted Computer System Evaluation Criteria” (TCSEC), a user end point needs to be “instrumented” in order to identify and intercept content and mitigate threats before they can occur. This requires identification of the trusted user, appropriate content interception and analysis, and the ability to tie the results into actionable forensics. Detecting data loss at the end point is notably Verdasys’s key strength.
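The identify-intercept-act loop described above can be sketched very roughly. This is not how Verdasys or any real agent works – real endpoint DLP hooks file, clipboard and device I/O at the operating-system level and uses content fingerprinting rather than a couple of regular expressions – but it shows the shape of turning intercepted content into an actionable event; the rule names and patterns are hypothetical:

```python
import re

# Hypothetical rule set for illustration only.
RULES = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "confidential_marker": re.compile(r"\bCONFIDENTIAL\b", re.IGNORECASE),
}

def inspect(content, user, channel):
    """Return an actionable event for content leaving via a channel
    (USB copy, upload, print); matching content is blocked and logged
    with the trusted user and channel attached for forensics."""
    hits = [name for name, rx in RULES.items() if rx.search(content)]
    return {"user": user, "channel": channel,
            "hits": hits, "action": "block" if hits else "allow"}

event = inspect("Q3 forecast - CONFIDENTIAL - do not distribute",
                "jsmith", "usb")
```

Because the check runs where the data is used, it sees the content before it can be encrypted, screenshotted or copied to removable media – exactly the blind spots of network DLP.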

However, there are a few points in the article that need to be addressed:

Insider theft of sensitive data is not new. WikiLeaks is just the latest outlet for the disaffected individual to be amplified in our interconnected world… WikiLeaks is merely the latest enabler of the populist-driven “Robin Hood” syndrome.

I don’t subscribe to the notion that data theft has always been an issue. 20 years ago, we had industrial espionage of trade secrets or national espionage of defense secrets – not the widespread data leaks we see today. Conditions in 2011 are different than they were in the 80s, when my father worked at TRW Defense and Space Systems in Redondo Beach. Data breaches are driven by motive, means and opportunity. Motive: under-30-somethings have a sense of entitlement – they have a Blackberry, a nice car, a nice girlfriend, a good standard of living, a 250K college education and a sense that they can do whatever they want without paying the price. Means: mobile and removable devices, Web services. Opportunity: a leaker is in a position of access. Given the right stimulus (hating Obama, despising Hillary, liking a bribe from Der Spiegel) they will get to the data, leave their ethics at the door and do the deed. Calling the phenomenon “Robin Hood” is too gracious.

Trade secret and IP theft is projected to double again by 2017 with 2008 losses reaching one trillion dollars!

The $1 trillion figure for financial losses due to IP theft was mentioned in a McAfee press release (they have since taken the item off their web site) and later quoted by President Obama in his talk on “aggressively protecting intellectual property”.

Since the $1 trillion number is the cornerstone of both vendor and political argumentation for protecting IP, the number bears closer scrutiny. We will see that the $1 trillion figure reflects no more than a love of round numbers, not unlike Gordon Brown’s “Bring 1,000 troops home for Christmas”.

Referring to Bessen and Meurer’s “Patent Failure” and other research articles, the empirical data shows a different picture. The worldwide value of patents held by US firms as of 1999 was $122BN in 1992 dollars. Even if that number tripled in 20 years, the total IP value would be roughly $360BN, so it’s impossible that $1 trillion was “lost”. I will discuss what loss of IP actually means in a moment.

Examining firm-level data, we see that the worldwide value of patent stocks is only about 1% of market value. Note that the majority of this value is owned by a small number of large pharmaceutical companies. Then we have to net out litigation and IP legal costs from the net patent rents (the above-normal returns) that a company earns from its IP.

And as a sanity check on how disproportionate the $1 trillion IP-loss number really is, consider that at GSK (and their numbers are consistent with the other big innovative pharmas) cost of sales is 26% of expenses, marketing 31% and R&D 15%. Now we know two things: (a) the big pharmas account for most of the IP, and (b) most of their money goes to sales and marketing. If 10 big pharmas with a total of $100BN in operating profit had lost a trillion dollars, they would all be bankrupt by now; but they are all alive and kicking, selling us everything from Viagra to Remicade.
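The sanity check can be made explicit with the round numbers used in the argument above (illustrative figures from the discussion, not audited data):

```python
# Round numbers from the argument above (illustrative, not audited):
patent_stock_1999 = 122     # $BN, worldwide value of US firms' patents (1992 dollars)
generous_growth = 3         # generous assumption: the stock tripled in ~20 years
total_ip_stock = patent_stock_1999 * generous_growth  # ~366 $BN

claimed_loss = 1000         # the "$1 trillion" annual IP-theft claim, in $BN

# The claimed loss is several times the entire estimated IP stock:
ratio_to_stock = claimed_loss / total_ip_stock

# And it dwarfs the combined operating profit of the big pharmas
# said to own most of the IP value:
pharma_profit = 100         # $BN, combined operating profit of ~10 big pharmas
ratio_to_profit = claimed_loss / pharma_profit
```

Even under the generous tripling assumption, the claimed annual loss is nearly three times the entire estimated stock of IP, and ten times the operating profit of the firms that own most of it – which is the arithmetic behind the bankruptcy point.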

What does the loss of intellectual property actually mean?  After all, it’s not like losing cash.

In a threat analysis I did for a NASDAQ-traded firm with significant IP, I determined together with the CFO and the board that their exposure to IP leakage was about 1% of their market cap. They understood that you cannot “lose” IP; when it leaks, it goes to a competitor who may gain a time-to-market advantage, and that advantage is only temporary. At another public firm where I did a threat analysis using the same methodology, the CEO and board determined that the exposure to IP theft was negligible: the competitors needed 12-18 months to implement stolen IP, and since the firm was operating on a 12 month product release cycle, it stayed ahead of any competition using stolen IP. In other words, it’s better to innovate than to steal and try to re-implement. This is particularly true in the software industry, where the cost of implementation is far higher than the time and cost to develop the algorithm.

Reading Bill’s article, one would naturally ask: given the magnitude of the problem and the effectiveness of Verdasys technology, why doesn’t every company in the world deploy end point DLP the way they deploy a firewall? I think the answer lies in the actual magnitude of the financial impact of data leakage. The State Department cables Wikileaks disclosure may or may not have been orchestrated by the Obama administration itself, but arguably no economic damage and no tangible damage was done to the US political image or the image of its allies. If real damage had been done to the US, then Hillary would be keeping Jonathan Pollard company.

I think that Verdasys and other DLP vendors miss one of the key strengths of data loss detection/prevention technology: real-time feedback to an organization’s users, and its deterrent value. As Andy Grove once wrote, “a little fear in the workplace is not necessarily a bad thing”.

With the increasing consumerization of IT, entitled employees will have even more means at their disposal, and sexy personal devices will blur business boundaries even further.

What is a company to do? That leaves us with good management and a corporate culture whose employee values of competitiveness drive value, which in turn drives rewards, both tangible and intangible, for the employee. If it’s just about the money, then an iPhone is worth a lot more than a $500 bonus; but engendering a sense of being involved in and influencing the business at all levels, even if it’s just a kind word once a day, will be worth a hundred times that number and go a long way towards mitigating the vulnerability of employee entitlement.

I’d like to conclude with a call to the marketers at McAfee, Symantec, IBM, Oracle, Websense, Fidelis, Check Point and Verdasys: let’s shift the DLP marketing focus from large federal customers and banks, and explain to small and medium-sized enterprises how DLP technologies can protect the value of their implementation techniques and intellectual property.

For a 10-person vaccine startup, the secret is in the recipe, not in the patents. For an SME with IP, it’s not about the IP licensing value; it’s the difference between life and death. And death trumps money any day of the week.

You can download the paper “Protecting Against WikiLeaks Events and the Insider Threat” on the Verdasys Web site.

Tell your friends and colleagues about us. Thanks!

Why data security is like sex

We all think about sex – men (most of the time), women (some of time) and teenagers (all the time).

Sex, despite the huge volume of content in the digital and print media, is one of those phenomena that demonstrate an inverse relationship between talk and substance. The more talk, chances are, the less is actually going on; the less talk, the higher the probability that something serious is really happening between you and your partner. When things are cooking for you and your wife or girlfriend, you don’t have time to be writing about it on your blog. When things are rough, you will probably be a bit shy about going into detail on Facebook. But it’s a lot easier to talk about other people – who’s hot and who’s not.

Just like data security and global terror: it’s a lot easier to talk about the Middle East and ignore what’s happening in your own backyard. It’s like “other people’s money” – something you can spend without worrying too much.

By this metaphor, the data security industry is like sex. Lots of talk and press releases about data breaches, plenty of marketing communications written by clueless communications majors just out of school working for Symantec and McAfee, and endless recycling of Gartner reports ad nauseam. But a lot less in the vulnerability and risk mitigation department, and generally little willingness to talk about security failures in an organization, or about what really works.

Since this is part of human chemistry, I don’t imagine this will change in the near future – but for sure we will have a lot of fun, just like great sex.


Using DLP to protect your source code

Dec 10, 2010.

Sergey Aleynikov, a 40-year-old former Goldman Sachs programmer, was found guilty on Friday by a federal jury in Manhattan of stealing proprietary source code from the bank’s high-frequency trading platform. He was convicted on two counts — theft of trade secrets and transportation of stolen property — and faces up to 10 years in prison.

Mr. Aleynikov’s arrest in 2009 drew attention to a business that had been little known outside Wall Street — high-frequency trading, which uses complex computer algorithms to make lightning-fast trades to exploit tiny discrepancies in price. Such trading has become an increasingly important source of revenue for Wall Street firms and hedge funds, and those companies fiercely protect the code underpinning their trading strategies.

See full story on the Goldman Sachs source code theft incident

If you have proprietary algorithms, this could be happening to you too. Consider three threat scenarios for software assets:

1. Source code  theft by employees

Source code theft by employees is a major ethical issue that requires good management but also good detection. Designing, developing and deploying successful software is a complex business, but unfortunately most companies don’t know whether their source code is leaving the network with proprietary algorithms inside. See the story about Chinese source code theft.

Stories of source code being lost and posted on the Net are fairly common – remember the Windows NT source code and Mainsoft? We’ve found, from working with software developers, that many source code leaks are never reported. Frighteningly, companies usually have no idea how the source code got out in the first place and call in consultants after the event to trace the origin of the leak.

In order to respond quickly to a suspected disclosure of company IP or source code, real-time detection is crucial – far more important than trying to prevent unauthorized disclosure altogether.

2. Source code leakage from outsourcing

In many cases,  source code leaks from a network run by an outsourcing service provider or from a team of outsourcing contractors connected via a VPN. However, if companies cannot detect problems in their own networks, they are unlikely to find them in others. The result is an outsourcing relationship built on a shaky foundation with no independent monitoring capability to enforce non-disclosure agreements.

This points to a wider problem for software developers everywhere. Whether you collaborate with a partner on development or outsource an entire project, you expose sensitive software and intellectual assets to people with limited allegiance to your firm.

3. Misuse of Open Source software

This is probably worth an entire article in its own right, but most developers today incorporate free Open Source software into their projects. Some Open Source licenses – the GPL, for example – are “infectious”: because the GPL requires disclosure of sources, incorporating GPL-licensed code into your project may require you to disclose your own sources as well (read about copyleft licenses). You should therefore start by establishing a policy for using Open Source licenses and monitoring usage of Open Source code.
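As a starting point for that kind of monitoring, here is a minimal sketch of a copyleft-license scanner. The function name, the keyword list and the file-name heuristic are my own illustrative choices (and certainly not legal advice); real tools match against the full SPDX license list.

```python
import os
import re

# Heuristic keyword patterns that typically indicate a copyleft license.
# Illustration only - a real scanner would match SPDX license texts.
COPYLEFT_PATTERNS = [
    r"GNU General Public License",
    r"GNU Affero",
    r"Lesser General Public License",
]

def find_copyleft_licenses(root):
    """Walk a source tree and flag license files that look copyleft."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            # Only inspect conventional license file names
            if not name.upper().startswith(("LICENSE", "COPYING")):
                continue
            path = os.path.join(dirpath, name)
            with open(path, encoding="utf-8", errors="ignore") as f:
                text = f.read()
            for pattern in COPYLEFT_PATTERNS:
                if re.search(pattern, text):
                    hits.append((path, pattern))
                    break
    return hits
```

Running this over your vendored dependencies once per release cycle is a cheap way to keep the Open Source policy honest.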

How can DLP (data loss prevention) help you protect your source code?

Data loss prevention (or in this case, data loss detection) of software source code is based on three key concepts:

  1. A direct approach – prevent valuable software assets from getting out, unlike indirect methods that focus on preventing unwanted users from getting in. Conventional IT security takes the indirect approach, focusing on controlling user and system behavior through access control and authentication. This indirect method places a heavy burden on your security staff, does not scale well, and won’t get the job done on its own.
  2. Real-time flow monitoring – of software assets over all channels. Since almost everything tunnels over HTTP today, you have to worry about back-channels, not just email.
  3. Network audit – use DLP in a detection capacity to spot upticks in unusual activity; for example, unusually large FTP or SCP file transfers may be a precursor to an employee leaving the company with large quantities of your source code and proprietary algorithms.
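The network-audit idea in point 3 can be sketched in a few lines. This is a toy example under assumed inputs: the record format and the 50 MB baseline are hypothetical placeholders for whatever your flow logs or DLP sensor actually produce.

```python
from collections import defaultdict

def flag_unusual_transfers(records, baseline_bytes=50_000_000):
    """Flag users whose total outbound file-transfer volume exceeds a baseline.

    `records` is an iterable of (user, protocol, bytes_out) tuples, as
    might be extracted from flow logs or a DLP sensor.
    """
    totals = defaultdict(int)
    for user, protocol, nbytes in records:
        if protocol in ("ftp", "scp", "sftp"):
            totals[user] += nbytes
    # Report only the users over the baseline
    return {user: total for user, total in totals.items()
            if total > baseline_bytes}

log = [
    ("alice", "http", 2_000_000),   # normal web traffic, ignored
    ("bob", "scp", 40_000_000),
    ("bob", "ftp", 35_000_000),     # bob's file-transfer total: 75 MB
]
print(flag_unusual_transfers(log))  # → {'bob': 75000000}
```

In practice the baseline would be per-user and learned from history, but even this crude version surfaces the “employee packing his bags” pattern described above.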

When deploying DLP for source code protection, consider technical and  business requirements.

Technical requirements. Commercial DLP products include pre-built fingerprints for identifying C/C++/C# source code. A more substantial requirement is that the DLP solution you choose should use network DLP technology that is bi-directional on all channels – two possible candidates are Websense DLP and Fidelis Security Systems XPS.
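To give a feel for what a source-code fingerprint does, here is a minimal sketch. The marker regexes and the scoring threshold are my own toy choices; commercial products fingerprint actual code fragments rather than matching generic patterns.

```python
import re

# Toy markers for C/C++/C# constructs. Real DLP fingerprints hash
# fragments of your actual codebase; this only illustrates the idea.
C_FAMILY_MARKERS = [
    re.compile(r'#include\s*[<"]\w+'),          # C/C++ preprocessor
    re.compile(r'\busing\s+System[\w.]*\s*;'),  # C# namespace import
    re.compile(r'\bnamespace\s+\w+\s*\{'),
    re.compile(r'\b(?:public|private)\s+(?:static\s+)?\w+\s+\w+\s*\('),
]

def looks_like_c_family_source(text, threshold=2):
    """Return True if enough distinct markers match to treat the text as code."""
    score = sum(1 for pattern in C_FAMILY_MARKERS if pattern.search(text))
    return score >= threshold
```

Requiring two or more distinct markers keeps a stray `#include` in an email from triggering a false positive, which matters when the DLP policy blocks traffic rather than just logging it.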

Business requirements start with management commitment to the project, a policy for the use of Open Source code, and policies covering contractors’ use of company code and non-disclosure.

And finally, a post like this would not be complete without the requisite 7 step checklist:

A 7 Step check list for source code protection for the information security team

  1. Acknowledge that black holes exist (for example: no policy for Open Source licensed code, unclear policy for use of company IP in contractor software development). Fix it by writing the appropriate policies and implementing them.
  2. Get your VP Technologies to agree and budget money.
  3. Identify your company’s business needs for source code protection. Some senior executives don’t care what you do – they only care about sleeping well at night. The more you know about their issues the easier it is to sell them. Don’t improvise.
  4. If you’re not using bi-directional network DLP today, call a DLP vendor on Wednesday and ask them to come into the office on Monday with a box.
  5. Give them one hour to set up on a production network segment. Try a few of your favorite cases: trap a webmail message containing top-secret project keywords, fingerprint a SQL query from your data model, steal some C# source code from your own system and upload it to github.com.
  6. Allocate one day for a hands-on evaluation and have the vendor in a week later to discuss results.
  7. Be patient. Be prepared to refine the rules.

Using DLP technology to monitor movement of source code can be a rewarding exercise in terms of protecting company IP and reputation; however, it is not a trivial task. Just remember:

It’s kind of fun to do the impossible

Walt Disney


Credit card shims

Using shims that fit inside the ATM card slot and read your mag stripe data has been around for a while. It’s a good way to get the track 2 data, but it won’t get your PIN (which, if you are in Europe or the Middle East, is part of the Visa chip-and-PIN security scheme for credit cards – the PIN is not stored on the card, so it can’t be read by skimming with a slot reader or shimming with a piece of plastic inside the ATM slot). Now, it seems there is a fairly low-tech way to capture your PIN as well, using a flexible keypad overlay on top of the regular ATM keypad, as you can see here – this ATM keyboard will steal your PIN.

To these rather technical attacks on credit card data, we can add a kind of side attack, as recently reported in Paris: two women stood in line next to a man at the ATM, waited until he entered his PIN, and then dropped their shirts and flashed their boobs – as you can see in this post – stealing money with their boobs.

Not bad.
