
Encryption, a buzzword, not a silver bullet

Encryption is a buzzword, not a silver bullet for protecting data on your servers.

To determine how encryption fits into server data protection, consider 4 encryption components on the server side: passwords, tables, partitions and inter-tier socket communications.

Of these 4 components of an application/database server encryption policy, note that some countermeasures are required (for example, one-way hashes of passwords), while others (such as encrypting specific table columns) may or may not be relevant to a particular application.

1. Encrypted password storage

You must encrypt passwords. It’s surprising to me how many Web sites don’t bother encrypting user passwords – see cases like Universal Music Portugal, where e-mail addresses and clear-text passwords were dumped on the Internet.

What is more surprising is the confusion between encryption and hashing.

Don’t use AES for encrypting passwords in your MySQL, Oracle or MS SQL database. You’ll end up storing the AES key somewhere in the code, and an attacker or malicious insider can open one of your application DLLs in Notepad++, read that key in a jiffy and breach your entire database with a single SELECT statement.

Database user passwords should be stored as MD5 hashes, so that a user (such as a DBA) who has been granted SELECT access to the table (typically called ‘users’) cannot determine the actual password. Make sure that different instances have different salts and include some additional information in the hash.

If you use MD5 hashing for client authentication, make sure that the client hashes the password with MD5 before sending it over the network.
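
A minimal sketch of the salted-hash approach (in Python, using only the standard library; the salt handling and storage layout are illustrative, not a prescription):

```python
import hashlib
import os

def hash_password(password: str, salt: bytes) -> str:
    # A per-instance salt ensures that identical passwords in different
    # instances produce different hashes.
    return hashlib.md5(salt + password.encode("utf-8")).hexdigest()

# Generate a random salt once per instance; store it alongside the hash.
salt = os.urandom(16)
stored = hash_password("s3cret", salt)

# At authentication time, re-hash the submitted password and compare.
assert hash_password("s3cret", salt) == stored
```

Note that there is no key to steal here: the stored value cannot be reversed into the password with a single SELECT statement.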

2. Encrypt specific database table columns

The PostgreSQL 9.1 pgcrypto module allows certain fields to be stored encrypted. This is especially useful if some of the data is sensitive, for example in the case of ePHI, where the Web application needs to comply with the CFR 45 Appendix A Security Rule. The client software provides the decryption key, and the data is decrypted on the server and then sent to the client. In most cases the client (a database driver in an MVC application such as Ruby on Rails, CakePHP or ASP.NET MVC) is also a server-side resource and often lives on the same physical server as the database server. This is not a bad thing.
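
As a rough sketch of column-level encryption with pgcrypto, driven from Python via psycopg2 (the DSN, table, column and key are all made up, and the pgcrypto extension is assumed to be installed):

```python
import psycopg2

conn = psycopg2.connect("dbname=emr user=app")  # illustrative DSN
cur = conn.cursor()

key = "a-long-random-key"  # supplied by the client, never stored in the DB

# Encrypt the sensitive column on insert; the other columns stay in the clear.
cur.execute(
    "INSERT INTO patients (name, diagnosis) "
    "VALUES (%s, pgp_sym_encrypt(%s, %s))",
    ("Alice", "ePHI goes here", key),
)

# The server decrypts with the client-supplied key before returning rows.
cur.execute("SELECT name, pgp_sym_decrypt(diagnosis, %s) FROM patients", (key,))
print(cur.fetchall())
conn.commit()
```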

3. Encrypt entire data partitions

Encrypting entire data partitions has its place.

On Linux, encryption can be layered on top of a file system using a “loopback device”. This allows an entire file system partition to be encrypted on disk, and decrypted by the operating system. Many operating systems support this functionality, including Windows.

Encrypting entire partitions is a security countermeasure for physical attacks, where the entire computer is stolen. Research we did in 2007 indicated that almost 50% of large-volume data breaches employed a physical attack vector (stealing a notebook at a hotel check-in desk, hijacking a truck transporting backup tapes to Iron Mountain, or smash-and-grab jobs where thieves know the rent-a-cop’s walkaround schedule, break in and steal desktop computers).

On the other hand, once the volume is mounted, the data is visible to any user or process with access to the file system.

4. Encrypt socket communications between server tiers

SSL has its place, although SSL is not a silver bullet countermeasure for Microsoft Windows vulnerabilities and mobile medical device vulnerabilities, as I have written elsewhere.

SSL connections encrypt all data sent across the network: the password, the queries, and the data returned. In database client-server connections, relational database systems such as PostgreSQL allow administrators to specify which hosts can use non-encrypted connections (host) and which require SSL-encrypted connections (hostssl). Clients can also specify that they connect to servers only via SSL. Stunnel or SSH can also be used to encrypt transmissions.
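
For example, a client can refuse non-SSL connections by setting sslmode (a sketch using Python’s psycopg2; the host and credentials are made up), while the server side pairs this with hostssl entries in pg_hba.conf:

```python
import psycopg2

# sslmode="require" aborts the connection unless the server offers SSL;
# "verify-full" additionally validates the server certificate and hostname.
conn = psycopg2.connect(
    host="db.example.com",  # illustrative
    dbname="billing",
    user="app",
    sslmode="require",
)
cur = conn.cursor()
cur.execute("SELECT ssl FROM pg_stat_ssl WHERE pid = pg_backend_pid()")
print(cur.fetchone())  # (True,) when this connection is encrypted
```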

 


Disaster recovery planning

This article describes a plan and implementation process for disaster recovery planning. The secret to success in our experience is to involve the local response team from the outset of the project.

Copyright 2006 D.Lieberman. This work is licensed under the Creative Commons Attribution License

The disaster recovery plan is designed to assist companies in responding quickly and effectively to a disaster in a local office and restore business as quickly as possible. In our experience, participation in the planning and implementation process is more important than the process itself and helps ensure that the local response teams understand what they need to do and that resources they need will be available.

Keywords

  • DRP – disaster recovery plan
  • BIT – business impact timeline
  • ERT – emergency response team
  • BIA – business impact assessment
  • Countermeasures – physical or procedural measures we take in order to mitigate a threat
  • PRT – primary response time; how long it takes (or should take) to respond (not resolve)
  • RRP – recovery and restore plan; recovery from the disaster and restore to original state

DR planning is not about writing a procedure, getting people to sign up and then filing it away somewhere. In the BIT (business impact timeline) we see a continuum of actions before and after an incident. In the pre-incident phase, the teams are built, plans are written, and preparedness is maintained with training and audit. After an incident, the team responds, recovers, restores service and assesses effectiveness of the plan.

[Figure: business impact timeline (BIT)]

T=ZERO is the time an incident happens. Even though one hopes that disaster will never strike, refresher training should be conducted every 6 months (because of employee turnover and system changes), and self-audits should be conducted by the ERT every 3 months.

Building the DR plan

Build the ERT

Assign a 2-person team in each major office (in small offices with one or two people, a single employee fills the role) to be the ERT. The people in the ERT need both the technical and the social skills to handle the job. Technical skills means being able to call an IT vendor and help the vendor diagnose a major issue such as an unrecoverable hard disk crash on an office file and print server. Social skills means staying cool under pressure and following procedure in major events such as fire, flooding or terror attack.

In addition to an ERT in each office, one ERT member will be designated as the “response manager”. The response manager is a more senior person (with a designated backup) who will command the local teams during a crisis, maintain the DRP documentation and provide escalation.

The local response team becomes involved and committed to the DRP by planning their responses to incidents and documenting locations of resources they need in order to respond and restore service.

DR Planning Pre-incident activities

Kickoff call

The purpose of the call is to introduce the DRP process and set expectations for the local ERT. Two days before the call, the local team will receive a PowerPoint presentation describing DRP, the implementation process and the BIA worksheet. At the end of the call, the team will commit to filling out the worksheet and preparing for a review session on the phone one week later.

Business Impact Assessment (BIA)

In the BIA, the team lists possible incidents that might happen and assesses the impact of a disaster on the business. For example, there are no monsoons in Las Vegas, but there might be an earthquake (Vegas is surrounded by tectonic faults and is number 3 in the US for seismic activity), and an earthquake could put a customer service center in Vegas out of business for several days at least.

Recover and Restore

Recovery is about the ERT having detailed and accessible information about backups – data, servers, people and alternative office space. Within 30 days after a disaster, full service should be restored by the ERT working with local vendors and the response manager.
It may also be useful to use a service such as http://www.connected.com for backing up data on distributed PCs and notebooks.

DR Plan Review

The purpose of the call is to allow each team to present their worksheet and discuss appropriate responses with the global response manager. Two days before the call, the teams will send in their BIA worksheet. The day after the call the revised DRP will be posted.

Filling out the DRP worksheets

There are two worksheets: the BIA worksheet (which turns into the primary response checklist) and the RRP (recover and restore plan) worksheet, which contains a detailed list of how to recover backup resources and restore service.

Filling out the BIA worksheet.

In the BIA worksheet, the team lists possible incidents and assesses the impact of each on the business. To assess that impact, we grade incidents using a “tic-tac-toe” matrix.

[Figure: probability vs. impact grading matrix]

The team will mark the probability and impact rating for an incident going across a row of the matrix. A risk might have probability 2 and impact 5 making it a 7, while another risk might have probability 1 and impact 3 making it a 4. Countermeasures would be implemented for the 7 risk before being implemented for the 4 risk.

BIA worksheet step by step
  • Add, delete and modify incidents to fit your business
  • Grade business impact using the “tic-tac-toe” matrix for each incident.
  • Set a primary response time (how quickly the ERT should respond, not resolve)
  • Establish an escalation path – escalate to local service providers and the response manager within a time that matches the business impact. Escalate to the local vendor immediately and escalate to the response manager according to the following guidelines (see the sketch after this list):
    • Risk > 6: within 15 minutes
    • Risk <= 6 and >= 4: within 60 minutes
    • Risk < 4: within 2 hours
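
A small sketch of the grading and escalation arithmetic above (in Python; the scales and thresholds come straight from the matrix and the guidelines):

```python
def risk_score(probability: int, impact: int) -> int:
    # Risk is graded by adding the probability and impact ratings,
    # e.g. probability 2 + impact 5 = risk 7.
    return probability + impact

def escalation_deadline_minutes(score: int) -> int:
    # Escalate to the response manager within a time that matches the risk.
    if score > 6:
        return 15
    if score >= 4:
        return 60
    return 120  # risk < 4: within 2 hours

for p, i in [(2, 5), (1, 3)]:
    s = risk_score(p, i)
    print(f"risk {s}: escalate within {escalation_deadline_minutes(s)} minutes")
```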

[Figure: BIA worksheet]

Filling out the RRP worksheet.

In the RRP worksheet, the team documents in detail how to locate and restore backups and how to access servers (in the network and physically).

[Figure: RRP worksheet]

Maintaining the DR plan

DR exercises

Once every 6 months, the response manager will run an unannounced exercise, simulating an emergency. In a typical DR exercise the local ERT will be required to:

  • Respond to a single emergency (for example earthquake)
  • Verify contents of RRP check list
  • Physically locate backups

 

Self-Audit

After completion of the DR plan, the local response team needs to perform periodic self-audits. A member of the local ERT will schedule an audit once every 3 months and notify the response manager by email regarding the date.

  • The audit should take about 1 hour and will check documentation and backup readiness
  • Documentation readiness
    • Make sure the telephone numbers of critical suppliers are posted at the entrance to the office, and make sure the numbers are current by calling them.
    • Read primary response sheet
    • Wallet-sized cards with emergency phone numbers and procedures, to be carried by all employees.
    • Onboard list – who is in the office today and who is traveling or on vacation
  • Backup readiness
    • Local backup files/tapes

DRM versus DLP

A common question for a large company that needs to protect intellectual property from theft and abuse is choosing the right balance of technology, process and procedure. It has been said that the Americans are very rules-based in their approach to security and compliance, whereas the Europeans are more principles-based.

This article presents a systematic method for selecting and cost-justifying data security technology to protect intellectual property from theft and abuse.

The original presentation was given at the October 2, 2009 DLP-Expert Russia meeting in Istra (just outside of Moscow).

Click here to download the presentation


Using DLP to prevent credit card breaches

I think that Data Loss Prevention is great way to detect and prevent payment card and PII data breaches.

Certainly, all the DLP vendors think so. The only problem is, the PCI DSS Council doesn’t even have DLP in their standard, which pretty much guarantees zero regulatory tailwind for DLP sales to payment card industry players.

I’m actually impressed that Symantec didn’t manage to influence the PCI DSS council to include DLP in the standard. An impressive display of professional integrity and technology blindness.

A while back, we did a software security assessment for a player in the online transaction space.

When I asked the client and the auditor what kind of real-time data loss monitoring they had in place – just in case they have a bug in their application and/or one of their business partners or trusted insiders steals data – the answers were like “umm, sounds like a good idea, but it is not required by PCI DSS 2.0”.

And indeed the client is correct.

PCI DSS 2.0 does not require outbound, real time or any other kind of data loss monitoring.

The phrases “real time” and “data loss” don’t appear in the standard. The authors of the standard like file-integrity monitoring, but in an informal conversation, a PCI DSS official in the region confessed to not being familiar with DLP.

Here are a few PCI monitoring requirements.

None of these controls directly protects payment card data from being breached. They are all indirect controls, very focused on external attackers – not on trusted insiders or business partners.

  1. Use file-integrity monitoring or change-detection software on logs to ensure that existing log data cannot be changed without generating alerts (although new data being added should not cause an alert).
  2. If automated monitoring of wireless networks is utilized (for example, wireless IDS/IPS, NAC, etc.), verify the configuration will generate alerts to personnel.
  3. Deploy file-integrity monitoring tools to alert personnel to unauthorized modification of critical system files, configuration files, or content files; and configure the software to perform critical file comparisons at least weekly (a minimal sketch of this idea follows the list).
  4. Monitor and analyze security alerts and information, and distribute to appropriate personnel.
  5. Verify through observation and review of policies, that designated personnel are available for 24/7 incident response and monitoring coverage for any evidence of unauthorized activity, detection of unauthorized wireless access points, critical IDS alerts, and/or reports of unauthorized critical system or content file changes.
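
A toy sketch of the file-integrity monitoring idea in requirement 3 (in Python; the file list and baseline path are illustrative, and real FIM tools add scheduling, alerting and tamper protection for the baseline itself):

```python
import hashlib
import json
import os

CRITICAL = ["/etc/passwd", "/etc/hosts"]  # illustrative critical files
BASELINE = "baseline.json"

def digest(path: str) -> str:
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def snapshot() -> dict:
    return {p: digest(p) for p in CRITICAL if os.path.exists(p)}

if not os.path.exists(BASELINE):
    # First run: record the known-good state.
    with open(BASELINE, "w") as f:
        json.dump(snapshot(), f)
else:
    # Later runs (e.g. weekly via cron): compare against the baseline.
    with open(BASELINE) as f:
        baseline = json.load(f)
    for path, current in snapshot().items():
        if baseline.get(path) != current:
            print(f"ALERT: {path} was modified")
```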

Oh man.


The psychology of data security

Over 6 years after the introduction of the first data loss prevention products, DLP technology has not mainstreamed into general acceptance the way firewalls have. The cultural phenomenon of companies getting hit by data breaches but not adopting technology countermeasures to mitigate the threat requires deeper investigation, but today I’d like to examine the psychology of data security and data loss prevention.

Data loss has a strange nature that stems from unexpected actions by trusted insiders in an environment assumed to be secure.

Many IT managers are not comfortable with deploying DLP, because it requires admitting to an internal weakness and confessing to not doing your job. Many CEOs are not comfortable with DLP, as it implies employee monitoring (not to mention countries like Germany that forbid employee monitoring). As a result, most companies adopt business controls in lieu of technology controls. This is not necessarily a mistake, but it’s crucial to implement the business controls properly.

This article will review four business control activities: human resources, internal audit, physical security and information security. I will highlight disconnects in each activity and recommend corrective action at the end of the article.

The HR (human resources) department

Ensuring employee loyalty and reliability is a central value for HR, which has responsibility for hiring and guiding the management of employees. High-security organizations, such as defense contractors or securities traders, add additional screening such as polygraphs and security checks to the hiring process. Over time, organizations may sense personality changes, domestic problems or financial distress that indicate increased extrusion risks for employees in sensitive jobs.

Disconnect No. 1: HR isn’t accountable for the corporate brand and therefore doesn’t pay the price when trusted employees and contractors steal data. What can you do?  Make HR part of an inter-departmental team to deal with emerging threats from social media and smart phones.

Internal audit

Data loss prevention is ostensibly part of an overall internal audit process that helps an organization achieve its objectives in the areas of:

  • Operational effectiveness
  • Reliability of financial reporting
  • Compliance with applicable laws and regulations

Internal auditors in the insurance industry say regulation has been their key driver for risk assessment and implementation of preventive procedures and security tools such as intrusion detection. Born in the 1960s and living on in today’s Windows and Linux event logs, log analysis is still the mainstay of the IT audit. The IT industry has now evolved to cloud computing, virtualization, Web services and converged IP networks. Welcome to stateless HTTP transactions, dynamic IP addressing and Microsoft SharePoint, where the marketing group can set up their own site and start sharing data with no controls at all. Off-line analysis of logs has fallen behind and yields too little, too late for the IT auditor! According to the PCI Data Security council in Europe, over 30% of companies with a credit card breach discovered the breach after 30 days, and 40% after more than 60 days.

Disconnect No. 2: IT auditors have the job, but they have outdated tools and are way behind the threat curve. What can you do? Give your internal auditors real-time, network-based data loss monitoring and let them do their job.

Physical security

Physical security starts at the parking lot and continues to the office, with tags and access control. Office buildings can do simple programming of the gates to ensure that every tag leaving the building also entered the building. Many companies run employee awareness programs to remind the staff to guard classified information and to look for suspicious behavior.

Disconnect No. 3: Perfect physical security will be broken by an iPhone.  What can you do? Not much.

Information security

Information security builds layers of firewalls and content security at the network perimeter, and permissions and identity management that control access by trusted insiders to digital assets, such as business transactions, data warehouse and files.

Consider the psychology behind wall and moat security.

Living inside a walled city lulls the business managers into a false sense of security.

Do not forget that firewalls let traffic in and out, and permissions systems grant access to trusted insiders by definition. For example, an administrator in the billing group will have permission to log on to the accounting database and extract customer records using SQL commands. He can then zip the data with a password and send the file using a private Web mail or ssh account.

Content-security tools based on HTTP/SMTP proxies are effective against viruses, malware and spam (assuming they’re maintained properly). These tools weren’t designed for data loss prevention: they don’t inspect internal traffic; they scan only authorized e-mail channels; they rely on file-specific content recognition; and they have scalability and maintenance issues. When content security tools don’t fit, we’ve seen customers roll out home-brewed solutions with open-source software such as Snort and Ethereal. A client of ours once used Snort to nail an employee who was extracting billing records with command-line SQL and stealing the results by Web mail. The catch is that they knew someone was stealing data – and deployed Snort as a way of collecting incriminating evidence, not as a proactive real-time network monitoring tool.

Disconnect No. 4: Relying on permissions and identity management is like running a retail store that screens you coming in but doesn’t put magnetic tags on the clothes to prevent you from wearing that expensive hat going out. What can you do? Implement a real-time data loss audit using passive network monitoring at the perimeter. You’ll get an excellent picture of anomalous data flowing out of your network without the cost of installing software agents on desktops and servers. The trick is catching and then remediating the vulnerability as fast as you can. If it’s an engineer sending out design files or a contractor surfing the net from your firewall – fix it now, not 3 months from now.
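
As a toy illustration of what passive monitoring can surface (in Python with scapy; the pattern matching is deliberately crude – real DLP products use far more sophisticated content recognition, and this would need root privileges to run):

```python
import re
from scapy.all import Raw, sniff

PAN_RE = re.compile(rb"\b\d{13,16}\b")

def luhn_ok(digits: bytes) -> bool:
    # Standard Luhn check: double every second digit from the right.
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = ch - ord("0")
        if i % 2 == parity:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def inspect(pkt):
    if Raw in pkt:
        for match in PAN_RE.findall(bytes(pkt[Raw].load)):
            if luhn_ok(match):
                print("possible card number in traffic:", match.decode())

sniff(filter="tcp", prn=inspect, store=False)
```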

Conclusion

To correct the disconnects and make data security part of your business, you need to start with CEO-level commitment to data security.  Your company’s management controls should explicitly include data security:

  • Soft controls: Values and behavior sensing
  • Direct controls: Good hiring and physical security
  • Indirect controls: Internal audit

Data discovery and DLP

A number of DLP vendors like Symantec and Websense have been touting the advantages of data discovery – data at rest and data in motion. Discovery of data in motion is an important part of continuous improvement of data security policies. However, there are downsides to data discovery.
Discovery is a form of voyeurism – it’s titillating, but the fun wears off quickly.

Automated discovery of data at rest is an insurmountable challenge for an institution with large quantities of PCs and data, thousands of document formats (most of which are not well documented) and all the application and database server technologies that were ever invented. Smaller companies may find it either unnecessary or not cost-effective.
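
For a sense of what automated data-at-rest discovery involves at its very simplest, here is a Python toy that scans a directory tree for card-number-like strings (the root path is illustrative; real products must also parse thousands of proprietary document formats, which is exactly the insurmountable part):

```python
import os
import re

PAN_RE = re.compile(rb"\b\d{13,16}\b")

def scan_tree(root: str):
    # Walk the tree and flag files containing card-number-like strings.
    # Binary formats (Office, PDF, databases) would each need a parser.
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as f:
                    if PAN_RE.search(f.read(1 << 20)):  # first 1 MB only
                        yield path
            except OSError:
                continue

for hit in scan_tree("/home"):  # illustrative root
    print("possible PAN data in:", hit)
```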

Discovery of data at rest is also a double-edged sword. From a compliance perspective, it’s not only not required by PCI DSS 1.x, but it can create exposure issues that no business in their right mind would want to deal with. Also – why would a business want to buy products and services from a technology vendor and allow them to “discover” their data?

Love to hear your comments and what you think.


Data security and compliance – Best practices

Compliance is about enforcing business process – for example, PCI DSS is about getting the transaction authorized without getting the data stolen. SOX is about sufficiency of internal controls for financial reporting and HIPAA is about being able to disclose PHI to patients without leaks to unauthorized parties.

So where and how does DLP fit into the compliance equation?

Let’s start with COSO recommendations for internal controls:

“If the internal control system is implemented only to prevent fraud and comply with laws and regulations, then an important opportunity is missed…The same internal controls can also be used to systematically improve businesses, particularly in regard to effectiveness and efficiency.”
In the attached presentation we review data security requirements in compliance regulation, discuss provable security, and show how DLP can serve both as an invaluable measurement tool for security metrics of inbound and outbound business transactions and, when required, as a last line of defense for personal account numbers.

The role of user accountability and training in data security

Culture: the set of shared attitudes, values, goals, and practices that characterizes an institution, organization or group.

In this article I will show that DLP technology – such as Fidelis XPS, McAfee DLP, Verdasys Digital Guardian, Websense Data Security Suite and Symantec Data Loss Prevention 9 – is a necessary but not sufficient condition for effective data security. I submit that effective data security is a three-legged stool of:

  1. Monitoring – using DLP technology
  2. Training – strengthening of ethical values with training and personal example at all levels of management
  3. Accountability – paying the price when a data loss event happens



A great year for data thieves

The Verizon Business 2009 Data Breach Investigations Report has been released, headlining 285 million data records breached in 2008:

  • 91% of attackers were organized crime
  • 74% of attacks by malicious outsiders
  • 67% of vulnerabilities due to system defects
  • 32% implicated business partners

The report must be particularly disturbing to endpoint DLP vendors focused on preventing data loss by trusted insiders on PCs (99.6% of the data was breached by attackers attacking servers).

My experience with clients in the past 5 years in the data loss/extrusion prevention business has been focused on discovering internal security vulnerabilities and implementing cost-effective security countermeasures. Our findings (summarized in our Business Threat Modeling white paper), based on analyzing empirical data from 167 data loss events, point a finger at software defects as a key data loss vulnerability. The Verizon Business study appears to suggest that the situation has only gotten much worse – i.e. data breaches are rising as software quality is declining.

A conservative estimate in our research showed that 49% of the events exploited software defects, as shown in the table below. Theoretically we could mitigate half of the risk by removing software defects in existing applications. The question, which we answer in the white paper, is how.

Aggregated vulnerability distribution by type

  Vulnerability type                          Total   Percentage
  Accidental disclosure by email                  5         3.0%
  Human weakness of system users/operators       13         7.8%
  Unprotected computers / backup media           67        40.1%
  Malicious exploits of system defects           82        49.1%
  Grand Total                                   167       100.0%

The Carnegie Mellon Software Engineering Institute (SEI) reports that 90 percent of all software vulnerabilities are due to well-known defect types (for example, using a hard-coded server password or writing temporary work files with world-read privileges). All of the SANS Top 20 Internet Security vulnerabilities are the result of “poor coding, testing and sloppy software engineering”.
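
As an illustration of one of these well-known defect types (temporary work files readable by anyone on the machine) and its fix, a minimal Python sketch (the file name and contents are made up):

```python
import os
import tempfile

# Defect: a temp file with a predictable name whose permissions depend on
# the process umask can leak its contents to any local user.
with open("/tmp/work.dat", "w") as bad:  # illustrative anti-pattern
    bad.write("sensitive intermediate data")

# Fix: tempfile creates the file with mode 0600 and an unpredictable name.
with tempfile.NamedTemporaryFile(mode="w", delete=False) as good:
    good.write("sensitive intermediate data")
    print("secure temp file:", good.name)
print(oct(os.stat(good.name).st_mode & 0o777))  # 0o600
```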
