Category Archives: Compliance

Moving your data to the cloud – sense and sensibility

Data governance is a sine qua non for protecting your data in the cloud. Data governance is of particular importance for the cloud service delivery model, which is philosophically different from the traditional IT product delivery model.

In a product delivery model, it is difficult for a corporate IT group to quantify asset value and data security value at risk over time due to changes in staff, business conditions, IT infrastructure, network connectivity and software application changes.

In a service delivery model, payment is made for services consumed on a variable basis as a function of volume of transactions, storage or compute cycles. The data security and compliance requirements can be negotiated into the cloud service provider's service level agreement. This makes quantifying the costs of security countermeasures relatively straightforward, since the security is built into the service, and renders the application of practical threat analysis models more accessible than ever.

However – this leaves the critical question of data asset value and data governance. We believe that data governance is a primary requirement for moving your data to the cloud and a central data security countermeasure in the security and compliance portfolio of a cloud customer.

With increasing numbers of low-priced, high-performance SaaS, PaaS and IaaS cloud service offerings, it is vital that organizations start formalizing their approach to data governance. Data governance means defining data ownership, data access controls, data traceability and regulatory compliance – for example, PHI (protected health information, as defined for HIPAA compliance).

To build an effective data governance strategy for the cloud, start by asking and answering 10 questions – striking the right balance between common sense and  data security requirements:

  1. What is your most valuable data?
  2. How is that data currently stored – file servers, database servers, document management systems?
  3. How should that data  be maintained and secured?
  4. Who should have access to that data?
  5. Who really has access to that data?
  6. When was the last time you examined your data security/encryption polices?
  7. What do your programmers know about data security in the cloud?
  8. Who can manipulate your data? (include business partners and contractors)
  9. If leaked to unauthorized parties, how much would the damage cost the business?
  10. If you had a data breach – how long would it take you to detect the data loss event?

A frequent question from clients regarding data governance strategy in the cloud is “what kind of data should be retained in local IT infrastructure?”

A stock response is that obviously sensitive data should remain in local storage. But instead, consider the cost/benefit of storing the data in an infrastructure cloud service provider and not disclosing those sensitive data assets to trusted insiders, contractors and business partners.

Using a cloud service provider for storing sensitive data may actually reduce the threat surface instead of increasing it and give you more control by centralizing and standardizing data storage as part of your overall data governance strategy.

You can RFP/negotiate robust data security controls in a commercial contract with cloud service providers – something you cannot easily do with employees.

A second frequently asked question regarding data governance in the cloud is “How can we protect our unstructured data from a data breach?”

The answer is that it depends on your business and your application software.

Although analysts like Gartner have asserted that over 80% of enterprise data is stored in unstructured files like Microsoft Office documents, this clearly depends on the kind of business you’re in. Arguably, none of the big data breaches happened by people stealing Excel files.

If anything, the database threat surface is growing rapidly. Telecom/cellular service providers have far more data (CDRs, customer service records etc…) in structured databases than in Office and with more smart phones, Android tablets and Chrome OS devices – this will grow even more. As hospitals move to EMR (electronic medical records), this will also soon be the case in the entire health care system where almost all sensitive data is stored in structured databases like Oracle, Microsoft SQL Server, MySQL or PostgreSQL.

Then there is the rapidly growing use of MapReduce/JSON database technology used by Facebook and Digg: CouchDB (with 10 million installations) and MongoDB, which connect directly to Web applications. These NoSQL databases may be vulnerable to some of the traditional injection attacks that involve string concatenation. Developers are well-advised to use native APIs for building safe queries and to patch frequently; the technology is developing rapidly and, with large numbers of eyeballs, vulnerabilities are quickly being discovered and patched. Note the proactive approach the Apache Foundation is taking towards CouchDB security and a recent (Feb 1, 2011) version release for a CouchDB cross-site scripting vulnerability.
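To make the string-concatenation risk concrete, here is a minimal, self-contained Python sketch. The query format is a stand-in for a real driver API (such as pymongo's), and the payload is a classic operator-injection example – both are invented for illustration:

```python
import json

def unsafe_query(username):
    # Anti-pattern: splicing user input into a JSON query string.
    # The attacker's input is parsed as query *structure*, not as a value.
    return json.loads('{"username": "' + username + '"}')

def safe_query(username):
    # Native-API style: build the query as a dict, so input stays a value.
    return {"username": username}

# Classic NoSQL operator injection: smuggle a $ne clause into the query.
payload = 'admin", "password": {"$ne": ""}, "x": "'

injected = unsafe_query(payload)
# injected now contains {"password": {"$ne": ""}} -- a query that would
# match the admin user with *any* password.

clean = safe_query(payload)
# clean["username"] is just the literal attack string; no structure injected.
```

The same principle – build queries through the driver's native API rather than by pasting strings together – is what parameterized queries give you in the SQL world.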

So – consider these issues when building your data governance strategy for the cloud and start by asking and answering the 10 key questions for cloud data security.

Tell your friends and colleagues about us. Thanks!
Share this

Mobile device security challenges

It has been said that there is nothing new under the sun and that every generation forgets or never learned the hard-earned lessons from the spilled blood of the previous generation.

Reviewing the security and compliance issues  of a new mobile medical device recently, I was struck by how familiar many of the themes are.

What makes mobile devices special? Actually nothing.

Deploying line of business or life science applications on mobile Android tablets or an iPad has a different set of security requirements than backing up your address book. It requires thinking about the software security and privacy vulnerabilities in a systematic way and using a rigorous practical threat analysis methodology. As we will show in this short article, the key vulnerabilities of mobile devices are similar to traditional IT security vulnerabilities even if the threat surface is dramatically different.

However, a software security assessment of a life science software application deployed on a mobile device needs to look beyond malware, spyware and data breach attacks on the device itself. Mobile Android tablets or iPads running electronic medical records applications are usually deployed in uncontrolled, complex and highly vulnerable environments such as enterprise IT networks in hospitals. The software security issues are much more severe than those of a single tablet: a combination of network vulnerabilities, application software vulnerabilities and malicious attackers superimposed on the large, complex threat surface of an enterprise IT network.

The mobile medical device is now an attack vector into the hospital network, a far more valuable asset than the mobile device itself.

There are five key areas of vulnerability for mobile devices and, not surprisingly, they all coincide with the classic IT network vulnerabilities:

Protocol coverage is lacking: Mobile devices often rely on built-in firewalls or enterprise network isolation. The protection that firewalls provide is only as good as the policy they are configured to implement, and there is a whole slew of issues related to remote security policy management of untethered devices. I expect that analysis of network exploits on mobile devices with internal firewalls will match analysis of real-world configuration data from corporate firewalls, which shows rule sets that frequently violate well-established security guidelines (for example, zone-spanning objects and lack of stealth rules). In addition, a stateful inspection firewall on a mobile device doesn’t perform deep content inspection on complete sessions and is therefore blind to data theft attacks – for example, piggy-back attacks on text messaging in order to steal sensitive data.
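A rule-set audit for the two guideline violations mentioned can be sketched in a few lines. This is a toy illustration – the rule format, names and policy are all invented:

```python
# Hypothetical firewall rule set; real products export richer formats.
RULES = [
    {"name": "allow-web", "src": "internet", "dst": "dmz",
     "port": 443, "action": "allow"},
    {"name": "mgmt-any", "src": "any", "dst": "firewall",
     "port": 22, "action": "allow"},
    {"name": "lan-out", "src": "lan", "dst": "any",
     "port": "any", "action": "allow"},
]

def audit(rules):
    findings = []
    for r in rules:
        # Over-broad objects: "any" as source or destination spans zones.
        if r["src"] == "any" or r["dst"] == "any":
            findings.append((r["name"], "over-broad 'any' object"))
    # Stealth rule: traffic addressed to the firewall itself should be
    # dropped; its absence is itself a finding.
    if not any(r["dst"] == "firewall" and r["action"] == "drop"
               for r in rules):
        findings.append(("<ruleset>", "missing stealth rule"))
    return findings
```

Running `audit(RULES)` flags both "any" rules and the missing stealth rule – the same class of violations the corporate firewall studies report.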

Proxy-based access to control a device is convenient but may enable attackers to compromise the device and steal data. Proxies allow end-point devices to obtain direct access to the Internet, and our research with clients shows that as much as 20 percent of all endpoints already bypass content filtering proxies on the enterprise IT network.

Visibility of network transactions is usually missing, making incident response very difficult: Firewall and proxy logs are generally never analyzed, and often lag hours behind an event. An IPS often relies on anomaly detection, which in turn relies on network flow data that is often reported at intervals of 15 to 45 minutes. With that kind of lag, an entire network can be brought down. Because anomaly detection looks for an anomalous event rather than an attack, it is frequently plagued by time-consuming false positives. A proxy, on the other hand, relies on URL filtering and simple keyword matching that analyzes the HTTP header and URL string. By looking at content and ignoring the network, a proxy can suffer from high rates of false negatives, missing attacks.

Multiple security and application layers increase the cost of implementation and maintenance: Installation of multiple, disparate, proxy-based security products complicates network and end-point maintenance. Proxies require changes to the network infrastructure and in large networks may be impossible to install. Updating mobile device application software to the latest patch levels can be challenging to enforce and control, and may inject new software vulnerabilities into the device – there is probably no central IT administrator in charge of updating the mobile electronic medical records application running on 300 Android tablets in the hospital.

Redundant, multiple network security elements increase risk in the overall solution: This is additional risk that manifests itself as a result of the interaction between mobile devices accessing cloud services via a complex system of cache servers, SSL accelerators, load balancers, reverse proxy servers, transparent proxies, IDS/IPS and Web application firewalls. Consider that endpoints can bypass SSL proxies by specifying a gateway IP address, and transparent proxies on a Windows network are no assurance against unauthenticated user agents bypassing the entire proxy infrastructure. HTTP-aware firewalls such as Web application firewalls can be completely or partially bypassed in some cases. Transparent proxies can be compromised by HTTP response splitting techniques, since they rely on fine-grained mechanisms of matching strings in HTTP headers. This is why Mozilla is delaying its implementation of WebSockets – which may not matter if you’re running Chrome OS.

It’s a new dawn but with old rules.


Giving ISO 27001 business context

ISO 27001 is arguably the most comprehensive information security framework available today. Moreover, it is a vendor-neutral standard. However, ISO 27001 doesn’t relate to assets or asset value, and doesn’t address business context, which is needed to prioritize security controls and their costs. This article discusses the benefits of performing an ISO 27001 based risk assessment using techniques of threat modeling. An organization that follows this methodology will reap the benefits of improved data security and readiness for ISO 27001 certification.

Why is threat analysis beneficial for ISO 27001?

Quantitative threat analysis using the popular PTA (Practical Threat Analysis) modeling tool provides a number of meaningful benefits for ISO 27001 risk assessments:

  • Quantitative: enables business decision makers to state asset values, risk profile and controls in familiar monetary values. This takes security decisions out of the realm of qualitative risk discussion and into the realm of business justification.
  • Robust: enables analysts to preserve data integrity of complex multi-dimensional risk models versus Excel spreadsheets that tend to be unwieldy, unstable and difficult to maintain.
  • Versatile: enables organizations to reuse existing threat libraries in new business situations and perform continuous risk assessment and what-if analysis on control scenarios without jeopardizing the integrity of the data.
  • Effective: helps determine the most effective security countermeasures and their order of implementation, saving you money.
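As a simplified illustration of the quantitative approach (the formula and all figures here are invented for the example, not PTA's actual model):

```python
def annual_value_at_risk(asset_value, annual_threat_frequency, damage_fraction):
    # Expected annual loss in dollars: asset value times how often the
    # threat materializes per year times the fraction of value destroyed.
    return asset_value * annual_threat_frequency * damage_fraction

def countermeasure_roi(risk_before, risk_after, annual_cost):
    # Dollars of risk removed per dollar spent on the countermeasure.
    return (risk_before - risk_after) / annual_cost

# Example: a customer database worth $2M, breached on average once
# every four years (frequency 0.25/yr); a breach destroys 25% of the
# asset's value, or 12.5% with a DLP control costing $25k/year.
baseline = annual_value_at_risk(2_000_000, 0.25, 0.25)    # $125,000/year
with_dlp = annual_value_at_risk(2_000_000, 0.25, 0.125)   # $62,500/year
roi = countermeasure_roi(baseline, with_dlp, 25_000)      # 2.5
```

Expressing controls this way lets decision makers rank countermeasures by risk reduction per dollar instead of by qualitative "high/medium/low" labels.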



Bank of America and Wikileaks

As first reported in the Huffington Post in November 2010, Bank of America has set up a Wikileaks defense team after an announcement by Julian Assange that Wikileaks has information from a 5GB hard drive of a Bank of America executive.

In a burst of wikipanic, Bank of America has dived into full-on counterespionage mode…15 to 20 bank officials, along with consulting firm Booz Allen Hamilton, will be “scouring thousands of documents in the event that they become public, reviewing every case where a computer has gone missing and hunting for any sign that its systems might have been compromised.”

Interesting that they needed Booz Allen Hamilton. I thought Bank of America was a Vontu DLP (now Symantec) customer. It says something about the technology either not working, being discarded or simply not being implemented properly, because the Wikileaks announcement was made in October 2009 – it took BoA over a year to respond. Good luck finding forensics over a year after the leak happened.

This is a good thing for information security consultants and solution providers, especially if it drives companies to invest in DLP. There are some good technologies out there and companies that implement DLP thoughtfully (even if for dubious reasons) will be profiting from the improved visibility into transactions on their network and better protection of IP and customer data.

Ethics of the bank executive aside, it is conceivable (albeit totally speculative) that the Obama administration is behind the Wikileaks disclosures on US banking. It is consistent with the Obama policy that required banks to accept TARP funds and stress testing in order to make the financial institutions more beholden to the Federal government. This is consistent with the State Department cables leak, which also appears (from my vantage point in the Middle East) to have been deliberately disclosed to Wikileaks in order to further the agenda against the Iranians without coming out and saying so specifically.


The 7 deadly sins of software security

Companies spend millions on compliance, but proprietary assets are still getting ripped off by insiders and hackers who compromise buggy, poorly designed applications. Here are 7 software development mistakes you don’t want to make in 2011.

7. Don’t KISS

If my experience is any indication, the software industry as a whole is wasting hundreds of millions of dollars a year by not Keeping It Simple. For example, complex technologies like Java J2EE are not warranted for the majority of Web applications. In my experience PHP is simpler to program and maintain, and scales well at a reasonable price – witness the millions of Yahoo pages served by PHP each day. Lack of KISS is the main reason for high costs, late schedules, failed projects and insecure software that no one can maintain. When a programmer uses a component without knowing how it works (see EJBQL and CMP 2.0) and has to shlep around a lot of piping (look at an Eclipse project for a 3-tier J2EE project), the energy goes into implementation instead of thinking about code threats. It’s sort of like Microsoft PowerPoint, where you spend 80% of your time in the application’s GUI instead of thinking about and then just stating your message.

It seems to me that the industry is trading off simpler, more reliable and secure programming for fashion and features (J2EE, XP…).

6. Mismanage software development

The classic The Mythical Man-Month, written over 30 years ago, said that projects planned on interchangeable per-unit “man-months” usually don’t work, due to the unique nature of software development. The difference in productivity between the best programmer and an average one can be 100x. This means that five new college grads are no match for one solid programmer who knows what she’s doing. You are always better off with a few talented programmers than a large cast of average developers: (a) because of individual productivity differentials, and (b) because smaller groups are always more effective.

This general observation is relevant here, since the average developer confuses O/S security with applying patches and application security with having an application firewall. Truth be told, it only takes one page of best practices for a Web application programmer to prevent SQL injection, long URLs, arbitrarily long input strings or directory traversal.
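That one page of best practices largely reduces to a few defensive idioms. A minimal Python sketch using sqlite3 as a stand-in database (the table, length limit and paths are invented for illustration):

```python
import os
import sqlite3

MAX_INPUT_LEN = 256  # reject arbitrarily long input strings up front

def fetch_user(conn, username):
    if len(username) > MAX_INPUT_LEN:
        raise ValueError("input too long")
    # Parameterized query: the driver treats the value as data,
    # so "' OR '1'='1" stays an ordinary string, not SQL.
    cur = conn.execute("SELECT id, name FROM users WHERE name = ?",
                       (username,))
    return cur.fetchall()

def safe_path(base_dir, requested):
    # Directory traversal check: the resolved path must stay under base_dir.
    full = os.path.realpath(os.path.join(base_dir, requested))
    if not full.startswith(os.path.realpath(base_dir) + os.sep):
        raise ValueError("directory traversal attempt")
    return full
```

The point is that each of these checks is a couple of lines; the hard part is making the team apply them everywhere, every time.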

5. Take a wrong turn with outsourcing

Don’t outsource something just because it’s too hard to understand or you’re in a rush to market. A server clustering system offered by a major vendor was ported a while back to Linux by a team in India. The Indian market was booming and job loyalty was low, like Israel and Silicon Valley in the 90’s. In addition, due to transportation and cultural issues, the work day was a fixed 8 hours, not a “finish before you go home / never break the build” philosophy. The software was ported and is being delivered to customers with cryptic documentation, patch upon patch, multiple options to perform the same function (only one of which may be right, so the customer has to guess, because the documentation is unclear) and brittle functionality – a small change in configuration files can break the cluster.

Brittleness and poor documentation force the user to rely on strict manual operational procedures, which depend on people, which in turn creates operational vulnerability.

4. Promote or hire the wrong people

I could write a book about this one. One common case is the excellent technologist who is promoted into a managerial spot he wanted. He doesn’t have the people skills, won’t admit failure and can’t visualize going back to his old programmer slot. Another common case is hiring an ex-military guy to run a young engineering team. Six months later, after the team has quit, your CEO will realize that you can’t give orders to programmers like soldiers, and you can’t flirt with the lady engineers and ask them to fetch the boss coffee.

The people who manage the teams need to master both the art of building software and the art of building people.

3. Decide based on religious beliefs

I know a company that decided on Open Source and Linux, going with a leading commercial distribution and a large systems integrator, believing that the combination of Open Source and big-name vendors would guarantee success. The integrator’s skill set was primarily Windows, the distro vendor couldn’t care less about the fundamental flaws in the client’s design, and the company didn’t have enough in-house know-how of the tool chain and Linux, so it couldn’t properly audit the progress and assess the problems of its contractor. Fortunately, the project failed. I hate to think what would have happened had they succeeded in shipping the product – a SOHO security appliance with a Web interface for remote configuration.

The project spec must fit the system requirements; don’t convert the system requirements to your religious beliefs.

2. Ignore internal system threats

Sales people know that sometimes their biggest competitors in closing a deal are people inside their own company. For developers, this means that the programmer and her boss need to do a threat analysis of the system from day 1, taking into account backdoors, possible misuse, hard-coded parameters that can be forgotten or hacked later on, and so forth. Temporary FTP servers for file transfer turn into permanent arrangements – and permanent vulnerabilities.

The team has to think about who will install, integrate and maintain the system even before considering operational issues.

1. Permit weak passwords

Threats such as worms get top PR, but don’t miss a basic IT mistake: weak authentication or bad passwords. Common password vulnerabilities include weak passwords (birthdays), publicly displayed passwords on Post-its, and Intranet and administrator passwords that the whole company knows. At my last company, people thought I had a great memory, while in truth, just by working with a person, I could quickly and correctly guess the password to their workstation or servers. Later, after the team delivers the software, an external system integrator is often involved for installation at customer sites.

It is the responsibility of the developers to ensure that the system integrator will NOT be able to install the file transfer process between the AS400 and the billing system with anonymous FTP. I’m a fan of passphrases; I think they’re easier to remember and harder to crack. But at the end of the day, passwords or passphrases need to be treated like cash. If you must, write them down on a piece of paper and save it in your wallet. Don’t store them on your Palm or save a file called system_passwords.xls in the MyDocuments folder of a PC in the computer room.
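Generating a strong passphrase is a few lines with Python's `secrets` module. The 12-word list below is only a toy for illustration; a real deployment would use a large list such as the EFF diceware list of 7,776 words:

```python
import math
import secrets

# Toy wordlist for illustration only -- far too small for real use.
WORDS = ["correct", "horse", "battery", "staple", "orbit", "velvet",
         "glacier", "pickle", "trumpet", "saddle", "lantern", "quartz"]

def make_passphrase(n_words=4, sep="-"):
    # secrets.choice draws from a cryptographically strong RNG,
    # unlike random.choice.
    return sep.join(secrets.choice(WORDS) for _ in range(n_words))

def entropy_bits(wordlist_size, n_words):
    # Each randomly chosen word adds log2(wordlist_size) bits of entropy.
    return n_words * math.log2(wordlist_size)

# With a 7,776-word list, a 6-word passphrase gives about 77.5 bits --
# memorable, yet far stronger than a typical 8-character password.
```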

What should you do?

The software development environment of 20 years ago is radically different today. Development tools are free, hardware is almost free (think of those $100k Sun Enterprise 450 boxes and $500 Sun Ethernet NICs) and programming talent is a global resource. It’s so easy to do things today, but that’s precisely the problem. A development team can do a lot, but there is no replacement for a program/team manager who consistently steers the team away from these mistakes.


Small business data security

Here are 7 steps to protecting your small business’s data and intellectual property in 2011, in the era of the Obama Presidency and rising government regulation.

Some of these steps are about not drinking consultant Kool-Aid (like Step #1 – Do not be tempted into an expensive business process mapping project) and others are about adopting best practices that work for big business (like Step #5 – Monitor your business partners).

Most of all, the 7 steps are about thinking through the threats and potential damage.

Step # 1- Do not be tempted into an expensive business process mapping exercise
Many consultants tell businesses that they must perform a detailed business process analysis and build data flow diagrams of data and business processes. This is an expensive task to execute, extremely difficult to maintain, and can require a large quantity of billable hours – that’s why they tell you to map data flows. The added value of knowing data flows between your business, your suppliers and your customers is arguable. Just skip it.

Step #2 – Do not punch a compliance check list
There is no point in taking a non-value-added process and spending money on it just because the government tells you to. My maternal grandmother, who spoke fluent Yiddish, would yell “grosse augen” (literally, big eyes) when we piled too much food on our plates. Yes, US publicly traded companies are subject to multiple regulations. Yes, retailers that store and process PII (personally identifiable information) have to deal with PCI DSS 2.0, the California state privacy law, etc. But looking at all the corporate governance and compliance violations, it’s clear that government regulation has not made America more competitive nor better managed. It’s more important for you to think about how much your business assets are worth and how you might get attacked than to punch a compliance check list.

Step #3 – Protecting your intellectual property doesn’t have to be expensive
If you have intellectual property – for example, proprietary mechanical designs in AutoCAD of machines that you build and maintain – schedule a one-hour meeting with your accountant and discuss how much the designs are worth to the business in dollars. In general, the value of any digital, reputational, physical or operational asset to your business can be established fairly quickly in dollar terms by you and your accountant, in terms of replacement cost, impact on sales and operational costs. If you store any of those designs on computers, you can get free open-source disk encryption software for Windows 7/Vista/XP, Mac OS X and Linux. That way, if there is a break-in and the computer is stolen, or if you lose your notebook on an airport conveyor belt, the data will be worthless to the thief.

Step #4 – Do not store personally identifiable information or credit cards
I know it’s convenient to have the names, phone numbers and credit card numbers of customers, but the absolutely worst thing you can do is store that data. VISA has it right: don’t store credit cards and magnetic stripe data. It won’t help you sell more anyway – you can use PayPal online or simply ask for the credit card at the cash register. Get on Facebook and tell your customers how secure you are because you don’t store their personal data.

Step #5 – Don’t be afraid of your own employees, but do monitor your business partners
Despite the hype on trusted insiders, most data loss is from business partners. Write a non-disclosure agreement with your business partners and trust them, and audit their compliance at least once a year with a face-to-face interview.

Step #6 – Do annual security awareness training but keep it short and sweet
Awareness is great, but as Andy Grove said, “A little fear in the workplace is not necessarily a bad thing.” Have your employees and contractors read, understand and sign a one-page procedure for information security.

Step #7 – Don’t automatically buy whatever your IT consultant is selling
By now you are getting into a security mindset – thinking about asset value, attacks and cost-effective security countermeasures like encryption. Download the free risk assessment software and get a feel for your value at risk. After you’ve done some practical threat analysis of your business risk exposure, you will be in an excellent position to talk with your IT consultant. While most companies don’t like to talk about data theft issues, we have found it invaluable to talk to colleagues in your market and get a sense of what they have done and how well the controls perform.


Protecting your data in the cloud

Several factors combine to make data security in the cloud a challenge.

Web applications have fundamental vulnerabilities. HTTP is the cloud protocol of choice for everything from file backup in the cloud to sales force management in the cloud. HTTP and HTML evolved from a protocol for static file delivery into a protocol for two-way applications – a purpose for which they were never designed. Let’s examine some of the data security issues with the current rich content Web 2.0 model:

1. The multiple layers on the server side, from database server to Web server or application server, are vulnerable to attack, since the Web application passes messages to the data tier through several interfaces in order to execute SQL. The interfaces are vulnerable, in particular, to SQL injection.

2. HTTP is a stateless protocol. As a result, the simplest kind of Ajax application generates dozens of HTTP transactions between the client and the server; the simplest autocomplete floods the pipe with Ajax transactions. If you have ever put a sniffer like Wireshark on the line, you will see this. The rich interactivity on the client with Ajax generates a huge, disproportionate amount of traffic and a high price tag for simple operations. For example, on a TCP socket-to-socket link, if you want to know if there are new mail messages, no polling is required and the message length is just a few bytes. This is primarily a latency and load issue on the cloud computing infrastructure, but it also makes detecting data loss more difficult and opens the door for network-based attacks such as a slow POST DDoS attack.
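To put rough numbers on the chattiness of HTTP polling, here is a back-of-the-envelope sketch; the header values are invented but representative:

```python
# One Ajax poll carries full HTTP headers every single time...
http_poll = (
    b"GET /mail/check HTTP/1.1\r\n"
    b"Host: app.example.com\r\n"
    b"Cookie: session=abcdef0123456789\r\n"
    b"X-Requested-With: XMLHttpRequest\r\n"
    b"Accept: application/json\r\n"
    b"\r\n"
)

# ...while a persistent TCP connection can ask "any new mail?" in one byte.
socket_msg = b"\x01"

overhead = len(http_poll) // len(socket_msg)
# http_poll is two orders of magnitude larger, and a chatty autocomplete
# repeats something like it on nearly every keystroke.
```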

3. Passing messages between remote processes (client and server) inside the query string is patently a bad idea, and it is not remedied by using HTTPS (although if you pass private data in a query string, you must use HTTPS). It is a bad idea because it is fragile (it may break on software changes) and vulnerable to any number of software bugs and exploits, from buffer overflow to SQL injection to simple query hacking. To get a feel for the order of magnitude of the problem, just google for web application security.
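If you must pass state in the query string, at minimum make tampering detectable. A minimal sketch using Python's stdlib `hmac` (the secret key and parameter names are invented; note this protects integrity only, not confidentiality):

```python
import hashlib
import hmac
from urllib.parse import urlencode, parse_qsl

SECRET = b"server-side-secret"  # hypothetical key; never sent to the client

def sign_params(params):
    # Canonicalize the parameters, then append an HMAC-SHA256 signature.
    qs = urlencode(sorted(params.items()))
    sig = hmac.new(SECRET, qs.encode(), hashlib.sha256).hexdigest()
    return qs + "&sig=" + sig

def verify(query):
    pairs = dict(parse_qsl(query))
    sig = pairs.pop("sig", "")
    qs = urlencode(sorted(pairs.items()))
    expected = hmac.new(SECRET, qs.encode(), hashlib.sha256).hexdigest()
    # compare_digest avoids leaking the signature via timing differences.
    if not hmac.compare_digest(sig, expected):
        raise ValueError("tampered query string")
    return pairs
```

Signing stops simple query hacking, such as a client editing price=100 to price=1, though it does nothing about injection flaws in how the server then uses the values.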

The current rich Web 2.0 model is broken, not because JavaScript or PHP are bad; it’s just that the existing Web application stack on server and client is a bad fit for the world of applications.

There is little free-market demand for software security. The key demand-side driver for cloud computing is that it is a service that can be consumed at a variable cost, like a utility. We might think that, with all the headlines on data security breaches, consumers would be discerning about the security of the service. However, data loss risk is negligible in a consumer buying decision: people use applications based on their utility, productivity and the beauty of the UI, not because of their security, since we all assume that the security is built in. The cloud model requires the consumer to consider the impact of data loss, similar to considering the impact of a power spike on home appliances with digital controllers. Data security in the cloud won’t happen by itself.

Enforcing data security in the cloud is harder than in the enterprise. Trusted insiders can exploit application vulnerabilities no matter where the application runs. However, our ability to detect data loss inside the cloud is far less than our ability to detect data loss inside an office network, and mitigation is more expensive in a virtualized operating system environment.

Inside an enterprise network, you can put procedural, network monitoring and DLP solutions into place; however, the same security countermeasures may not be supported by your cloud provider as a standard item. By implementing custom countermeasures in the cloud, you won’t enjoy the economy of scale of a shared, virtualized infrastructure nor benefit from the experience curve of the cloud service provider. It will become your problem.

Data security is about economics. If you want guaranteed service levels on the security of your IP and customer data that you store in a SaaS system, you need to RFP and negotiate the appropriate contract and security countermeasures (encrypting data at rest and in motion, employee monitoring, key management, data loss prevention, malicious software detection and more).  Compliance with PCI DSS 2.0 and HIPAA may come at additional cost.

Data security in the cloud is a cost borne upstream by the customer and downstream by the cloud provider.

From a cloud service provider perspective, note that there are high fixed costs involved in providing capacity, customer support and secure infrastructure while the revenue from consumers is variable. Consumers that adopt a hybrid model for cloud delivery will have additional fixed and variable costs of operation.

In order to protect your data in the cloud, I suggest adopting some common-sense best practices:

  • Before moving your application to the cloud, do some attack modeling and consider the value of the assets to be stored in the cloud versus the cloud service costs and the custom security measures you may (or may not) need to implement.
  • Invest in software security. Remember that hackers attack your software, not your security procedures.
  • After you set a budget, choose a cloud service according to your threat model, and read the fine print on data security before signing on the dotted line.
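
The first bullet, weighing asset value against countermeasure cost, can be sketched with the classic annualized loss expectancy (ALE) formula. All figures below are hypothetical placeholders; substitute the estimates from your own threat model:

```python
# A rough annualized loss expectancy (ALE) sketch for weighing asset value
# at risk against the cost of a custom cloud security countermeasure.

def annualized_loss_expectancy(asset_value, exposure_factor, annual_rate_of_occurrence):
    # Classic risk formula: ALE = SLE * ARO, where SLE = asset value * exposure factor.
    single_loss_expectancy = asset_value * exposure_factor
    return single_loss_expectancy * annual_rate_of_occurrence

# Hypothetical figures only, not recommendations.
ale_without = annualized_loss_expectancy(500_000, 0.4, 0.5)  # no extra countermeasure
ale_with = annualized_loss_expectancy(500_000, 0.4, 0.1)     # countermeasure cuts frequency
countermeasure_cost = 30_000

# The countermeasure pays for itself only if the risk reduction exceeds its cost.
net_benefit = (ale_without - ale_with) - countermeasure_cost
print(net_benefit)  # 50000.0
```

If the net benefit comes out negative, the custom countermeasure costs more than the risk it removes, which is exactly the balance of common sense and security requirements argued above.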
Tell your friends and colleagues about us. Thanks!

Making security live in a performance culture

In a recent PCI seminar I attended, the speaker (who hails from the European PCI Security Council) claimed that most European businesses were in a very bad place in terms of their data security, but that the ultimate business objective is 100 percent compliance. I’ve heard similar pronouncements from industry analysts like Forrester.

This is problematic for a number of reasons, starting with the fact that it is impossible to be 100 percent compliant with this or any other standard. A business lives in a performance culture whereas regulators live in a compliance culture. Compliance does not contribute to improving business performance unless the compliance activity is used as an opportunity to improve product security and customer safety and reduce the cost of current security measures.  This is definitely the path you want to choose – forcing your compliance exercise into the same performance mold that your business values and not settling for less.

In a compliance culture

  • I comply with the standard.
  • I am told the standard. If I am not told, I don’t act.
  • The standard is my objective.
  • When I meet the standard, I am done.

In a performance culture

  • My job is to take risks and deliver value by performing and executing ahead of expectations
  • A standard is like a quota.  Something you want to exceed because next year it will be higher.
  • Meeting a standard means little. I continuously improve.

Why Rich Web 2.0 may break the cloud

There are some good reasons why cloud computing is growing so rapidly.

First of all, there are the technology enablers: bandwidth and computing power are cheap, and software development is more accessible than ever. Small software teams can develop great products and distribute them worldwide instantly.

But cloud computing goes beyond supply-side economics and directly to the heart of the demand-side – the customer who consumes IT.

Consuming computing as a utility simplifies life for a business. It’s easy to understand (unlike data security technology) and it’s easy to measure the economic benefit (unlike governance, risk and compliance activities).

Cloud computing is more than an economic option; it’s also a personal option. With its low cost and service utility model, cloud computing is an interesting, almost revolutionary consumer alternative to internal IT systems.

Current corporate IT  operations provide services to  captive “users” and empower management (historically, information technology has its roots in MIS – management information systems).  When IT vendors go to market, they go to the CxO executives. All the IT sales training and CIO strategies are based on empowering management and being peers in the boardroom. Sell high, don’t sell low. After all, employees don’t sign checks.

But cloud computing is changing the paradigm of top-down, management-board decision-based IT. If you are a sales professional and need a new application for your business unit,  you can acquire the application like a smart phone and a package of minutes. Cloud computing is a service you can buy without a corporate signature loop.

An employee in a remote sales office can sign up for Salesforce.com ($50/month for 5 sales people) or Google Apps (free up to 50 users) and manage software development on github.com (free for Open Source).

So far, that’s the good news. But in the cloud of rich Web 2.0 application services, we are not in Kansas anymore. There is a very good reason to be worried: with all the expertise of cloud security providers, the Web 2.0 service they provide is only as secure as the application software itself.

The current rich Web 2.0 application development and execution model is broken.

Consider that a Web 2.0 application has to serve browsers and smart phones. It’s based on a heterogeneous server stack with 5-7 layers (database, database connectors, middleware, scripting languages like PHP, Java and C#, application servers, web servers, caching servers and proxy servers). On the client side there is an additional heterogeneous stack of HTML, XML, Javascript, CSS and Flash.

On the server-side, we have

  • 2-5 languages (PHP, SQL, tcsh, Java, C/C++, PL/SQL)
  • Lots of interface methods (hidden fields, query strings, JSON)
  • Server-side database management (MySQL, MS SQL Server, Oracle, PostgreSQL)

On the client side, we have

  • 2-5 languages (Javascript, XML, HTML, CSS, Java, ActionScript)
  • Lots of interface methods (hidden fields, query strings, JSON)
  • Local data storage – often duplicating session and application data stored on the server data tier.

A minimum of 2 languages on the server side (PHP, SQL) and 3 on the client side (Javascript, HTML, CSS) turns developers into frequent searchers for answers on the Internet (many of which are incorrect), driving up the frequency of software defects relative to a single-language development platform, where the development team has a better chance of attaining maturity and proficiency. More bugs mean more security vulnerabilities.

Back-end database servers interfaced to front-end languages like C# and PHP come with built-in vulnerabilities to attacks on the data tier via the interface.
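
A minimal sketch using Python’s built-in sqlite3 module (standing in for any front-end-language-to-database interface; the table and attack string are made up for illustration) shows how concatenated SQL opens the data tier while bound parameters keep it closed:

```python
import sqlite3

# Minimal sketch (table and input are made up): string-built SQL lets
# attacker-controlled input rewrite the query; bound parameters do not.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [("alice", 0), ("root", 1)])

malicious = "nobody' OR '1'='1"

# Vulnerable: the input is concatenated into the SQL text, so the quote in
# the payload closes the string literal and the OR clause matches every row.
vuln_rows = conn.execute(
    "SELECT name FROM users WHERE name = '" + malicious + "'").fetchall()

# Safe: the driver passes the value out-of-band as a bound parameter,
# so the payload is treated as an (unmatched) literal name.
safe_rows = conn.execute(
    "SELECT name FROM users WHERE name = ?", (malicious,)).fetchall()

print(len(vuln_rows), len(safe_rows))  # 2 0
```

The same concatenation mistake is just as easy to make, and just as exploitable, from PHP or C# against MySQL, SQL Server, Oracle or PostgreSQL.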

But the biggest vulnerability of rich Web 2.0 applications is that  message passing is performed in the UI in clear text – literally inviting exploits and data leakage.

The multiple interfaces,  clear text message passing and the lack of a solid understanding of how  the application will actually work in the wild guarantee that SQL injection, Web server exploits, JSON exploits, CSS exploits and application design flaws that enable attackers to steal data will continue to star in today’s headlines.

Passing messages between remote processes in the UI is a really bad idea, but the entire rich Web 2.0 execution model is based on this really bad idea.

Ask a simple question: how many ways are there to pass an array of search strings from a browser client to a Web server? Let’s say at least two: comma-delimited strings or JSON-encoded arrays. Then ask another question: do Mozilla (Firefox), Webkit (Chrome) and Microsoft IE8 treat client data transfer in a uniform, vendor-neutral, standard way? Of course not. The list of Microsoft IE incompatibilities and different interpretations of W3C standards is endless. Mozilla and Webkit transmit UTF-8 url-encoded data as-is in a query string sent to the server, but Microsoft IE8 converts UTF-8 data in the query string to ? (yes, question marks) in an XHR transaction unless the data has been previously uri-encoded. Are browser incompatibilities a source of application bugs? Do these bugs lead to software security vulnerabilities? Definitely.
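
To make the ambiguity concrete, here is a sketch in Python (standing in for the client side; the search terms are made up) of the two encodings of the same array. Nothing in the request itself tells the server which convention the client chose:

```python
import json
from urllib.parse import urlencode, parse_qs

terms = ["cloud", "security", "naïve"]  # non-ASCII term to exercise UTF-8 handling

# Convention 1: comma-delimited string in a query parameter.
qs_csv = urlencode({"q": ",".join(terms)})

# Convention 2: JSON-encoded array in the same parameter.
qs_json = urlencode({"q": json.dumps(terms)})

# The server has to assume one convention and trust that the client
# percent-encoded the UTF-8 bytes correctly; guess wrong and you get
# garbage data, or a parsing bug an attacker can probe.
decoded_csv = parse_qs(qs_csv)["q"][0].split(",")
decoded_json = json.loads(parse_qs(qs_json)["q"][0])
print(decoded_csv == terms, decoded_json == terms)  # True True
```

This round-trip only works because the same library did the encoding and the decoding; with three browser engines and a 5-7 layer server stack, that assumption breaks constantly.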

So, it’s really easy to develop cool Web 2.0 applications for seeing who’s hot and who’s not. It’s also cheap to deploy your totally-cool social networking application on a shoestring budget. Facebook started with a budget of $9,000 and so can you.

But, it’s also totally easy to hack that really cool rich Web 2.0 application, steal personal data and crash the system.

A standard answer to the cloud security challenge is writing the security into the contract with the cloud service provider.

Consider, however, who is the customer of that cool social media application running in the cloud on some IaaS (infrastructure as a service). If you are a user of a cool new free application, you cannot negotiate or RFP the security issues away, because you are not the customer. You generate content for the advertisers, who are the real customers.

With a broken development and execution model for rich Web 2.0 applications, the cloud computing model of software as a service utility is not sustainable for any but the largest providers like Facebook and Salesforce.com. The cost of security is too high for the application provider, and the risk of entrusting valuable business IP and sensitive customer data to the cloud is unreasonable. Your best option is to hope that your cool Web application will succeed small-time, make you some cash and enable you to fly under the radar with a minimal attack surface.

Like your first girlfriend told you: it’s not you, it’s me.

It’s not the IT infrastructure, it’s the software.


Government Agencies Need to Comply with White House Directive to Keep WikiLeaks Documents Off of Their Networks

Yes, there apparently is a White House directive to keep Wikileaks documents off Federal networks, issued by the White House Office of Management & Budget and concerning the treatment of classified documents.

WASHINGTON, Nov 29 (Reuters) – The United States said on Monday that it deeply regretted the release of any classified information and would tighten security to prevent leaks such as WikiLeaks’ disclosure of a trove of State Department cables.

More than 250,000 cables were obtained by the whistle-blower website and given to the New York Times and other media groups, which published stories on Sunday exposing the inner workings of U.S. diplomacy, including candid and embarrassing assessments of world leaders.

The U.S. Justice Department said it was conducting a criminal investigation of the leak of classified documents and the White House, State Department and Pentagon all said they were taking steps to prevent such disclosures in future.

While Secretary of State Hillary Clinton said she would not comment directly on the cables or their substance, she said the United States would take aggressive steps to hold responsible those who “stole” them.

In the directive, federal agencies were informed that employees and federal contractors must avoid viewing and/or downloading classified documents that have been leaked via WikiLeaks disclosures. Since the information on WikiLeaks is still classified, even if it’s in the public domain, a federal government employee electronically viewing the information from, or downloading the information to, devices connected to unclassified networks “risks that material still classified will be placed on non-classified systems”.

“NOTICE TO EMPLOYEES AND CONTRACTORS CONCERNING SAFEGUARDING OF CLASSIFIED INFORMATION AND USE OF GOVERNMENT INFORMATION TECHNOLOGY SYSTEMS”, Office of Management and Budget, December 3, 2010.

Data security vendor Fidelis Security Systems has announced that they will provide policies in their network DLP product, Fidelis XPS, to help ensure that employees cannot view or download classified documents.

Fidelis XPS is extremely powerful network DLP technology for high-speed (in excess of 2.5 Gbps) real-time content interception and analysis of data entering or leaving a network. But with all due respect to the power of Fidelis network DLP, the White House directive is nonsense. It’s security theater, not a security countermeasure, designed to show that the administration is “doing something”.

The directive is nonsense for a number of reasons:

a) Requiring employees and federal contractors to avoid viewing and/or downloading classified documents that have been leaked via WikiLeaks disclosures is like saying, “well, you will have to disconnect yourself from the Internet, from Facebook, from Gmail and from your smart phone”. It’s not a practical strategy, since it’s impossible to enforce.

b) The network vector is almost certainly not how the information was leaked, which means that network DLP solutions are not an appropriate countermeasure against Wikileaks. Releasing custom network DLP policies for Wikileaks is a crude sort of link-baiting, and misdirected at that, since Federal decision makers don’t evaluate data security technology using social media like Facebook.

The Wikileaks documents are provided by trusted insiders that have motive (dislike Obama or Clinton), means (physical, electronic or social access) and opportunity (no one is watching).   There is little utility (besides appearing to be doing something) to install network DLP technology to prevent employees from viewing or downloading.

c) And finally, it’s nonsense because the OMB directive talks about viewing and downloading documents, not about leaking them.

If the White House is serious about preventing more leaks they should start by firing Secretary Clinton.

Then again, perhaps the Wikileaks documents were all leaked under tacit direction from the White House. Since President Obama has a pattern of sticking it to US friends (Israel, Czech Republic, Poland), whatever embarrassment it might cause friendly allies is more than worth the price of issuing a worthless OMB directive.
