Category Archives: Anti-Fraud


Why audit and risk management do not mitigate risk – part II

In my previous post, Risk does not walk alone, I noted both the importance attributed to internal audit and corporate risk management and their often-ignored lack of relevance to the business of cyber security.

Audit and risk management are central to the financial services industry

Just because audit and risk management are central to the financial services industry does not make them cyber security countermeasures. Imagine not having a firewall but having an extensive internal audit and risk management activity – the organization and all of its paper, policy and procedures would be pillaged in minutes by attackers.

Risk management and audit are “meta activities”

In the financial industry you have risk controls, which are the elements audited by internal audit and managed by risk management teams. The risk controls are the defenses, not the bureaucracy created by highly regulated industries. So a risk control can accept risk (deciding not to have endpoint security and accepting the risk of data loss from employee workstations), mitigate it (installing endpoint DLP agents) or prevent it (taking away USB ports and denying Internet access). This is analogous to a bank accepting risk (giving small loans to young families), mitigating it (requiring young families to supply 80% collateral), or preventing it (deciding not to give loans to young families).
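
The accept/mitigate/prevent taxonomy can be written down as a simple risk-treatment record. This is a minimal sketch of the endpoint data-loss example above; the residual-risk descriptions are illustrative assumptions, not prescriptions:

```python
from dataclasses import dataclass

@dataclass
class Treatment:
    option: str         # "accept", "mitigate" or "prevent"
    action: str         # what the organization actually does
    residual_risk: str  # the exposure that remains after the action

# The endpoint data-loss example from the text, written as treatment records.
endpoint_data_loss = [
    Treatment("accept",   "no endpoint security",
              "full exposure to data loss from employee workstations"),
    Treatment("mitigate", "install endpoint DLP agents",
              "reduced exposure; agents can be bypassed or misconfigured"),
    Treatment("prevent",  "remove USB ports and deny Internet access",
              "minimal exposure, at a high usability cost"),
]

for t in endpoint_data_loss:
    print(f"{t.option:8} -> {t.action}")
```

The point of writing it this way is that every treatment option carries a residual risk – even "prevent" – which is exactly what audit should be checking, rather than the paperwork around it.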

The important part is to understand that risk management and audit are “meta activities” and not defenses in their own right.

Why risk management often fails in cyber security operations

We note that attempts to apply quantitative risk management to cyber generally do not work because the risk management professionals do not understand cyber threats and equate people and process with mitigation.

Conversely, cyber security/IT professionals do not have the tools to estimate asset value. Without taking asset value into account, it is impossible to prioritize controls, as every car owner knows: you don't insure a 10 year old Fiat 500 the way you insure a late model Lexus RC F.
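
The car insurance analogy in numbers: with the same threat probability, expected loss scales with asset value, and that is what drives control priority. The asset values and theft probability below are illustrative assumptions:

```python
# Same threat probability, different asset values: expected loss, and hence
# control priority, is driven by what the asset is worth.
assets = {
    "10-year-old Fiat 500":  3_000,   # replacement value in dollars (assumed)
    "late-model Lexus RC F": 65_000,
}
p_theft = 0.02  # assumed annual probability of theft, same for both cars

expected_loss = {name: value * p_theft for name, value in assets.items()}

# Spend the insurance (or security control) budget where expected loss is highest.
priority = sorted(expected_loss, key=expected_loss.get, reverse=True)
print(priority)  # the Lexus comes first
```

Without the asset value column, the two cars face the identical threat and the calculation collapses – which is exactly why risk management that ignores asset value cannot prioritize anything.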

Unfortunately for the lawyers and regulatory technocrats – while they are performing cross-functional exercises in business alignment of people and processes – the bad guys are stealing 50 million credit cards from their database servers, having hacked their way in through the air conditioning systems.

Why cyber, regulatory and governance need to be integrated

Risk management prioritizes the application of controls/cyber countermeasures according to control cost, asset value and mitigation effectiveness, and internal audit ensures compliance with the company's cyber, regulatory and corporate governance policies.

Because these 3 areas (cyber, regulatory and governance) are increasingly entangled and integrated (you can't comply with HIPAA without dealing with all 3), it becomes supremely important to integrate them, because A) it's expensive not to and B) keeping them separate creates considerable exposure through "cracks" in compliance. Witness Target.

At a major Scandinavian telco a few years ago, we counted over 25 separate functions for security, compliance and governance – and it was clear that this number needed to converge to 2: a combined risk and cyber function, plus an independent audit unit. Whether or not they succeeded is another story.

Tell your friends and colleagues about us. Thanks!
Share this

The megaupload bust

My daughter was distressed yesterday after the Feds shut down the Megaupload file sharing site – "How am I going to see all those series and Korean movies I love? It's not fair!"

The FBI have been after Mr Dotcom for 8 years. His big problem was not the file sharing but his other criminal activities. After all, there is infinite demand for file sharing – Filesonic is cleaning up now that Megaupload went bust – and Viacom didn't go after Eric Schmidt, since Viacom lost their billion dollar copyright case to Google 2 years ago.

But really – beyond the consumer appetite for entertainment, and the corporate appetite for filing intellectual property and copyright suits – why isn't Hollywood getting it right when it comes to content protection? If they were getting it right, Sony-Columbia would be running the file sharing sites, charging $1/movie and $3 for premium content, and driving all the other file sharing sites out of business.

Instead – the big studios are making the same mistake that corporate America makes when it comes to content protection – ignoring the attacker economics.

After all, the HDCP black-listing scheme defies the laws of physics and reason. For example, you may be a perfectly law-abiding citizen, but if someone in Sofia hacks your model XY500 DVD player, the device key is revoked, and you will never be able to play discs that came out after the date the device was compromised. If a hacker taps into the HDMI/HDCP signal and copies a movie en route to your TV set, the HDCP device key can be revoked and your 80 inch TV will never play high-definition again.

Blu-ray copy protection was broken 5 years ago this month (January 2007), courtesy of muslix64, the same fellow who cracked HD-DVD. Both HD DVD and Blu-ray use HDCP (High-Bandwidth Digital Content Protection) for authentication and content playing, and both use AACS (Advanced Access Content System) for content encryption. (AACS is the content protection for the video on the discs, and HDCP is the content protection on the HDMI link between the player and the TV.) It appears that muslix64 took a snapshot in memory of a running process, then used selective keying – serially trying bytes 1-4, then 2-5, 3-6 etc. as the key until the MPEG frame decrypted – much faster than a pure brute force attack. If the video player process stores the key in clear text in memory, this type of attack will always work.
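
The selective-keying idea can be sketched in a few lines: slide a window over the memory snapshot and try each offset as a candidate key until a known plaintext header appears. A toy XOR cipher stands in for AACS's real AES so the sketch stays self-contained; the key length, magic bytes and offsets are all illustrative:

```python
from typing import Optional

MAGIC = b"\x00\x00\x01\xba"  # MPEG program-stream pack header (known plaintext)

def xor(data: bytes, key: bytes) -> bytes:
    # Toy stand-in for the real block cipher.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def find_key(memory: bytes, ciphertext: bytes, key_len: int = 4) -> Optional[bytes]:
    # Selective keying: try bytes 0..k, 1..k+1, 2..k+2, ... of the snapshot
    # as the key until the frame header decrypts.
    for off in range(len(memory) - key_len + 1):
        candidate = memory[off:off + key_len]
        if xor(ciphertext[:len(MAGIC)], candidate).startswith(MAGIC):
            return candidate
    return None

# Simulated memory snapshot with the key sitting in clear text at some offset.
key = b"\x13\x37\xbe\xef"
memory = b"\xaa" * 57 + key + b"\x55" * 20
frame = xor(MAGIC + b"...frame payload...", key)
print(find_key(memory, frame) == key)  # True
```

The search is linear in the size of the snapshot, which is why it beats brute force: instead of 2^128 candidate keys, you only test one candidate per byte offset.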

Like most flawed encryption schemes, AACS is vulnerable due to a poor software implementation.

“The AACS design prevents legitimate purchasers from playing legitimately purchased content on legitimately purchased machines, and fails to prevent people from ripping the content and sharing it through bittorrent. The DRM people wanted something that could not be done, so unsurprisingly they winded up buying something that does not do it.”

James Donald.

Now we understand why BitTorrent is so popular – and why DRM schemes keep failing.


Security sturm und drang – selling fear.

Sturm und Drang is associated with “literature or music aiming to frighten the audience or imbue them with extremes of emotion”.

The Symantec Internet Security Threat Report is a good example of the Sturm und Drang marketing endemic in the information security industry.

Vendors like Symantec sell fear, not security products, when they report on “Rises on Data Theft, Data Leakage, and Targeted Attacks Leading to Hackers’ Financial Gain”, without suggesting cost-effective security countermeasures.

1. Lumps consumers and enterprises together

“End users, whether consumers or enterprises, need to ensure proper security measures to prevent an attacker from gaining access to their confidential information, causing financial loss, harming valuable customers, or damaging their own reputation.”

Since when do consumers have customers? Consumers are insured for credit card theft, and PCI DSS certified merchants are protected from chargeback exposure by the acquiring bank. What financial losses do consumers and enterprises have in common?

2. Incorrectly classifies assets, incorrectly uses legal terms

“Symantec tracked the trade of stolen confidential information and captured data frequently sold on underground economy servers. These servers are often used by hackers and criminal organizations to sell stolen information, including social security numbers, credit cards, and e-mail address lists”.

Social security numbers are classified as PII (personally identifiable information), not confidential information. If Symantec is uncertain how to classify this asset, they should read the US state privacy laws and the PCI DSS specification. As a matter of fact, the law does not protect confidential information – it protects a confidence relationship. Once the information is disclosed (and social security numbers are frequently disclosed), a third party is not prevented from independently duplicating and using the information. See Wikipedia.

3. Provides misleading data

“Increase in Data Breaches Help Facilitate Identity Theft”

By not quantifying the threat probability, Symantec deliberately misleads the reader into thinking that cyber threats are the main attack on PII.

Au contraire. The FTC says that most identity theft cases are caused by offline methods such as dumpster diving, stealing and pretexting. According to Applied Cybersecurity Research, "Internet-related identity theft accounted for about 9 percent of all ID thefts in the United States in 2005".

4. Cites vulnerability stats without suggesting countermeasures

“Symantec documented 12 zero-day vulnerabilities during the second half of 2006”

What is the point of a threat model without security countermeasures?

a. What were the vulnerabilities, and do consumer PCs have the same vulnerabilities as corporate servers behind a Checkpoint firewall?

b. What are the most cost-effective security countermeasures?

c. Does Symantec recommend that consumers use the same security countermeasures and risk assessment procedures as business enterprises?

See the full report here:
Symantec Reports Rise in Data Theft, Data Leakage, and Targeted Attacks Leading to Hackers’ Financial Gain


Ten steps to protecting your organization’s data

Here are 10 steps to protecting your organization's privacy data and intellectual property.

As a preface, begin with the understanding that you already have all the resources you need.

Discussions with colleagues in a large forensic accounting firm that specializes in anti-fraud investigations, money laundering and anti-terror funding (ATF) confirm what I've suspected for a long time: armies of junior analysts working for the large accounting firms – analysts who have never seen or experienced a fraudulent event and are unfamiliar with your business operation – are not a reasonable replacement for careful risk analysis done by people who are familiar with the business.

Step # 1- Do not do an expensive business process mapping project.

Many consultants tell organizations that they must perform a detailed business process analysis and build data flow diagrams of data and the users who process data. This is an expensive task to execute and extremely difficult to maintain, and it can require a large quantity of billable hours – that's why they tell you to map data flows. The added value of knowing the data flows between people doing their jobs inside your organization is arguable.

There are much better ways to protect your data without writing out a 7 digit check. Here is the first one you should try. Select the 10 most valuable data assets that your company owns – for example, proprietary mechanical designs of machines, detailed financials of a private company being acquired, and details of competitive contracts with large accounts. In a few interviews with finance, operations, IT, sales and engineering, you can nail down those key assets.

After you've done that, schedule a 1 hour meeting with the CFO and ask her how much each asset is worth in dollars. In general, the value of a digital, reputational, physical or operational asset to a business can be established fairly quickly by the CFO in dollar terms – in terms of replacement cost, impact on sales and operational costs.
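
The CFO interview boils down to filling in three numbers per asset. A minimal sketch of such an asset register – the asset names and dollar figures below are illustrative assumptions:

```python
# Each asset is valued in the three CFO terms from the text:
# replacement cost, impact on sales, and operational cost.
assets = {
    "proprietary mechanical designs": {"replacement": 2_000_000, "sales_impact": 5_000_000, "ops_cost": 100_000},
    "M&A target financials":          {"replacement": 0,         "sales_impact": 3_000_000, "ops_cost": 50_000},
    "key account contract terms":     {"replacement": 0,         "sales_impact": 1_500_000, "ops_cost": 20_000},
}

def asset_value(a: dict) -> int:
    return a["replacement"] + a["sales_impact"] + a["ops_cost"]

# Rank the register so the risk plan starts with what the CFO values most.
ranked = sorted(assets, key=lambda name: asset_value(assets[name]), reverse=True)
for name in ranked:
    print(f"${asset_value(assets[name]):>12,}  {name}")
```

A spreadsheet does the same job, of course – the point is that three numbers per asset, filled in by the CFO in an hour, beat a seven-figure process-mapping project.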

Step #2 – Do not develop a regulatory compliance grid.

There is no point in taking a non-value-added process and spending money to make it more effective.

My maternal grandmother, who spoke fluent Yiddish, would yell "grosse augen" at us when we piled too much food on our plates. "Grosse augen" (as my folks put it) is having eyes bigger than your capacity. Yes, US publicly traded companies are subject to multiple regulations – if the company sells to customers and stores and processes PII (personally identifiable data), it will have to deal with PCI DSS 1.1, the California state privacy law and Sarbanes-Oxley. PCI DSS 1.1 protects one asset – payment card number and magnetic stripe – while Sarbanes-Oxley is about accounting records. Yes, there are a few commercial software products that map business processes, databases and data elements to multiple regulations; their goal is to help streamline the work involved in multiple regulatory compliance projects, eliminating redundancy where possible by exploiting commonality.

Looking at all the corporate governance and compliance violations – cases such as Hannaford supermarkets and AOL – it's clear government regulation has made America neither more competitive nor better managed.

Step #3 – Identify the top 5 data assets in your business and value them

I saw an article recently that linked regulatory compliance mandates to asset cost. Definitely not true – the value of an asset for a company is whatever operational management and the CFO say it is. Asset value has nothing to do with compliance, but it has everything to do with a cost effective risk control plan. For example, a company might think that whole disk encryption on all company notebook computers is a good idea – but if only 20 people have sensitive data, why spend 1 million dollars on mobile device data encryption when you can solve the problem for less than $5K?
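
The whole-disk-encryption example in numbers – a blanket control versus a targeted one for the handful of users who actually hold sensitive data. The per-seat cost and laptop count are assumed figures chosen to match the $1M vs. $5K contrast above:

```python
laptops, sensitive_users = 4_000, 20
cost_per_seat = 250  # assumed license + deployment cost per encrypted laptop

blanket = laptops * cost_per_seat           # encrypt every company laptop
targeted = sensitive_users * cost_per_seat  # encrypt only the 20 that matter

print(f"blanket:  ${blanket:,}")   # blanket:  $1,000,000
print(f"targeted: ${targeted:,}")  # targeted: $5,000
```

A 200x cost difference for the same risk reduction is the whole argument for valuing assets before buying controls.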

Step #4 – Do not store PII

The absolute worst thing you can do is a project that analyses the data retention and protection regulations governing each of the sensitive data elements that need protecting, working with legal and compliance consultants who know the relevant regulations. VISA has it right: don't store credit card and magnetic stripe data. It will not help the marketing guys sell more anyway – and you can give the money you save on fancy database encryption software to the earthquake victims in Myanmar and China.

Step #5 – Monitor your outsourcing vendors

Despite the hype on trusted insiders, most data loss is from business partners. You can write a non-disclosure agreement with an outsourcing vendor and trust them, but you must verify their compliance and prevent unauthorized data leaks.

The best story I had in years was from a meeting with the VP of internal audit at a medium sized bank in Israel. He took a sales call with me and I pitched our extrusion prevention technology from Fidelis Security Systems as a way to protect their customer data. He said – look Danny, we don't need technology – we've outsourced everything to a very large bank and their data center security is world-class. Two weeks later, the big bank had a serious data breach event (a high school student hacked into the internal network of the bank from a public Windows-based kiosk and helped himself to some customer lists). Two months later, the small bank was reported to be looking to get out of their outsourcing contract. Don't rely on contracts alone – use people and DLP technology to detect data leakage.

Step #6 – Do annual security awareness training but keep it short and sweet

Awareness is great, but as Andy Grove said – "A little fear in the workplace is not necessarily a bad thing". Have everyone read, understand and sign a 1 page procedure for information security. Forget interview projects and expensive self-assessment systems – what salesman in his right mind will take time to fill out one of those forms if he doesn't even update his accounts on salesforce.com? Install an extrusion detection system at the network perimeter. Prosecute violators in real time. Do random spot checks on the read-and-understand procedure. Give demerits to the supervisors and managers if their employees don't pass the spot check.

Step #7 – Calculate value at risk of your top 5 data assets

ISO 27001 and PCI DSS 1.1 checklists are great starting points but they focus on whether a particular technology, policy or control has been implemented, and not whether these controls are cost-effective security countermeasures against internal and external attackers. Use Practical Threat Analysis with a PTA risk library for ISO 27001 or PCI DSS 1.1 and you will be able to build a cost-effective risk mitigation plan based on asset values, threat probabilities and estimated damage levels.
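
A minimal version of that calculation: take threats with estimated probabilities and damage levels, countermeasures with costs and mitigation percentages, and order the plan by risk reduction per dollar. Every figure below is an illustrative placeholder, not real PTA risk library data:

```python
# Rank security countermeasures by dollars of risk removed per dollar spent.
threats = {
    # threat: (annual probability, damage in dollars)
    "stolen laptop":     (0.30, 200_000),
    "SQL injection":     (0.10, 1_500_000),
    "insider data leak": (0.05, 3_000_000),
}
controls = {
    # control: (cost, {threat: fraction of that threat's risk mitigated})
    "disk encryption": (5_000,  {"stolen laptop": 0.9}),
    "WAF":             (40_000, {"SQL injection": 0.7}),
    "DLP gateway":     (80_000, {"insider data leak": 0.6}),
}

def risk(threat: str) -> float:
    p, damage = threats[threat]
    return p * damage  # annualized loss expectancy

def effectiveness(control: str) -> float:
    cost, coverage = controls[control]
    reduction = sum(risk(t) * frac for t, frac in coverage.items())
    return reduction / cost

plan = sorted(controls, key=effectiveness, reverse=True)
print(plan)  # most cost-effective countermeasure first
```

Note how the ordering falls out of asset values, probabilities and control costs together – none of which appear on an ISO 27001 or PCI DSS checklist.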

Step #8 – Ask your vendors and colleagues difficult questions

After you've done a practical threat analysis of your risk exposure to attacks on sensitive customer data and IP, you will be in a better position than ever to know which policies, procedures and technologies are the most effective security controls. You'll be in an excellent position to ask difficult questions and negotiate terms with your favorite vendor. While the attitude of many companies is to hold their data protection practices close to their chests, it is valuable to talk to your colleagues at other companies in the same market and get a sense of what they have done and how well the controls perform.

Step #9 – Resist the temptation to do a customer data integration (CDI) project.

Customer data is often stored in many applications and locations in a large organization. The knee-jerk reaction of IT is to do a big data integration project and get all the digital assets under one roof. There are three reasons why this is a terrible idea. (a) Most of these projects fail, overrun and never deliver the promised value. (b) If you do succeed in getting all the data in one place, it's like waving a huge red flag to attackers – hey, come over here, we have a lot of sensitive data that is nicely documented and easily accessible. Companies with enterprise software systems such as SAP and Oracle Applications are three times more likely to be attacked. (c) Ask yourself – would Google have succeeded with a global data integration strategy?

Step #10 – Prepare a business case for data loss prevention before evaluating products

Despite claims that protecting data assets is strategic to an enterprise, and despite IT governance talk about business alignment and adding value, my experience is that most organizations will not do anything until they've had a fraud or data security event. The first step to protecting customer data and IP in any sized business – from an individual proprietorship to a 10,000 person global enterprise – is laying the case at the door of the company's management. This is where executives need to take a leadership position, starting with a clear position on which data assets are important and how much they're worth to the company.

Practical threat analysis is a great way to identify and assess threats to your business and evaluate the potential business impact in dollars and cents to your operation using best-practice risk models provided by the PTA Professional threat modeling tool.

In summary

Software Associates specializes in helping medical device and healthcare software vendors achieve HIPAA compliance and protect customer assets, and provides a full range of risk management services – from stopping fraud to ensuring regulatory compliance and enhancing your ability to serve your customers.

There are resources that help you turn information into insight, such as Risk Management from LexisNexis, Identity Fraud TrueID solutions from LexisNexis that help significantly reduce fraud losses, and Background Checks from LexisNexis that deliver valuable insights leading to smarter, more informed decisions and greater security for consumers, businesses and government agencies. For consumers, it's an easy way to verify personal data, screen potential renters, nannies, doctors and other professionals, and discover any negative background information that could impact your employment eligibility. For businesses and government agencies, it is the foundation of due diligence. It provides the insight you need to reduce risk and improve profitability by helping you safeguard transactions, identify trustworthy customers and partners, hire qualified employees, or locate individuals for debt collections, law enforcement or other needs.

 


Catch 22 and Compliance

Let's say you're a payment processor going through a PCI DSS 2.0 audit:

Does this sound familiar? (just replace certain words by certain other compliance related words):

Without realizing how it had come about, the combat men in the squadron discovered themselves dominated by the administrators appointed to serve them. They were bullied, insulted, harassed and shoved about all day long by one after the other. When they voiced objection, Captain Black replied that people who were loyal would not mind signing all the loyalty oaths they had to. To anyone who questioned the effectiveness of the loyalty oaths, he replied that people who really did owe allegiance to their country would be proud to pledge it as often as he forced them to. And to anyone who questioned the morality, he replied that “The Star-Spangled Banner” was the greatest piece of music ever composed. The more loyalty oaths a person signed, the more loyal he was; to Captain Black it was as simple as that, and he had Corporal Kolodny sign hundreds with his name each day so that he could always prove he was more loyal than anyone else.

“The important thing is to keep them pledging,” he explained to his cohorts. “It doesn’t matter whether they mean it or not. That’s why they make little kids pledge allegiance even before they know what ‘pledge’ and ‘allegiance’ means.”

EXCERPT FROM Catch-22 – by Joseph Heller


Why less log data is better

It's been a couple of weeks since I blogged – I have my head down on a few medical device projects and a big PCI DSS audit where I'm helping the client improve his IT infrastructure and balance the demands of the PCI auditors.

Last year I gave a talk on quantitative methods for estimating operational risk of information systems at the annual European GRC meeting in Lisbon – you can see the presentation below.

As I noted in my talk, one of the crucial phases in estimating operational risk is data collection: understanding what threats and vulnerabilities you have, and understanding not only what assets you have (digital, human, physical, reputational) but also how much they're worth in dollars.

Many technology people interpret data collection as some automatic process that reads/scans/sniffs/profiles/processes/analyzes/compresses log files, learning and analyzing the data using automated algorithms like ANNs (artificial neural networks).

The automated log profiling tool will then automagically tell you where you have vulnerabilities and, using "an industry best practice database of security countermeasures", build you a risk mitigation plan. Just throw in a dash of pie charts and you're good to go with the CFO.

This was in fashion about 10 years ago (Google "automated audit log analysis" and you'll see what I mean – for example, this reference on automated audit trail analysis). Automated tools are good for getting a quick indication of trends, but tend to suffer from poor precision and recall, which improve rapidly when combined with human eyeballs.

The PCI DSS council in Europe (private communication) says that over 80% of the merchants/payment processors with data breaches discovered their data breach 3 months or more after the event. Yikes.

So why does maintaining a year of audit log history make sense – quoted from PCI DSS 2.0:

10.7 Retain audit trail history for at least one year, with a minimum of three months immediately available for analysis (for example, online, archived, or restorable from back-up).

10.7.a Obtain and examine security policies and procedures and verify that they include audit log retention policies and require audit log retention for at least one year.

10.7.b Verify that audit logs are available for at least one year and processes are in place to immediately restore at least the last three months’ logs for analysis.

Wouldn’t it be a lot smarter to say –

10.1 Maintain a 4 week revolving log with real-time exception reports as measured by no more than 5 exceptional events/day.

10.2 Estimate the financial damage of the 5 exceptional events in a weekly 1/2 hour meeting between the IT manager, finance manager and security officer.

10.3 Mitigate the most severe threat as measured by implementing 1 new security countermeasure/month (including the DLP and SIEM systems you bought last year but haven’t implemented yet)
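
The proposed 10.1 is easy to sketch: keep a 4 week revolving window of events and surface any day that exceeds the 5-exceptions-per-day budget for the weekly review. The event generator and dates below are synthetic:

```python
from collections import deque
from datetime import date, timedelta

RETENTION_DAYS = 28            # a 4 week revolving log
MAX_EXCEPTIONS_PER_DAY = 5     # the proposed daily exception budget

log = deque()  # (day, event, is_exception), oldest first

def append_event(day, event, is_exception, today):
    log.append((day, event, is_exception))
    cutoff = today - timedelta(days=RETENTION_DAYS)
    while log and log[0][0] < cutoff:  # revolve: drop anything older than 4 weeks
        log.popleft()

def days_needing_review():
    counts = {}
    for day, _, is_exception in log:
        if is_exception:
            counts[day] = counts.get(day, 0) + 1
    return [d for d, n in sorted(counts.items()) if n > MAX_EXCEPTIONS_PER_DAY]

# 40 days of synthetic traffic: 2 exceptions/day, with one noisy day of 7.
today = date(2012, 3, 30)
for i in range(40):
    day = today - timedelta(days=39 - i)
    for j in range(7 if i == 35 else 2):
        append_event(day, f"event-{i}-{j}", True, today)

print(days_needing_review())  # only the noisy day inside the 4 week window
```

The contrast with 10.7 is the point: a small window you actually look at every week beats three years of archives nobody reads until the breach notification arrives.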


I'm a great fan of technology, but the human eye and brain do it best.


The ethical aspects of data security

Ethical breaches or data breaches.

I was standing in line at Ben Gurion airport, waiting for my bag to be x-rayed. A conversation started with a woman standing next to me in line. The usual sort – “Where are you traveling and what kind of work do you do?”. I replied that I was traveling to Warsaw and that I specialize in data security and compliance – helping companies prevent trusted insider theft and abuse of sensitive data.

She said, “well sure, I understand exactly what you mean – you help enforce ethical behavior of people in the organization”.

I stopped for a moment and asked her – "hold on, what kind of business are you in?" She said – "well, I worked in the GSS for years training teams tasked with protecting high echelon politicians and diplomats. I understand totally the notion of enforcing ethical behavior". And now? I asked. Now, she said, "I do the same thing, but on my own".

Let’s call my new friend “Sarah”.

Sarah's ethical approach was, for me, a breath of fresh air. Until that point, I had defined our data security practice as an exercise in data collection, risk analysis and implementation of the appropriate technical security countermeasures to reduce the risk of data breach and abuse. Employees, competitors and malicious outsiders are all potential attackers. The objective is to implement a cost-effective portfolio of data security countermeasures – policies and procedures, software security assessments, network surveillance, data loss prevention (DLP) and encryption at various levels in the network and applications.

I define security as protecting information assets.

Sarah defines security as protecting ethical behavior.

In my approach to data security, employee behavior is an independent variable – something that might be observed but certainly not something that can be controlled. Since employees, contractors and business partners tend to have their own weaknesses and problems that are not reported on the company's balanced scorecard, my strategy for data security posits that it is more effective to monitor data than to monitor employees, and to prevent unauthorized transfer or modification of data instead of trying to prevent irrational or criminal behavior by people who work in the extended enterprise.

In Sarah's approach to data security, if you make a set of rules and then train for and enforce ethical behavior with good management, sensing and a dose of fear in the workplace, you have cracked the data security problem.

So – who is right here?

Well – we’re both right, I suppose.

The answer is that without asset valuation and analysis of asset vulnerabilities, protecting a single asset class (human resources, data, systems or network) while ignoring others, may be a mistake.

Let’s examine two specific examples in order to test the truth of this statement.

Consider a call center with 500 customer service representatives. They use a centralized CRM application, they have telephones and email connectivity. Each customer service representative has a set of accounts that she handles. A key threat scenario is leaking customer account information to unauthorized people – private investigators, reporters, paparazzi etc… The key asset is customer data but the key vulnerability is the people that breach ethical behavior on the way to breaching customer data.

In the case of customer service representatives breaching customer privacy, Sarah’s strategy of protecting ethical behavior is the best security countermeasure.

Now, consider a medical device company with technology that performs imaging analysis and visualization. The company deploys MRI machines in rural areas and uses the Internet to provide remote expert diagnosis for doctors and patients who do not have access to big city hospitals. The key asset transmitted from the systems for remote diagnosis is PHI (protected health information), and the key vulnerabilities are in the network interfaces, the applications software and operating systems that the medical device company uses.

In the case of remote data transfer and distributed/integrated systems, a combined strategy of software security, judicious network design and operating system selection (don’t use Microsoft Windows…) is the correct way to protect the data.

My conversation with Sarah at the airport gave me a lot of food for thought.

Data loss prevention (DLP technology) is great  and  ethical employee behavior is crucial but they need to work hand in glove.

Where there are people, there is a need to mandate, monitor and reinforce ethical behavior using  a clearly communicated corporate strategy with employees and contractors. In an environment where users require freedom and flexibility in using applications such as email and search, the ethical behavior for protecting company assets starts with company executives who show from personal example that IT infrastructure is to be used to further the company’s business and improving customer service and not for personal entertainment, gain or gratification.

It’s the simple things in life that count.


The economics of software piracy

One year ago this time was World Cup season and Mondial fever put a lot of regional conflicts on the back burner for a month – not to mention put a dent in a lot of family budgets (husbands buying the latest 60 inch Sony Bravia and wives on retail therapy while the guys are watching football)

It is ironic that the FIFA 2010 World Cup computer game doesn't run on Ubuntu. It would have been a huge marketing coup and poetic justice if the game software had been released for Ubuntu under a GPL license.

This got me thinking about open source licensing and its advantages for developing countries, which really got my hackles up after reading the Seventh Annual BSA and IDC Global Software Piracy Study – which screams: Software Theft Remains Significant Issue Around the World

The rate of global software piracy climbed to 43 percent in 2009. This increase was fueled in large part by expanding PC sales in fast-growing, high-piracy countries and increasing sales to consumers — two market segments that traditionally have higher incidents of software theft. In 2009, for every $100 worth of legitimate software sold, an additional $75 worth of unlicensed software made its way onto the market. There was some progress in 2009 — software rates actually dropped in almost half of the countries examined in this year’s study.

Given the global recession, the software piracy picture could have taken a dramatic turn for the worse. But progress is being outstripped by the overall increases in piracy globally — and highlights the need for governments, law enforcement and industry to work together to address this vital economic issue.
Below are key findings from this year’s study:

  • Commercial value of software theft exceeds $50 billion: the commercial value of unlicensed software put into the market in 2009 totalled $51.4 billion.
  • Progress on piracy held through the recession: the rate of PC software piracy dropped in nearly half (49%) of the 111 economies studied, remained the same in 34% and rose in 17%.
  • Piracy continues to rise on a global basis: the worldwide piracy rate increased from 41% in 2008 to 43% in 2009; largely a result of exponential growth in the PC and software markets in higher piracy, fast growing markets such as Brazil, India and China.

I would not take the IDC and BSA numbers at face value. The IDC/BSA estimates are guesses multiplied several times over. They start by assuming that each unit of copied software represents a direct loss of a sale for the software vendor – a patently false assertion.

If it were true, then the demand for software would be independent of price and perfectly inelastic.

A drop in price usually results in an increase in the quantity demanded by consumers; that’s called price elasticity of demand. Demand for a product is inelastic when it doesn’t change with price. A product with no competing alternative is generally inelastic. Demand for a unique antibiotic, for example, is highly inelastic: a patient will pay any price for the only drug that will kill their infection.
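For illustration, elasticity can be computed as the ratio of the percentage change in quantity demanded to the percentage change in price. The figures below are hypothetical, chosen only to show the mechanics:

```python
def price_elasticity(p1, p2, q1, q2):
    """Arc (midpoint) price elasticity of demand between two observations."""
    pct_dq = (q2 - q1) / ((q1 + q2) / 2)  # percentage change in quantity
    pct_dp = (p2 - p1) / ((p1 + p2) / 2)  # percentage change in price
    return pct_dq / pct_dp

# Hypothetical software title: cutting the price from $450 to $100
# grows unit demand from 1,000 to 8,000.
e = price_elasticity(450, 100, 1000, 8000)
print(f"elasticity = {e:.2f}")  # negative, and |e| > 1 means demand is elastic
```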

If software demand were perfectly inelastic, then everyone would pay in order to avoid the BSA enforcement tax, and the rate of software piracy would be zero. Since the piracy rate is non-zero, the original assertion is false. (Argument courtesy of the Wikipedia article on price elasticity of demand.)

Back when I ran Bynet Software Systems, we were the first Microsoft Back Office/Windows NT distributor in Israel. I had just left Intel, where we had negotiated a deal with Microsoft that allowed every employee to make a copy of MS Office for home use. Back in 1997, after the Windows NT launch, the demand for NT was almost totally inelastic – Not There, Nice Try, WNT is VMS + 1, and so on. We could not give the stuff away in the first year. Customers were telling us that they would never leave Novell Netware. Never. But NT got better from release to release, the big Microsoft marketing machine got behind the product, and after two years of struggle selling retail boxes and MLP for NT, demand picked up.

Realizing that there IS price elasticity of demand for software, Microsoft dropped retail packaging and moved to OEM licensing – initially distributing OEM licenses via their two-tier distribution channel, and later cutting out the channel entirely and dealing directly with computer vendors like HP, Dell and IBM for OEM licenses of NT, 2000, XP, 2003 and so on. Vista continued this marketing strategy, and most Vista sales were pre-installed hardware rather than retail boxes. After Windows 7 was released, users upgraded en masse, proving once again the elasticity of demand for a good product.

Microsoft (a major stakeholder in the BSA) probably doesn’t have a major piracy problem with operating system sales. Let’s run some numbers. In 2008, Microsoft Windows Vista was selling at a run rate of about 9 million units a month. Microsoft’s June 2008 quarterly revenue was $15.8BN. Single-unit OEM pricing for a Windows operating system is about $80, and in a volume deal maybe $20 – let’s assume an average of $50 per OEM license. This means the operating system accounts for about 50 × 3 × 9 / 15,800 = 8.5% of Microsoft quarterly revenue.
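That back-of-the-envelope estimate, as code (the ×3 works out only if the 9 million units are read as a monthly figure, and the $50 average OEM price is, as above, an assumption):

```python
# What share of Microsoft's quarterly revenue comes from OEM OS licenses?
avg_oem_price = 50          # assumed average $ per OEM license
units_per_month = 9e6       # assumed OS licenses shipped per month
quarterly_revenue = 15.8e9  # Microsoft revenue, June 2008 quarter

os_revenue = avg_oem_price * units_per_month * 3  # one quarter = 3 months
share = os_revenue / quarterly_revenue
print(f"OS share of revenue: {share:.1%}")  # → 8.5%
```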

The BSA Global Piracy Study states that the “median piracy rate is down one percentage point from last year”. One point of piracy against a product line that is only 8.5 percent of revenue is meaningless for Microsoft – in dollar terms, the BSA’s work to reduce piracy is less significant than the 7 percent drop in the US dollar in 2009.

Microsoft might have a problem with its cash cow – Microsoft Office. Microsoft Office 2007 retails for $450 but is available under an academic license for less than $100. Open Office 2.4 runs just fine on Windows 7 and XP and retails for $0. At those prices, sizable numbers of users are simply sliding down the elasticity curve – calling into serious question the IDC/BSA statistics on software piracy.

But there is more to software piracy than providing software at a reasonable price. In poor areas of the world – assuming the BSA’s efforts at combating software piracy succeed – only the very rich would have access to applications like Microsoft Office. Middle and lower class people would never get the opportunity to become MS Office-literate, because the prices would be too high. For that I have only three words: download Open Office – the free and open productivity suite.

Finally – I can only anonymously quote a senior Microsoft executive who told me, off the record, a number of years ago that Microsoft didn’t mind people copying the software and using a crack, because it was a good way of introducing new users to the technology and inducing them to buy the new, improved and supported release a year or two later.


Why Rich Web 2.0 may break the cloud

There are some good reasons why cloud computing is growing so rapidly.

First of all there are the technology enablers: bandwidth and computing power are cheap, and software development is more accessible than ever. Small software teams can develop great products and distribute them worldwide, instantly.

But cloud computing goes beyond supply-side economics and directly to the heart of the demand-side – the customer who consumes IT.

Consuming computing as a utility simplifies life for a business. It’s easy to understand (unlike data security technology) and its economic benefit is easy to measure (unlike governance, risk and compliance activities).

Cloud computing is more than an economic option; it’s also a personal option. It is an interesting, almost revolutionary consumer alternative to internal IT systems because of its low cost and service utility model.

Current corporate IT operations provide services to captive “users” and empower management (historically, information technology has its roots in MIS – management information systems). When IT vendors go to market, they go to the CxO executives. All the IT sales training and CIO strategies are based on empowering management and being peers in the boardroom. Sell high, don’t sell low – after all, employees don’t sign checks.

But cloud computing is changing the paradigm of top-down, management-board decision-based IT. If you are a sales professional and need a new application for your business unit,  you can acquire the application like a smart phone and a package of minutes. Cloud computing is a service you can buy without a corporate signature loop.

An employee in a remote sales office can sign up for Salesforce.com ($50/month for 5 sales people) or Google Apps (free up to 50 users) and manage software development on github.com (free for Open Source).

So far – that’s the good news. But in the cloud of rich Web 2.0 application services, we are not in Kansas anymore, and there is a very good reason to be worried: with all the expertise of cloud security providers, the Web 2.0 service they provide is only as secure as the application software itself.

The current rich Web 2.0 application development and execution model is broken.

Consider that a Web 2.0 application has to serve both browsers and smart phones. It’s based on a heterogeneous server stack of five to seven layers (database, database connectors, middleware, scripting languages like PHP, Java and C#, application servers, web servers, caching servers and proxy servers). On the client side there is an additional heterogeneous stack of HTML, XML, Javascript, CSS and Flash.

On the server-side, we have

  • Several languages (PHP, SQL, tcsh, Java, C/C++, PL/SQL)
  • Lots of interface methods (hidden fields, query strings, JSON)
  • Server-side database management (MySQL, MS SQL Server, Oracle, PostgreSQL)

On the client side, we have

  • Several languages (Javascript, XML, HTML, CSS, Java, ActionScript)
  • Lots of interface methods (hidden fields, query strings, JSON)
  • Local data storage – often duplicating session and application data stored on the server data tier.

A minimum of two languages on the server side (PHP, SQL) and three on the client side (Javascript, HTML, CSS) turns developers into frequent searchers for answers on the Internet (many of which are incorrect), driving up the frequency of software defects relative to a single-language development platform, where the team has a better chance of attaining maturity and proficiency. More bugs mean more security vulnerabilities.

Back-end database servers interfaced to front-end scripting languages like C# and PHP come with built-in vulnerability to attacks on the data tier via the interface.
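The canonical example of that interface vulnerability is SQL injection. The sketch below uses Python and sqlite3 rather than the PHP/C# stacks discussed above, but the point is stack-independent: queries built by string concatenation execute attacker-supplied SQL, while bound parameters treat the input as plain data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

user_input = "x' OR '1'='1"  # a typical injection payload

# VULNERABLE: the payload closes the string literal and adds an OR clause,
# so the query matches every row in the table.
rows_vuln = conn.execute(
    "SELECT * FROM users WHERE name = '" + user_input + "'").fetchall()
print(len(rows_vuln))  # 2 – the injection leaked all users

# SAFE: the driver binds the value as data, not as SQL text.
rows_safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()
print(len(rows_safe))  # 0 – no user is literally named "x' OR '1'='1"
```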

But the biggest vulnerability of rich Web 2.0 applications is that  message passing is performed in the UI in clear text – literally inviting exploits and data leakage.

The multiple interfaces, clear-text message passing and the lack of a solid understanding of how the application will actually behave in the wild guarantee that SQL injection, Web server exploits, JSON exploits, CSS exploits and application design flaws that enable attackers to steal data will continue to star in today’s headlines.

Passing messages between remote processes in the UI is a really bad idea, but the entire rich Web 2.0 execution model is based on this really bad idea.

Ask a simple question: how many ways are there to pass an array of search strings from a browser client to a Web server? At least two – comma-delimited strings or JSON-encoded arrays. Then ask another question: do Mozilla (Firefox), Webkit (Chrome) and Microsoft IE8 treat client data transfer in a uniform, vendor-neutral, standard way? Of course not. The list of Microsoft IE incompatibilities and different interpretations of W3C standards is endless. Mozilla and Webkit transmit UTF-8 url-encoded data as-is in a query string sent to the server, but Microsoft IE8 takes UTF-8 data in the query string and converts it to ? (yes, question marks) in an XHR transaction unless the data has been previously uri-encoded. Are browser incompatibilities a source of application bugs? Do those bugs lead to software security vulnerabilities? Definitely.
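The two array encodings mentioned above, and the need to uri-encode UTF-8 data explicitly, can be sketched in Python standing in for the client-side Javascript (the `terms` parameter name is made up for the example):

```python
import json
from urllib.parse import parse_qs, quote

terms = ["naïve", "café"]  # UTF-8 search strings to send to the server

# Option 1: comma-delimited string – ambiguous if a term contains a comma.
q1 = "terms=" + quote(",".join(terms))
print(q1)  # terms=na%C3%AFve%2Ccaf%C3%A9

# Option 2: JSON-encoded array – unambiguous, but the server must parse JSON.
q2 = "terms=" + quote(json.dumps(terms, ensure_ascii=False))

# The server side has to undo exactly what the client did – and both ends
# have to agree, browser quirks notwithstanding.
decoded = json.loads(parse_qs(q2)["terms"][0])
print(decoded == terms)  # True
```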

So, it’s really easy to develop cool Web 2.0 applications for seeing who’s hot and who’s not. It’s also cheap to deploy your totally-cool social networking application on a shoestring budget. Facebook started with a budget of $9,000 and so can you.

But, it’s also totally easy to hack that really cool rich Web 2.0 application, steal personal data and crash the system.

A standard answer to the cloud security challenge is writing the security into the contract with the cloud service provider.

Consider, however, who the customer of that cool social media application running in the cloud on some IaaS (Infrastructure as a Service) really is. If you are a user of a cool new free application, you cannot negotiate or RFP the security issues away, because you are not the customer. You generate content for the advertisers, who are the real customers.

With a broken development and execution model for rich Web 2.0 applications, the cloud computing model of software as a service utility is not sustainable for all but the largest providers like Facebook and Salesforce.com.   The cost of security is too high for the application provider and the risk of entrusting valuable business IP  and sensitive customer data to the cloud is unreasonable. Your best option is to hope that your cool Web application will succeed small-time, make you some cash and enable you to fly under the radar with a minimal attack surface.

Like your first girlfriend told you – it’s not you, it’s me.

It’s not the IT infrastructure, it’s the software.


Government Agencies Need to Comply with White House Directive to Keep WikiLeaks Documents Off of Their Networks

Yes – there apparently is such a directive to keep WikiLeaks documents off Federal networks, issued by the White House Office of Management & Budget on the treatment of classified documents.

WASHINGTON, Nov 29 (Reuters) – The United States said on Monday that it deeply regretted the release of any classified information and would tighten security to prevent leaks such as WikiLeaks’ disclosure of a trove of State Department cables.

More than 250,000 cables were obtained by the whistle-blower website and given to the New York Times and other media groups, which published stories on Sunday exposing the inner workings of U.S. diplomacy, including candid and embarrassing assessments of world leaders.

The U.S. Justice Department said it was conducting a criminal investigation of the leak of classified documents and the White House, State Department and Pentagon all said they were taking steps to prevent such disclosures in future.

While Secretary of State Hillary Clinton said she would not comment directly on the cables or their substance, she said the United States would take aggressive steps to hold responsible those who “stole” them.

In the directive, federal agencies were informed that employees and federal contractors must avoid viewing and/or downloading classified documents that have been leaked via WikiLeaks. Because the information on WikiLeaks is still classified, even though it is in the public domain, a federal employee electronically viewing the information from, or downloading it to, devices connected to unclassified networks “risks that material still classified will be placed on non-classified systems”.

“NOTICE TO EMPLOYEES AND CONTRACTORS CONCERNING SAFEGUARDING OF CLASSIFIED INFORMATION AND USE OF GOVERNMENT INFORMATION TECHNOLOGY SYSTEMS”, Office of Management and Budget, December 3, 2010.

Data security vendor Fidelis Security Systems has announced that it will provide policies in its network DLP product, Fidelis XPS, to help ensure that employees cannot view or download the classified documents.

Fidelis XPS is extremely powerful network DLP technology for high-speed (in excess of 2.5Gbps) content interception and real-time analysis of data entering or leaving a network. But with all due respect to the power of network DLP, the White House directive is nonsense. It is security theater, not a security countermeasure – designed to show that the administration is “doing something”.

The directive is nonsense for a number of reasons:

a) Requiring employees and federal contractors to avoid viewing and/or downloading classified documents that have been leaked via WikiLeaks is like saying, “well, you will have to disconnect yourself from the Internet, from Facebook, from Gmail and from your smart phone”. It’s not a practical strategy, since it’s impossible to enforce.

b) The network vector is almost certainly not how the information was leaked, which means that network DLP solutions are not an appropriate countermeasure against WikiLeaks. Releasing custom network DLP policies for WikiLeaks is a crude sort of link-baiting – and misdirected, since Federal decision makers don’t evaluate data security technology using social media like Facebook.

The WikiLeaks documents are provided by trusted insiders who have motive (dislike of Obama or Clinton), means (physical, electronic or social access) and opportunity (no one is watching). There is little utility (besides appearing to be doing something) in installing network DLP technology to prevent employees from viewing or downloading them.

c) And finally, it’s nonsense because the OMB directive talks about viewing and downloading documents, not about leaking.

If the White House is serious about preventing more leaks they should start by firing Secretary Clinton.

Then again – perhaps the WikiLeaks documents were all leaked under tacit direction from the White House. Since President Obama has a pattern of sticking it to US friends (Israel, the Czech Republic, Poland), whatever embarrassment the cables might cause friendly allies is more than worth the price of issuing a worthless OMB directive.
