Tag Archives: ethics


Procedures are not a substitute for ethical behavior

Are procedures a substitute for responsible and ethical behavior?

The behavior of former Secretary of State (and Presidential race loser) Hillary Clinton is an important example of how feeling entitled is not the exclusive domain of under-20-somethings. When we do a threat analysis of medical devices, we try to look beyond the technical security countermeasures and dive into the human factors of employees and managers of the organization.

Leadership from the front trumps security technology.

President Obama’s notion of leading from behind is problematic in the data security and governance space – leadership is about leading from the front.

President Obama’s weak position on enforcing data security and privacy in his administration (Snowden, Clinton and NSA) set a poor example that will take years to undo and probably cost Hillary Clinton the election.

In the business environment, management leadership from the front on data security and privacy is a more effective (as in cheaper and stronger) countermeasure than technology when it comes to mitigating trusted insider threats.

In the family environment, we traditionally see parents as responsible for taking a leadership position on issues of ethics and responsible behavior.

Has mobile changed this?

Sprint announced new services that allow parents to set phone use limits by time of day or week and to see their children’s daily calls, text messaging and application activity. Sprint Mobile Controls, powered by Safely, a division of Location Labs, lets parents see rich graphical representations of how their family calls, texts and uses applications, and lock phones remotely at specific times.

For example:

  • Seeing who your son or daughter has been calling or texting recently – and how often.
  • Establishing an allowed list of phone numbers from which your child can receive a call or text.
  • Seeing a list of your child’s contacts with an associated picture ranked by overall texting and calling activity.
  • Viewing what apps your child is downloading to their phone.
  • Choosing up to three anytime apps that your child can use when their device is locked.
  • Allowing your child to override phone restrictions in case of an emergency.
  • Setting alert notifications for new contacts, or School Hours and Late Night time periods.
  • Setting Watchlist contacts: Receive alert notifications when your child communicates with a Watchlist contact.

This seems like a similar play to product and marketing initiatives by credit card companies to control children’s use of credit cards with prepaid cards like the Visa Buxx, except that in Visa’s case the marketing message is education in addition to parental control. Visa Buxx benefits for parents and teens include:

  • Powerful tool to encourage financial responsibility
  • Convenient and flexible way to pay
  • Safer than cash
  • Parental control and peace of mind
  • Wide acceptance—everywhere Visa debit cards are welcome

Visa Buxx was introduced almost 10 years ago. I don’t have any data on how much business the product generates for card issuers, but fast-forward to December 2011 and the message of responsibility has given way to parental control in the mobile market:

In the case of mobile phones, I can see the advantage of a home privacy and security product. From Sprint’s perspective, controlling teens is a big untapped market. Trefis (the online site that analyzes stock behavior by product lines) has aptly called it “Sprint Targets Burgeoning Teen Market with Parents Playing Big Brother”.

The teen market, consisting of those in the 12 to 17 year age group, is plugged into cellular devices and plans to a much greater extent than you might imagine. According to a Pew Internet Research study, more than 75% of this group owns a wireless phone. This isn’t news to Sprint Nextel (NYSE: S) or mobile phone competitors such as Nokia (NYSE:NOK), AT&T (NYSE:T) and Verizon (NYSE:VZ).

I do not believe that technology is a replacement for education.

It will be interesting to track how well Sprint does with their teen privacy and security product and if parents buy the marketing concept of privacy controls as a proxy for responsible behavior.


Why data leaks

The 6 key business requirements for protecting patient data in networked medical devices and EHR systems:

  1. Prevent data leakage of ePHI (electronic protected health information) directly from the device itself, the management information system and/or the hospital information system interface. Data loss can be prevented directly using network DLP technology from companies like Websense or Fidelis Security.
  2. Ensure availability of the medical device or EHR application. When the application goes offline, it becomes easier to attack it via maintenance interfaces and technician and super-user passwords, to copy data from backup devices, or to access databases directly while the device is in maintenance mode.
  3. Ensure integrity of the data stored in the networked medical device or EHR system. This is really the ABC of information security, but if you do not have a way to detect missing or manipulated records in your database, you should see this as a wake-up call, because if you do get hacked, you will not know about it (see the sketch after this list).
  4. Ensure that a networked or mobile medical device cannot be exploited by malicious attackers to cause damage to the patient
  5. Ensure that a networked or mobile medical device cannot be exploited by malicious attackers to cause damage to the hospital enterprise network
  6. Ensure that data loss cannot be exploited by business partners for financial gain.   The best defense against data loss is DLP – data loss prevention since it does not rely on access control management.
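As a concrete illustration of point 3, here is a minimal sketch, not tied to any particular EHR or database product, of one way to detect manipulated records: keep an HMAC signature per record under a key stored outside the database, and recompute it in a periodic job. The record fields and key handling below are assumptions for illustration only.

```python
import hmac
import hashlib

# Hypothetical integrity key: in practice this would live in an HSM or a
# secrets store, never in the database next to the records it protects.
INTEGRITY_KEY = b"replace-with-a-managed-secret"

def record_signature(record: dict) -> str:
    """Compute an HMAC-SHA256 over a canonical rendering of a record."""
    canonical = "|".join(f"{k}={record[k]}" for k in sorted(record))
    return hmac.new(INTEGRITY_KEY, canonical.encode("utf-8"), hashlib.sha256).hexdigest()

def verify_records(records: list[dict]) -> list[str]:
    """Return the IDs of records whose stored signature no longer matches."""
    tampered = []
    for rec in records:
        rec = dict(rec)                      # work on a copy
        stored = rec.pop("signature")
        if not hmac.compare_digest(stored, record_signature(rec)):
            tampered.append(rec["record_id"])
    return tampered

# A nightly job would pull ePHI rows and alert on any mismatch; gaps in a
# known record-ID sequence would be flagged the same way.
sample = {"record_id": "12345", "patient_id": "P-001", "diagnosis": "I10"}
sample["signature"] = record_signature(dict(sample))
print(verify_records([sample]))              # [] means nothing was tampered with
```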

Why does data leak?

Just like theft, data is leaked or stolen because it has value, otherwise the employee or contractor would not bother.  There is no impact from leakage of trivial or universally available information.  Sending a weather report by mistake to a competitor obviously will not make a difference.

The financial impact of a data breach is directly proportional to the value of the asset. Imagine an insurance company obtaining PHI under false pretenses, discovering that the patient had been mistreated, and suppressing the information.  The legal exposure could be in the millions.  Now consider a data leakage event of patient names without any clinical data – the impact is low, simply because names of people are public domain and without the clinical data, there is no added value to the information.

Why people steal data

The key attack vector for a data loss event is people, often business partners working with inside employees. People handle electronic data and make mistakes or do not follow policies. People are increasingly conscious that information has value: all information has some value to someone, and that someone may be willing to pay or return a favor. This is an ethical issue which is best addressed by direct managers leading from the front and setting a personal example of ethical behavior.

People are tempted or actively encouraged to expose leaked/lost data – consider Wikileaks and data leakage for political reasons as we recently witnessed in Israel in the Anat Kamm affair.

People maintain information systems and make mistakes, leave privileged user names on a system or temporary files with ePHI on a publicly available Windows share.

APT (Advanced Persistent Threat) attacks

People design business processes and make mistakes. Creating a business process for customer service where any customer service representative can see any customer record creates a vulnerability that can be exploited by malicious insiders, or by attackers using APT attacks that target a particular individual in a particular business unit – as seen in the recent successful APT attack on RSA, which targeted an HR employee with an Excel worksheet containing malware that enabled the attackers to steal SecurID token data and then use the stolen tokens to attack Lockheed Martin.

According to Wikipedia, APT attacks utilize traditional attack vectors such as malware and social engineering, but also extend to advanced techniques such as satellite imaging. An APT is a low-and-slow attack, designed to go undetected. There is always a specific objective behind it, rather than the chaotic and disorganized attacks of script kiddies.


Why big data for healthcare is dangerous and wrong

The McKinsey Global Institute (MGI) recently published a report entitled Big data: The next frontier for innovation, competition, and productivity.

The MGI report on big data is no more than a lengthy essay in fallacies and inflated hyperbole, resting on faulty assumptions, lacking evidence for its claims and ignoring the two most important stakeholders of healthcare, namely doctors and patients.

They just gloss over the security and privacy implications of putting up a big, big target with a sign that says “Here is a lot of patient healthcare data – please come and steal me”.

System efficiency does not improve patient health

In health care, big data can boost efficiency by reducing systemwide costs linked to undertreatment and overtreatment and by reducing errors and duplication in treatment. These levers will also improve the quality of care and patient outcomes.

To calculate the impact of big-data-enabled levers on productivity, we assumed that the majority of the quantifiable impact would be on reducing inputs.

We held outputs constant—i.e., assuming the same level of health care quality. We know that this assumption will underestimate the impact as many of our big-data-enabled levers are likely to improve the quality of health by, for instance, ensuring that new drugs come to the market faster…

They don’t know that.

The MGI report does not offer any correlation between reduction in systemwide costs and improving the quality of care of the individual patient.

The report deals with the macroeconomics of the pharmaceutical and healthcare organization industries.

In order to illustrate why systemwide costs are not an important factor in the last mile of healthcare delivery, let’s consider the ratio of system overhead to primary care teams at Kaiser Permanente, one of the largest US HMOs. According to its 2010 annual report, KP has 167,000 employees, of whom 16,000 are doctors and 47,000 are nurses.

Primary care teams account for only 20 percent of KP head-count. Arguably, big-data analytics might enable KP management to deploy services in a more effective way, but it does virtually nothing for the 20 percent of the headcount who actually encounter patients on a day-to-day basis.

Let’s not improve health, let’s make it cheaper to keep a lot of people sick

Note the sentence “assuming the same level of health care quality”. In other words, we don’t want to improve health, we want to reduce the costs of treating obese people who eat junk food and ride in cars instead of walking, rather than fixing the root causes. Indeed, MGI states later in their report:

Some actions that can help stem the rising costs of US health care while improving its quality don’t necessarily require big data. These include, for example, tackling major underlying issues such as the high incidence and costs of lifestyle and behavior-induced disease.

Let’s talk pie in the sky about big data and ignore costs and ROI

…the use of large datasets has the potential to play a major role in more effective and cost-saving care initiatives, the emergence of better products and services, and the creation of new business models in health care and its associated industries.

Being a consulting firm, MGI stays firmly seated on the fence and only commits itself to fluffy generalities about the potential to save costs with big data. The term ROI (return on investment) is not mentioned even once, because it would ruin their argumentation. As a colleague in the IT division of the Hadassah Medical Organization in Jerusalem told me yesterday, “Hadassah management has no idea of how much storing all that vital-sign data from smartphones will cost. As a matter of fact, we don’t even have the infrastructure to store big data”.

It’s safe to wave a lot of highfalutin rhetoric around about $300BN of value creation (whatever that means) when you don’t have to justify a return on investment or ask grassroots stakeholders whether the research is crap.

MGI does not explain how that potential might be realized. It sidesteps a discussion of the costs of storing and analyzing big data, never asks if big data helps doctors make better decisions and it glosses over low-cost alternatives related to educating Americans on eating healthy food and walking instead of driving.

The absurdity of automated analysis

..we included savings from reducing overtreatment (and undertreatment) in cases where analysis of clinical data contained in electronic medical records was able to determine optimal medical care.

MGI makes an absurd assumption that automated analysis of clinical data contained in electronic medical records can determine optimal medical care.

This reminds me of a desert island joke.

A physicist and economist were washed up on a desert island. They have a nice supply of canned goods but no can-opener. To no avail, the physicist experiments with throwing the cans from a high place in the hope that they will break open (they don’t). The economist tells his friend “Why waste your time looking for a practical solution, let’s just assume that we have a can-opener!”.

The MGI report just assumes that we have a big-data can-opener and that big data can be analyzed to optimize medical care. (By the way, they do not even attempt to offer any quantitative indicators for optimization, like reducing the number of women who come down with lymphedema after treatment for breast cancer, and lymphedema is a pandemic in Western countries, affecting about 140 million people worldwide.)

In Western countries, secondary lymphedema is most commonly due to cancer treatment. Between 38 and 89% of breast cancer patients suffer from lymphedema due to axillary lymph node dissection and/or radiation. See:

  1. Brorson, H., Ohlin, K., Olsson, G., Svensson, B., Svensson, H. (2008). “Controlled Compression and Liposuction Treatment for Lower Extremity Lymphedema”. Lymphology 41: 52-63.
  2. Kissin, M.W., Guerci della Rovere, G., Easton, D. et al (1986). “Risk of lymphoedema following the treatment of breast cancer”. Br. J. Surg. 73: 580-584.
  3. Segerstrom, K., Bjerle, P., Graffman, S., et al (1992). “Factors that influence the incidence of brachial oedema after treatment of breast cancer”. Scand. J. Plast. Reconstr. Surg. Hand Surg. 26: 223-227.

More is not better

We found very significant potential to create value in developed markets by applying big data levers in health care. CER (comparative effectiveness research) and CDS (clinical decision support) were identified as key levers and can be valued based on different implementations and timelines

Examples include joining different data pools as we might see at financial services companies that want to combine online financial transaction data, the behavior of customers in branches, data from partners such as insurance companies, and retail purchase history. Also, many levers require a tremendous scale of data (e.g., merging patient records across multiple providers), which can put unique demands upon technology infrastructures. To provide a framework under which to develop and manage the many interlocking technology components necessary to successfully execute big data levers, each organization will need to craft and execute a robust enterprise data strategy.

The American Recovery and Reinvestment Act of 2009 provided some $20 billion to health providers and their support sectors to invest in electronic record systems and health information exchanges to create the scale of clinical data needed for many of the health care big data levers to work.

Why McKinsey is dead wrong about the efficacy of analyzing big EHR data

  1. The notion that more data is better (the approach taken by Google Health and Microsoft, endorsed by the Obama administration and blindly adopted by MGI in their report).
  2. EHR data is textual and is not organized around the patient’s clinical issues.

Meaningful machine analysis of EHR is impossible

Current EHR systems store large volumes of data about diseases and symptoms in unstructured text, codified using systems like SNOMED-CT1. Codification is intended to enable machine-readability and analysis of records and serve as a standard for system interoperability.

Even if the data were perfectly codified, it is impossible to achieve meaningful machine diagnosis of medical interview data that was uncertain to begin with and was not collected and validated using evidence-based methods.

More data is less valuable for a basic reason

A fundamental observation about utility functions is that their shape is typically concave: Increments of magnitude yield successively smaller increments of subjective value.2

In prospect theory3, concavity is attributed to the notion of diminishing sensitivity, according to which the more units of a stimulus one is exposed to, the less one is sensitive to additional units.

Under conditions of uncertainty in a medical diagnosis process, less information, as long as it is relevant, enables a better and faster decision, since less data processing is required by the human brain.
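A toy numerical illustration of the concavity point (the square-root utility function below is chosen arbitrarily, purely to show the shape of the argument):

```python
import math

def utility(units_of_information: float) -> float:
    # Any concave function illustrates diminishing sensitivity;
    # the square root is used here only for simplicity.
    return math.sqrt(units_of_information)

previous = 0.0
for units in (10, 20, 30, 40, 50):
    gain = utility(units) - previous
    previous = utility(units)
    print(f"{units:>3} units -> marginal gain in subjective value: {gain:.2f}")

# The output shows successively smaller increments (3.16, 1.31, 1.01, ...),
# which is the "each extra unit of data is worth less" point made above.
```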

Unstructured EHR data is not organized around the patient issue

When a doctor examines and treats a patient, he thinks in terms of “issues”, and the result of that thinking manifests itself in planning, tests, therapies, and follow-up.

In current EHR systems, when a doctor records the encounter, he records planning, tests, therapies, and follow-up, but not under a main “issue” entity, since there is no place for it.

The next doctor who sees the patient needs to read about the planning, tests, therapies, and follow-up and then mentally reverse-engineer the process to arrive at which issue is ongoing. Again, he manages the patient according to that issue, and records everything as unstructured text unrelated to the issue itself.

Other actors, such as national registries and extractors of epidemiological data, all go through the same process. They all have their own methods of churning through planning, tests, therapies, and follow-up to reverse-engineer the data and arrive at what the issue is, only to discard it again.

The “reverse-engineering” problem is the root cause for a series of additional problems:

  • Lack of overview of the patient
  • No connection to clinical guidelines, no indication of which guidelines to follow or which have been followed
  • No connection between prescriptions and diseases, except circumstantial
  • No ability to detect and warn for contraindications
  • No archiving or demoting of less important and solved problems
  • Lack of overview of status of the patient, only a series of historical observations
  • In most systems, no search capabilities of any kind
  • An excess of textual data that cannot possibly be read by every doctor at every encounter
  • Confidentiality borders are very hard to define
  • Very rigid and closed interfaces, making extension with custom functionality very difficult
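To make the contrast concrete, here is a minimal sketch of what an issue-centric record could look like. The entity and field names are illustrative only and are not taken from any real EHR standard; the point is that plans, tests, therapies and follow-up hang off an explicit issue instead of being buried in free text.

```python
from dataclasses import dataclass, field

@dataclass
class Issue:
    title: str                    # e.g. "Type 2 diabetes"
    status: str = "active"        # active / resolved / demoted
    plans: list[str] = field(default_factory=list)
    tests: list[str] = field(default_factory=list)
    therapies: list[str] = field(default_factory=list)
    follow_up: list[str] = field(default_factory=list)

@dataclass
class PatientRecord:
    patient_id: str
    issues: list[Issue] = field(default_factory=list)

    def overview(self) -> list[str]:
        """The 'lack of overview' problem goes away: active issues are first-class objects."""
        return [i.title for i in self.issues if i.status == "active"]

record = PatientRecord("P-001", issues=[
    Issue("Type 2 diabetes", plans=["diet counselling"], tests=["HbA1c"],
          therapies=["metformin"], follow_up=["3-month review"]),
    Issue("Sprained ankle", status="resolved"),
])
print(record.overview())   # ['Type 2 diabetes'] - no reverse-engineering needed
```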

Summary

MGI states that their work is independent and has not been commissioned or sponsored in any way by any business, government, or other institution. True, but McKinsey does have consulting gigs with IBM and HP, which have vested interests in selling technology and services for big data.

The analogies used in the MGI report and their tacit assumptions probably work for retail in understanding sales trends of hemlines and high heels but they have very little to do with improving health, increasing patient trust and reducing doctor stress.

The study does not cite a single interview with a primary care physician or even a CEO of a healthcare organization that might support or validate their theories about big data value for healthcare. This is shoddy research, no matter how well packaged.

The MGI study makes cynical use of “framing” in order to influence the readers’ perception of the importance of their research. When they cite a large number like $300BN, readers assume that the impact of big data is, well, big. They don’t pay attention to the other stuff, like “well, it’s only a potential saving” or “we never considered whether primary care teams might benefit from big data” (they don’t).

At the end of the day, $300BN in value from big data healthcare is no more than a round number. What we need is less data and more meaningful relationships with our primary care teams.

1 http://www.nlm.nih.gov/research/umls/Snomed/snomed_main.html

2 Current Directions in Psychological Science, Vol 14, No. 5 http://faculty.chicagobooth.edu/christopher.hsee/vita/Papers/WhenIsMoreBetter.pdf


Data Classification and Controls Policy for PCI DSS

Do you run an e-commerce site?

Are you sure you do not store any payment card data or PII (personally identifiable information) in some MySQL database?

The first step in protecting credit card and customer data is to know what sensitive data you really store, classify what you have and set up the appropriate security controls.
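A minimal sketch of that discovery step, assuming the data sits in text fields you can already query: the helper below only shows the matching logic (a digit pattern plus the Luhn check used by payment card numbers). Iterating over your MySQL tables and columns, log files and backups is left out, and the field contents in the usage line are invented.

```python
import re

def luhn_ok(candidate: str) -> bool:
    """Return True if the digit string passes the Luhn check used by payment cards."""
    digits = [int(d) for d in candidate][::-1]
    total = 0
    for i, d in enumerate(digits):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

PAN_PATTERN = re.compile(r"\b(?:\d[ -]?){13,19}\b")

def find_pan_like_values(text: str) -> list[str]:
    """Flag substrings that look like primary account numbers (PANs)."""
    hits = []
    for match in PAN_PATTERN.finditer(text):
        digits = re.sub(r"[ -]", "", match.group())
        if 13 <= len(digits) <= 19 and luhn_ok(digits):
            hits.append(digits[:6] + "..." + digits[-4:])  # never log the full PAN
    return hits

# Example usage against a hypothetical free-text order-notes field:
print(find_pan_like_values("customer called, card 4111 1111 1111 1111, exp 12/14"))
```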

Here is a policy for any merchant or payment processor who wants to achieve and sustain PCI DSS 2.0 compliance and protect customer data.

I. Introduction

You need to identify and apply controls to the data types identified in this policy. The data types identified below are considered digital assets and are to be controlled and managed as specified in this policy while retained or processed by the organization. You should identify and inventory all systems that store or process this information and audit these systems on a semi-annual basis for effectiveness of controls to manage the data types.

II. Background

The Payment Card Industry (PCI) Security Standard is a requirement for all financial institutions and merchants that use or process credit card information. This security standard is designed to help protect the integrity of the credit card systems and to help mitigate the risk of fraud and identity theft to the individuals who use credit cards to make purchases for goods and services.

The PCI Security Standard was originally introduced by VISA as the Cardholder Information Security Program (CISP) and specified the security controls for each level of merchant and credit card processor. In 2004 the major brands in the card payment industry agreed to adopt the CISP requirements as a single industry standard in order to reduce the costs of implementation and assessment and increase the rate of adoption. Most organizations were required to meet all requirements of the PCI security standard by June 30th, 2005, and it is now an ongoing compliance process for merchants, payment processors and issuers.

III. General Policy Statement

All Credit Card Information and associated data is company confidential and will not be transmitted over public networks in the clear. Credit Card information can only be transmitted encrypted and only for authorized business purposes to authorized parties that have been approved to receive credit card information.



The valley of death between IT and information security

IT is about executing predictable business processes.

Security is about reducing the impact of unpredictable attacks on your organization.

In order to bridge the chasm, IT and security need to adopt a common goal and a common language – a language of customer-centric threat modelling.

Typically, when a company ( business unit, department or manager) needs a line of business software application, IT staffers will do a system analysis starting with business requirements and then proceed to buy or build an application and deploy it.

Similarly, when the information security group needs an anti-virus or firewall, security staffers will make some requirements, test products, and proceed to buy and deploy or subscribe and deploy the new anti-virus or firewall solution.

Things have changed – both in the IT world and in the security world.

Web 2.0 SaaS (software as a service) offerings (or Web applications in PHP that the CEO’s niece can whip together in a week…) often replace those old structured systems development methodologies. There are, of course, good things about not having to develop a design (like not coming down with an advanced case of analysis paralysis) and iterating quickly to a better product, but the downside of not developing software according to a structured systems design methodology is buggy software.

Buggy software is insecure software. The response to buggy, insecure software is generally doing nothing, or installing a product that is a security countermeasure for the vulnerability (for example, buying a database security solution) instead of fixing the SQL injection vulnerability in the code itself. Then there is lip service to so-called security development methodologies (which, despite their intrinsic value, are often too detailed for practitioners to follow); these are not a replacement for a serious look at business requirements followed by a structured process of implementation.
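To make the “fix the code itself” point concrete, here is a hedged sketch, using Python’s sqlite3 module purely as a stand-in for whatever database layer is actually in use: the vulnerable pattern builds SQL by string concatenation, the fix passes user input as a bound parameter.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE patients (id INTEGER, name TEXT)")
conn.execute("INSERT INTO patients VALUES (1, 'Alice')")

user_input = "1 OR 1=1"  # attacker-controlled value

# Vulnerable: user input is concatenated straight into the SQL text,
# so the WHERE clause becomes 'id = 1 OR 1=1' and returns every row.
vulnerable = conn.execute("SELECT * FROM patients WHERE id = " + user_input).fetchall()

# Fixed: the input is passed as a bound parameter and is never parsed as SQL.
try:
    safe = conn.execute("SELECT * FROM patients WHERE id = ?", (user_input,)).fetchall()
except sqlite3.Error:
    safe = []

print("vulnerable query returned:", vulnerable)  # all rows leak
print("parameterized query returned:", safe)     # no rows (or an error), no injection
```

The same pattern applies whatever the database driver: bind parameters, never concatenate user input into SQL text.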

There is a fundamental divide, a metaphorical valley of death of mentality and skill sets, between IT and security professionals.

  • IT is about executing predictable business processes.
  • Security is about reducing the impact of unpredictable attacks.

IT’s “best practice” security in 2011 is firewall/IPS/AV. Faced with unconventional threats (for example, a combination of trusted contractors exploiting defective software applications, or hacktivists or competitors mounting APT attacks behind the lines), IT management tends to seek a vendor-proposed, one-size-fits-all “solution” instead of performing a first-principles threat analysis and discovering that the problem has nothing to do with malware on the network and everything to do with software defects that may kill customers.

Threat modeling and analysis is the antithesis of installing a firewall, anti-virus or IPS.

Analyzing the impact of attacks requires hard work, hard data collection and hard analysis. It’s not a sexy, fun-to-use, feel-good application like Windows Media Player. Risk analysis may yield results that are not career enhancing, and as the threats get deeper and wider with bigger and more complex systems, the IT-security valley of death deepens and gets harder to traverse.

There is a joke about systems programmers – they have heard that there are real users out there, actually running applications on their systems – but they know it’s only an urban legend. Like any joke, it has a grain of truth. IT and security are primarily systems and procedures-oriented instead of  customer-safety oriented.

Truly – the essence of security is protecting the people who use a company’s products and services. What utility is there in running 24×7 systems that leak 4 million credit cards or developing embedded medical devices that may kill patients?

Clearly – the challenge of running a profitable company that values customer protection must be shouldered by IT and security teams alike.

Around this common challenge, I propose that IT and security adopt a common goal and a common language – a language of customer-centric threat modelling: threats, vulnerabilities, attackers, entry points, assets and security countermeasures. This may be the best, or even the only, way for IT and security to traverse the valley of death successfully.
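One way to give the two teams that shared vocabulary is to make the model explicit as data. The sketch below is illustrative only: the entity names and the simple annual loss expectancy formula are my own simplification, not a standard method, but they show how threats, attackers, entry points and asset values can sit in one structure that both teams can argue over in money terms.

```python
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    value_usd: float          # what losing this asset costs the customer and the business

@dataclass
class Threat:
    name: str
    attacker: str
    entry_point: str
    asset: Asset
    annual_rate: float        # expected occurrences per year
    damage_fraction: float    # fraction of asset value lost per occurrence

    def annual_loss_expectancy(self) -> float:
        return self.annual_rate * self.damage_fraction * self.asset.value_usd

# Hypothetical entries, purely to show the shape of the model:
phi = Asset("Patient health records", value_usd=2_000_000)
threats = [
    Threat("SQL injection on portal", "external attacker", "web app", phi, 0.5, 0.6),
    Threat("Contractor data leak", "trusted insider", "maintenance VPN", phi, 0.2, 0.8),
]

# Rank threats so countermeasure spending can be compared against expected loss.
for t in sorted(threats, key=lambda t: t.annual_loss_expectancy(), reverse=True):
    print(f"{t.name}: ALE = ${t.annual_loss_expectancy():,.0f}")
```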


Ehud Barak, information leaks and political activism

What do Anat Kamm, Ehud Barak and Meir Dagan have in common?

Ehud Barak is the current Israeli Minister of Defense, a former IDF Chief of Staff and a former Prime Minister who led the disastrous withdrawal from Lebanon that fomented Intifada II and then Lebanese War II. Barak is famous for quotes like “If I was a Palestinian, I would also be a suicide bomber” or “If I was an Iranian, I would also build nuclear weapons“.

During her military service as an assistant in the Central Command bureau, Anat Kamm secretly copied over 2,000 classified documents to a CD and leaked them to the Haaretz journalist Uri Blau. Kamm was recently convicted of espionage and leaking confidential information without authorization and was sentenced to 4.5 years in prison after a plea bargain.

Former Mossad chief Meir Dagan has recently voiced unrestrained criticism of the current administration’s defense policy in the service of his political activism; criticism which is supposedly based on his inside knowledge from the Mossad.

Meir Dagan, together with Gen. Gabi Ashkenazi (former chief of staff), Gen. Amos Yadlin (former head of military intelligence), and Yuval Diskin (former head of Shin Bet), opposed an attack on Iran. While in office (they all retired between November 2010 and May 2011), the Gang of Four successfully blocked attempts by Netanyahu and Barak to move forward on the military option.

Of the four, only Dagan has spoken openly, after leaving office, about what he considers to be the folly of an attack on Iran, and has openly criticized Netanyahu and Barak for irresponsibly pushing Israel into an unnecessary war, relying on his former position of responsibility as chief of intelligence as implying that what he said must be true.

It was unclear why Dagan would speak of plans best left undisclosed. Unclear, at least until last week, when Dagan announced his plans for a movement to change the method of Israeli government, leaving his options to enter politics in the future open.

I wish Dagan luck. I’m not happy with his way of publicizing his political activism at the risk of treading the thin line of information leakage. It places him on the same slippery slope as Anat Kamm, who lamely attempted to justify her actions as an act of political protest.

In comparison with Dagan, Barak is circumspect (despite his unfortunate quotes and bad decisions).

In the Israeli daily Ha’aretz, Barak was asked about the possibility of making a decision on attacking Iran:

In my various posts I’ve already seen all the possible permutations, as long as one thing remains constant: the role of the military is to prepare the plans. It is important that the political echelon listen very carefully to what the operational and intelligence echelons have to say, but at the end it is the political echelon that has the responsibility for the decision.
More here on Israeli defense minister Ehud Barak on Iran, U.S., and war

The political power of social media

Clay Shirky writes in Foreign Affairs this week:

Arguing for the right of people to use the Internet freely is an appropriate policy for the United States, both because it aligns with the strategic goal of strengthening civil society worldwide and because it resonates with American beliefs about freedom of expression

By switching from an instrumental to an environmental view of the effects of social media on the public sphere, the United States will be able to take advantage of the long-term benefits these tools promise.

Oooh – I just love this stuff: “resonates with American beliefs” and “environmental view of the effects of social media on the public sphere”.

“Some ideas are so stupid only intellectuals believe them.”
George Orwell

Twitter and Facebook are communication tools. Not values.

It is the height of foolishness to assert that communication tools like Facebook and Twitter are a substitute for values. Sure, they make it easier for 80,000 people to attend demonstrations someone else is funding, but don’t forget the agendas of the people funding the demonstrations.

The US will not be able to “to take advantage of the long-term benefits these tools promise” unless it takes a moral and value position, clearly delineating the basic dos ( for starters – honor your parents, honor freedom of religion) and don’ts (not killing your citizens, not raping your women, not chopping off hands of thieves, not funding Muslim terrorists, not holding the world at gun-point over the price of oil).

There is no evidence that social media changes government policy

Look at Egypt. Look at Israel. Look at Wall Street.

Social media hype is escapism from dealing with fundamental issues

Let’s assume that the US has an agenda and responsibility to make the world a better place.

Green / clean energy.  Healthy people.

I think we can all agree these are good things for the world. Did social media play any kind of role at all in the blunders of the Obama administration in their energy or healthcare initiatives? Does the administration have a good record or a bad record with these initiatives?

Solyndra is an illustration of how a major Obama contributor took half a billion in loan guarantees and walked away without exposure. The factory employed about 150 people and stimulated the pockets of a small number of wealthy people. And do not forget, Solyndra is kid’s stuff compared to the $80 billion in real money that the US government squandered on Afghan electrification projects with no oversight of the cost-plus contractors that delivered zip to Afghanistan.

Mr. Obama and his yea-sayers like Clay Shirky need the highfalutin talk about the importance of social media and free speech to deflect voter attention from the rewards to their campaign contributors, financial service institutions, government contractors and Beltway insiders, and to win the next Presidential election.

Is the objective improving the health of Americans or is the objective giving gifts of $44,000 to US doctors so that they can go out and buy some software from one of the 705 companies that have certified to HHS requirements for e-prescribing? WTF does e-prescription software have to do with treating chronic patients?

Even giving President Obama credit for having some good ideas – once you have a big, centralized, I’ll run everything, decide everything, make everyone comply kind of government – you get all kinds of nonsense like Solyndra, Afghan electrification projects, health care software subsidies and … Bar Lev lines,  multi-billion sheqel security fence projects and the funneling of funds from the PA to Israeli businessmen allied to Israeli ex-generals who sell gasoline to Palestinian terror organizations and security services to Palestinian banks.

In the Middle East – even while vilifying Bush, the Obama administration continues the Bush doctrine of not going after the real bad guys who fund terror (the Saudis), while wasting thousands of American lives (in Iraq and Afghanistan) and blowing over 80 billion dollars in taxpayer money on boondoggles like the Iraqi and Afghan electrification projects.

Obama praise for the Arab Spring is chilling in its double-talk about democracy (just last month in Tunisia) as Libya, Egypt and their neighbors transition into Islamic fundamentalism rule amidst blatantly undemocratic violence.

In Israel, I would not blame any US President for problems of our own making any more than I would credit Facebook with the 2011 Summer of Love on Rothschild, which was no more than an exercise in mass manipulation by professional political lobbyists and people like Dafne Leaf who were too busy with their liberal agendas to serve their country.

Israeli leaders have been on a slippery downhill slope of declining morals since Sabra and Shatila in 1982.

And for that we cannot blame any single President or Prime Minister, any more than we can credit Facebook with remembering friends’ birthdays; we can only blame ourselves for putting up with the lack of values and morals of our leaders.


Why less log data is better

It’s been a couple of weeks since I blogged – I have my head down on a few medical device projects and a big PCI DSS audit, where I’m helping the client improve his IT infrastructure and balance the demands of the PCI auditors.

Last year I gave a talk on quantitative methods for estimating operational risk of information systems at the annual European GRC meeting in Lisbon – you can see the presentation below.

As I noted in my talk, one of the crucial phases in estimating operational risk is data collection: understanding what threats and vulnerabilities you face, and understanding not only what assets you have (digital, human, physical, reputational) but also how much they’re worth in dollars.

Many technology people interpret data collection as some automatic process that reads/scans/sniffs/profiles/processes/analyzes/compresses log files, learning and analyzing the data using automated algorithms like ANN (adaptive neural networks).

The automated log profiling tool will then automagically tell you where you have vulnerabilities and, using “an industry best practice database of security countermeasures”, build you a risk remediation plan. Just throw in a dash of pie charts and you’re good to go with the CFO.

This was in fashion about 10 years ago (Google “automated audit log analysis” and you’ll see what I mean – for example, this reference on automated audit trail analysis). Automated tools are good for getting a quick indication of trends, but they tend to suffer from poor precision and recall, both of which improve rapidly when combined with human eyeballs.
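For what it’s worth, the precision/recall trade-off is easy to quantify once a human has triaged the automated alerts. A minimal sketch with invented numbers:

```python
def precision_recall(true_positives: int, false_positives: int, false_negatives: int):
    """Precision: how many raised alerts were real. Recall: how many real events were caught."""
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    return precision, recall

# Hypothetical week of automated log analysis: 40 alerts raised, a human
# review finds 8 were real incidents and 5 real incidents were missed.
p, r = precision_recall(true_positives=8, false_positives=32, false_negatives=5)
print(f"precision={p:.0%}, recall={r:.0%}")  # precision=20%, recall=62%
```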

The PCI DSS council in Europe (private communication) says that over 80% of the merchants/payment processors with data breaches discovered their data breach 3 months or more after the event. Yikes.

So why does retaining a year or more of log files make sense? Quoting from PCI DSS 2.0:

10.7 Retain audit trail history for at least one year, with a minimum of three months immediately available for analysis (for example, online, archived, or restorable from back-up).

10.7.a Obtain and examine security policies and procedures and verify that they include audit log retention policies and require audit log retention for at least one year.

10.7.b Verify that audit logs are available for at least one year and processes are in place to immediately restore at least the last three months’ logs for analysis.

Wouldn’t it be a lot smarter to say –

10.1 Maintain a 4 week revolving log with real-time exception reports as measured by no more than 5 exceptional events/day.

10.2 Estimate the financial damage of the 5 exceptional events in a weekly half-hour meeting between the IT manager, finance manager and security officer.

10.3 Mitigate the most severe threat, as measured by implementing 1 new security countermeasure/month (including the DLP and SIEM systems you bought last year but haven’t implemented yet).
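As a rough illustration of what proposed requirement 10.1 could look like in code: the 4-week window and the 5-events/day threshold come from the proposal above, while the event format and everything else is an assumption for the sake of the sketch.

```python
from collections import deque
from datetime import datetime, timedelta

RETENTION = timedelta(weeks=4)
DAILY_EXCEPTION_LIMIT = 5

log_window: deque = deque()   # (timestamp, severity, message) tuples, oldest first

def add_event(ts: datetime, severity: str, message: str) -> None:
    """Append an event and drop anything older than the 4-week revolving window."""
    log_window.append((ts, severity, message))
    cutoff = ts - RETENTION
    while log_window and log_window[0][0] < cutoff:
        log_window.popleft()

def daily_exception_report(day: datetime) -> list:
    """Return the day's exceptional events; more than 5 means the process is out of control."""
    exceptions = [e for e in log_window
                  if e[0].date() == day.date() and e[1] in ("ERROR", "CRITICAL")]
    if len(exceptions) > DAILY_EXCEPTION_LIMIT:
        print(f"ALERT: {len(exceptions)} exceptional events on {day.date()}, "
              f"limit is {DAILY_EXCEPTION_LIMIT}")
    return exceptions

# Example usage: the kind of event the weekly half-hour meeting would review.
now = datetime(2011, 12, 1, 9, 0)
add_event(now, "ERROR", "failed admin login from unknown IP")
print(len(daily_exception_report(now)))  # 1
```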


I’m a great fan of technology, but the human eye and brain do it best.


The ethical aspects of data security

Ethical breaches or data breaches.

I was standing in line at Ben Gurion airport, waiting for my bag to be x-rayed. A conversation started with a woman standing next to me in line. The usual sort – “Where are you traveling and what kind of work do you do?”. I replied that I was traveling to Warsaw and that I specialize in data security and compliance – helping companies prevent trusted insider theft and abuse of sensitive data.

She said, “well sure, I understand exactly what you mean – you help enforce ethical behavior of people in the organization”.

I stopped for a moment and asked her, hold on – “what kind of business are you in”? She said – “well, I worked in the GSS for years training teams tasked with protecting high echelon politicians and diplomats. I understand totally the notion of enforcing ethical behavior”. And now? I asked. Now, she said, ” I do the same thing, but on my own”.

Let’s call my new friend “Sarah”.

Sarah’s ethical approach was, for me, a breath of fresh air. Until that point, I had defined our data security practice as an exercise in data collection, risk analysis and implementation of the appropriate technical security countermeasures to reduce the risk of data breach and abuse. Employees, competitors and malicious outsiders are all potential attackers. The objective is to implement a cost-effective portfolio of data security countermeasures – policies and procedures, software security assessments, network surveillance, data loss prevention (DLP) and encryption at various levels in the network and applications.

I define security as protecting information assets.

Sarah defines security as protecting ethical behavior.

In my approach to data security, employee behavior is an independent variable, something that might be observed but certainly, not something that can be controlled. Since employees, contractors and business partners tend to have their own weaknesses and problems that are not reported on the balanced score card of the company, my strategy for data security posits that it is more effective to monitor data than to monitor employees and prevent unauthorized transfer or modification of data instead of trying to prevent irrational or criminal behavior of people who work in the extended enterprise.

In Sarah’s approach to data security, if you make a set of rules and train and enforce ethical behavior with good management, sensing and a dosage of fear in the workplace; you have cracked the data security problem.

So – who is right here?

Well – we’re both right, I suppose.

The answer is that without asset valuation and analysis of asset vulnerabilities, protecting a single asset class (human resources, data, systems or network) while ignoring others, may be a mistake.

Let’s examine two specific examples in order to test the truth of this statement.

Consider a call center with 500 customer service representatives. They use a centralized CRM application, they have telephones and email connectivity. Each customer service representative has a set of accounts that she handles. A key threat scenario is leaking customer account information to unauthorized people – private investigators, reporters, paparazzi etc… The key asset is customer data but the key vulnerability is the people that breach ethical behavior on the way to breaching customer data.

In the case of customer service representatives breaching customer privacy, Sarah’s strategy of protecting ethical behavior is the best security countermeasure.

Now, consider a medical device company with technology that performs imaging analysis and visualization. The company deploys MRI machines in rural areas and uses the Internet to provide remote expert diagnosis for doctors and patients who do not have access to big city hospitals. The key asset transmitted from the systems for remote diagnosis is PHI (protected health information), and the key vulnerabilities are in the network interfaces, the application software and the operating systems that the medical device company uses.

In the case of remote data transfer and distributed/integrated systems, a combined strategy of software security, judicious network design and operating system selection (don’t use Microsoft Windows…) is the correct way to protect the data.

My conversation with Sarah at the airport gave me a lot of food for thought.

Data loss prevention (DLP) technology is great and ethical employee behavior is crucial, but they need to work hand in glove.

Where there are people, there is a need to mandate, monitor and reinforce ethical behavior using  a clearly communicated corporate strategy with employees and contractors. In an environment where users require freedom and flexibility in using applications such as email and search, the ethical behavior for protecting company assets starts with company executives who show from personal example that IT infrastructure is to be used to further the company’s business and improving customer service and not for personal entertainment, gain or gratification.

It’s the simple things in life that count.


The economics of software piracy

A year ago at this time it was World Cup season, and Mondial fever put a lot of regional conflicts on the back burner for a month – not to mention putting a dent in a lot of family budgets (husbands buying the latest 60-inch Sony Bravia and wives on retail therapy while the guys watched football).

It is ironic that the FIFA 2010 World Cup computer game doesn’t run on Ubuntu. It would have been a huge marketing coup and poetic justice if the game software had been released for Ubuntu under a GPL license.

This got me thinking about open source licensing and its advantages for developing countries, and it really got my hackles up after reading the Seventh Annual BSA and IDC Global Software Piracy Study, which screams: Software Theft Remains Significant Issue Around the World.

The rate of global software piracy climbed to 43 percent in 2009. This increase was fueled in large part by expanding PC sales in fast-growing, high-piracy countries and increasing sales to consumers — two market segments that traditionally have higher incidents of software theft. In 2009, for every $100 worth of legitimate software sold, an additional $75 worth of unlicensed software made its way onto the market. There was some progress in 2009 — software rates actually dropped in almost half of the countries examined in this year’s study.

Given the global recession, the software piracy picture could have taken a dramatic turn for the worse. But progress is being outstripped by the overall increases in piracy globally — and highlights the need for governments, law enforcement and industry to work together to address this vital economic issue.
Below are key findings from this year’s study:

  • Commercial value of software theft exceeds $50 billion: the commercial value of unlicensed software put into the market in 2009 totalled $51.4 billion.
  • Progress on piracy held through the recession: the rate of PC software piracy dropped in nearly half (49%) of the 111 economies studied, remained the same in 34% and rose in 17%.
  • Piracy continues to rise on a global basis: the worldwide piracy rate increased from 41% in 2008 to 43% in 2009; largely a result of exponential growth in the PC and software markets in higher piracy, fast growing markets such as Brazil, India and China.

I would not take the numbers IDC and BSA bring at face value. The IDC/BSA estimates are guesses multiplied several times. They start off by assuming that each unit of copied software represents a direct loss of sale for the software vendor – patently a false assertion.

If it were true, then the demand for software would be independent of price and perfectly inelastic.

A drop in price usually results in an increase in the quantity demanded by consumers. That’s called price elasticity of demand. The demand for a product becomes inelastic when the demand doesn’t change with price. A product with no competing alternative is generally inelastic. Demand for a unique antibiotic, for example is highly inelastic. A patient will pay any price to buy the only drug that will kill their infection.

If software demand were perfectly inelastic, then everyone would pay in order to avoid the BSA enforcement tax and the rate of software piracy would be 0. Since the piracy rate is non-zero, the original assertion is false. (Argument courtesy of the Wikipedia article on price elasticity of demand.)
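A quick worked illustration of the elasticity argument, with invented numbers: price elasticity of demand is the percentage change in quantity demanded divided by the percentage change in price, and perfectly inelastic demand means that ratio is zero, i.e. the quantity never moves whatever the price does.

```python
def price_elasticity(q_old: float, q_new: float, p_old: float, p_new: float) -> float:
    """Percentage change in quantity demanded divided by percentage change in price."""
    pct_q = (q_new - q_old) / q_old
    pct_p = (p_new - p_old) / p_old
    return pct_q / pct_p

# Hypothetical office suite: cutting the price from $450 to $100 and seeing
# licensed units go from 1.0M to 3.5M implies elastic demand.
print(round(price_elasticity(1_000_000, 3_500_000, 450, 100), 2))   # about -3.21

# Perfectly inelastic demand would mean the quantity never changes at all:
print(price_elasticity(1_000_000, 1_000_000, 100, 450))             # 0.0
```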

Back when I ran Bynet Software Systems, we were the first Microsoft Back Office/Windows NT distributor in Israel. I had just left Intel, where we had negotiated a deal with Microsoft that allowed every employee to make a copy of MS Office for home use.

Back in 1997, after the Windows NT launch, the demand for NT was almost totally inelastic – Not There, Nice Try, WNT is VMS + 1 etc. We could not give the stuff away in the first year. Customers were telling us that they would never leave Novell Netware. Never. But NT got better from release to release, the big Microsoft marketing machine got behind the product, and after two years of struggle selling retail boxes and MLP for NT, demand picked up.

Realizing that there IS price elasticity of demand for software, Microsoft dropped retail packaging and moved to OEM licensing, initially distributing OEM licenses via their two-tier distribution channel and later cutting out the channel entirely and dealing directly with computer vendors like HP, Dell and IBM for OEM licenses of NT, 2000, XP, 2003 etc. Vista continued this marketing strategy, and most Vista sales were not retail boxes but pre-installed hardware. Since Windows 7 was released, users have been upgrading en masse, proving once again the elasticity of demand for a good product.

Microsoft (which is a major stakeholder in the BSA) probably doesn’t have a major piracy problem with operating system sales. Let’s run some numbers. In 2008, Microsoft Windows Vista sales were at about a 9 million unit/quarter run rate. Microsoft’s June 2008 quarterly revenue was $15.8BN. Single-unit OEM pricing for a Windows operating system is about $80, and in a volume deal maybe $20. Let’s assume an average of $50/OEM license. This means that the operating system accounts for about 50*3*9/15800 = 8.5% of Microsoft revenue.

The BSA Global Piracy Study states that the “median piracy rate is down one percentage point from last year”. One percentage point of 8.5 percent is meaningless for Microsoft; in dollar terms, the BSA’s work to reduce piracy is less meaningful than the 7 percent drop in the US dollar rate in 2009.

Microsoft might have a problem with their cash cow, Microsoft Office. Microsoft Office 2007 retails for $450 but is available under an academic license for less than $100. Open Office 2.4 runs just fine on Windows 7 and XP and retails for $0. At those prices, sizable numbers of users are just sliding down the elasticity curve, calling into serious question the IDC/BSA statistics on software piracy.

But there is more to software piracy than providing software at a reasonable price. In poor areas of the world, assuming the BSA’s efforts at combating software piracy are successful, only the very rich would have access to applications like Microsoft Office. Middle and lower class people won’t have the opportunity to become MS Office-literate because the prices would be too high. For that I only have three words: download Open Office – the free and open productivity suite.

Finally, I can only anonymously quote a senior Microsoft executive who told me a number of years ago that, off the record, Microsoft didn’t mind people copying the software and using a crack, because it was a good way of introducing new users to the technology and inducing them to buy the new, improved and supported release a year or two later.
