Tag Archives: Data loss

Message queuing insecurity

I met with Maryellen Ariel Evans last week. She was in Israel on vacation and we had coffee on the Bat Yam boardwalk. Maryellen is a serial entrepreneur; her latest venture is a security product for IBM WebSphere MQ Series. She’s passionate about message queue security and I confess to buying into the vision.

She has correctly put her finger on a huge, unmitigated threat surface: transactions transported inside the business and between business units using message queuing technology. Message queuing is a cornerstone of B2B commerce, and in a highly interconnected system there are many entry points, all using the same or similar technology – MQ Series or the TIB.

While organizations are busy optimizing their firewalls and load balancers, attackers can tap in, steal the data on the message bus and use it as a springboard to launch new attacks. It is conceivable that well-placed attacks on the message queues of an intermediary player (for example, a payment clearing house) could not only leave the processor unable to clear transactions but also serve as an entry point into upstream and downstream systems. A highly connected system of networked message queues is a convenient and vulnerable entry point from which to launch attacks; these attacks can and do cascade.

If these attacks cascade, the entire financial system could crash.

Although most customers are still fixated on perimeter security, I believe that Maryellen has a powerful value proposition for message queuing customers in the supply chains of key industries that rely on message interchange: banking, credit cards, health care and energy.
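One mitigation that does not depend on perimeter defenses is message-level integrity protection: sign each payload before it goes on the queue and verify it before processing, so a tapped or tampered bus is detected by the consumer. A minimal sketch using Python's standard hmac module – the shared key here is a placeholder, and a real deployment would also need key distribution and usually payload encryption:

```python
import hmac
import hashlib

SECRET_KEY = b"placeholder-shared-key"  # key management is the hard part in practice

def sign_message(payload: bytes) -> bytes:
    """Prefix the payload with an HMAC-SHA256 tag before enqueueing it."""
    tag = hmac.new(SECRET_KEY, payload, hashlib.sha256).digest()
    return tag + payload

def verify_message(message: bytes) -> bytes:
    """Check the tag on a dequeued message; reject anything tampered with."""
    tag, payload = message[:32], message[32:]
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("message failed integrity check")
    return payload

# A consumer that verifies every message will notice payloads altered
# in transit on the queue, even if the transport itself was tapped.
```

This protects integrity only; eavesdropping on the bus still exposes the data, which is where transport or payload encryption comes in.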



Tell your friends and colleagues about us. Thanks!
Share this

Securing Web servers with SSL

I’ve recently been writing about why Microsoft Windows, and the Microsoft monoculture in general, is a bad idea for medical device vendors – see my essays on Windows vulnerabilities and medical devices here, here and here.

It is now time to slaughter one more sacred cow: SSL.

One of the most prevalent misconceptions among vendors in the medical device and healthcare space concerns the role of SSL and TLS in protecting patient information. When faced with a requirement from a government or hospital customer for compliance with one of the US privacy and security standards, a vendor usually reacts with the CEO asking his CTO to look into “solutions”. The CTO’s answer usually goes like this:

I did some research. Apparently to be FIPS  (or HIPAA, or …) compliant we should use TLS and not SSL. I think that configuring the browser to be FIPS  (or HIPAA, or …) compliant may take a little work.

Action items are given out to the technical team; they usually look like this:

Joe – You establish a secure web site

Jack – Make sure all the addresses on the workstation point to https instead of http

Jack and Joanne – Compile a new version of the servers and workstations to work properly on the new site.

Jack and Jill – Do whatever needs to be done so that the web services work on the new site.

That’s all – No other changes need to be done to the application.

Oooh. I just love that last sentence – “No other changes need to be done to the application”. What about patching Web servers and the Windows operating systems? What about application software vulnerabilities? What about message queue vulnerabilities? What about trusted insiders, contractors and business partners who have access to the application software?

There are multiple attack vectors from the perspective of FIPS and HIPAA compliance and PHI data security. The following schematic gives you an idea of how an attacker can abuse and steal PHI, using any combination of no fewer than 15 attack vectors:

HIPAA security in the cloud

There are potential data security vulnerabilities in the client layer, transmission layer, platform layer (operating system) and cloud services layer (Amazon AWS, for example).

So where does SSL fit in? Well, we know that the vulnerabilities leading to a PHI data breach can occur not only inside any layer but, in particular, in the system interfaces between layers – between server layers and at client-server interfaces. What does SSL actually cover? Quoting from the Apache Tomcat 6.0 SSL Configuration HOW-TO:

SSL, or Secure Socket Layer, is a technology which allows web browsers and web servers to communicate over a secured connection. This means that the data being sent is encrypted by one side, transmitted, then decrypted by the other side before processing. This is a two-way process, meaning that both the server AND the browser encrypt all traffic before sending out data.

Another important aspect of the SSL protocol is Authentication. This means that during your initial attempt to communicate with a web server over a secure connection, that server will present your web browser with a set of credentials, in the form of a “Certificate”, as proof the site is who and what it claims to be. In certain cases, the server may also request a Certificate from your web browser, asking for proof that you are who you claim to be. This is known as “Client Authentication,” although in practice this is used more for business-to-business (B2B) transactions than with individual users. Most SSL-enabled web servers do not request Client Authentication.

In plain English, SSL is good for protecting credentials transmitted between the browser and web server during the login process from eavesdropping attacks. SSL may still be undermined by man-in-the-middle attacks and by malware that piggybacks on the plain-text browser requests and responses before they are encrypted. Similarly, SSL does nothing against cross-site scripting attacks like the PayPal XSS vulnerability discovered in 2008, which would have allowed hackers to carry out attacks, add their own content to the site and steal credentials from users.
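The encryption is only as good as the certificate validation behind it. A short Python sketch with the standard ssl module shows the difference between a properly validating client context and the "just make the warning go away" misconfiguration that negates it:

```python
import ssl

# A properly validating client-side TLS context: certificate checking
# and hostname verification are both on by default.
ctx = ssl.create_default_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
print(ctx.check_hostname)                    # True

# The common "make the certificate warning go away" misconfiguration.
# It silences validation errors and throws away the one protection
# SSL actually provides against man-in-the-middle attacks.
bad = ssl.create_default_context()
bad.check_hostname = False
bad.verify_mode = ssl.CERT_NONE  # never do this in production
```

If a vendor's client code looks like the second context, the lock icon on the server means very little.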

SSL is a key component of a secure login process. But as a security countermeasure for application software vulnerabilities, endpoint vulnerabilities, removable devices, mobile devices and data security attacks by employees on servers and endpoints, it is worse than worthless, because it lulls the medical device/healthcare vendor into a false sense of security.

SSL does NOT make a medical device/healthcare Website secure. The SSL lock symbol in the browser navigation window just means that data in motion between a browser client and Web server is encrypted. If you can attack the endpoint or the server, the data is not protected. Quoting Gene Spafford (I think this quote has been used for years, but it’s still a good one):

“Using encryption on the Internet is the equivalent of arranging an armored car to deliver credit card information from someone living in a cardboard box to someone living on a park bench.”
Gene Spafford Ph.D. Purdue, Professor of Computer Sciences and Director of CERIAS

This is all fine and dandy, but recall the conversation above, in which the CTO handed out action items to his team to “establish a secure web site” as if it were point-and-click on a Microsoft Office file. The team may discover that even though SSL is not a very good data security countermeasure (albeit one required by FIPS and HIPAA), it is not that easy to implement, let alone implement well.

It’s no wonder that so many web servers are misconfigured – the clueless being led by other clueless people who never read the original documentation and all feed off Google searches for tutorials. Yikes!

Most people don’t bother reading the software manuals; they Google for advice, looking for things like “Tomcat SSL configuration tutorial“. Jack, Jill and Joanne in our example above may find themselves wandering through an abundance of incorrect, incomplete and misleading information in cyberspace – a mixture of experts who assume everyone knows how to set up secure AJP forwarding and Tomcat security constraints, and a preponderance of newbies who know nothing (or a little bit, which is worse than nothing).

Working with a client in the clinical trial space, I realized that the first and perhaps biggest problem is a lack of decent documentation, so I wrote SSL and Certificate HOW TO – Apache 2.2 and Tomcat 6, Ubuntu, which I hope will be my modest contribution (along with this blog) to dispelling some of the confusion and misconceptions, and to helping medical device and healthcare vendors implement secure Web applications. No promises – but at least I try to do my bit for the community.
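As a taste of the moving parts involved, even the very first step – getting a server certificate in place for a test setup – has several. A sketch using openssl (the filenames and one-year validity are arbitrary choices for illustration):

```shell
# Generate a throwaway self-signed key and certificate for a test
# HTTPS setup. A production deployment needs a CA-signed certificate
# and, for Tomcat, importing it into a Java keystore with keytool.
openssl req -x509 -newkey rsa:2048 -nodes \
    -keyout server.key -out server.crt -days 365 \
    -subj "/CN=localhost"

# Sanity-check what was produced: subject and validity dates
openssl x509 -in server.crt -noout -subject -dates
```

After this come the keystore, the connector configuration, the redirect rules and the security constraints – each with its own ways to go wrong.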


Information security management in the cloud – on sense and sensibility

Data governance is a necessary requirement for protecting data when moving to cloud computing. Establishing a data governance policy is especially important in the cloud computing model, which is based on delivering services paid for per unit of consumption, in contrast to the traditional IT model based on installation, systems integration and product operation.

Alongside the growing supply of inexpensive, high-performance cloud computing solutions, organizations have a vital need to formulate and regulate their data governance policy. Data governance means defining ownership of the data, controlling access to the data, how well the data can be traced, and compliance with regulations – for example for patient data (protection of personal health information as defined by the regulations of the US Department of Health).

To build an effective data governance strategy for the cloud, answer the following ten questions, seeking the right balance between common sense and data security requirements:

1. What is the organization’s most valuable data? How much money is it worth?

2. How is that data stored – file servers, database servers, document management systems?

3. How should the data be managed and secured?

4. Who should have access to the data?

5. Who actually has access to the data?

6. When was the last time the information security / encryption policy was reviewed?

7. What do the organization’s developers know about information security in the cloud?

8. Who is able to modify or manipulate the data? (including business partners and contractors)

9. If the data leaks to an unauthorized party, what financial damage would the organization suffer?

10. In the event of a breach, how quickly would the data loss incident be detected?

In the context of cloud data governance, many people ask: “What kind of data should be kept on local IT infrastructure?”

The ready, seemingly obvious answer is that sensitive information should be kept in local storage.

Nevertheless, it may actually be preferable to store sensitive information outside the office walls rather than provide local access to employees and contractors.

Using cloud infrastructure services to store sensitive data can actually shrink the threat surface rather than expand it, and give the organization more control by centralizing and standardizing data storage as part of a comprehensive data governance strategy.

In addition, an effective set of controls can be negotiated in a commercial contract with cloud service providers – something that cannot easily be done with an organization’s own employees.

The second recurring question about a cloud data governance strategy is: “How can unstructured data be protected from breaches?”

Clearly, the answer depends on the organization itself and its software systems.

Although analysts such as Gartner claim that more than 80% of enterprise information is stored in files such as Microsoft Office documents, that figure naturally depends on the organization’s line of business. Service providers keep most of their information in databases, not in Excel files.

If anything, the threat surface of databases is growing much faster than the natural growth of Office files. Telecom and mobile service providers hold enormous amounts of information in structured databases (call records, customer service records and so on). As smartphones, Android devices, tablets and mobile computing devices become more widespread, the share of structured data held by the various cloud service providers will only grow. In healthcare, in an era when all medical records are electronic, the amount of sensitive information in databases such as Oracle is growing even more.

In addition, the use of JSON document database technology that connects directly to web applications (widely used at Facebook) is growing extremely fast. Note in particular CouchDB, with more than ten million installations after less than two years in the field! Such databases can be exposed to traditional penetration attacks that exploit vulnerabilities in how queries are built and executed.
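The query-construction vulnerability is the same one long familiar from SQL injection. A minimal Python illustration using the standard sqlite3 module – the table and data are hypothetical – shows why a string-built query is exploitable while a parameterized one is not:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE patients (name TEXT, ssn TEXT)")
conn.executemany("INSERT INTO patients VALUES (?, ?)",
                 [("Alice", "123-45-6789"), ("Bob", "987-65-4321")])

user_input = "x' OR '1'='1"  # attacker-controlled value

# Vulnerable: the query is built by string concatenation, so the
# OR clause becomes part of the SQL and matches every row.
leaked = conn.execute(
    "SELECT * FROM patients WHERE name = '" + user_input + "'").fetchall()

# Safe: a parameterized query treats the input as a literal value,
# so nothing matches a name that does not exist.
safe = conn.execute(
    "SELECT * FROM patients WHERE name = ?", (user_input,)).fetchall()
```

Document databases have their own query idioms, but the governance lesson is the same: never build queries by pasting untrusted input into them.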

To sum up, when you set out to build a data governance strategy for the cloud, take all the points presented here into account, and start by answering the ten key questions for data security in cloud computing.


Bank of America and Wikileaks

First reported in the Huffington Post in November 2010: Bank of America has set up a Wikileaks defense team after an announcement by Julian Assange that Wikileaks has information from the 5GB hard drive of a Bank of America executive.

In a burst of wikipanic, Bank of America has dived into full-on counterespionage mode…15 to 20 bank officials, along with consulting firm Booz Allen Hamilton, will be “scouring thousands of documents in the event that they become public, reviewing every case where a computer has gone missing and hunting for any sign that its systems might have been compromised.”

Interesting that they needed Booz Allen Hamilton. I thought Bank of America was a Vontu DLP (now Symantec) customer. It says something about the technology – not working, discarded, or simply not implemented properly – because the Wikileaks announcement was made in October 2009, so it took BofA over a year to respond. Good luck finding forensics over a year after the leak happened.

This is a good thing for information security consultants and solution providers, especially if it drives companies to invest in DLP. There are some good technologies out there and companies that implement DLP thoughtfully (even if for dubious reasons) will be profiting from the improved visibility into transactions on their network and better protection of IP and customer data.

Ethics of the bank executive aside, it is conceivable (albeit totally speculative) that the Obama administration is behind the Wikileaks disclosures on US banking. It is consistent with the Obama policy that required banks to accept TARP funds and stress testing in order to make the financial institutions more beholden to the Federal government. It is also consistent with the State Department cables leak, which appears (from my vantage point in the Middle East) to have been deliberately disclosed to Wikileaks in order to further the agenda against the Iranians without saying so specifically.


WikiLeaks Breach – trusted insiders not hackers

With a delay of almost 10 years – SCIAM has published an article on the insider threat – WikiLeaks Breach Highlights Insider Security

As one of the pioneers in the DLP (data loss prevention) space and an active data security consultant in the field since 2003, I am not surprised when civilians like the authors of the article and the current US administration claim to have discovered America once they notice that the emperor is naked. Of course there is an insider threat; of course it is immune to anti-virus software and firewalls; and of course the US Federal government is way behind the curve on data security – installing host-based security that was state of the art 7 years ago.

My Dad, who worked in the US and Israeli defense industry for over 50 years and holds a PhD in systems science, asked me how it happened that Wikileaks was able to hack into the US State Department cables. I explained that this was not an external attack but a trusted insider leaking information – because of a bribe, anger at Obama or Clinton, or some combination of these factors. My Dad just couldn’t get it. I said: look, you know that there is a sense of entitlement among people in their twenties and thirties that permits them to cross almost any line. My Dad couldn’t get that either, and I doubt that the US Federal bureaucrats understand the problem any better.

Data leakage by trusted insiders is a complex phenomenon, and without doubt, soft data security countermeasures like acceptable usage policies have their place alongside hard-core content interception technologies like data loss prevention. As Andy Grove once said, “a little fear in the workplace is not a bad thing”. The set of data security countermeasures adopted and implemented must be a good fit for the organization’s culture, operations and network topology.

BUT, most of all – and this is of supreme importance – it is crucial for the head of the management pyramid to be personally committed by example and leadership to data protection.

The second key success factor is measuring the damage in financial terms. It can be argued that the Wikileaks disclosures via a trusted insider did little substantive damage to the US government and its allies and opponents alike. If anything, there is ample evidence that the disclosure has helped to clear the air of some of the urban legends surrounding US foreign policy – like the Israelis and the Palestinians being key to Middle East peace, when in fact it is clear beyond doubt that the Iranians and Saudi financing are the key threats that need to be mitigated, not a handful of Israelis building homes in Judea and Samaria.

As an afternote to my comments on the SCIAM article, consider that after the discovery of America, almost 300 years went by before Jefferson and the founding fathers wrote the Declaration of Independence.   I would therefore expect that in the compressed 10:1 time of Internet years, it will be 30 years before organizations like the US government get their hands around the trusted insider threat.


The psychology of data security

More than 6 years after the introduction of the first data loss prevention products, DLP technology has not mainstreamed into general acceptance the way firewalls did. The cultural phenomenon of companies getting hit by data breaches yet not adopting technology countermeasures to mitigate the threat requires deeper investigation, but today I’d like to examine the psychology of data security and data loss prevention.

Data loss has a strange nature that stems from unexpected actions by trusted insiders in an environment assumed to be secure.

Many IT managers are not comfortable with deploying DLP, because it requires admitting to an internal weakness – confessing that you are not doing your job. Many CEOs are not comfortable with DLP because it implies employee monitoring (not to mention countries like Germany that severely restrict employee monitoring). As a result, most companies adopt business controls in lieu of technology controls. This is not necessarily a mistake, but it’s crucial to implement the business controls properly.

This article will review four business control activities: human resources, internal audit, physical security and information security. I will highlight disconnects in each activity and recommend corrective action at the end of the article.

The HR (human resources) department

Ensuring employee loyalty and reliability is a central value for HR, which has responsibility for hiring and guiding the management of employees. High-security organizations, such as defense contractors or securities traders, add additional screening such as polygraphs and security checks to the hiring process. Over time, organizations may sense personality changes, domestic problems or financial distress that indicate increased extrusion risks for employees in sensitive jobs.

Disconnect No. 1: HR isn’t accountable for the corporate brand and therefore doesn’t pay the price when trusted employees and contractors steal data. What can you do?  Make HR part of an inter-departmental team to deal with emerging threats from social media and smart phones.

Internal audit

Data loss prevention is ostensibly part of an overall internal audit process that helps an organization achieve its objectives in the areas of:

  • Operational effectiveness
  • Reliability of financial reporting
  • Compliance with applicable laws and regulations

Internal auditors in the insurance industry say regulation has been their key driver for risk assessment and for the implementation of preventive procedures and security tools such as intrusion detection. Born in the 1960s and living on in today’s Windows and Linux event logs, log analysis is still the mainstay of the IT audit. Meanwhile, the IT industry has evolved to cloud computing, virtualization, Web services and converged IP networks. Welcome to stateless HTTP transactions, dynamic IP addressing and Microsoft Sharepoint, where the marketing group can set up its own site and start sharing data with no controls at all. Off-line analysis of logs has fallen behind and yields too little, too late for the IT auditor! According to the PCI Data Security council in Europe, over 30% of companies with a credit card breach discovered the breach after 30 days, and 40% after more than 60 days.
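To make the limitation concrete, offline log review boils down to something like the following sketch – counting failed logins per source in a synthetic, simplified auth log (the log lines and threshold are illustrative, not from any real system):

```python
from collections import Counter

# Synthetic log lines in a simplified syslog-like format (illustration only)
log = """\
Jan 10 03:11:02 host sshd: Failed password for root from 203.0.113.7
Jan 10 03:11:04 host sshd: Failed password for root from 203.0.113.7
Jan 10 03:11:06 host sshd: Failed password for admin from 203.0.113.7
Jan 10 09:30:00 host sshd: Accepted password for alice from 198.51.100.2
"""

# Count authentication failures per source address
failures = Counter()
for line in log.splitlines():
    if "Failed password" in line:
        src = line.rsplit("from", 1)[1].strip()
        failures[src] += 1

# Flag sources with repeated failures -- useful evidence, but only
# available after the attacker has already been and gone.
suspects = [src for src, n in failures.items() if n >= 3]
```

The analysis works; the problem is that it is batch-mode and after the fact, which is exactly the "too little, too late" complaint above.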

Disconnect No. 2: IT auditors have the job, but they have outdated tools and are way behind the threat curve.  What can you do?  Give your internal auditors, real-time network-based data loss monitoring and let them do their job.

Physical security

Physical security starts at the parking lot and continues to the office, with tags and access control. Office buildings can program the gates to ensure that every tag leaving the building also entered the building. Many companies run employee awareness programs to remind the staff to guard classified information and to look for suspicious behavior.

Disconnect No. 3: Perfect physical security will be broken by an iPhone.  What can you do? Not much.

Information security

Information security builds layers of firewalls and content security at the network perimeter, and permissions and identity management that control access by trusted insiders to digital assets, such as business transactions, data warehouse and files.

Consider the psychology behind wall and moat security.

Living inside a walled city lulls the business managers into a false sense of security.

Do not forget that firewalls let traffic in and out, and permissions systems grant access to trusted insiders by definition. For example, an administrator in the billing group will have permission to log on to the accounting database and extract customer records using SQL commands. He can then zip the data with a password and send the file using a private Web mail or ssh account.

Content-security tools based on HTTP/SMTP proxies are effective against viruses, malware and spam (assuming they’re maintained properly), but they weren’t designed for data loss prevention. They don’t inspect internal traffic; they scan only authorized e-mail channels. They rely on file-specific content recognition and have scalability and maintenance issues. When content security tools don’t fit, we’ve seen customers roll out home-brewed solutions with open-source software such as Snort and Ethereal. A client of ours once used Snort to nail an employee who was extracting billing records with command-line SQL and stealing the results by Web mail. The catch is that they already knew someone was stealing data – and deployed Snort as a way of collecting incriminating evidence, not as a proactive real-time network monitoring tool.

Disconnect No. 4: Relying on permissions and identity management is like running a retail store that screens you coming in but doesn’t put magnetic tags on the clothes to stop you wearing that expensive hat going out. What can you do? Implement a real-time data loss audit using passive network monitoring at the perimeter. You’ll get an excellent picture of anomalous data flowing out of your network without the cost of installing software agents on desktops and servers. The trick is catching and then remediating the vulnerability as fast as you can. If it’s an engineer sending out design files or a contractor surfing the net from your firewall – fix it now, not 3 months from now.
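The passive-monitoring idea can be reduced to a toy example: scanning outbound payloads for patterns that should never leave the network, such as credit-card-like numbers. The regex and Luhn check below are a deliberate simplification of what real DLP products do:

```python
import re

# Rough pattern for 13-16 digit card numbers with optional separators
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_ok(digits: str) -> bool:
    """Luhn checksum -- filters out most random digit strings."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_card_numbers(payload: str):
    """Return candidate card numbers seen in an outbound payload."""
    hits = []
    for m in CARD_RE.finditer(payload):
        digits = re.sub(r"[ -]", "", m.group())
        if 13 <= len(digits) <= 16 and luhn_ok(digits):
            hits.append(digits)
    return hits
```

A sensor applying rules like this to mirrored perimeter traffic gives the auditor a real-time alert instead of a 60-day-old log entry.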


To correct the disconnects and make data security part of your business, you need to start with CEO-level commitment to data security.  Your company’s management controls should explicitly include data security:

  • Soft controls: Values and behavior sensing
  • Direct controls: Good hiring and physical security
  • Indirect controls: Internal audit

When defense in depth fails – two deadly sins

Defense in depth is a security mantra, usually for very good military and information security reasons. However, defense in depth may be a very bad idea if your fundamental assumptions are wrong or you get blinded by security technology.

The sin of wrong assumptions

In the defense space, we can learn from military history that incorrect security assumptions carry a high price tag.

The 1973 Yom Kippur war, which ended in a stunning Israeli victory but cost 2,800 Israeli lives, and the recent American war in Iraq, which yielded little benefit for the cost of over 30,000 American casualties, are both illustrations of conceptual mistakes in security strategy.

Neither defense in depth (the Bar-Lev line) nor military campaigns for democracy (the Iraq war) were a match for flawed security assumptions: that the Arabs were deterred by Israeli military superiority (they weren’t), and that Americans could combat terror with conventional armies (they cannot).

The sin of techno lust

In the business space, it’s easy to get seduced by sexy security technologies, but implementing too many of them will increase the operational risk of information security instead of achieving defense in depth.

Why is this so?

Reason 1: More security elements tend to increase risk instead of improving defenses
Adding more network security elements tends to increase total system risk, as a result of interactions between the elements, increased system complexity and the resulting inability to maintain the systems properly.

For example, companies that attempt to prevent data loss with more user access lists, enterprise DRM, firewalls and proxies experience an inflation of ACLs, endpoint application software (that needs to be deployed and maintained), firewall rules that may be outmoded, and clients that bypass the proxies.

A company may feel more secure while in practice it is less secure – with dormant accounts, shared passwords, excessive access rights, orphan accounts, redundant accounts, underutilized accounts, abuse of administrator access, backdoor access and … paying more for the privilege.

Reason 2: Product features do not mitigate threats
Many companies spend a disproportionate amount of their time evaluating product features instead of performing a business threat analysis and selecting a short list of products that might mitigate the threats. I first realized this when I paid a sales call on the CSO of a large bank in Israel and his secretary told me that the CSO meets 3-5 vendors a day. It’s nice to be wanted, but 5 years later the bank still does not have a coherent data security policy, encryption policy or data loss prevention capability.

Focusing on features and vendor profiles results in installing a product without understanding the return on security investment. After selecting a security product based on marketing and FUD tactics, and then implementing it without understanding how well it reduces value at risk, the customer (not the vendor) pays for ownership of an inappropriate solution – in addition to paying for the damage caused by attackers who exploit the unmitigated vulnerabilities.


Why the Europeans are not buying DLP

It’s one of those things that European-based information security consultants must ask themselves at times – why isn’t my phone ringing off the hook for DLP solutions if the European data protection directives are so clear on the requirement to protect privacy?

The central guideline is the EU Data Protection Directive – and reading the law, we begin to get an answer to our dilemma.

Continue reading


Why security defenses don’t prevent data breaches

Assuming you knew why a data breach will happen, wouldn’t you take your best shot at preventing it?

Consider this:

Your security defenses don’t improve your understanding of the root causes of data breaches, and without understanding the root causes –  your best shot is not good enough.

Why is this so?

First of all, defenses are, by definition, not a means of improving our understanding of strategic threats. Think about the Maginot Line in WWII or the Bar-Lev line in 1973. Network and application security products that are used to defend the organization are rather poor at helping us understand and reduce the operational risk of insecure software.

Second of all, it’s hard to keep up. Security defense products have much longer product development life cycles than the people who develop zero-day exploits. The battle is also extremely asymmetric: it costs millions to develop a good application firewall to mitigate an attack that was developed at the cost of three man-months and a few Ubuntu workstations. Security signatures (even if updated frequently) used by products such as firewalls, IPS and black-box application security are no match for fast-moving, application-specific source code vulnerabilities exploited by attackers and contractors.

Remember – that’s your source code, not Microsoft’s.

Third – threats are evolving rapidly. Current defense in depth strategy is to deploy multiple tools at the network perimeter such as firewalls, intrusion prevention and malicious content filtering. Although content inspection technologies such as DPI and DLP are now available, current focus is primarily on the network, despite the fact that the majority of attacks are on the data – customer data and intellectual property.

The location of the data has become less specific as the notion of trusted systems inside a hard perimeter has practically disappeared with the proliferation of cloud services, Web 2.0 services, SSL VPN and convergence of almost all application transport to HTTP.

Obviously we need a better way of understanding which threats really count for our business. More about that in some upcoming posts.

Tell your friends and colleagues about us. Thanks!
Share this

More nonsense with numbers

Now it’s some lazy journalist at Information Week aiding and abetting the pseudo-statistics of the Ponemon Institute – screaming headlines about the cost of data breaches of PHI (protected health information).

According to Information Week’s analysis, “Healthcare Breach Costs May Reach $800 Million”:

Since the Health Information Technology for Economic and Clinical Health (HITECH) Act of 2009 came into being, a number of new privacy, security, reporting and non-compliance penalty provisions have gone into effect. And as summarized by this report from HITRUST, 108 entities have reported security breaches since September of last year.

Those breaches cover about 4 million people and records.

In the analysis, Chris Hourihan, Manager of CSF Development and Operations at HITRUST, used the 2009 Ponemon Institute Cost of a Data Breach Study [.pdf], which found the average cost for each record within a data breach to be $204 – $144 of indirect costs and $60 of direct costs. An overview of the Ponemon study is available here.

What is the connection between the Ponemon studies (sponsored by data security vendors) and the PHI leaks?

Why is a PII leak, and a meaningless plug number of $60, relevant to PHI (which requires a combination of medical data and personal identifiers)?

Why can’t someone make a phone call and ask how much the companies actually paid in fines, and then make a few more phone calls and start estimating ancillary costs and direct costs such as legal fees?

Why not just multiply by the average cost of an iPhone?

After all, you can steal data with your mobile easily enough, can’t you?
