Monday, November 16, 2015

Summary of the cybersecurity posts

Over the past twelve weeks, I have examined some of the breaches and hacks that have occurred.  I chose this particular theme because I think anyone studying cybersecurity needs a strong understanding of where it can go wrong.  One thing I have noticed in several of my classes throughout this degree is that cybersecurity professionals like to discuss what should be done to protect a system as if the company can hand the IT department an infinite budget.  In reality, companies MUST work within a limited budget, and IT will not get to use all of it.  It has to be shared with the rest of the company.  Therefore, it's fine to say that a company needs to have certain standards in place or use certain technology.  But you really learn by studying what happens when those standards or technologies aren't used.  In the real world, you need to know how you will be affected and how you will overcome the problems.

Not long ago, I received my law degree.  I remember having a similar argument with one of my law professors.  He insisted that a better contract between the parties would have solved the problem.  I replied that from what I had seen in my office and in my studies, that was probably true, but it doesn't account for the fact that every single case we study involves a situation where the parties failed in some respect.  Nobody goes to court when everything is going perfectly according to the contract.  The parties in that particular case didn't draft their contracts carefully.  How are they supposed to proceed now?  Furthermore, what happens when I get a client who didn't have me draft their contract, did it themselves, and now needs help solving the resulting problems?  My professor didn't have a good answer.

The same is true in IT.  If companies engaged in perfect security measures for their information at all times, there would be no need for a cybersecurity degree.  Everything would go smoothly, and no hackers would exist.  Unfortunately, that's a fictional world.  Companies mess up, and hackers want to exploit those mistakes.  So how do we proceed in helping companies that have messed up?  The easy answer is to simply throw money at the problem and fix it before it's ever a problem.  That's a good answer in many respects: create a strong system, and there's less to do later.  But how do you proceed if you are hired at a company that hasn't done that?  You have to study how other systems were breached.  You need to know what is occurring in the real world, and figure out how to make it work when it's imperfect.

I examined breaches and hacks because it's the imperfect side of business.  These involve big and small companies.  Some focused on the insider threats, whereas others were outside attacks.  Some could have easily been fixed, while others are still perplexing years later.  My goal was simply to shine a light on these past breaches in an attempt to learn more about them.

The assignment was valuable because it showed me where to look for the causes of breaches.  In some cases I discovered the answer, and in some I didn't.  This exercise also taught me to think about other consequences, such as when I received a letter that my information was compromised by a company I had no dealings with.  How did they get my information?  Was this a proper use of my information, or were they not supposed to have it in the first place?  These questions all drive at the root of discovering how breaches occur and what they affect.

Tuesday, November 10, 2015

Week 11, New York Taxis

I discovered an article that talks about a data breach involving New York taxis (Pandurangan, 2014).  At first, this sounded very juicy- after all, a data breach involving taxis in one of the world's most populated cities could be a horrific problem.  In the end, this breach turned out to be a bit anti-climactic.

The breach involved improperly anonymized data covering over 173 million individual trips.  It revealed pickup and dropoff locations and times, along with each cab's license number and medallion number.  The problem is: what is this information likely to be used for?  In other words, if we boil it down to a risk analysis, there's clearly a risk here.  The data was not protected properly, it was released, and anyone with modest skill can recover all of the information above.  On the other side of the analysis- what is this data actually worth?
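The "encryption" failure the article describes was really weak anonymization: the medallion and license numbers were hashed with unsalted MD5, and because the set of valid medallion numbers is tiny, every hash can be reversed by brute force.  A minimal sketch, using a made-up four-character medallion format purely for illustration:

```python
import hashlib
from itertools import product

def md5_hex(s):
    return hashlib.md5(s.encode()).hexdigest()

# Hypothetical medallion format: digit, letter, digit, digit (e.g. "5X41").
# The real NYC formats are similarly tiny keyspaces.
DIGITS = "0123456789"
LETTERS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def crack(target_hash):
    # Exhaustively hash every possible medallion until one matches.
    # Only 10 * 26 * 10 * 10 = 26,000 candidates: milliseconds of work.
    for d1, l, d2, d3 in product(DIGITS, LETTERS, DIGITS, DIGITS):
        candidate = d1 + l + d2 + d3
        if md5_hex(candidate) == target_hash:
            return candidate
    return None

leaked = md5_hex("5X41")   # what the "anonymized" field actually contained
print(crack(leaked))       # recovers the original medallion
```

Salting wouldn't help much here either; with so few possible plaintexts, the only real fix is to replace identifiers with random tokens that have no relationship to the original numbers.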

The article discusses how one cabbie appeared to be making an unusual number of trips.  At first, I thought this was where the story would get juicy.  Maybe he was running a drug business on the side.  The article says it was just an error in the data.  Even assuming it had been a drug-running business, that information is useful to the company because they would want to fire him.  It's useful to the authorities because they may want to prosecute him.  It's not so useful to hackers looking for information to exploit.

There is one scenario where a hacker may benefit from the information.  Say there is a particular person being targeted for assassination.  They know that this target has an apartment in a particular area.  They could use the data to figure out if there is a pattern to the target's movements.  There are two problems with this theory: 1) this is the stuff of bad Hollywood movies, and 2) an assassin would likely already have that info without relying upon a data breach.  Simple observation is a much more effective way of finding out the info.

In other words, when you finish the risk analysis, lots of information was released, but the information doesn't seem to hold a very high value.  That's why this didn't make the front page of the news- no customers were harmed, no valuable sensitive info was taken.  It's just an information dump.

The value of examining a breach like this is that it's a good study not only in how not to properly encrypt your data, but also in conducting a risk analysis.  Just because information was breached doesn't mean this information was worth anything.

References:
Pandurangan, Vijay. "On Taxis and Rainbows." Medium. 21 June 2014. Web. 10 Nov. 2015.

Monday, November 2, 2015

British Airways Hack- Week 10

While any hack is undesirable, this week's hack could have turned out much worse.  British Airways was hacked in March 2015.  The hackers were able to gain information about members of British Airways' frequent-flyer club.  They did not gain access to any payment information, names, or addresses.

Again, while any hack is undesirable, let's take a moment to consider how this could have gone differently.  What if the hackers hadn't gained access to just frequent-flyer numbers, but had also gotten names and addresses?  That would potentially cause identity theft issues.  If the hackers had gotten access to payment information, that would potentially cause loss of money in addition to the identity theft.  Both of these are bad, but they are far from the most devastating hacks that could have occurred here.

Consider what would happen if the hackers hadn't just gained access to the frequent-flyer numbers, but had been able to hack all the way into the scheduling and routing systems, or worse, air traffic control.  Suddenly, you've got hackers controlling passenger jets.

Sure, any hack is undesirable.  But if hacks are ranked in terms of potential devastation, the terrorism aspect of a hacker gaining access to passenger jets vastly outranks gaining access to frequent-flyer numbers.

References:
British Airways frequent-flyer accounts hacked. (2015, March 29). Retrieved November 2, 2015, from http://www.theguardian.com/business/2015/mar/29/british-airways-frequent-flyer-accounts-hacked 

Monday, October 26, 2015

Experian, Part two

It's a little strange that I wrote about an Experian-related breach last week, and this week I'm dealing with an Experian-related breach first-hand.

Sometime during the week, I got a notice from Experian that my personal information may have been compromised.  This notice was sent to people who had applied for T-Mobile service, and the exposed data includes information such as birth dates, Social Security numbers, names, and addresses.  In consideration of my information being exposed, Experian offered me two years' worth of credit monitoring for free.  Almost certainly, legally speaking, accepting the credit monitoring would be considered a settlement, and I couldn't pursue the matter further.  After all, the monitoring mitigates the damage.

Here's the problem- I am not a T-Mobile customer, have never been a T-Mobile customer, and don't even have a cell phone contract.  I'm a month-to-month customer, as is my son, and there's no credit check for month-to-month service.  I am not exaggerating- I think the last time I had a cell phone contract was in the 1990s.

So why am I getting this letter?

There are a few possible explanations.  First, my son's cell phone service is through a carrier that uses the T-Mobile network.  Before that, his carrier decided to stop offering cell phone service and recommended that all of its customers switch to T-Mobile.  I find this the most likely explanation, but it's problematic (I will get to that in a moment).

Second possibility: my ex-step-daughter has used T-Mobile in the past, and I have evidence she has not updated her driver's license since moving out over a year and a half ago.  This may have exposed my address, and possibly my name.  Depending on what information the credit reporting agencies get, I suppose it's possible that my Social Security number is linked with that address.  So when she turns 18 and gets a cell phone plan, they ask for her ID and run a credit check.  The address gets pulled up, and possibly my Social Security number (again, depending on the info they get), and when the info was breached, it included mine, despite my never having had anything to do with T-Mobile directly.  I find this less likely.

Third, it's a mistake.  Because my son's cell phone carrier uses the T-Mobile network, the letter was auto-generated.  However, since I'm not on a contract, my info wasn't actually released.  This is another likely possibility.

The reason the first explanation is so problematic is that it means I truly have no control over my info.  Even when I choose not to deal with a company, my info is sold to that company, and I can't opt out.  In other words, I don't have the option of avoiding the risk unless I completely refuse to have a cell phone.  If my info can be sold and I cannot opt out by refusing to sign a cell phone contract, then my information is at risk simply because I own a cell phone.  To phrase it even more succinctly- I don't have any real risk mitigation options in the modern world.

As a future lawyer (specifically one focusing her practice on information privacy and cyber-law), this disturbs me greatly.  The law is big on determining who bears the blame.  In certain states, if you are even one percent at fault for something bad that happened to you, you cannot recover*.  That leads to an obvious question- am I at least one percent at fault for owning a cell phone?  After all, I could have opted out.  It's not something I was forced to accept, and I willingly purchased my son a phone and paid his monthly service fee.  I believe there is a good chance a court would see me as at least one percent responsible, which means I can't recover anything.

Let that sink in for a minute...  I refuse to enter a contract with ANY cell phone carrier because I don't want to share personal information.  The business isn't profitable enough for them, so they sell what info they do have to another company as part of a buy-out.  If I accept the credit monitoring, I can't later complain that they never should have had my info to begin with.  And if I decide that I'd rather complain about that, I can't recover anything because I willingly had a cell phone- like almost every other non-Amish citizen of the United States.



*The pure form of this rule (contributory negligence, where any fault bars recovery) is fairly rare these days and survives in only a handful of states, but quite a few states do still bar recovery if you are more at fault than the other party.

References:
Finkle, J. (2015, October 1). Millions of T-Mobile customers exposed in Experian breach. Retrieved October 26, 2015, from http://www.reuters.com/article/2015/10/02/us-tmobile-dataprotection-idUSKCN0RV5PL20151002

Monday, October 19, 2015

Court ventures breach

One of the most idiotic data breaches occurred in October 2013, when Court Ventures, a company owned by the Experian credit reporting service, sold a Vietnamese identity theft ring the records of over 200 million people.

Oops.

The Vietnamese group practiced identity theft, gathered records (including social security numbers), and then sold this info to people willing to buy that personal info.  Court Ventures didn't check into the legitimacy of the Vietnamese group before selling the info.  In other words, Court Ventures collected a lot of personal information from consumers, sold that information to a client in Vietnam, and that client in turn sold it to its clients who are buying it presumably for nefarious purposes.

The term "identity theft" usually implies that someone's information or identification is being stolen.  But what is it called when it's lawfully (if carelessly) sold to a person who shouldn't have it?  It's called a data breach.  Imagine having to tell over 200 million people that although they entrusted you with their information on loan applications, credit checks, etc., you sold that information to what many would consider hackers.  That leaves the CEO in a very bad situation, even if he put himself there.

Granted, this was not a situation that involved hacking.  In my opinion, it's much worse.  Hacking is when someone has made at least a minimal effort to secure information that shouldn't be seen, but someone accesses that information anyway.  This is a situation where you have information that shouldn't be seen, but nobody has broken in.  Instead, exactly the kind of people you want kept away from your information are being sold the very information they shouldn't see.  You not only weren't protected- your secret information was sold so that the company could profit, and they were so careless and greedy that they didn't care whether the information should stay secret or not.


References:

McCandless, D. (2015, October 2). Ideas, issues, knowledge, data - visualized! Retrieved October 19, 2015, from http://www.informationisbeautiful.net/visualizations/worlds-biggest-data-breaches-hacks/

McCarthy, N. (2014, August 26). Chart: The Biggest Data Breaches in US History. Retrieved October 19, 2015, from http://www.forbes.com/sites/niallmccarthy/2014/08/26/chart-the-biggest-data-breaches-in-u-s-history/

Monday, October 12, 2015

Indiana University- Week Seven

Since I started this blog as part of an assignment for my Master's in Cybersecurity, I wanted to take a look at a data breach involving a university.  These aren't that prevalent, which is a good thing, but it leaves me curious why more colleges aren't hacked.  You have a large number of college students, most of them somewhere between frazzled and partying, and they've handed over an enormous amount of personal information to the university.  I hope it's because, in academic settings, more educated people are paying better attention to data security, but I don't know whether that's accurate.  Whatever the reason, it's a good thing more university hacks and breaches haven't occurred.

In 2014, about 146,000 students at Indiana University had their information, including Social Security numbers, exposed.  This wasn't a hack, but it was a data breach.  Here's the difference: a hack is someone trying to access information that's specifically been made unavailable to them.  It's the online equivalent of breaking and entering.  A data breach can certainly be a hack, but the term is larger than that; it includes accidental releases of information.  Here, the data was exposed because it was stored in an unencrypted, unprotected location.  Search engines gathered the information (because that's what search engines do) and gained access to 146,000 students' records.  This information should have been encrypted, and it's pretty easy to lay the blame on the university for failing to protect an area that should have been protected.

When I said above that a hack was the online equivalent of breaking and entering, this data breach was more like a person walking through a public area of a government building, picking up brochures.  Only, someone made a mistake and put confidential info into the brochure racks.  The person who got the information wasn't necessarily acting nefariously- they collected random info that they were told was available for them to collect.  But that info shouldn't have been in that rack for them to collect.

References:
 Wang, Stephanie. "Data Breach at Indiana U May Have Exposed Student SSNs." USA Today. Gannett, 26 Feb. 2014. Web. 12 Oct. 2015. <http://www.usatoday.com/story/news/nation/2014/02/26/indiana-university-data-breach/5830685/>. 

Monday, October 5, 2015

Beautiful interactive hack infographic- Week 6

This week, I wanted to step away from the topic of individual hacks and look at it from a higher level.  I discovered a website called "InformationIsBeautiful.net" that includes visualizations of lots of different kinds of data.  But there was one particular timeline of hacks that was especially good, and useful for the theme of this blog.  This timeline provides information about different hacks that have occurred.  It says when the hack occurred, gives a bit of information about the hack, compares it in size to other hacks, and even provides a link to an outside report where one can discover more information about that particular hack.  The thing I like best is that you can sort by industry and the method of leak.  For example, with only two clicks, I can easily discover that there was only one hack involving the retail industry that was an inside job.

This infographic has a lot of information, but it's presented in a really simple, uncomplicated manner.  By sorting different features, someone is able to parse what their particular industry should be most concerned about.

I think it's rare to stumble across information that presents so much in a very intuitive way.  Often, the more data that's included, the more complicated the site or graphic gets.  Being able to filter out the noise and present the information so simply is a definite boon to an information security professional.

References:
 Ideas, issues, knowledge, data - visualized! (n.d.). Retrieved October 5, 2015, from http://www.informationisbeautiful.net/visualizations/worlds-biggest-data-breaches-hacks/ 

Tuesday, September 29, 2015

Week 5- a followup to last week's discussion

This is somewhat of a followup to last week's discussion about the hack of US government data.  My particular area of expertise is in global data privacy law.  I wrote my thesis on "The Right to be Forgotten" which is a developing concept in Europe that gives normal individuals the right to have data that's irrelevant or untimely removed from the Internet.  I found an article that discusses data privacy legislation in the US, especially in the wake of the US government hack.  (Wyden, 2015)  I wanted to discuss this article in terms of what it got right and wrong.

The article was written by Senator Ron Wyden, a Democrat.  He claims that in response to the hack of the US government, Congress has proposed a bill called the Cybersecurity Information Sharing Act (CISA) that would allow the US government to get private information from private companies.  Later in the article, he concedes that the bill isn't meant to do that- it's simply worded so broadly that it would allow the NSA to snoop at unprecedented levels.  On this measure, it's hard to tell whether Senator Wyden got it right.  Is the bill worded that broadly?  Probably.  That doesn't mean it will make it into law in that version.  Will the NSA use the bill to collect private data?  I'd like to say no, but I'm not sure I believe that.  They probably would, given their recent history of going even farther.

But there's one issue in particular, near the end of the article, that is dead wrong.  Senator Wyden states that the bill shouldn't toss aside "long established protections for Americans' privacy."  When I initially read the article, I was unaware that it was written by a US Senator; I believed it was a piece by a staff writer at The Guardian, a newspaper in the UK.  Much of Europe has a right to privacy.  If you think about their history, it makes sense: put modern digital privacy rights in the context of WW2 Germany, where governments asking for and collecting data about you did not turn out well.  Meanwhile, in the US, we strongly favor free speech.  This free speech isn't absolute, but it is extremely broad.  We call it the "marketplace of ideas," and we would rather have lots of information so that citizens can make up their own minds about an issue.  This has caused great divergence in terms of online privacy rights.  In Europe, you can remove information about yourself.  In the US, once it's on the Internet, it's likely to stay there.  The difficulty is that it's hard to draw borders online.
So why does his statement bother me?  Because it's simply wrong.  There is no explicit right to privacy in the US Constitution.  The Supreme Court has said that a right to privacy exists in the "penumbras" of the Bill of Rights.  In particular, it extends to a couple being allowed to keep private whether or not they choose to have a child.  Family matters, in other words.  Other information is sensitive and must be kept guarded, such as health information, and the law governs who can legally access such information and what they can do with it.  But it is a complete overstatement to say that Americans have a general right to privacy.  As a Senator, I wish Wyden knew this.  Furthermore, I wish he weren't fueling an already stoked fire between the US and Europe by presenting false facts in major newspapers.

References:
Wyden, R. (2015, July 29). Congress' fix for high-profile hacks is yet another way to grab your private data. Retrieved September 29, 2015, from http://www.theguardian.com/commentisfree/2015/jul/29/congress-stop-high-profile-hacks-reduce-your-privacy

Monday, September 21, 2015

Week 4- the government hack

In July, there was a high profile hack of government employee data.  I chose to write about this for a couple of reasons.  First, a friend of mine was affected (more below).  Second, it's pretty gutsy to hack the US government.

The hack itself started in May with an attack on 100,000 IRS records.  The hackers were able to get Social Security numbers, birth dates, and addresses.  By July, it had spread throughout much of the government, and 22.1 million people were affected.  Official sources have claimed China is at least partially responsible for the attack.

My friend worked for the IRS.  He got a letter from them saying that his information was taken.  We briefly discussed it and wondered what he was supposed to do about it.  He said that he hadn't worked there in about four or five years and was surprised that they still had information on file for him.

But it's bigger than one individual.  It takes a lot of guts to hack into the US government.  And I have a suspicion that might amplify this- I suspect that the two attacks are connected, and that the first wasn't dealt with quickly enough to shut down the opportunity for the second.  It's not uncommon for an attack to go on for quite some time before it's discovered.  Meanwhile, information continues to leak out.  With the bureaucracy of the government, I would not be surprised to find out that they either didn't discover the breach quickly or didn't act quickly.

References:
Mindock, C. (2015, July 9). US Government Cyber-Attacks Were Biggest In History, Follows Several High-Profile Hacks; 22.1 Million Files Compromised. Retrieved September 21, 2015, from http://www.ibtimes.com/us-government-cyber-attacks-were-biggest-history-follows-several-high-profile-hacks-2002565 

Monday, September 14, 2015

Sony- Week 3

The Sony hack last year (and early into this year) was interesting in a few respects.  It seems to be a lot like Stuxnet in the sense that there's a lot of legend surrounding this particular hack, and it's hard to separate out the legend from the fact.  

On November 25, 2014, a group calling themselves the Guardians of Peace (GOP) put some unreleased Sony movies online.  Almost immediately, there was speculation that North Korea was responsible.  Mind you- not North Korean hackers, but North Korea itself.  So why did people think a government would hack a US movie studio?  At the time, Sony was about to release a movie called "The Interview".  This comedy was about two news reporters who get a chance to interview Kim Jong Un, and the CIA asks them to carry out an assassination.  North Korea said that if the movie was released, they would consider it an act of war.  In fact, North Korea complained to the United Nations about the film, without specifically naming it.  Given the name of the group- Guardians of Peace- this almost made sense.  

The problem is, just five days after Sony was ready to pin everything on North Korea, the FBI said it could not attribute the attack to North Korea.  But three days later, Mike Rogers, the chairman of the House Intelligence Committee, said that North Korea was responsible.  So the question becomes whether he was relying upon incorrect initial reports, or whether the government intelligence community thought North Korea was responsible, changed its mind, and then changed it back (in the span of eight days).

Meanwhile, many movie chains refused to show The Interview, possibly out of fear of being hacked themselves.  The movie suddenly became a pop phenomenon, and many people went to see it specifically because of all the attention surrounding the film.  I will hazard a guess that this movie would have easily flopped if the hack hadn't occurred; and if I were a more cynical person, I would write a Hollywood blockbuster where a movie studio hacks itself to build hype for a movie that's certain to flop.

That being said, it's unlikely here.  Not only did the movie get released online, but so did a lot of employee personal data and emails.  Several executives had a series of uncomfortable emails released where they trashed various celebrities.  It's hard to get your talent to work with you if you've said some nasty things behind their back.

So who was really responsible for the hack?  That depends on who you ask.  Some are still pointing to North Korea.  Others are saying this is an inside job.  I tend to hold with RiskBasedSecurity in their "Attribution Bingo". I wonder if we can expand on the idea and make it Attribution Clue: North Korea via an insider threat trojan.  




References: 
A Breakdown and Analysis of the December, 2014 Sony Hack. (2014, December 5). Retrieved September 14, 2015, from https://www.riskbasedsecurity.com/2014/12/a-breakdown-and-analysis-of-the-december-2014-sony-hack/ 

Tuesday, September 8, 2015

Stuxnet- Week Two

One of my favorite stories of cybersecurity gone wrong is the Stuxnet worm.  It's become the stuff of legend, and like a lot of legends, it has so many wild stories that it's hard to separate out what's true and what's fiction.

In early 2010, Iran is busy trying to enrich uranium for its nuclear facilities.  However, the centrifuges keep failing at an unusual rate, and nobody can quite figure out why.  Meanwhile, a computer security firm in Belarus finds that computers are rebooting for no known reason.  After some research, they discover that the culprit is a computer worm.  If you think of computer worms as being similar to "The Very Hungry Caterpillar," you're probably not too far off track: they work their way through programs, eat everything in sight, and use all their newfound bulk to change things or reproduce.

Some of the legends about Stuxnet are that it got into the Iranian nuclear facilities via a USB drive, and that it caused physical damage to the centrifuges, but told the scientists that everything is running fine.  Phrased differently, legend says that some idiot plugged in a thumb drive he shouldn't have, which put the virus onto the computers; and that once it was on the computers, a computer version of Ocean's 11 was being pulled off where things were blowing up in the lab while the scientists upstairs think everything is running fine.  This is a great story, it's just not entirely accurate.

There is evidence that the suppliers of key components were hacked- not a USB drive brought in.  I find this a much more likely scenario.  Say you want to break into the US government or a large, multinational corporation.  Those are big, difficult targets.  While it's hard to take them head-on, it's much easier to find a supplier that isn't doing things properly.  Attack the supplier, get access to the big target through them.

In addition, it's probably an overstatement to say that the Iranian scientists were completely unaware that there were problems.  As the article at Wired says, they noticed the centrifuges were failing at an unusual rate; they just didn't know the cause of the failure.  That said, Stuxnet was unusual because it caused physical damage, and that's the part that continues to fascinate me.  Most cyberattacks target digital assets.  Those assets may have real-world counterparts, and their loss of value causes real damage, but this was a computer worm that caused actual, physical damage.  By telling the centrifuges to spin at the wrong rates, it made them fail.  When the centrifuges fail, they cannot enrich uranium; without enriched uranium, the nuclear facilities could not function, setting the program back years (or decades).
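The "damage the hardware while showing normal readings" behavior is easy to model.  This toy sketch is not Stuxnet's actual code; the class names, speed values, and safe band are all invented for illustration:

```python
# Toy model of a compromised controller: command the rotor outside its
# safe band while replaying a "normal" reading to the operator's display.
# All numbers here are made up for illustration.

SAFE_RPM = (800, 1200)  # hypothetical safe operating band

class Centrifuge:
    def __init__(self):
        self.rpm = 1000
        self.damaged = False

    def set_speed(self, rpm):
        self.rpm = rpm
        # Sustained operation outside the safe band stresses the rotor.
        if not (SAFE_RPM[0] <= rpm <= SAFE_RPM[1]):
            self.damaged = True

def compromised_controller(centrifuge, recorded_normal_rpm=1000):
    """Drive the hardware destructively, but report a prerecorded value."""
    centrifuge.set_speed(2000)     # destructive command to the machine
    return recorded_normal_rpm     # spoofed reading shown to operators

c = Centrifuge()
displayed = compromised_controller(c)
print(displayed, c.damaged)  # operators see normal rpm; rotor is damaged
```

The point is the separation between what the machine is told and what the operator is shown: once the controller is compromised, the monitoring display is no longer evidence of anything.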

Recently, there have been reports that Stuxnet (or something very similar) was attempted against North Korea.  The fact that Stuxnet is still making news in 2015 is astounding to me.  While it's been discussed regularly, people are still trying to piece together the details of what happened (and continues to happen), and separate the facts from the myth.  While the myth is great and I'd love to imagine a story that's fit for a Hollywood blockbuster, the truth appears to be less complicated.  A supplier was attacked and it caused major problems.  When phrased like that, it's not too far removed from any other cyber attack.


Monday, August 31, 2015

Ashley Madison

With all the news about the Ashley Madison leak, I feel like I have to weigh in on the cybersecurity issues of the leak.  Given that I just graduated from law school, I have to weigh in on the legal aspects...

I think this company keeps making dumb move after dumb move.  First, from what I understand, the information was released because Ashley Madison charged people to remove their information from the system, but failed to remove it.  In the US, I would think this would be a pretty clear case of negligence (Ashley Madison is a Canadian company, so Canadian law would probably apply; it's probably similar, but I don't know Canadian law).  Negligence in the US requires duty, breach, causation, and damages.  Duty means that you're required to do something or to refrain from doing something.  Breach is when you fail to do what you're supposed to do, or do what you're supposed to refrain from doing.  Causation can be tricky, but in essence, the breach caused something bad to happen, and that harm is not so far removed from the facts that the law cuts off liability.  To use an example: if you take your vacuum to be repaired, the repairman fails to fix it, and your carpets stay dirty- that's causation.  If, however, you take it to be repaired, the repairman doesn't fix it, you need the vacuum because it scares away a mountain lion that lives in your backyard, and now that you don't have the vacuum you get attacked by the mountain lion- that's usually too unforeseeable, and the law won't hold the repairman responsible.  Finally, damages are the negative consequences you suffered because the duty owed to you was breached.

In the Ashley Madison case, they had a duty to remove the information because they charged people to remove it.  They took on that duty.  When they failed to remove it as agreed, they breached that duty.  This caused the information to become publicly available.  And people have suffered damage to their marriages because of this leak.  The reason I would handle it under tort law instead of breach of contract is that I believe I'd get higher damages, and I could easily add claims of dignity torts such as false light or invasion of privacy.

Second, the reward they are offering for information about the people who leaked the data is insulting.  $500,000 Canadian is roughly $378,000 US.  But they've already suffered more than that in the publicity nightmare.  When countries around the world are talking about your brand on the evening news in terms of the size of the hack, and the discussion continues for a week or two, you might as well close your doors and file for bankruptcy.  You cannot buy that goodwill back.  And from a user's perspective, $378,000 divided by the 37 million names released means that, as a user, your information is worth a little over a penny to the company.  Thinking in terms of the impact on a person whose spouse has filed for divorce, or who lost a job because they used a work email- how much does that user think their information is worth?  Probably well more than a penny.  Whoever did the valuation that led to a $378,000 offer is nuts.
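The per-user figure checks out with a quick back-of-the-envelope calculation (the exchange rate below is an assumption inferred from the post's own $500,000 to $378,000 conversion):

```python
# Back-of-the-envelope check of the reward-per-record math above.
reward_cad = 500_000
cad_to_usd = 0.756          # assumed 2015-era rate implied by the post
reward_usd = reward_cad * cad_to_usd

records_leaked = 37_000_000
reward_per_record = reward_usd / records_leaked

print(f"${reward_usd:,.0f} total, about ${reward_per_record:.4f} per user")
```

A fraction over one cent per user, which is the comparison the post is drawing.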

Even though I'm looking at it in terms of cybersecurity, this is an example of a blunder that's so basic that the discussion should start past it.  Does a company really need to be told that if they charge money to remove a user's info, they'd better remove it?  Do they really need to be told that their reward is insulting and not high enough to provide a tipping point that will outweigh the damage they did?  Sadly, apparently so.