
Catherine Fitzpatrick

Couldn't be happier that the judge factored in Reddit and Weev's boasting and martyr dances:


While it's clear that Weev generally was considered a jerk (which never sits well with a jury), one should wonder if this law and sentence actually generate the results we want. Will the next hacker even bother reporting on his/her success or will we just end up with (hacked) vulnerable systems for real criminals to exploit? I for one would prefer if someone told me my system (or the system I use as a customer for that matter) was insecure.

Catherine Fitzpatrick

He never told them, you're buying the bullshit. He did not make any "responsible disclosure". He concocted a fiction about that. He never contacted AT&T, which he could have done when he "discovered" this thing that in fact *he worked at* getting when another hacker victory-danced to him that he had succeeded in getting at this system illicitly.

Instead, he grabbed more than 100,000 emails and told Gawker and Gawker published it.

No one has tried your little buddy here for being a jerk, as much as he deserved it.

They've tried him for violating the CFAA and making *unauthorized access* to computer systems, which is more than fine to do because hackers should not make *unauthorized access* to computer systems and cause damage.

Weev *is* a real criminal. That is what a real criminal *is*.


I'm sorry, but I disagree here. He was an ass in the way that he sought the media with his discovery, but a real criminal would have kept the insecurity quiet, taken all the information he could for as long as he could, and sold it quietly to the highest bidder. While I agree that he deserves punishment for the way he sought the media, using the law in the manner it was used during this trial seriously endangers the chances of future insecurities being brought out in the open.

Catherine Fitzpatrick

Your own criminalized thinking is on display here, of course, and that's always the problem with you.

A real decent person who in fact really cared about security and privacy for a company would quietly contact the company. They would write an email or letter to their information security department or network administrator using their website and say, "I happened to find this security exploit, you might want to fix it."

But that is not what happened. That is never what happens. That's because it is never about decency and real concern for that corporation, but a power struggle by anarchists seeking to take over a realm where they can keep themselves in power.

You don't go to the press and publicize the vulnerability as "more effective" than a quiet letter because then what you've done is bullied and harassed the corporation, and worse, its clients -- all those people are made to feel insecure, as hackers in Hackerville, which has the scum of the earth, now have their emails. And while they can change their password, they don't know if they were compromised -- they often are. And they will likely want to keep their same email address, and then they are subject to repeat attacks to try to crack them.

What about those people and those corporations hacked? Why don't they matter? Why are we endlessly coddling the hacker?!

Why does anyone think you "get credit" for not selling information but instead harassing people and taking a star turn with your "leet skillz" in the media?! You don't! It only mitigates your sentence so it is not the maximum one.

It doesn't at all mean that no one will bring insecurities to their proper handlers -- the corporation's IT people. That's because it isn't about proper and decent behavior.

You're implying that it's "the right thing to do" to "bring security flaws in the open to the press". That's ridiculous. It's unnecessary. You can contact the company and they can fix it without having lists of their customers harassed and heckled by the likes of Gawker.

Here's a comment on this idiotic immoral argumentation by "Quietnine" on The Verge:


"This is NOT security through public shaming. The IRC logs show them trying to figure out how much they could sell the information for, and debating whether or not they should expand the scraping for passwords or just use what they had to launch a phishing campaign. Then they go on to debate whether or not they should short-sell AT&T stock just before they release the information, explicitly stating they had no intention to ever tell AT&T, because doing damage to AT&T’s reputation was part of their agenda.

I’m on board with the $%#@ AT&T train, but pretending for one second these guys had a shred of good intention is a level of ignorance only matched by 13 year-olds in AnonOps channels who rant like their cause is for the greater good."

And that's just it. Whenever this debate comes up, there are always unethical, criminalized hackers like yourself trying to make the word-salad arguments, and then ethical geeks who come forward and tell their fellows that they're full of shit. It just doesn't happen enough.


Hackers like me? I wouldn't know where to begin in order to hack a system. :P I do notice however that you're still more focused on insults than substance.

If you'd considered my words for a moment, you would have noticed that I largely agree with you here. Absolutely, the right thing to do would have been to first contact AT&T with this insecurity, give them time to fix it, and perhaps notify their customers, before considering bringing it out into the open. Not doing so makes him an ass, and I would even agree that it should earn him punishment from a court of law.

He however was not convicted for the way that he published his results, though it certainly didn't help with his punishment. He was convicted for the actions needed to find the flaw in the first place, and that sets a dangerous precedent if we also want our systems to be secure. If we punish people for FINDING the flaws rather than the way they PUBLISH them, chances of people reporting them greatly diminish. That will only leave us with vulnerable systems, ready for real criminals to take advantage of. In my eyes, there is a huge difference between e.g. finding and publishing a flaw that exposes people's credit card details, and finding a flaw that exposes people's credit card details, keeping it silent, and using the information to run up huge credit card bills by purchasing goods.

Catherine Fitzpatrick

Nonsense. You can find flaws and notify systems administrators without being a criminal and getting 120,000 emails of famous people before you give it to a cynical rag like Gawker.

I outline this in detail here:

He isn't punished for "finding a flaw"; he's punished for maliciously hacking and obtaining unauthorized access causing damages over $5000.

"Finding a flaw" means stopping before you even write the grab script. That is all.


Once again: the way he went about it shows him for being a jerk, and I even agree he should be punished for it. I wonder though what your definition of "maliciously hacking" is. Is it defined by the hacking as such or by what he did after he found the flaw?

I'm also not sure how he caused damages of over $5000. Certainly not by the hacking as such. The flaw needed to be fixed, regardless of how he reported it, so that damage was caused by the flaw being there in the first place. Reputation damage perhaps? That wasn't caused by the hacking as such, but in part by the way he reported it (and in part by the flaw being there in the first place).

Catherine Fitzpatrick

The way he went about it is not a personality problem or a cultural tic, it's a series of mechanical steps on scripts affecting machines. It's criminal.

"Malicious" means deliberate, planned, with cunning. "Conspiracy" means in a group, planned. These are legal terms.


(2) intentionally accesses a computer without authorization or exceeds authorized access, and thereby obtains—
(A) information contained in a financial record of a financial institution, or of a card issuer as defined in section 1602 (n) [1] of title 15, or contained in a file of a consumer reporting agency on a consumer, as such terms are defined in the Fair Credit Reporting Act (15 U.S.C. 1681 et seq.);
(B) information from any department or agency of the United States; or
(C) information from any protected computer;

AT&T's servers are a protected computer under the law.

Read the law and stay out of ma Internets dog.

Catherine Fitzpatrick

Of course the damage is over $5000. You computer geeks don't come cheap! The clean-up and damage control with customers is enormous.

The flaw isn't "just there" and he "just walks into it".

It takes cunning, it takes stealth, it takes a criminal mind to put it all together, looking for angles, and finally using BRUTE FORCE in coding terms. This is not public access, when you bang and get 197, and then bang some more and get 625 and keep adjusting and consulting and plotting and conniving and then get 120,000.

It's crime.
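The enumeration pattern described in this comment -- incrementing a guessable identifier against an endpoint that requires no credentials -- can be sketched in a few lines. Everything here is simulated (a dict stands in for the server); names like `lookup` are illustrative assumptions, not AT&T's actual interface:

```python
# Hypothetical illustration: why an unauthenticated endpoint keyed on a
# sequential, guessable identifier (like an ICC-ID) is enumerable.
# The "server" is simulated with a dict; no real system is contacted.

def make_fake_server():
    # Pretend accounts exist at every 7th identifier in a known range.
    return {i: f"user{i}@example.com" for i in range(100000, 101000, 7)}

def lookup(server, candidate_id):
    # Stands in for an HTTP request that returns an email for a valid ID.
    return server.get(candidate_id)

def enumerate_ids(server, start, stop):
    # A scraper needs no credentials: it just counts through the ID
    # space and records every hit -- the pattern described above.
    return [email for cid in range(start, stop)
            if (email := lookup(server, cid)) is not None]

server = make_fake_server()
found = enumerate_ids(server, 100000, 101000)
print(len(found))  # prints 143 -- every seeded account recovered by counting
```

The point the sketch makes is structural: whether this counting constitutes "unauthorized access" is exactly the legal question the thread is arguing about, but technically it requires nothing beyond incrementing an integer.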


So if the CFAA defines as illegal any unauthorized access to a protected computer, how will I know as a customer if my data is safe there? The key element to a secure system is that as many people as possible analyze it, and attempt to break it, as any cryptanalyst will tell you. I find far fewer problems with people trying to hack my system (and tell me about weaknesses) than with people using gained access for malicious purposes.

Of course the flaw was "just there"; it's an inherent weakness in the system as designed, just like the WEP used for wireless communication is flawed. Just because there was a time when no-one had detected the flaw yet doesn't mean it wasn't there. It was just badly designed.

There is a huge difference between someone being careless with his/her key (or password for that matter), and a design flaw in a protected system. Encryption standards like RSA for instance are very secure, but do not protect against people being stupid enough to share their private keys. WEP on the other hand was flawed, which means it could be cracked without user stupidity.
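To make the WEP example concrete: one of its core design flaws is a 24-bit initialization vector, so IVs repeat quickly no matter how careful the users are. A rough birthday-bound estimate (a sketch of the arithmetic, not a full WEP attack):

```python
import math

# Birthday-bound estimate of how fast WEP's 24-bit IVs collide.
# A repeated IV reuses an RC4 keystream -- a design flaw, not a
# user error, which is the distinction being drawn in this comment.

IV_SPACE = 2 ** 24  # WEP initialization vectors are only 24 bits

def collision_probability(frames: int) -> float:
    # P(at least one repeated IV after `frames` transmitted frames),
    # using the standard birthday approximation.
    return 1.0 - math.exp(-frames * (frames - 1) / (2 * IV_SPACE))

# After only a few thousand frames -- seconds on a busy network --
# a repeated IV is more likely than not.
print(round(collision_probability(5000), 2))  # prints 0.53
```

So the system leaks regardless of user behavior, which is what separates a design flaw from careless key handling.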

AT&T's system was flawed; it allowed anyone to gain access to this system without any fault of its users. The cost of fixing that flaw was caused by the flaw being there, not by the flaw being found. The damages caused by publishing the gained customer information, those were caused by Weev, and those are the ones he should IMHO be punished for.

Catherine Fitzpatrick

1. If you don't trust AT&T's server, oh, go and get Credo, the lefty "progressive" phone service that is politically correct. Maybe they have better security? Or maybe not... Or don't talk on a mobile phone or ipad if you don't want the Kremlin to know your secrets.

2. Corporations have information security people. They do their jobs, by and large. The same geeks that are in the cohort that spawned Weev work at AT&T. They do what they can and sometimes are hacked by malevolent fucks. Next?

3. No, the open source cult thinks that bugs are shallow to a million eyes, as their sappy slogan has it, but real people in the real world, especially in a virtual world, know that often bugs are found by dedicated people, sometimes working alone. The million eyes can all be prejudiced and stampeded.

4. The idea that the Internet exists for you and your geek criminal pals to break it is one of those flaws built in by Tim Berners-Lee. It will take some years of prosecution to burn this idea out so that the Internet can progress normally.

5. The flaw is not "just there" unless you deliberately ignore the gyrations and contortions needed to exploit it. EXPLOIT it. Go and study the entire sequence of factors I outlined -- where even these savvy freaks got stumped on hack product no. 197 on their way to 120,000. So meh, you lie.

6. It's not a design flaw. It's a convenience for customers not to have log-ins. Anyone using the iphone finds the login a constant brutal annoyance that they'd be happy to shed, in many instances, even if it meant that some evil hacksters saw their cat photos. It's a business logic, and it's justified. What's not justified is asshole behavior.

7. It didn't allow "anyone" to access its servers. Some conniving fucks who worked and worked at it and hacked and slashed tested it, and kept going, and slurped 120,000 emails. Different.

8. The cost was caused by the misuse of the server and unauthorized access, as in the statute. Flaws don't cost money; assholes who exploit them viciously cost money.


I'm sorry, Catherine, but here you're just showing blatant ignorance of security and cryptology. Obviously it doesn't matter if millions of ignorant people look at a system, but as any cryptanalyst will tell you: anyone can design a system that they themselves cannot break.

Security can be compromised by two main factors:
- flaws in the system
- flaws in the users of the system
There is very little you can do against user stupidity other than making sure that at worst they can get their own data compromised. A flaw in the system however exposes the data of multiple or even all users.

Of course there's sometimes a trade-off between convenience and security, but if you decide that trade-off in favour of convenience you are inherently making your system insecure, and putting your users' data at risk. If the users are aware of this, and accept that, that's fine, but if you keep that insecurity secret then you'll end up with very angry users.

In AT&T's case, the flaw was a flaw in the system. Granted, it was detected and exploited by an ass who deserves punishment for the exploitation, but that flaw would still have existed if the ass had not. Who's to tell if he was the first who broke the security? Perhaps the site was broken years ago, and perhaps even by criminals who just exploited the information they gained while keeping the weakness secret. We have no way of knowing.

Catherine Fitzpatrick

Oh, I call bullshit. I don't have to be a cryptographer to understand this, just like I don't have to be a biologist to understand photosynthesis.

The hack involved here isn't rocket science, isn't the cryptography of the CIA's deepest secrets but...a numbers game with the ipad and AT&T. Anybody can step through it, even without understanding everything about it or having the first clue how to reproduce it -- and I just did that in a long post where I studied it and outlined its intellectual facets to consider regarding the law.

And along the way, I quoted the geeks with ethics -- there are some -- who explained why it was wrong -- and criminal -- there are some.

There is a deep flaw in the *hackers* here - not the "users of the system," because *hackers* -- unethical thugs -- deliberately screwed around until they could fuck over AT&T.

If you pick a lock with a lock pick, you achieve the same thing. Just because the lock accepted your lock pick through its hole doesn't mean that somehow it is flawed and you are not. Only robots think that way -- and this is all instructive for us to see.

Unlike unethical hackers, AT&T's geeks were thinking of CUSTOMER REQUIREMENTS and that's why they used the business logic of making it convenient. They didn't dope out the malevolent "brute-force" attack as they might have if they were conniving fucks from 4chan who had all day to exercise their malevolence. They were busy meeting CUSTOMER REQUIREMENTS. So they fixed their exploit. Their customers had to re-do their passwords. They suffered losses. It could have been worse. Fortunately it wasn't. But it needs to be DETERRED BY LAW and prosecuted UNDER THE LAW -- that's what law is about.

Meanwhile, I'll wait for my 100-year-old laptop battery, truly I will.

AT&T has geeks in it that are the same as any geeks. Systems that develop flaws like this arise because there are trade-offs in systems on not only customer convenience, but cost and time. They were rushing to give their customers the satisfaction of the new early adopter experience of the ipad, no doubt. They are not to blame for this; Weev and Spitler are. Just like the rape victim is not to blame for her rape; the football team members are. You can say that young girls shouldn't get drunk at parties all you want, and they shouldn't, and their parents should watch them, but the law says that a rape must be prosecuted and she is not to blame for the rape. The law is about civilization, not your penis.

Flaws exist and get discovered by the AT&T's own geeks -- they all have white hat/black hat type of teams. They also get detected by good IT people who alert them privately.

The idea that we don't know about an exploit that might have happened two years ago doesn't wash, as no other customers complained about unauthorized access of their emails.

These attempts get logged and the customer gets notified and security questions go into effect, etc. There are redundant backups that make something like a previous hack unlikely to have happened, as there were no reports from customers.

The idea that for the sake of exposing flaws, we must always have assholes performing grandiose fucktard acts like this "because they can" is ludicrous.

The hacker crackdown now is finishing the job that started in the 1990s but was stopped by Mitch Kapor. May it proceed without him stopping it this time.


No, you don't have to be a cryptographer to understand the issues, but you should at least be willing to acknowledge the basic fundamentals underlying the theory, and you seem to be unwilling to do even that. Of course AT&T employs some very smart people, but people are generally blind to their own mistakes. WEP was widely used and relied upon for many years before cryptanalysts discovered the inherent flaws in the system.

So what if there was no Weev? Would the flaw not have existed? Do we have any way of knowing if the system wasn't compromised way before he discovered it?

Of course Weev deserves punishment for the way he exploited the system. Searching for, and finding the flaws in the system however should IMHO be praised, and not punished. Otherwise we'll just end up with insecure systems. So no, we don't need "assholes performing grandiose fucktard acts like this", but we need to outlaw the exploitation, and not the flaw finding. CFAA does the exact opposite.


this is why i really dont want programmers making robot overlords.....

again, more crap from a generation grown up on gaming and rand institute game theory crap.

sometimes im amazed Kissinger didnt get us all blown up in the 70s.. i guess it was a good thing he didnt really grow up with comics/tv.



Catherine Fitzpatrick

Again, bullshit. The fundamentals are not rocket science and I've outlined them here. It doesn't matter if some system was flawed for years and people didn't find it -- that's human nature. Weev himself is blind to his own flaws and that's why he's in jail now.

A flaw may exist in the forest without anyone to hear it, but the reality is that what revealed this flaw was *malevolence* and that's what hacking *is*. Anyone might have stumbled on it, but a good-faith person would simply tell the company. They might indeed be praised if they quietly told the company. Usually in these cases the company itself winds up telling its customers anyway but on their terms, i.e. they don't have to invite further vandalism by describing the details of the hack. People test systems all the time with "white hat" hackers and the notion that you can't have security without criminals to test it constantly is an absurd reductionism and exoneration of crime. The CFAA does not outlaw "exploration" and "flaw finding" -- there has never been a person prosecuted under it for finding an exploit and reporting it quietly, ever. Because that is not the conduct specified in the statute. It is not "unauthorized access" if you *find* the flaw but don't proceed to exploit it for 120,000 addresses. 10 would do.

There is absolutely no proof anywhere that Weev's self-serving claim has any validity - that if he had told AT&T quietly without the press they would have had him arrested. Your persistent effort here to try to pretend there's a distinction between "flaw-finding" and "exploits" that somehow was made in his case is preposterous. It's why we need to regulate the criminality of geeks for the persistent admission of machine logic that is antithetical to humans.


No, you've outlined your view on the fundamentals of the case. You completely fail to acknowledge the basic fundamentals of security theory. The system was flawed, and we have no idea how many people managed to compromise it over the past years, and just kept it silent while they exploited it.

What revealed the flaw was not malevolence; the way the information, obtained after the hack, was published was malevolent. Certainly, the proper response to finding the flaw would have been to quietly tell the company, but the CFAA doesn't distinguish between doing that and publishing it the way Weev did, since it makes the hacking as such illegal rather than the malevolent acts following the hack. As you yourself stated:
" (C) information from any protected computer; "

As I stated before, what should be illegal are the malevolent acts following the hack rather than the hack itself. That would have still caused Weev to have been convicted, but it would allow honest hackers to continue to search for, find, and report flaws in systems without being afraid of a lawsuit.

Catherine Fitzpatrick

Thanks for revealing just how corrupted your tiny mind has become by adapting yourself to a machine.

Just because a machine has no perception of life beyond its servers doesn't mean you have to artificially break up the nexus of human thought and action and the flow of actions into byte-sized chunks that separate "the hack itself" from "the malevolent acts that follow".

The intent of the hack is malevolent -- it is cunning, deliberate, large, aggressive, to "make a point" -- and a point *about power*. "Because I can". The law can and should prosecute challenges of power to private property unless it wants to sanction communist revolution because some machine-thinker now thinks techcommunism is the wave of the future.

There is a long history of jurisprudence in the area of private property and privacy, and all of it is being invoked on the forums where there are lawyers who challenge narrow-minded geeks, I haven't had time to report them all.

The CFAA doesn't distinguish between "doing" and "publishing" because "doing" is conduct that itself falls under the law.

The logic you display here is robot logic: "I can't tell the difference between a good and bad person accessing my server so I bless them all as neutral".

That's silly because intent that results in action matters in law and isn't beyond law the way it is for a server.

The CFAA has never been used to punish a quiet report of a hack. Ever. In its life. Because by doing so, *one indicates that one is not malevolent*. Hello. This should be obvious. It is to those not infected by machine logic.

The recent video with Lanier and the Weev case have now given me an epiphany of how the killer-robot phase begins. It begins now. It begins with this. The law has to fight back.

Rex Cronon

@ PieterHulshoff:
"anyone can design a system that they themselves cannot break" really?

Rex Cronon

@ PieterHulshoff:
"using the law in the manner it was used during this trial seriously endangers the chances of future insecurities being brought out in the open."
imo. it says if you find a backdoor/exploit/error you shouldn't use it for your benefit at the expense of others and expect not to pay for what you did. although he didn't sell the data he got, he used it to increase his own popularity/fame. it wasn't an altruistic move on his part. all those times that his script tried random values resulted in delays/errors for regular users.

"as many people as possible analyze it, and attempt to break it"
-you have a problem with this. if people are encouraged to do this, at one point they start to interfere with regular users ability to use the system, and even worse it can break it down. that sounds so much like sabotage.

"security theory"
-which security theory are you talking about? do you have a link to this? you realize that is just a theory, not a fact:)

there is no "perfect" security system. that doesn't mean that is ok to apply pressure until it breaks.

Rex Cronon

@ PieterHulshoff:
I want you to imagine the following scenario:
It's after midnight. You are sleeping in your apartment. Suddenly you wake up. You don't know what woke you up so you hold your breath for a few seconds and try to use all your senses to get an idea of what is going on. You realize there are strange sounds coming from your front door. You get up, grab something to defend yourself with and go to the door. You look through the peep hole and see a person doing something to your lock. You shout: "Who are you? What do you want?" You get an answer: "Don't worry. I am just testing for any flaws in your lock."
At this point usually in real life you would either call the cops or/and start a fight with whoever is at the door. Let's call this nice helping person X.
Let's assume that you believe the answer you got, but now the problems start:
-you will have a hard time sleeping with all the noises coming from the door.
-each time somebody tries to enter/exit the apartment they will have a hard time accommodating the person who is trying to find flaws in the lock.
-all that lock testing can actually break the lock.
-if a flaw is found you will have to spend time and money to get a new one.
-if you are not present when the flaw is found you won't know if X entered the apartment or what X did if X entered the apartment.
-for all you know X can leave the door open or a big note about it on the door.
-X can take pictures of your house and post them online.
-X can find private pictures/movies of you or your family and post them online.
There are so many things that X can do once a flaw in your lock is found, and is so hard to believe that all the X's out there are good people:(
Therefore if you want to protect yourself and those dear to you, you can't allow unauthorized testing of your lock(s).



Yes, it's known as Schneier's law, but as he writes: the idea is much older. Back in 1864, Charles Babbage wrote:

" One of the most singular characteristics of the art of deciphering is the strong conviction possessed by every person, even moderately acquainted with it, that he is able to construct a cipher which nobody else can decipher."

I agree that the law SHOULD say that you shouldn't use a backdoor/exploit/error for your benefit at the expense of others, but that's NOT what the law says. It doesn't differentiate between those who test security systems for flaws, and those who want to abuse access once they have gained it.

There already are laws dealing with people interfering with the regular use of a system; we don't need the CFAA to deal with that scenario.

With security theory I mean the (often mathematical) science behind security and cryptography. Of course it's "theory", though like all science it's considered fact as long as it's proven useful and cannot be disproven (despite our best efforts).

Whether a security system is perfect depends on your definition of perfect. There are security systems that as far as we know (after decades of research) cannot be brute-forced using modern day equipment within the lifetime of this Earth.
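The "lifetime of this Earth" claim is easy to sanity-check with arithmetic. A rough estimate for exhausting a 128-bit keyspace, assuming an implausibly generous guess rate (the rate here is a made-up assumption, and any realistic rate only strengthens the conclusion):

```python
# Back-of-the-envelope estimate: time to brute-force a 128-bit key.
# Assumes an attacker testing 10**12 keys per second, far beyond
# any real single machine; the conclusion only gets stronger with
# realistic rates.

keyspace = 2 ** 128                  # possible 128-bit keys
guesses_per_second = 10 ** 12        # generous hypothetical rate
seconds_per_year = 365.25 * 24 * 3600

years = keyspace / guesses_per_second / seconds_per_year
print(f"{years:.2e} years")          # on the order of 10**19 years

# For comparison, the Earth is roughly 4.5e9 years old; even needing
# only half the keyspace on average leaves ~10**18 years.
```

This is the sense in which a well-designed cipher can be "perfect" for practical purposes even though a brute-force attack exists in principle.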

Real-life analogies rarely match the digital world (you wouldn't download a car). No, I would not appreciate people testing my house locks, though mostly because it would cause damage (hacking a computer system generally does not, though one can cause damage after access to the system has been gained) and wake me up. On the other hand, if I'm to trust my money to a bank I want to make sure that as many knowledgeable people as possible test its security system for flaws.


Honestly Catherine, are you unable to disagree with anyone without resorting to insults? As Andrew already told you:
Sure, this is your blog, and you can use any words you want to (within the boundaries of the law), but it certainly doesn't give credence to your words.

It's not so much my adaption to machine thinking; the law normally already does this. Say a man decides to kill his ex-girlfriend, then his actions may involve the following steps:
1. buying a weapon
2. driving to his ex-girlfriend's house
3. breaking into her house
4. killing her
The law only penalizes steps 3 and 4 (though here in the Netherlands, step 1 would often be illegal as well). His driving, though intended to arrive at her house to kill her, doesn't suddenly become illegal (unless he was breaking traffic laws).

In Weev's case, as far as I know, the steps are as follows:
1. Hacking the system
2. Taking information from the system
3. Publishing that information
While I agree with you that the amount of information taken in step 2 and the way he published the information in step 3 should be penalized, I see no reason why step 1, which in itself caused no damage at all, should be illegal.

I KNOW the CFAA defines the "doing" as conduct that falls under the law, but I argue that it should not be. Weev should be punished, but that could easily be done under steps 2 and 3, without making our systems less secure by making step 1 illegal. The intent is already shown in steps 2 and 3, so it's irrelevant to step 1, just like the intent of killing is irrelevant to the driving step.

The CFAA is a relatively new law; give it some time to be abused as such. As Cardinal Richelieu said: "Give me six lines written by the most honorable of men, and I will find an excuse in them to hang him."

Catherine Fitzpatrick

I'm happy to insult you as many times as I need to Pieter because you're merely a stubborn ass who is trying to justify theft and harassment and invasion of privacy, over and over and over again, with a zillion excuses, dodges, feints, bluffs, and outright lies -- *merely because it's on the Internet*.

No one would spend hours fighting on a blog as to whether buying a gun should somehow be excised out of a murder story and endlessly celebrated as "innovation" and "gunplay" and even the path to a 100-year-old battery. It just wouldn't be necessary. They would understand the nexus of the criminal act, they would understand that a criminal act had indeed taken place, and they wouldn't have to slice and dice it to death to remove some part of it to exonerate it and therefore then try to enlarge the decriminalized space.

That's all that's happening with you and other geeks and apologists -- you're trying to argue backward for decriminalization at every step of the way by enlarging the space for "neutrality" over something like a gun.

All you have to do to break out of that Foucault freakery is to note that if someone were to buy that gun, and drive that car, for that murderer of that girlfriend, he'd be an accomplice. It wouldn't matter if technically the gun was purchased legally.

No one seeks to carve out disparate segments of crimes like this in real life. So why should they do this on the Internet?! The Internet is not fucking special. Not at all. In fact, if anything, it is more criminal and the US government is absolutely right to declare cyberwar as the greatest threat of our time.

Step one in Weev's case is most certainly part of the crime and criminal. It is where the malice is located. You could notice that it would be easy to get anyone's email even on the third go -- and write the company about it. But to deliberately try to hack and slash and bang at it, adjusting the script for various factors, trying this, trying that -- that's crime. That's immorality as well.

Who cares what you argue the CFAA should not be?! This isn't even your country. And it's perfectly fine the way it is because it doesn't matter that there was no Internet in 1984. Murder statutes don't change as such because 20 or 30 years go by, so what is so special about the Internet?! Privacy, theft, harassment, torts -- these are all basic concepts in law with hundreds of years of jurisprudence, and there is nothing wrong with applying them here.

It's a felony to tamper with the US mails. You can't break into those boxes out in public with chains on them that are way stations for mail carriers' sacks. You can't go into your neighbour's mailbox on the road and take his mail. These are felonies. The government obviously had to develop these stiff statutes and felonies over the years, starting with the problems they faced originally when the Weev style assholes broke into carriages and stole mailbags in the Wild West.

If it is a crime for me to go rifle or open or read my neighbour's mail or break open the public boxes on the street, it's the same on the Internet. The problem is in the people who maliciously wanted to accumulate 120,000 emails of famous people and their gadget numbers to expose them to harm, discussing idly whether to sell them or spam them with gadget ads or not, not the people who sold the gadgets to them and are trying to serve them, even if poorly. The former can't be fixed -- it's a disease of the soul and will be fixed only by the criminal justice system as a deterrent. The latter is fixable with effort and good will -- and by the way *was* fixed.

The CFAA isn't new, but old, and people even want to do away with it because of its elderly status. But I think we have enough jurisprudence and case law building up about it, especially lately, that we don't need all this nonsense about reforming it the EFF way, which means to excuse criminality and move the slider to help Silicon Valley's business and nobody else's.


I'm glad I can provide you with so much happiness then. :) If you wish to discredit yourself by insulting anyone who even slightly disagrees with you, that's your prerogative. I know it's easy to just box anyone into a large group; it allows you to ignore any valid statements made by that person simply by claiming that people like that always lie.

Yes, actions need to be seen individually as well as a whole, but that's no reason to make every action illegal and let the courts figure out the intent. We don't make driving illegal just because someone may drive to the location of a crime. We make the crime illegal, and leave the driving to the traffic laws. In your example: being an accomplice is the crime; you won't find it in any of the traffic laws.

Step 1 should not be illegal, because if he had just taken enough information to prove the system was insecure, and had quietly notified AT&T, no crime would have been committed. It shouldn't be up to AT&T or the feds to decide whether someone should be prosecuted for simply trying to find flaws in their system. Weev, however, SHOULD be punished for taking a lot of information from the system and bringing it to the public the way he did.

Catherine Fitzpatrick

Pieter, you're simply a dick, and you keep establishing it. I'll insult you as many times as I need to, not because you "disagree with me" but because you persist in bad-faith posts here of nonsense in pursuit of criminalized objectives and decriminalization of what is criminal. It's a moral problem. Not a debating problem. That's why you are to be condemned and insulted. That's how you deal with problems that are essentially moral: you condemn them in the strongest possible terms.

As for your fake tap dance here and "debate," go back to Rex's story about "testing the lock". There is no use case that is justified for "testing your neighbour's lock" while he sleeps. No one would dream of this as being rational or normal or needed to discover the secret of building 100-year-life batteries. I don't go loitering around my neighbour's mailbox, glimpsing his mail, toying with the door to his box, seeing if he left his key in it, etc. etc. That would be sheer assholery and might rightly get me a felony charge for mail tampering ultimately.

There isn't any use case where somebody "needs" to tap a website a gazillion times with BRUTE FORCE to try to "see what it does". Like my piñata example. Tap it until it breaks. Then pick up 120,000 candies. Then keep them.

Doing that is not only wrong, it's criminal. There is no good use case for it whatsoever. It is not serving those customers -- indeed it comes between those who serve those customers and those customers. There is no "app" or service of any merit or use that comes from this scrape.

In fact, possessing such a list and having poor moral quality means the chances are high that while drunk or high you will give them away or sell them, or be hacked yourself by the IRC channel gang.

So it's all utter, total bullshit, the end.

The indictment and the trial and the sentence do not break the crime into these strange chunks where one piece of the sequence of actions is carved out to remain to be exonerated.

The reason that hacking is a crime at step one is that it is not necessary, not authorized and not legitimate. No one needs this man's "services". A white hat company can be hired to do the same thing legitimately. The in-house geeks can do this legitimately. Then it isn't unauthorized.

One of the weakest and stupidest arguments the geeks make on all the forums is that this hacking is some kind of "service". No one requires your services, you fucking lunatics. They have their own geeks and white-hat firms that can do this themselves without you, and do it as AUTHORIZED. So fuck off. You have not saved the Internet. You have made it weaker. Go away.

Rex Cronon

"Schneier's Law" is not a law. It is not a fact. It is just a theory that Bruce Schneier has on this subject, based on his life experiences. There is no proof, mathematical or otherwise. Let's assume for a second that it is true. In theory it is enough to find one example that doesn't fit this "law" in order to disprove it. I am that example. I don't believe that just because I can't find a flaw in an encryption system that I designed, that means my encryption is perfect. I also believe that there are others like me.
IMO, the scenario I gave you is a good example that illustrates the dangers of hacking. Although right now you might not be able to download and print a car, that day might be closer than you think. It is already possible to download and print a functional gun if you have the right hardware :) If enough people try to find flaws in a bank's software, that will certainly lead to lag (delays). Bandwidth and processing are used each time data is sent/retrieved to test for possible flaws. Sending erroneous data to the bank's site can also lead to storage/retrieval errors, and to data corruption.
I don't really think you want black hats to "test" your bank's site. You do realize that black hats wouldn't go and tell your bank that it has a back door. At least, not until they took advantage of it. If everybody is doing it, it would be very hard to determine who the good guys are and who the bad ones are.



Schneier's Law states that you can design a system that you yourself cannot break; it doesn't state that you'll be foolish enough to believe that it can't be broken. Enough people have learned from the past that the Babbage theorem doesn't hold true anymore; though not every person feels this way.

As for your bank example, and as said before: we already have laws dealing with hacking attempts that interfere with the normal operation of a server. The CFAA isn't needed for that purpose.

In any case: the difference between the good guys and the bad guys isn't defined by their attempt to find flaws in the system; it's defined by what they do after they've found them. It's those actions that deserve penalties under the law, not the hacking as such. As you could read above: I said I feel Weev deserved punishment; I just feel he's being punished for the wrong act, and that's setting a bad precedent.

Catherine Fitzpatrick

No. The difference between good and bad is that the bad guy hacks the server -- he makes an unauthorized access to the server. He is not coming to the server as an iPad customer, but using stealth, cunning, deceit, exploits, and scripts to access other people's private information and damage the corporation.

Normal people in the real world get this. Robots don't. Fight the robots!


If that's true Catherine, please show me a lengthy discussion you participated in, where you disagreed with your opponent, and did not use insults. I've searched your name and alias quite a bit, but I haven't been able to find a single one. It's clear to me that you are incapable of disagreeing with someone without resorting to insults. That's fine by me; I can't be bothered by your insults, but it greatly discredits your words.

Good security systems are brute-force proof; AT&T's flaw was not a matter of brute-force weakness. Once the flaw was apparent, hardly any effort was needed by anyone to break in again. If you want to discuss a subject, at least get yourself acquainted with the terminology, because otherwise it makes you look like you have no idea what you're talking about.

So, if his services aren't needed, and AT&T has hired such brilliant white hats, why didn't they find the flaw? AT&T's customers' information was at risk all that time (and perhaps that information was stolen hundreds of times before Weev figured out how to do it), and you think they don't care? Of course they're pissed that Weev leaked their information, and he deserves punishment for that, but I bet they're just as pissed at AT&T for having such weak security in the first place.


That's odd Catherine, in your own words:

"He did not make any "responsible disclosure". He concocted a fiction about that. He never contacted AT&T, which he could have done when he "discovered" this thing that in fact *he worked at* getting when another hacker victory-danced to him that he had succeeded in getting at this system illicitly."


"A real decent person who in fact really cared about security and privacy for a company would quietly contact the company. They would write an email or letter to their information security department or network administrator using their website and say, "I happened to find this security exploit, you might want to fix it.""

yet now you say he should have been convicted and punished even if he HAD been a real decent person and made a responsible disclosure? Please, at least TRY to be consistent, will you?

Catherine Fitzpatrick

Pieter, as we know, you're just here to troll and harass in pursuit of your copyleftist and open source cultist agenda.

I'm free to insult people on my blog if I like, and they're free not to read it.

I'm also aware that in the "ad hominem attack" game played by many on the Internet, even legitimate criticism that isn't about the man but the ideas will be construed as ad hominem by thin-skinned geeks -- so I don't play that game, it's stupid. Ad hominem attacks are permitted by the First Amendment and also are not prohibited by blog TOS. Next.

If I have a moral condemnation to make of someone, I make it in strong terms. Most people think morality should not even be a dimension of debates about technology and the Internet, and certainly never about them and their own behaviour. I disagree. Next.

It doesn't matter if AT&T didn't have the most perfect defense; the logic that Weev gets to bang on it because he can and because he's a dick isn't a logic I accept because it's immoral. He can make a private notice to the company.

It doesn't matter if other white hats don't find the flaw -- the purpose of these hacks and their dramatic "propaganda of the deed" isn't to advance security but to advance hacker power and egos. They are not sincere and not moral. Many companies have started using two layers of authentication to defeat this sort of thing, or multiple security questions over the slightest hitch. But hackers will find ways around this, too, because nobody has designed a foolproof system to solve the problem of the air gap or the analogue hole or the "over the wire" problem that we all know about and don't need new lessons about. Next.

The law does not punish companies for lax security. Sony, AT&T, PayPal -- they've all been hacked. Their customers have all been harmed by those hacks. I'm not aware of a single civil lawsuit by a single one of these types of companies' customers over a hack. That's because the average normal person doesn't blame the company for them, they blame hackers. And rightly so. The attempt to shift accountability to the company for its victimhood is rape logic, and hacker machine babble to try to exonerate themselves.

I never said he should have been convicted if he had discovered this hack organically -- i.e. accidentally or in the course of normal use -- and told the company.

But that's not what happened and that's why your exquisite assholery here is to be called out as many times as anyone has patience for: he deliberately put things together, he and his friend connived and planned and conspired to hammer away at this site; they deliberately kept trying to get more; at 197 they didn't stop and say "let's call the company"; they didn't stop at 625; they adjusted, figured out how to be more cunning with their scripts and got 120,000 names. So their hacking itself is the problem, and cannot be isolated out as some "good" act of people going around hammering on sites. Every single bit of their act, from the first moment that Spitler said, "Oh, there's that number in the Flickr, let's see what I can do with that," is shot through with immorality.

Example: if I were to go around my neighbourhood opening my neighbours' mailboxes at random, testing to see if any of them had been left open, or seeing if the lock on them was broken, I'd be a complete asshole and rightly exposing myself to arrest for mail tampering. No one needs me to do this.

All you're trying to do with your persistent gambit here is to exonerate the first step of hacking as some lovely "white hat act" where people "get" to go around constantly probing, testing, poking and harassing companies. Well, this law and this case are here to tell you: no you don't. And that's a good thing. Because as Rex has already explained, if every asshole coder thinks he is entitled to endlessly barrage sites with scripts to test them, it uses up resources and creates work for those sites. It's not normal behaviour. The greed of data miners is obvious, and we shouldn't give an inch on this as they pretend that this is "innovation".


Yes, this is the law (well, your law anyway), and I'll complain about it as loudly as I can in order to get it changed, and to make sure it never makes it to the Netherlands. It's a stupid law for punishing the wrong acts, and making our society and systems less secure in the process. You ignore or deny some of the most fundamental rules in security design, and think you can somehow make up for it by increasing the penalties for breaking secured systems. You are wrong.

Yes, it does matter that hired white hats didn't find the flaw, because you claim his actions weren't needed. The customers' data had been at risk ever since this system went online, may have been compromised hundreds of times before Weev made it public, and would still be at risk if the flaw hadn't been exposed. No, customers generally don't sue the company, though that's mainly because of the cost involved, and the airtight click-through agreements these companies use. Your chances in court would be minimal at best.

Indeed, the law doesn't punish companies for having lax security, but I argue that perhaps it should when said company risks exposing important customer data that way. If my bank risks my money by having lousy security, I want to know about it, and want them held accountable if something goes wrong.

I'm not arguing that his actions in taking 120,000 names and exposing them shouldn't be punished; that's a straw man argument (something you unfortunately resort to way too often). My simple question to you is (and was):

If he had hacked the system, not by accident but deliberately, and quietly told AT&T about the flaw in the system, do you feel he should have been punished for it?

The comments to this entry are closed.

