Fog Creek Software
Discussion Board




Boy shouts "missed a spot", gets standing ovation

And the clueless windbags start beating their collective national chest: http://www.reuters.com/newsArticle.jhtml?type=technologyNews&storyID=5089870&pageNumber=0

"Despite the damage to millions of computers, one leading German newspaper said in a page one commentary Monday there was a strange sense of national pride that a German student had outwitted the world's best computer experts.
"Many of the (German) journalists who traveled to the province could not help but harbor clandestine admiration for the effectiveness of the worm," Die Welt daily wrote."

Just me (Sir to you)
Monday, May 10, 2004

So, you're assuming that whoever wrote this really knew what those journalists were feeling. Have you considered you may be giving this guy a little too much credit?

<sarcasm>
As for the news, from what I was able to digest in the threads dedicated to the subject, we've already reached the conclusion that the guilty party in this whole situation is the user, for not patching fast enough, so they might as well release the kid.
</sarcasm>

Paulo Caetano
Monday, May 10, 2004

"there was a strange sense of national pride that a German student had outwitted the world's best computer experts. "

Which "experts" would these be? Microsoft developers?  And how did he outwit anyone?

He wrote a program which took advantage of a widely known and published vulnerability. Not exactly real tough.

Any decent developer, with enough time and the desire, could do the same thing. If you wanna impress someone, then be the person that found the vulnerability to start with. But just taking advantage of it? That's script-kiddy stuff.

So the Germans are proud that their nation produced a mischievous little miscreant with enough mental ability to cobble together something which wreaked destruction upon the world? Good for them. At least this time they are growing the monsters themselves instead of letting Austria do it for them.

Whatever
Monday, May 10, 2004

Personally I'm surprised that eEye hasn't taken more flak - they discovered the fault over six months ago and sat on it while Microsoft developed a fix (I hope nefarious groups and individuals would be so kind), and then released the details after Microsoft got the patch out. Surely the messenger must be the one to blame.

Dennis Forbes
Monday, May 10, 2004

Dennis writes, perhaps sarcastically, perhaps not, "Surely the messenger must be the one to blame."

If I announce to the world that you've left on vacation with your front door open (or at least unlocked) and someone uses that information to rob you, who's to blame?  You, for not closing (locking) your door; me for letting everyone know about your vulnerability; or the robber who took advantage of the vulnerability?

Clearly the robber has guilt, but how much does/should the "messenger" bear?  If I knew the robber, that might be considered conspiracy.  But if I don't, what laws apply that suggest my guilt?

Personally, I think the "messenger" should bear some responsibility.  But the question of "how much" is where I get stuck.

bpd
Monday, May 10, 2004

"Clearly the robber has guilt, but how much does/should the "messenger" bear?"

I sound like a broken record, but let me again state that analogies are a broken way of making a point. In this case this common analogy is especially weak, because eEye isn't exposing that "Bob Newherst at 22 Sussex Lane forgot to set his SQL password - go get him!"; they're saying "this vendor, which millions of you trust your security to and which runs the infrastructure of many banks and government agencies, has another exploit that we easily found in our labs. We are one tiny little security company with a couple of guys combining a bit of creativity with a debugger." eEye sat on the vulnerability for over 6 months, patiently awaiting a patch by Microsoft.

It boggles my mind that people really are deluded into believing that only security companies are capable of finding holes, and that teenage worm writers (who gain nothing out of the transaction) are the only exploiters after the fact. To descend to your analogy, this is more like a nationwide electric door company selling hundreds of millions of electronic doors, including the doors used for banks, government buildings, women's hospices, etc. Joe's Door Replacement, a tiny installation company, discovers that, what do you know, the locks disengage between 2:02AM and 2:04AM if you lean on them a bit (something clearly discoverable in security testing). They inform the company (all the while, who knows how many other people are well aware of the hole and are quietly emptying banks and attacking women...all "unexplained"), but the company is so busy making the new electro door that such matters take second priority. Finally, half a year later, they release a fix...oh, but don't be surprised if your door also opens between 2:12 and 2:14, and 3:57 and 4:01.

eEye was a little hasty years back when they released the IIS exploit pretty much in concert with announcing the vulnerability, but since then they have been quite well reasoned. Let me reiterate that it is unbelievably delusional thinking if anyone believes that they're the only ones finding these open doors.

Dennis Forbes
Monday, May 10, 2004

Thank you Dennis Forbes aka "Lone Voice of Security Reason on JOS"

Others will jump in with how it is the user's fault, not Microsoft's. Interestingly enough, Microsoft tends to adopt this attitude as well, as evidenced below.

http://tinyurl.com/2sr9j

gad
Monday, May 10, 2004

It is Microsoft's fault that their software isn't better written. I understand why they didn't write it better ($$$), but the bottom line is that their past decisions led to the current situation. The community bears a small degree of fault for encouraging the behavior, but ultimately they dropped the ball.

That being said, as a user, I have only ever been bitten once by a Microsoft security flaw -- Code Red back in 1999/2000. Left my server running over winter break, came back after the break, and someone had started hosting an ftp site on my box as a result. Thankfully, he only found my OS partition, which only had 400M free, so it was a worthless endeavour on the hacker's part.

Since then, I got wise to the fact that MS products have security flaws. I am careful about what websites I visit, what emails I open, and I __always__ use a hardware firewall. Never been bitten again.

In the words of Big Tom on survivor last night, "I been stupid once but I ain't be stupid again" or something that I could barely understand.

Elephant
Monday, May 10, 2004

I thought he said "Stupid is as stupid does"

MilesArcher
Monday, May 10, 2004

Like I said, who knows, no one could understand him.

Elephant
Monday, May 10, 2004

Very clearly, Microsoft is guilty of selling shoddy, poorly written software that is rife with security holes.  But others must share the blame as well.

In spite of all the publicity that has surrounded the various virus/worms, and all the gazillions of dollars in damages done to business:

Thousands of system administrators have not applied patches for vulnerabilities first identified many months ago.

Millions of home users continue to launch executable attachments, despite being repeatedly told not to.

Millions of home users continue to operate without anti-virus or firewall software, despite being repeatedly told that they should.

I have 3 computers at home, hooked up to a cable modem for the past 5 years.  All running Windows.  Total number of virus/worm infestations:  ZERO.

I dislike Microsoft as much as the next guy,  but saying that it's all their fault and ignoring the irresponsible behaviour of millions of clueless moron users is just simply wrong.

My Cousin Vinniwashtharam
Monday, May 10, 2004

"Since then, I got wise to the fact that MS products have security flaws."

Totally in agreement that users should always practice safe computing, which includes running the absolute minimum number of services possible, only running trusted code, always being behind a firewall, keeping patches up to date, etc.

Having said that, a corporation generally has to reveal its systems to its employees and contractors, and not all of them have saintly motives (and rolling out patches in a corporation isn't as easy as it is on a home PC. As a sidenote, a recent IE patch presumed that one of my PCs, a Win2K PC, had the Windows directory at C:\Windows, and set a registry entry for urlmon.dll accordingly. My directory is actually C:\Winnt. This fubared IE in a variety of ways, and is another example of why the oft-repeated "patch instantly" advice can be detrimental). Home users have never been told that a firewall was mandatory, and on top of that Microsoft has a long history of "turn it all on and let the hackers sort em out". A quick example is Microsoft's habit of installing every ISAPI handler in IIS whether you want it or not. Several of the large exploits (such as the venerable HTX process exploit and the "internet printer" service exploit and the RDO exploit and the ...) hit webservers where these default installs included the handlers even though they were never actually used.
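The C:\Winnt sidenote illustrates a general rule: installers and patches shouldn't hard-code the Windows directory, because NT and 2000 installs commonly live at C:\Winnt. A minimal sketch in Python, purely illustrative -- the function name and fallback order are my own, not anything the actual IE patch did:

```python
import os

def windows_dir(env=None):
    """Resolve the Windows directory from the environment instead of
    hard-coding C:\\Windows -- on NT/2000 installs it is often C:\\Winnt."""
    if env is None:
        env = dict(os.environ)
    # SystemRoot is set by Windows itself; windir is the older equivalent.
    # The literal fallback is a last resort for illustration only.
    return env.get("SystemRoot") or env.get("windir") or r"C:\Windows"

print(windows_dir({"SystemRoot": r"C:\Winnt"}))  # C:\Winnt
```

Had the patch resolved the directory this way (or read it from the registry) rather than assuming a fixed path, the urlmon.dll registry entry would have pointed at the right place.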

Having said that, I have developed a sense of unease that is unfortunate given that, from a pure technology perspective, I am a huge fan of most Microsoft products. Just yesterday I hit send/receive in Outlook 2003 and was a little disturbed when, for the first time ever, it GPFd out -- was it processing headers that took advantage of an exploit? Just a few days earlier the same thing happened on my wife's PC (Outlook XP). I now have a general sense of unease about the machine, and have analyzed it several times.

Dennis Forbes
Monday, May 10, 2004

Dennis, I understand your disdain for the analogy, so let me ask my question without the analogy.

Clearly the hackers (or script kiddies or whatever you’d like to call them) bear guilt, but how much do (or should) those who publicly announce such vulnerabilities to the world bear?

By publicly exposing security vulnerabilities, labs are helping the hackers by showing them how to take advantage of the vulnerabilities.  Motivated hackers may find the vulnerabilities on their own, so why help them?  By providing public announcements, you’re helping those hackers who may not care to put in the time or have the expertise to find the vulnerabilities on their own.

What is to be gained by publicly announcing/exposing security vulnerabilities?  The potential for loss seems clear, but what’s the up-side?  And if there isn’t one, why shouldn’t the “messengers” be held liable for damages that result from their public announcements of security vulnerabilities?  (Proving damages is obviously another issue, but that argument can be saved for another time.)

BTW, in your “Joe’s Door Replacement” analogy, you forgot to mention that after having found out how the locks could be disengaged they told not just the manufacturer, but every would-be (or wanna-be) crook.  And it is only this aspect (the public exposure of the security vulnerability) that I’m trying to address.  (Other issues are important, they're just not what I'm addressing at this time.)

bpd
Monday, May 10, 2004

----"What is to be gained by publicly announcing/exposing security vulnerabilities?"----

The fact that Microsoft might be forced into doing something to fix them. You seem to be forgetting that many people and companies reported security vulnerabilities to Microsoft and found that nothing happened for months until they went public.

Stephen Jones
Monday, May 10, 2004

<quote>
I have 3 computers at home, hooked up to a cable modem for the past 5 years.  All running Windows.  Total number of virus/worm infestations:  ZERO.
</quote>

Ditto. The entire PC security business is 99% hype.

<quote>
I dislike Microsoft as much as the next guy,  but saying that it's all their fault and ignoring the irresponsible behaviour of millions of clueless moron users is just simply wrong.
</quote>

Don't forget the dirty little secret that many people in corporate environments will DELIBERATELY open virus attachments.

enough already
Monday, May 10, 2004

How about a couple of analogies that seem to me like a better fit?

1. Many banks use (or once used) these zippered cloth bags that allow for the zipper to be locked at the closed position. It's possible to open said zipper, thus gaining access to the contents of the bag, *without unlocking the lock*. This was first publicly demonstrated by a professional magician and the technique is simple enough that no dexterity or practice is necessary. If I run a bank, I want to know about that little security hole now, not after the bag manufacturer has been notified and given the opportunity to redesign the closure.

2. Most apartment buildings require you to have locks installed and configured such that the manager's master key can open all doors. If it should turn out that anyone with access to a lock and a non-master key can create a new master key, then I want to know that now, not after the lock manufacturers, locksmiths, and building managers have been apprised of the situation.

In both cases, the first reaction is likely to be "Gee, that's easy enough that it's a wonder this flaw wasn't discovered and/or exploited earlier." In both cases, it seems entirely likely that it *was* discovered and/or exploited earlier and that we've been living in a dream world. If it's all the same to you, I want to actually have some idea of what the risks are so that I can make informed decisions regarding the mitigation and/or acceptance of those risks. I have a tendency to think that failure to disclose known risks is unethical.

Note that in the first case, we have what might be called a weakness (as opposed to a fundamental design flaw) which is likely to have a solution, if only through the implementation of an alternative that is likely to be relatively simple and inexpensive (plastic bags and crazy glue?). Kind of like adding a virus-checker and a firewall to your computer. In the second case, we probably have a fundamental design flaw that likely entails throwing something out and replacing it with something radically different and probably expensive (electronic locks with card readers?). Kind of like switching from Windows to Linux.

I feel the same about security vulnerabilities in software. I feel much better knowing where the holes are so that I can take protective action while I wait for someone to take corrective action. If I decide that things are really bad, I might even dump that software in favour of something else.

Ron Porter
Monday, May 10, 2004

I was under the impression that releasing a sample exploit of a vulnerability was meant to keep whoever makes the patch honest. You can test the exploit before and after the patch, and ensure that the hole is now fixed.

You may be able to take some kind of action to patch the hole, again testing it against the published exploit. I suppose you might want to do this as an interim measure, or maybe as something more permanent if you are concerned about potential effects of installing a new patch.

Tom
Monday, May 10, 2004


Although in rare form, Stephen is absolutely correct.

By announcing security flaws, they are pushing MS to do something about it rather than just ignore it, which is what MS has tended to do in the past when privately approached about a security problem.

I think a nice compromise would be for the people who find the flaw to privately approach MS and give them a certain amount of time before going public with it. This gives MS a "heads up" about the flaw, but also holds them accountable so that they can't just ignore it like they have in the past.

Whatever
Monday, May 10, 2004

http://www.eeye.com/html/Research/Advisories/AD20040413C.html

Reported to Microsoft 2003-10-08, or about 200 days before the patch was released.

Dennis Forbes
Monday, May 10, 2004

I am going to be frank about this:
I can't believe how clueless you have to be wrt security to advocate what is euphemistically called "full disclosure". While a few years ago there might have been a brief spell of "make these groups wake up to the Internet game", we have been living in a completely different world for ages now. It is a known >fact< that the lifecycle for major exploits is: discovery -> private report -> patch availability -> reverse engineer patch -> write exploit for unpatched systems -> release exploit. The patch release is such a major input in the exploit cycle that "patch-to-exploit" timing is seen as a measure of efficiency of the blackhat community.
Where do you think the "worm warnings" come from? The "whitehat" community does monitor BH communications and collaborations, and can see the construction of the worm taking place.
Does this mean there are no "professional stealth exploits"? Of course not. There are and there always will be, and if you are running anything of enough value, these are the ones that should truly scare you. In fact, in that case you can count your blessings if some security firm stumbled onto the same thing, even if it means the patch->worm scenario is sure to follow, since it means another door will be closed to the criminals.
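The "patch-to-exploit" timing described above can be made concrete with a small date calculation. The report and patch dates come from the eEye advisory linked earlier in the thread; the exploit-release date is a hypothetical placeholder, included only to show the arithmetic:

```python
from datetime import date

# Milestone dates for one vulnerability's lifecycle.
reported       = date(2003, 10, 8)   # private report to the vendor (per eEye)
patch_released = date(2004, 4, 13)   # vendor patch ships (per eEye)
exploit_public = date(2004, 4, 30)   # hypothetical exploit release

report_to_patch  = (patch_released - reported).days        # 188
patch_to_exploit = (exploit_public - patch_released).days  # 17

print(report_to_patch, patch_to_exploit)  # 188 17
```

The second number is the window the post describes: once the patch is public and reverse-engineered, unpatched systems are racing a clock measured in days, not months.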

And by the way: while you can hide from being collateral damage in an indiscriminate worm attack by choosing a less-deployed platform, this won't work for a specific targeted attack. In fact, you might have to worry a whole lot more if you are on a platform that does not have an active script-kiddy community throwing all the exploit jewels out into the wild, since your true pro villain's toolkit will have a much more desirable lifetime.

Just me (Sir to you)
Tuesday, May 11, 2004

" the lifecycle for major exploits is: discovery -> private report -> patch availability -> reverse engineer patch -> write exploit for unpatched systems -> release exploit."


surely that makes the question of full disclosure irrelevant? 

FullNameRequired
Tuesday, May 11, 2004

No, I should have been more clear. That was the cycle in the case of responsible disclosure. In the old "full disclosure" paradigm, you'd have the worm out on the basis of the disclosure, probably before the patch was ready of course.

What I meant to illustrate is that the decision of when to release the patch is not a simple one, as it triggers a class of exploits. Furthermore, you need time to test the patch as thoroughly as possible, since we all know patching is not "free", so you clearly want to avoid retraction/reissuance and patch-on-patch scenarios as much as possible. OTOH you want to release the patch since there might already be undisclosed discovery and targeted exploitation going on. It is a balancing act.

Just me (Sir to you)
Tuesday, May 11, 2004

http://www.theregister.co.uk/2004/05/07/windows_worm_tax/

"In a research note, Gartner writes: "Many of the vulnerabilities that continue to be identified in Windows 2000, XP and Server 2003 are easily exploitable; attackers will continue to develop worms that will cause damage equal to, or more severe than, the system shutdowns and network congestion caused by the Slammer worm. Enterprises that are dependent on Windows systems must invest both in means to patch faster and in host-based intrusion prevention software for all Windows PCs and servers.""

send money
Tuesday, May 11, 2004

That quote could be quite right apart from its singling out Windows. It holds as advice for every system on the planet.

Just me (Sir to you)
Wednesday, May 12, 2004
