Fog Creek Software
Discussion Board

Microsoft patch timings and virus/worms

I read somewhere that 96% of the viruses and worms surface AFTER Microsoft makes a vulnerability public and issues a patch. So why can't Microsoft just release a patch and NOT provide the details of it? Like they can just say "Critical patch number 123456 is available. Please install it".

Manish Bansal
Saturday, May 08, 2004

I can think of three reasons, off the top of my head, although I like your idea.

Reason 1: System administrators don't want to install every patch unless they have to, for at least two sub-reasons:

1a) installing a patch on a Windows server almost always requires a reboot. This is very disruptive for many live servers.
1b) installing a patch can break a working server. "If it ain't broke don't fix it" is one of the 10 Commandments of good system administration.

So if the patch only affects code that isn't used on the machine in question (e.g. if it fixes IIS and the machine is not running IIS), then sysadmins may not want to install it.

Reason 2: too many people read Bruce Schneier's book about cryptography, in which he constantly harps on the theme that the only way to get good cryptographic algorithms is to expose them to peer review, and misinterpreted this to mean that the only way to get good security is to reveal your complete security plan. You will see people saying "security through obscurity is no security at all," which is actually not correct; security through obscurity may be weak or may be strong, and it's not as good as security which doesn't require obscurity, but it sure helps to obscure things, because it will slow down hackers and maybe make them move on to softer targets. Anyway, there is a popular sentiment in the security community that all exploits must be documented in great detail, and this is enough to mean that Microsoft is under intense public pressure to describe the exploits they fix.

Reason 3: Even if Microsoft wasn't stupid enough to fall for Reason 2,  the people who find these exploits and report them to Microsoft in the first place are big believers in running around the neighborhood painting big neon signs on all houses with doors that aren't locked, thinking that they are contributing to the security of the world by doing so. Whether they are or not is open to debate, but it does show the ideological split between the somewhat black-and-white view of "things are either secure or insecure" (which I find to be somewhat childish) vs. the more professional view that security is fuzzy: things can be more secure or less secure, and there are many valuable security steps which increase security marginally which are still worth taking.

Joel Spolsky
Fog Creek Software
Monday, May 10, 2004

Surely another element is that the 'hackers' can simply take a look at which .dlls are being replaced, disassemble and compare them to the previous versions, and develop their exploits from there.

That's essentially the process they follow for developing the exploits in the first place. So not telling them which sections are affected will only slow them down a little; they'll figure it out soon enough, and in the meantime poor Windows users have no idea whether or not their system needs patching.
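The first step of that process can be sketched in a few lines. This is a toy illustration, not a real patch-diffing tool (real tools compare the disassembly, not raw bytes), and the file names are hypothetical:

```python
import hashlib

def changed_regions(old_path, new_path, chunk=4096):
    """Compare two versions of a binary chunk-by-chunk and return the
    byte offsets of regions that differ between them."""
    with open(old_path, "rb") as f:
        old = f.read()
    with open(new_path, "rb") as f:
        new = f.read()
    diffs = []
    for off in range(0, max(len(old), len(new)), chunk):
        a = old[off:off + chunk]
        b = new[off:off + chunk]
        # Hashing each chunk is a cheap way to spot modified regions;
        # an attacker would then disassemble just those regions to see
        # what the patch changed.
        if hashlib.sha256(a).digest() != hashlib.sha256(b).digest():
            diffs.append(off)
    return diffs

# e.g. changed_regions("msblah_old.dll", "msblah_new.dll")
# narrows the search from megabytes down to a few changed chunks.
```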

Monday, May 10, 2004

It's hard enough to get people to install patches as it is.

"Critical" is a funny word.  If you see it too many times, your eyes will glaze over.  If Joe installs 5 patches that are "critical" but forgets one, he may notice that his computer didn't actually melt, so he may be less inclined to install the next "critical" patch.  Crying wolf and all that.  But if you say "if you don't install this, some loser will be able to take down your PC through the foobaz service", they'll see a reason for it.  IME, this makes them more likely to install it.  When they're done they'll think "whew, I've accomplished something: my foobaz service is now safe!".

(Plus, as has been pointed out already, the bad guys will probably find the holes, anyway, so you don't gain much, if anything, by trying to hide what's fixed.  Your only hope is to phrase it such that Joe will actually install the sucker.)

Monday, May 10, 2004

On a related tangent, I'm still baffled as to why the update to the Bookshelf Symbol 7 font (installed with Office 2003) made the "critical" list.  I agree that swastikas are probably not appropriate symbols in the font library, but is this really an update that has to take place right away?  Furthermore, why'd they need to take out the Star of David?

Monday, May 10, 2004

"Further more, why'd they need to take out the Star of David?"

Maybe so Hindus don't feel victimized by the removal of the swastika?  Just a thought .......

Motown (AU)
Tuesday, May 11, 2004

They put the font in the "critical" list so that a fair few people would install it.

If it was on the "nice to have" list, that (probably) nobody even looks at, it would never have got anywhere.

Steve Jones (UK)
Tuesday, May 11, 2004

The info is required since it is not simply a question of patch/don't patch: there is often also a period between the publication of the patch/vulnerability and the time when you have tested the patch in your setting. In the meantime (or even as your chosen final response) you need to be able to mitigate the problem. You'll notice that the patch docs do not contain "exploit pseudocode", but info aimed at operations.
As others have said: the main info for the reverse engineering is in the binaries anyway. When the docs say "an attacker could craft a specially designed invalid request", you still don't know much. When you compare the changes against the old DLL, things become more obvious.

Just me (Sir to you)
Tuesday, May 11, 2004

Let's not forget "critical" DRM updates. Watch out, if you don't install this, you might be able to watch movies you paid for however you want to!

Mike Schiraldi
Tuesday, May 11, 2004

A big part of the reason why exploits are disclosed is that vendors have historically been shamefully bad at patching potential holes unless the information is published.  Microsoft is certainly not alone in this, but it has on many occasions claimed that a bug is not exploitable and then delayed or ignored the fix for weeks or months.  They only started making a serious effort at fixing bugs once the exploits started being published.

Of course, exploits were often 'published' in the underground in the past and simply ignored by Microsoft (and other companies), leaving the absolute worst of both worlds.

Now, would Microsoft stop producing patches if full disclosure was abandoned?  If exploits were no longer released?  This is hard to say.  The optimist in me says they would continue providing patches.  The pessimist in me says to look back on their history; past is prologue.

I am not saying that full disclosure and release of exploits improves security, only that it seems to keep the software companies slightly more honest.

Chris Thompson
Tuesday, May 11, 2004

I suppose you could have a similar argument about the security of real-world things, like buildings, subways, nuclear reactors, nation-states, etc.

Does reporting on flaws in the security of, say, a nuclear reactor mean that bad guys are more informed on how to attack it?  Do you trust government to fix these flaws without the application of intense public scrutiny?


Jim Rankin
Tuesday, May 11, 2004

""Further more, why'd they need to take out the Star of David?"

Maybe so Hindus don't feel victimized by the removal of the swastika?  Just a thought ......."

More likely it is because some Muslims equate the Star of David with the Nazi swastika.  They have made this same argument with respect to the Israeli version of the Red Cross: the Muslim version substitutes a crescent for the cross, but the complaint is that Israel using a Star of David would be the same as if the German Red Cross used a swastika.

The take home lesson?  People are idiots and it is bad policy to make critical software updates based on someone being offended by a character in a font.

In other news, the most revealing part of all this is the casual way in which Joel reports that most patches require a reboot.  Why would someone choose to run an OS that makes you seriously consider this cost benefit analysis for every security patch?

name withheld out of cowardice
Tuesday, May 11, 2004

I think they just got sick of hearing about the "death to jews" easter egg in the font.

Just in case you aren't aware, in older versions of the font, if you typed NYC, it would map to three symbols: a skull & crossbones (death), a Star of David (Jews), and a thumbs-up graphic.  I have no idea whether this was put there on purpose -- I tend to think it was merely coincidence, but a lot of people felt that it was some sort of code, because of the perception that there is a large concentration of Jews in New York City (which there is, but it still seems like a rather flimsy smoking gun).

So, anyway, I believe they replaced all three of those symbols because of the persistent urban legends that had come up around them.  And also changed some others just to avoid any potential semi-random acronym from being a possible political/hate statement.

Mr Fancypants
Tuesday, May 11, 2004

Joel writes:

Reason 2: too many people read Bruce Schneier's book about cryptography, in which he constantly harps on the theme that the only way to get good cryptographical algorithms is to expose them to peer review, and misinterpreted this to mean that the only way to get good security is to reveal your complete security plan.

OK Joel, what is the *correct* interpretation of what Bruce wrote?

Karl Max
Wednesday, May 12, 2004

It's the definition of "peer".  In this context, peer doesn't mean Uncle Tom Cobley and all, but peers in context: those with knowledge of the implications and knowledge of the facts.

This doesn't necessarily mean just inside a single organisation.  For instance, there's a security group within that gets sight and oversight on security holes and fixes before they become generally public, not all the members of that group are entirely within 

Simon Lucy
Wednesday, May 12, 2004

Cryptography != security?

Mr Jack
Wednesday, May 12, 2004

Koz is exactly right. From the attacker's perspective, it doesn't matter whether the patches are described or not. The binaries themselves give away the component patched, and the bad guys out there just reverse engineer the differences in the binaries to find the specific exploit. Describing the vulnerability only helps the non-hackers decide how important it is to apply the fix.

But Joel is also correct that many vulnerabilities are found by people outside MS. Their motives are not malicious, so they report the details of the problem privately, giving MS time to fix it. But they also usually like to broadcast the details some period of time after the patch is available. So the choice is not totally MS's here.

Wednesday, May 12, 2004

Karl:  The trick is that Bruce Schneier was talking about mathematical algorithms, not security infrastructure as a whole.

The point is that a cryptosystem that depends on keeping the algorithm secret from an attacker is weak.  I.e., if you steal a copy of the Enigma machine, it shouldn't help you break the code.  A good cipher depends only on keeping the key secret, not the algorithm.
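As a minimal sketch of that principle: the message authenticator below uses HMAC-SHA256, an algorithm that is completely public and standardized, yet an attacker who knows every detail of it still can't forge a valid tag without the secret key.

```python
import hashlib
import hmac
import secrets

# The "algorithm" (HMAC-SHA256) is published for anyone to read;
# all the security rests on this randomly generated secret key.
key = secrets.token_bytes(32)

def tag(message: bytes) -> bytes:
    """Compute an authentication tag for the message under the secret key."""
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(message: bytes, t: bytes) -> bool:
    """Check a tag; compare_digest avoids leaking info through timing."""
    return hmac.compare_digest(tag(message), t)
```

Stealing the "machine" (this source code) gains the attacker nothing; stealing the key breaks everything, which is exactly the property Schneier is arguing for.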

Eric Seppanen
Thursday, May 13, 2004

The reason worms are written so late is that by writing a worm you give your exploits away. The people who are really into this trade such knowledge among themselves, so giving something away is plain stupid: it will only get patched faster.

Sufficiently many Windows systems aren't up to date anyway, so a worm using already-patched holes is just as effective as one using a new one.

Remember that the latest RPC exploit took Microsoft over half a year to release a patch against -- after the good guys found and reported it.

Jonas B.
Sunday, May 16, 2004
