Fog Creek Software
Discussion Board

Reiser FS gets rewritten?

From Slashdot, an interview with Hans Reiser:

"We simply rewrite more times and more deeply than others do, and that is how we get our results in our admittedly obscure field. "


Gregor Brandt
Wednesday, June 18, 2003

What do you want to hear?
Contrary to what Joel has said, it is OK to rewrite :)

Wednesday, June 18, 2003

If Extreme Programming and Refactoring are considered to be good practices then the combination of the two, Extreme Refactoring (= Rewrite), must be the holy grail ;)

Jan Derk
Wednesday, June 18, 2003

A FS is a little different from other apps... efficiency is everything. You don't really want the FS to get much bigger/more bloated because just about everything the computer does depends on it. If you can find an efficiency, you take it.

More interesting was the WinFS article (and somewhere on Slashdot) about how Windows will keep track of related files. (I haven't actually read this article yet, just a few comments on Slashdot.)
Wednesday, June 18, 2003

> You don't really want the FS to get much bigger/more bloated because just about everything the computer does depends on it. <

I take that back, leave it to MS to bloat even the file system:

"In its latest build (M4), Longhorn contains few hints of the technology's imminent implementation. One of those is more than 20 MB in size and bears the name winfs.exe. This file stands for the upcoming Storage Engine."
Wednesday, June 18, 2003

Here is my (admittedly shallow) analysis.

I think Reiser effectively answered the (inferred) questions:

i) "Why rewrite?" or
ii) "How does Reiser get away with complete rewrites or redesigns?"

when he responded to the very first question regarding Project Financing and Future Direction.

I quote:
"It is not usually the features they want that are wrong, it is the timeframe they want them in and the shortcuts they expect to be made to meet that timeframe"

and also:
"All of the commercial sponsors wanted some quick hack that would not be consistent with the semantics I am evolving ReiserFS towards, and would leave us with unwanted additional primitives"

Reiser is effectively admitting that he will not allow TIME (a.k.a. deadlines) to interfere with his overall goals, work style, and design process!

Most OSS projects are not dictated by the three DOMINANT factors of any struggling capitalist business:
1. Time
2. Money
3. The overall market

When was the last time someone instructed Linus: "You better ship Kernel Release X.y.z with features 1,2,3, _OR ELSE!!!"???

Heston Holtmann
Wednesday, June 18, 2003

I thought Linus wasn't the only kernel decision maker anymore.
Wednesday, June 18, 2003

"I take that back, leave it to MS to bloat even the file system."

Considering it has an embedded stripped version of SQL Server in it, I'd say 20MB isn't too bad. :-p

Brad Wilson
Wednesday, June 18, 2003

Yeah, but what performance hit can you expect if every single file request (and there are dozens per second you're not even aware of) has to hit the same SQL server?
Wednesday, June 18, 2003

Maybe not such a big hit, if you keep requesting the same files over and over. (Why, BTW, would you request several /files/ per second? Pages, yes, but files?)

Most SQL engines do a fairly decent job at adapting to repeated similar queries, so the only way to really stall the file system out is randomly scattered file requests.

And in that case, the seek-head will hide the SQL processing time very well :)
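The caching argument above can be sketched in a few lines. This is a toy illustration only (nothing to do with the actual WinFS or SQL Server internals): put a cache in front of an expensive lookup, and repeated requests for the same file pay the query cost just once, while only scattered, never-repeated requests keep missing the cache. The `file_metadata` function and its return value are hypothetical stand-ins.

```python
# Toy sketch: repeated identical requests are cheap once a cache sits
# in front of the expensive lookup. Not real WinFS/SQL Server code.
from functools import lru_cache

lookups = 0  # counts how many times the "expensive" query actually runs

@lru_cache(maxsize=1024)
def file_metadata(path):
    global lookups
    lookups += 1  # stands in for an expensive index/SQL query
    return {"path": path, "size": len(path)}

# A thousand requests for the same file...
for _ in range(1000):
    file_metadata("c:/windows/system32/kernel32.dll")

print(lookups)  # 1 -- only the first request paid the query cost
```

Randomly scattered paths would defeat the cache, which is exactly the "only way to really stall the file system out" case described above.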

Wednesday, June 18, 2003

As far as I've read, the files themselves are not kept in the SQL Server. It's an adaptation of the NTFS file system that adds the SQL Server for indexing capabilities.

Brad Wilson
Wednesday, June 18, 2003

From what I've been reading lately, NTFS doesn't use a FAT; it has a hierarchical, semi-relational map of files, stored in files itself. This is what is indexed by the present indexing system.

WinFS is supposed to replace this with SQL Server, allowing better indexing, flexible file attributes, etc., not the actual storage of files.

Geoff Bennett
Thursday, June 19, 2003

Why would you request lots of files per second?

How about this:
copy c:\somewhere\*.html d:\somewhere\else

Thursday, June 19, 2003

"How about this:
copy c:\somewhere\*.html d:\somewhere\else"

A bit pedantic, and just guessing, as my knowledge of the implementation of filesystems is skimpy at best, but would that not just involve two files: c:\somewhere and d:\somewhere\else?

Just me (Sir to you)
Thursday, June 19, 2003

What did he actually _mean_ by "rewrite"? Did he ever actually indicate that he means the same thing Joel is generally against, which is the complete scrapping of almost all code and just plain "starting over"?

Regardless, Joel has yet to ever state, to my knowledge, "rewrites are always bad" or "rewrites will be utter damnation and failure to you, no matter who you are".

As such:

A) He might not mean actually scrapping everything and starting over.

B) Even if he does it doesn't mean it's the optimal thing for him to do.

C) Just because he does it and is still in business doesn't mean you should do it, nor does it mean if you did it that you would have similar results.

More than anything else, it seems that all of Joel's arguments against rewriting have little to do with this business in particular. The two biggest arguments against rewriting are: 1) until you are finished, you can't put out any minor tweaks or fixes without wasting time on a discarded codebase, and similarly you cannot respond to any new features or fixes from the competition, or to changes in the market; and 2) you lose a lot of the legacy and rare-quirk fixes that have accumulated over the years. Approximately none of these have much of anything to do with file systems. As far as I'm aware, they are pretty much one-shot sorts of deals, more or less, and not particularly "competitive" in the normal sense of the word.

Thursday, June 19, 2003

That quote is taken WAY out of context.  He was referring specifically to his implementation of a tree balancing algorithm and how it was refined until he got it right.

Check out all his answers here:

Skip down to question 10.

Thursday, June 19, 2003

thanks for the gem. I think I will read that article now:

"You do all understand that while the GPL doesn't permit tying by license, distros have now moved to using threats of invalidating support contracts to achieve the market leverage they need to exclude competitors, yes? By doing this they can exclude mainstream official kernels from being used, exclude rival filesystems, exclude whatever might lead to less customer lockin..... "

Ain't OSS all open and lovey-dovey ...

Just me (Sir to you)
Thursday, June 19, 2003

Just me: No, it involves copying all the HTML files in c:\somewhere\.  Note that c:\somewhere\ may contain lots of other files, so you have to copy just certain files.

Back-ups also touch a lot of files at once.

And, heck, think about how often the operating system has to load a .DLL (or its equivalent) or otherwise touch a system file.
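The point that a wildcard copy touches every matching file, not just two paths, can be shown with a short sketch. This is a hypothetical illustration (the directory names and file counts are made up), using Python's `glob` and `shutil` to mimic what `copy *.html` does under the hood: expand the pattern, then perform one file operation per match.

```python
# Sketch: a wildcard copy is not "two files" -- the pattern expands to
# every matching file, and each one is opened and copied separately.
import glob
import os
import shutil
import tempfile

src = tempfile.mkdtemp()  # stands in for c:\somewhere
dst = tempfile.mkdtemp()  # stands in for d:\somewhere\else

# Create five HTML files and one file the pattern won't match.
for i in range(5):
    with open(os.path.join(src, f"page{i}.html"), "w") as f:
        f.write("<html></html>")
open(os.path.join(src, "notes.txt"), "w").close()

matches = glob.glob(os.path.join(src, "*.html"))
for path in matches:          # one read + one write per matching file
    shutil.copy(path, dst)

print(len(matches))  # 5 -- five separate file operations, not two
```

Scale that up to a backup job or the DLLs loaded at boot and "dozens of file requests per second" is easy to hit.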

Brent P. Newhall
Thursday, June 19, 2003


You are right, of course. I do not know why, but I seem to have missed both the fact that this was a copy instead of a move, and between two different drives at that.
You see what happens when you try to cut down on your JoS time by just reading faster.

Just me (Sir to you)
Friday, June 20, 2003
