Fog Creek Software
Discussion Board




General question about CPU architecture

I'm not a computer/hardware engineer, so please excuse my ignorance.

From a personal computer standpoint, we've been at 32-bit computing for a long time.  Intel and AMD now have 64-bit processors.  I assume the next step will be 128-bit, then 256-bit, etc.

Question: Why isn't it possible to skip generations and just go to, say, 1024-bit computing?

MorePowerPlease
Friday, August 27, 2004

The reason for having more bits is to be able to address more memory (a larger quantity of data).

32 bits is enough to address 4 billion bytes of memory.

64 bits is enough to address 16 exabytes of memory, i.e. about 18 billion billion bytes (which is a much, much bigger number than 4 billion).
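
If you want to see where those numbers come from, here's a quick C sketch (just illustrative, nothing more):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* 32 address bits can name 2^32 distinct bytes. */
    uint64_t bytes_32 = (uint64_t)1 << 32;    /* 4,294,967,296 (about 4 GB) */

    /* 2^64 itself doesn't fit in a 64-bit integer, so print 2^64 - 1,
       the highest byte a 64-bit address can name. */
    uint64_t max_64 = UINT64_MAX;             /* 18,446,744,073,709,551,615 */

    printf("2^32     = %llu bytes\n", (unsigned long long)bytes_32);
    printf("2^64 - 1 = %llu bytes\n", (unsigned long long)max_64);
    return 0;
}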

http://www.sunspot.noao.edu/sunspot/pr/answerbook/universe.html estimates that 79 bits would be enough to address as many bytes of data as there are atoms in the universe ... so 128 bits would be seen as "overkill", because nobody has that much data storage hardware.

Christopher Wells
Friday, August 27, 2004

I guess that the short answer is price. The 16/32/64 transitions were possible (from a price point of view) because the manufacturing processes changed.

JSD
Friday, August 27, 2004

64-bit seems reasonably comfortable for a long time ahead. So address space will probably stop there, or maybe make one last plunge to 128.

As for 'how much stuff I can pull in with one operation', technically we're already at 128 bits (dual-channel memory gives a 128-bit-wide path to RAM), and there is no reason to stop there.

Alex
Friday, August 27, 2004

Except for serial interfaces, each bit currently requires a separate physical connection on the system board to memory and the I/O subsystems, plus supporting elements in the CPU.  Taking all of those to 1024 bits now wouldn't be practical.  I believe some current 64-bit CPUs don't expose all their address lines for the same reason, even though they handle 64-bit addresses internally.

PCI Express is an example of a hybrid serial/parallel bus design that might make 1024-bit machines practical at some point.

Doug
Friday, August 27, 2004

Less mumbo jumbo - let's just call it "Turbo charged" or add a "GT" to the chip name.

Or how about "Blast Processing" ?  :)

Kent
Friday, August 27, 2004

I can see it now:

AMD announces the Athlon Turbo GT

Art Vandelay
Friday, August 27, 2004

When people say "16-bit CPU", they don't refer to the size of the address bus! The MC68000, which was the heart of the Amiga 500 for example (and some older Macs), is a 16-bit CPU even though it has a 24-bit address bus and 32-bit registers. The reason why it is a 16-bit CPU is that its ALU (arithmetic logic unit) is 16 bits wide. It cannot work on numbers bigger than 16 bits!

So the 16/32/64-bit label refers to how big a chunk of data the processor can operate on. As you increase this size, the amount of space you need on the die also increases. Why? Because you need that many more parallel traces to carry the information around inside the CPU. A 32-bit ALU will have 32 traces routed all over the place, going to registers and other units inside the CPU.

We might reach a point where we have 1024-bit CPUs, as long as we can fit them inside the chip's packaging... These CPUs would matter (even though they might be able to address more things than there are in the universe) because you would be able to operate on larger numbers all at once.

Current CPUs can obviously deal with very, very large numbers, but they are still "simulating" the whole thing, either through floating-point representation and hardware or some other kind of emulation!...  Let's say you have a 128-bit number to work on for some kind of encryption application. Your 32-bit CPU will have to divide that big number into chunks and operate on it that way.  A 128-bit CPU could natively handle the 128-bit number and give you results much faster...
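
To make that concrete, here's a rough C sketch of the "divide it into chunks" part - adding two 128-bit numbers as four 32-bit words with a carry between them, which is more or less what has to happen in software on a 32-bit CPU (the type and function names are just made up for illustration):

#include <stdio.h>
#include <stdint.h>

/* A 128-bit value as four 32-bit "limbs", least significant word first. */
typedef struct { uint32_t w[4]; } u128;

/* Add two 128-bit numbers one 32-bit word at a time, carrying between
   words - roughly what a 32-bit CPU ends up doing in software. */
static u128 add128(u128 a, u128 b)
{
    u128 r;
    uint64_t carry = 0;
    for (int i = 0; i < 4; i++) {
        uint64_t sum = (uint64_t)a.w[i] + b.w[i] + carry;
        r.w[i] = (uint32_t)sum;    /* low 32 bits become this word       */
        carry  = sum >> 32;        /* the rest carries into the next one */
    }
    return r;                      /* carry out of the top word is lost  */
}

int main(void)
{
    u128 a = {{ 0xFFFFFFFFu, 0xFFFFFFFFu, 0, 0 }};   /* 2^64 - 1 */
    u128 b = {{ 1, 0, 0, 0 }};                       /* 1        */
    u128 c = add128(a, b);                           /* = 2^64   */
    printf("%08X %08X %08X %08X\n",
           (unsigned)c.w[3], (unsigned)c.w[2], (unsigned)c.w[1], (unsigned)c.w[0]);
    return 0;
}

A CPU with a 128-bit ALU would do the same add in one native instruction.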

HW Dude
Friday, August 27, 2004

"so 128 bits would be seen as "overkill": because nobody has that much data storage hardware."

hmmm... I seem to remember someone once said we would never need more than 640k memory

nakedCode
Friday, August 27, 2004

I know; I was thinking of going on record as saying that 16 exabytes should be enough for anyone.

Christopher Wells
Friday, August 27, 2004

"The MC68000 which was the heart of Amiga 500 for example (and some older Macs) is a 16-bit CPU even though it has a 24-bit address bus and 32-bit registers. The reason why it is a 16-bit CPU is because its ALU (arithmetic logic unit) is 16-bit wide. It cannot work on numbers bigger than 16-bits!"

The 68000 had a 16-bit data bus and ALU (because of the expense of processor pins and motherboard traces); however, all registers, and the instruction set, were 32-bit (not that the instructions were 32 bits wide, which is an irrelevant metric, but that they operated on 32-bit values): you most certainly could operate on numbers bigger than 16 bits - it used the ALU in two passes. If you're going to argue that the ALU defines the processor, then the Z80 was a 4-bit processor, given that it had a 4-bit ALU.

Dennis Forbes
Friday, August 27, 2004

Uhh, that's 1x10^79 atoms in the universe, which means that it's a little over 256 bits to address every atom in the universe, Chris.  Assuming, of course, that we don't discover whole new quark-sized levels of computability for the universe.  And that we aren't on an n-brane or some other weird fringe physics theory and everything we know about the universe is wrong anyway.
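
Here's the arithmetic as a quick C sketch, if anyone wants to check it:

#include <stdio.h>
#include <math.h>

int main(void)
{
    /* Address bits needed for roughly 1e79 atoms:
       log2(10^79) = 79 * log2(10). */
    double bits = 79.0 * log2(10.0);
    printf("79 * log2(10) = %.1f, so round up to %.0f bits\n", bits, ceil(bits));
    return 0;
}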

The big advantage of an extra-large address space is the room it gives you to do interesting things.  With a 128-bit address space and the likelihood that only 64-80 bits of it would actually be used, you can put a random number in the top 48-64 bits of the address when you load an executable and make it harder to write absolute-addressed stack-smashing attacks.  Or, if you aren't so worried about that, you have a reasonable likelihood that if you load two DLLs, they will both map to a convenient location and therefore load faster and be able to share pages.
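
A hypothetical sketch of the random-base idea in C - the type, the function, and the use of rand() are all made up for illustration, not any real loader API:

#include <stdint.h>
#include <stdlib.h>

/* Hypothetical loader helper.  If only the low ~80 bits of a 128-bit address
   space are ever backed by storage, the loader can drop random bits into the
   top of each module's base address, and a hard-coded absolute address in an
   exploit has to guess them. */
typedef struct { uint64_t hi, lo; } addr128;

addr128 pick_module_base(uint64_t preferred_lo)
{
    addr128 base;
    base.lo = preferred_lo;                      /* where the code really sits       */
    base.hi = ((uint64_t)rand() << 31) ^ rand(); /* rand() stands in for a real RNG  */
    return base;
}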

Likewise, you can map files to memory without the windowing that a 32-bit address space forces on you.  The AS/400 SLIC (that's the virtual machine you write against, not the underlying PowerPC processor) has 128-bit addresses and no differentiation between memory and disk.
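
Here's a minimal POSIX C sketch of the no-windowing point, assuming a 64-bit process and a made-up file name - the whole file gets one flat mapping that you index like an array:

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("bigfile.dat", O_RDONLY);      /* placeholder file name */
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    fstat(fd, &st);

    /* One mapping covers the whole file, however big it is, because the
       64-bit virtual address space has room to spare. */
    const unsigned char *data =
        mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (data == MAP_FAILED) { perror("mmap"); return 1; }

    printf("first byte %u, last byte %u\n",
           (unsigned)data[0], (unsigned)data[st.st_size - 1]);

    munmap((void *)data, st.st_size);
    close(fd);
    return 0;
}

On a 32-bit machine you would instead map a few hundred megabytes at a time and keep sliding the view along the file.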

The whole "x-bit" thing is bogus anyways.  It's kind of like how the Atari Jaguar was "64-bit" when anything programmable on it was either 16 or 32 bits, depending on which one you used.  And the Neo Geo and the Genesis both had a 68000, but one called itself 16 bit and the other 32 bit.  And the Turbografix 16 had an 8-bit 6502 CPU.  And the Super NES had a 65816 CPU that was 16 bits inside, but 8 bits on the outside (but with a 24 bit address space)

Flamebait Sr.
Friday, August 27, 2004

The universe appears to be shrinking alarmingly. The last time I saw figures for the number of atoms in the universe, the number was 1x10^106.

This number was compared to the number of possible games of chess, which was calculated at 1x10^120, so there is still some use for more processing power - and if that's solved we can always start on the Tower of Hanoi :)

Stephen Jones
Saturday, August 28, 2004

--- estimates that 79 bits would be enough to address as many bytes of data as there are atoms in the universe ... so 128 bits would be seen as "overkill", because nobody has that much data storage hardware.  ---

If you change your hardware to quantum computers, then you have to count the number of ... (what's the plural of quantum?)?

Michael Moser
Sunday, August 29, 2004

>>  The AS/400 SLIC (that's the virtual machine you write against, not the underlying PowerPC processor) has 128-bit addresses and no differentiation between memory and disk. <<

I always thought the AS/400 was under-appreciated.  What's not to like about a machine that you just plug in and turn on, and it runs for months without rebooting?

From an architectural standpoint, the totally flat memory space was a programmer's dream -- you just wrote to memory, and OS/400 was the only thing that cared whether it was actually in RAM or out on disk.  If you had performance problems, you just added RAM to give the OS more room to play in.

I understand that the AS/400 has become an excellent Java platform since I worked with it last.

example
Sunday, August 29, 2004

And having more bits wouldn't help most people much anyways, since disk access is far and away the bottleneck.  As much as processor specs are hyped, bus speed is the problem these days.  A bigger cache helps with this, but only so much.

Michael Chansky
Monday, August 30, 2004
