Are storage patterns language specific or OS...
Is the mechanism of representing binary patterns of data for particular data types tied to a language or to an OS or yet to the hardware (the instruction set of the chip)?
O gosh! I know I am behaving like an a-hole. I actually know the answer to what I am asking, and even know how outright absurd it is, but for some reason I do not understand, I feel the need to question my past knowledge of facts from time to time. I have to face the fact that there is still *something* about these binary/hexadecimal/octal and floating point representations that I do not understand.
While I write this, I am saying to myself, "Yeah, you dumb ass. As if you don't know VB integers are 16-bit words and C has a variety of them in all sizes, small ints, large/long ones, just ints (based on the machine). And the .NET CTS has standardized the 32-bit integer across all .NET languages in System.Int32, and as if you can't tell that they all run on Win32 as of now."
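(Just to make the point to myself that C leaves the sizes up to the machine, here is a little sketch; the numbers it prints will vary with the compiler and platform:)

#include <stdio.h>

int main(void)
{
    /* The C standard only guarantees minimum ranges; the actual widths
       are left to the implementation (compiler and platform). */
    printf("short : %u bytes\n", (unsigned)sizeof(short));
    printf("int   : %u bytes\n", (unsigned)sizeof(int));
    printf("long  : %u bytes\n", (unsigned)sizeof(long));
    return 0;
}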
But then, oh! what was my question. I'll come back with it.
Sathyaish Chakravarthy
Friday, April 16, 2004
I am going to come back with this question, just with a little more of the thought I still need to invest in it. I am thinking... churning...
Please don't delete these posts, Joel.
Sathyaish Chakravarthy
Friday, April 16, 2004
I think it is Joel's minions, more than the man himself, that you need to worry about.
Tapiwa
Friday, April 16, 2004
You'll have to be a bit clearer.
I would have thought that ASCII is independent of language, OS and instruction set for the chip.
Stephen Jones
Saturday, April 17, 2004
Sathyaish, please just take this as gentle criticism... you often raise good questions/points on the board. But it seems like you had something of a stream-of-consciousness posting cycle yesterday. Posting while sleep-deprived can be dangerous. <g>
When I'm thinking about creating a new thread (on JoS or a newsgroup), I'll usually save the message to Notepad and come back to it the next day. If I still have a burning desire to ask the question or mention something, I'll post it. Usually I'll just figure out the answer to the problem myself or realize that it just wasn't that important.
Robert Jacobson
Saturday, April 17, 2004
>> Is the mechanism of representing binary patterns of data for particular data types tied to a language or to an OS or yet to the hardware (the instruction set of the chip)?
If I understand your meaning, the hardware architecture of the processor generally dictates the convention used for data representation.
The original "my CPU is better than your sucky CPU" argument in the microprocessor world was spawned by the difference between Motorola and Intel processors' representation of integers.
The relevant terms are "little-endian" and "big-endian". Intel processors were historically little-endian, while Motorola's were big-endian.
Little-endian means that the bytes at lower addresses have the least significance. For instance, a 32-bit integer would be arranged in memory such that the "first" byte holds the lowest 8 bits. Some programmers find this incredibly counter-intuitive. Motorola's processors (68xxx family) were big-endian, in other words, bytes stored high to low in memory, similar to the way we write number values out textually.
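If you want to see which convention your own machine uses, here is a quick C sketch (just an illustration; it assumes unsigned int is 32 bits, and the output depends on the CPU you compile for):

#include <stdio.h>

int main(void)
{
    unsigned int value = 0x01020304;                 /* a known 32-bit pattern */
    unsigned char *bytes = (unsigned char *)&value;  /* look at it byte by byte */

    /* A little-endian machine (e.g. Intel x86) prints 04 03 02 01, because the
       least significant byte sits at the lowest address. A big-endian machine
       (e.g. Motorola 68xxx) prints 01 02 03 04. */
    printf("%02x %02x %02x %02x\n", bytes[0], bytes[1], bytes[2], bytes[3]);
    return 0;
}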
I think the original motivation for each choice was the marginal speed and efficiency advantage one approach conferred over the other at the microcode/CPU register level.
Floating point has fared a bit better in terms of standards. IEEE's floating point format seemed to be embraced by most microprocessor vendors in the '80s.
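For instance, on a chip using IEEE single precision you can pull a float apart into its sign, exponent, and fraction fields like this (a rough sketch; it assumes float and unsigned int are both 32 bits wide):

#include <stdio.h>
#include <string.h>

int main(void)
{
    float f = -6.25f;       /* -1.5625 x 2^2: sign 1, biased exponent 129 */
    unsigned int bits;

    /* Copy the raw bytes of the float into an integer so we can print them. */
    memcpy(&bits, &f, sizeof bits);

    /* IEEE 754 single precision: 1 sign bit, 8 exponent bits, 23 fraction bits. */
    printf("sign=%u exponent=%u fraction=0x%06x\n",
           bits >> 31, (bits >> 23) & 0xFF, bits & 0x7FFFFF);
    return 0;
}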
Bored Bystander
Saturday, April 17, 2004