Fog Creek Software
Discussion Board

Bigint lib (up to 300^60000)

I need a library, or some math intro on how to handle integers up to 300^60000. Any ideas?

Friday, April 25, 2003

One possibility is the GNU MP library.

Roel Schroeven
Friday, April 25, 2003

The problem with most large-number (large integer, big integer) libraries is that they don't have an ActiveX wrapper around them. The .NET SDK and Java both have large-number libraries.

-- David

Li-fan Chen
Friday, April 25, 2003

No matter how smart your bigint library is, you just cannot
handle numbers that large, considering the total number
of atoms in the universe is about 10^80. If you are talking about arbitrary numbers, this is unrealistic in terms of both space complexity and time complexity.

If you are dealing only with special numbers, like those whose digits are mostly zeros, of course it can be done. But I think you would have to write your own library.

Friday, April 25, 2003

cool, seems to work fine.

Friday, April 25, 2003


even if there are 10^80 atoms in the universe, it doesn't mean that you cannot represent that number with a few chars. In this case, 5 chars: 10^80

Friday, April 25, 2003

If you want to represent it in chars, why do you want a bigint library? I know, I know, in C char and int can be equivalent. Maybe instead of a bigint library you could just write the chars in a large font?

(Okay, it's Friday, and this is getting silly)

Friday, April 25, 2003

You cannot represent 10^80 different values without having at least as many bits to store a value in, assuming that you need one atom per bit, etc...

Frederik Slijkerman
Friday, April 25, 2003

Common Lisp has this built in. You could use Allegro, but there are easier solutions.
(never used this myself though)

Just me (Sir to you)
Friday, April 25, 2003

na: Representing the number "10^80" and representing 10^80 different values are very different things.  As you said you can represent the number in 5 characters, but to represent that many different values you need far more information.

However, Frederik was wrong on an important point.  You don't need 10^80 bits to represent 10^80 values.  You need log2(10^80) bits.  This is only around 266 bits. 

To represent 300^60000 different values, you need log2(300^60000) bits, which is only about 500,000 bits.
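
A quick sanity check in Python (purely illustrative, not from the original post) bears these numbers out, since Python ints are arbitrary precision:

```python
# Bits needed to represent a value as large as 10^80: about 266
print((10 ** 80).bit_length())       # 266

# Bits needed for values up to 300^60000: roughly half a million
print((300 ** 60000).bit_length())   # about 493,700
```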

Mike McNertney
Friday, April 25, 2003

Frederik, I think your math is not quite right.  The number of bits required to store any integer from 0 to n is the base-2 log of n. 

10 ^ 80 < 16 ^ 80 = 2 ^ 320. 

So 320 bits would more than do it.

300^60000 < 512 ^ 60000 = 2 ^ 540000, so you'd need around half a million bits per integer.  That means that adding two integers would take a little less than one meg. 

I hope you've got a lot of ram.

Now I'm curious -- what's the application?


Eric Lippert
Friday, April 25, 2003

Furthermore, even without the conversion to binary, it is pretty silly to claim that you can't represent 10^80 different values, considering you can do so with an 80 character string of digits. ;)

Mike McNertney
Friday, April 25, 2003

To clear up the "number of atoms in the universe" thing:

To represent 10^80 different values at one bit per
atom you need log_2(10^80) atoms. That's less
than 300 atoms.

If there are 10^80 atoms in the universe and you
encode one bit into each atom somehow, then
the largest number you can represent will have
10^80 *bits*, so it will be about 2^(10^80)
in size.

That's a lot bigger than 300^60000. To represent
numbers up to 300^60000, you need log_2(300^60000)
bits, or about 500k bits, or about 60k bytes.

Gareth McCaughan
Friday, April 25, 2003

Ahem. I'd like to mention that the last 5 replies
weren't there when I followed the "Reply" link. :-)

By the way, anyone following the suggestion to
use Common Lisp might want to consider using
the implementation called CLISP rather than any
of the commercial ones if bignum operations make up
a lot of the runtime. CLISP's bignum implementation
is based on GMP, and it's faster than any of its
rivals. (GMP is, I think, the fastest bignum library
around.) On the other hand, because CLISP doesn't
compile to native code it's generally the *slowest*
Common Lisp implementation for everything other
than bignums. :-)

(Python has fairly well integrated bignums too.)
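
To illustrate that last point, here's a tiny (modern) Python snippet; the only wrinkle is that recent Python versions cap int-to-str conversion size by default:

```python
import sys

# Python 3.11+ limits int-to-str conversion; lift the cap for huge numbers
if hasattr(sys, "set_int_max_str_digits"):
    sys.set_int_max_str_digits(200_000)

n = 300 ** 60000        # computed exactly; Python ints never overflow
print(n % 7)            # ordinary arithmetic still works (300 = -1 mod 7, so this is 1)
print(len(str(n)))      # about 148,600 decimal digits
```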

Gareth McCaughan
Friday, April 25, 2003

SC & Frederik,

I think you're confusing matters.

10^ 80 = 100000000000000000000000000000000000000000000000000000000000000000000000000000000

So, representing and working with these numbers is no problem at all. What you can't have is a computer with 10^80 bits of memory, but that's something entirely different. Your own computer probably has 8 gigabits of RAM, right? But you are still able to work with numbers larger than 8 billion...

Dennis Atkins
Friday, April 25, 2003

Please pardon me if this is ignorant, I'm just a student at university, but doesn't bigint storage actually require far more than log2(n) bits?

One would think that in order to implement truly unbounded integers in an architecture-independent way, one would need to use a list-based data structure (I'm currently writing my own math library using this technique, just to learn). Thus, for each digit, one needs space for the number and a pointer to the next digit...

Am I wrong? Is there a better way of doing this?

Andrew Murray
Friday, April 25, 2003

Well, to be sure you need some overhead, but you could also do it array-style: one int which tells us the length of the array, and then an array of bytes, with the last byte zero-padded as necessary. Interpret this as a number by appending the bytes (in whichever endianness makes you happy).

or however you want to implement it.
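
A minimal sketch of that array layout, in Python (illustrative only; real bignum libraries use machine words rather than bytes, and the 4-byte length prefix is an arbitrary choice here):

```python
def encode(n: int) -> bytes:
    """Length prefix (4 bytes) followed by the little-endian magnitude bytes."""
    length = max(1, (n.bit_length() + 7) // 8)
    return length.to_bytes(4, "little") + n.to_bytes(length, "little")

def decode(buf: bytes) -> int:
    """Read the length prefix, then reinterpret the bytes as one big number."""
    length = int.from_bytes(buf[:4], "little")
    return int.from_bytes(buf[4:4 + length], "little")

n = 300 ** 60000
assert decode(encode(n)) == n   # round-trips exactly
```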

Steven C.
Friday, April 25, 2003


But what if, on any given architecture, the length of the unbounded integer in question is greater than the largest integer size?

Andrew Murray
Friday, April 25, 2003

You guys are right. I guess I was thinking about certain particular problems when I made that stupid mistake.

The largest known prime number is (2^13466917)-1 which has 4053946 digits. This was found by Michael Cameron and other guys in 2001. It took an 800 MHz AMD T-Bird 42 days. 

So I think 300^60000 is quite safe. And good luck with your big ints.
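
That digit count is easy to verify without ever printing the number (a quick check, assuming Python):

```python
import math

p = 13466917
# 2^p - 1 has floor(p * log10(2)) + 1 decimal digits
print(int(p * math.log10(2)) + 1)     # 4053946
# ...and exactly p bits
print((2 ** p - 1).bit_length())      # 13466917
```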

Saturday, April 26, 2003

Apologies for the stupid mistake... :-)

Frederik Slijkerman
Sunday, April 27, 2003

>Apologies for the stupid mistake... :-)

Easy way to remember, you have 10 fingers, but you can
count to 1023 on them... :-)
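
(Ten fingers as ten bits, that is:)

```python
# With each finger a binary digit, ten fingers count from 0 to 2^10 - 1
print(2 ** 10 - 1)   # 1023
```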

Monday, April 28, 2003

Andrew: If the length of the number won't fit into
one word, then the number itself won't fit into your
memory. :-) I suppose that might be untrue for some
architectures; so allow, say, 128 bits for the length
field, and then any number whose length overflows
can't be represented using one bit per elementary
particle in the observable universe.

If you want a scheme that works theoretically
for arbitrarily large numbers, try the following
to store the number N:
1. Define L := number of bits in N. L ~= log(N).
2. Define M := number of bits in L. M ~= log(log(N)).
3. Write M in *unary*, taking M+1 bits.
4. Write L in *binary*, taking M bits.
5. Write N in binary, taking L bits.
Total: L+2M+1 bits, which is log N + O(log log N).

(All logarithms above are to base 2.)
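
The steps above can be sketched directly; here's a toy Python rendering (hypothetical helper names, not from any real library):

```python
def encode(n: int) -> str:
    """Self-delimiting encoding: M in unary, then L in binary, then N in binary."""
    assert n >= 1
    n_bits = bin(n)[2:]              # step 5: N in binary, L bits
    l_bits = bin(len(n_bits))[2:]    # step 4: L in binary, M bits
    m = len(l_bits)
    return "1" * m + "0" + l_bits + n_bits   # step 3: M in unary, M+1 bits

def decode(s: str) -> int:
    m = s.index("0")                           # unary prefix gives M
    l = int(s[m + 1:2 * m + 1], 2)             # next M bits give L
    return int(s[2 * m + 1:2 * m + 1 + l], 2)  # next L bits give N

assert decode(encode(300 ** 600)) == 300 ** 600
```

The total length comes out to L + 2M + 1 bits, matching the tally above, and because the unary prefix is self-delimiting you can concatenate encoded numbers in a stream.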

In practice, every bignum package I've seen (which,
I admit, isn't all that many) uses a single fixed-size
field to store the length of the number, and then
stores the number in an array of fixed-size integers.
There's not much overhead; certainly not the factor
of 2 or more you'd get from using linked lists.

Gareth McCaughan
Monday, April 28, 2003


Yes, I crunched through this earlier and came to the same conclusion regarding length. In my current implementation I'm representing all integers as lists as it allows me to program my operations very neatly in a recursive fashion.

Also, it enables me to represent numbers in various bases (hex / bin / oct / dec) using the same data structure and the same interface functions, which is nice.

I never really plan on publishing the thing (there are plenty of great *fast* free math libs out there), so speed isn't really a factor, though it would be interesting to compare the running times of various operations given the two different implementations.

Thanks very much for the insight; it's very helpful getting feedback from those more experienced than myself.

Andrew Murray
Tuesday, April 29, 2003

Sorry for the formatting of the previous post, I'm repairing my machine and posting through lynx, which is kind of a b*tch.

Andrew Murray
Tuesday, April 29, 2003
