Fog Creek Software
Discussion Board




Intentionally Inefficient Algorithms

I recently reread Ferguson & Schneier's "Practical Cryptography" and stumbled across the section on “salting and stretching” passwords (22.2.1). Stretching the password acts (in their description) as a “Moore’s Law Compensator”. The basic algorithm (the parentheses denote subscripts, not array indices) is:

X(0) = 0
X(i) = Hash(X(i-1) || password || salt) (i = 1...r)
Key = X(r)

Parameter ‘r’ (the iteration count) is determined dynamically when the password is first associated with the protected object: r is chosen so that the stretching takes a configured amount of time (say, 30 seconds). This way you get better protection when you run on a faster machine.

(I’m simplifying a lot. See their excellent book for the gory details and motivations behind salting & stretching.)
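A minimal Python sketch of the idea, assuming SHA-256 as the hash and representing X(0) as a block of zero bytes (the function name and types here are mine, not theirs):

import hashlib

def stretch(password: bytes, salt: bytes, r: int) -> bytes:
    x = b"\x00" * 32  # X(0) = 0, one hash-output width of zeros
    for _ in range(r):
        # X(i) = Hash(X(i-1) || password || salt)
        x = hashlib.sha256(x + password + salt).digest()
    return x  # Key = X(r)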

ANYWAY...

It occurred to me that this “Moore’s Law Compensator” is almost exclusively CPU bound. A potentially better algorithm might be memory bound (because CPU cache sizes and memory bus bandwidths are increasing much more slowly than raw MIPS counts). Consider the following:

Array[0] = 0
Array[i] = Hash(Array[i-1] || password || salt) (i = 1...r)
Sort the array
Key = Hash(Array)

One million iterations of SHA-256 would require 32 MB of RAM for the array (a million 32-byte digests).
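A rough Python sketch of this variant, again assuming SHA-256 (the function name is mine, and a real implementation would need more care):

import hashlib

def stretch_memory_bound(password: bytes, salt: bytes, r: int) -> bytes:
    chain = [b"\x00" * 32]  # Array[0] = 0
    for _ in range(r):
        # Array[i] = Hash(Array[i-1] || password || salt)
        chain.append(hashlib.sha256(chain[-1] + password + salt).digest())
    chain.sort()  # the memory-bound step: forces touching the whole array
    return hashlib.sha256(b"".join(chain)).digest()  # Key = Hash(Array)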

ANYWAY...

I was just wondering if anyone here has any better suggestions for intentionally inefficient/unoptimizable algorithms. It's an interesting problem space.

Keith Moore
Sunday, February 8, 2004

I'll show my ignorance by joining in.

I always wondered why block sizes are always powers of 2. If you had a 130-bit block size, wouldn't brute-forcing it on conventionally available hardware just take that much longer?

Security through obscurity, of course...

i like i
Monday, February 9, 2004

Is it not the case that an unoptimised algorithm is simpler to visualise for the reader, and therefore easier to understand?

This is not a general truth, but it's common enough, I believe.

whattimeisiteccles
Monday, February 9, 2004

As far as "memory bounding" goes, one reason not to use it is that a lot of cryptography is being done in specialized chips with very small and slow RAM. Such chips exist for every popular algo that requires key preparation (any relatively modern symmetric one).

Egor Shipovalov
Monday, February 9, 2004

Reminds me of an exercise in Kernighan & Pike's "The Practice of Programming". Ex 2-4:

"Design and implement an algorithm that will sort an array of n integers AS SLOWLY AS POSSIBLE. The algorithm must make progress and eventually terminate."

Of course they don't provide any solutions!
My attempt (scribbled in the margin) was:

1. Generate all permutations of the input.
2. Check each permutation in turn to see if it's sorted.

Can anyone do "better"?
I have a suspicion there is no upper bound and you can keep "improving" forever.
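In Python that attempt might look something like this (itertools generates the permutations lazily, but the worst case is still on the order of n * n! comparisons):

from itertools import permutations

def slow_sort(a):
    # 1. Generate all permutations of the input (n! of them).
    for p in permutations(a):
        # 2. Check each permutation in turn to see if it's sorted.
        if all(p[i] <= p[i + 1] for i in range(len(p) - 1)):
            return list(p)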

MugsGame
Monday, February 9, 2004

re: Mugs

How about this:

Instead of generating all possible permutations of the existing data, how about generating all possible permutations of all possible data values?

Let's say you're trying to sort an array of 32-bit integers (array1).

Create a second array (array2)
Initialize array2 to zero.
Loop:
    Determine if array2 is sorted
    If it IS sorted...
        Determine if array2 is a permutation of array1.
        If it IS a permutation...
            You're done, array2 == sorted array1
        endif
    endif
    Treat array2 as a 32K-bit integer and increment it
endloop

Ugh! This is sick.
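A literal Python rendering of the above, assuming nonnegative 32-bit elements (and only sane to run on very small inputs):

from collections import Counter

def sick_sort(array1):
    n = len(array1)
    array2 = [0] * n  # initialize array2 to zero
    while True:
        # Determine if array2 is sorted...
        if all(array2[i] <= array2[i + 1] for i in range(n - 1)):
            # ...and if it is a permutation of array1 (compared as multisets).
            if Counter(array2) == Counter(array1):
                return array2  # done: array2 == sorted array1
        # Treat array2 as one big 32*n-bit integer and increment it,
        # carrying from the last (least significant) element upward.
        for i in range(n - 1, -1, -1):
            array2[i] = (array2[i] + 1) & 0xFFFFFFFF
            if array2[i] != 0:  # no carry out of this 32-bit "digit"
                break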

Keith Moore
Monday, February 9, 2004

> Treat array2 as a 32K-bit integer and increment it

Better to change it randomly: it will take longer, on average, to find the correct solution, because random guesses will sometimes be duplicates.

I don't suppose it matters whether you randomly select one bit to flip, or replace every byte with a random value.

Christopher Wells
Monday, February 9, 2004

Take this to sci.crypt or whatever; the last thing we need is encryption invented in the armchair ;-)

wanderer
Monday, February 9, 2004

Keith: yes, that is much "better" !

Christopher: wouldn't randomness mean the algorithm isn't guaranteed to terminate?

wanderer: don't know about the earlier stuff, but I certainly wasn't suggesting "slow-sort" for crypto purposes. The title of this thread ("Intentionally Inefficient Algorithms") just struck a chord with me...

MugsGame
Monday, February 9, 2004

> wouldn't randomness mean the algorithm isn't guaranteed to terminate?

It isn't guaranteed to terminate within a bounded (finite) time; but by the same token there's a vanishingly small chance (i.e. smaller than any fixed positive probability) of its not terminating "eventually".

Christopher Wells
Monday, February 9, 2004

Well, since it would rely on pseudo-random number generators, I would think the chances of it never terminating would be higher than the calculated probability. Put another way, I'd stick with something you can prove will terminate eventually (i.e. enumerating every permutation).

MR
Monday, February 9, 2004

I can prove that your "slowest possible" algorithm can never be the slowest.

1) Write your algorithm.
2) Run it on a set of data and time it.
3) Modify the code to put in a delay of one second.
4) poof. the new version is slower.

repeat ad infinitum.

pdq
Monday, February 9, 2004

> Modify the code to put in a delay of one second.

I think you're missing the point. The point of the challenge (insofar as there is one) is to find algorithms which _by their very nature_ are inefficient _in the asymptotic sense_. Anyone can write a bad algorithm -- the challenge is to be incredibly _perverse_ and _clever_.

> Better to change it randomly: it will take longer,
> on average, to find the correct solution

This is basically a variation on BogoSort, the canonical perversely inefficient sort. See the Jargon File for details.
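For reference, BogoSort itself fits in a few lines of Python (shuffle until sorted; expected running time on the order of n * n!):

import random

def bogo_sort(a):
    # Shuffle until we stumble upon a sorted order.
    while any(a[i] > a[i + 1] for i in range(len(a) - 1)):
        random.shuffle(a)
    return a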

When I was in school, a common exam question was to state an incredibly awful sort algorithm, and then ask for a proof that it was a working sort algorithm and a bound on its asymptotic performance.

Eric Lippert
Monday, February 9, 2004

> This is basically a variation on BogoSort... When I was in school, a common exam question ...

Comparing this with an earlier post of yours, "knows all that 'school' stuff inside out", I guess I wouldn't be admitted to Microsoft; I studied maths, not CS, so apparently I'm reinventing a wheel here.

Christopher Wells
Monday, February 9, 2004

Eric -- thanks for the reference to BogoSort. I had searched a few times for "answers" to the question, but never found one. Of course after all that I've now found a good page on the topic:
http://home.tiac.net/~cri/2001/badsort.html

Chris -- I did study CS, and BogoSort was never mentioned. Then again neither were hash-tables. Our Data Structures and Algorithms lecturer had a tree (and trie) fetish which consumed most of our time...

MugsGame
Tuesday, February 10, 2004
