Fog Creek Software Discussion Board

Lisp Jargon - A little help, please.

I'm finding it really hard to get to grips with some of the jargon used in Lisp and Scheme circles.  The two main concepts I have trouble with are 'closure' and 'continuation.'

People keep saying how great these are, but I've spent an hour reading up on the c2 wiki and I'm no closer to getting a definitive answer.

Here is my best guess so far.  I know this isn't right, so I'd be grateful if somebody could point out where I'm going wrong.  Am I even going in the right direction?

My head hurts.

Closure:

It seems that 'closure' refers to the environment within which code is run.  Essentially it is the variables available within the current scope.

For example, if I have some pseudo code like this:
   
010: a = 1
020: b = 2
030: c = a + b

The closure at line 30 would be { a = 1, b = 2 }.

Closure starts to get complicated when functions are involved.

Let's say I have this pseudo code.  Please remember that it is pseudo code, since I'm not able to think in s-expressions yet, so please be kind:

010: a = 1
020: b = 2
030: c = a + b
040: function apply-weighting(x)
050:    x  * c
060: end
070: c = 10
080: y = apply-weighting(c)

The closure at line 50 is { a = 1, b = 2, c = 3 }.

This isn't much of an issue when you have a static compiled language, because when apply-weighting is compiled c will be evaluated to the constant value 3.  The later assignment to c won't matter.

With a dynamic language like Lisp it becomes an issue.  When apply-weighting is called the values will be re-evaluated, so you want it to use the original value of c, not the current one.

Lambdas make it even more difficult, because you may have something like:

010: function make-apply-weighting(c)
020:    lambda(x)
030:        x * c
040:    end
050: end
060: c = 3
070: apply-weighting-of-3 = make-apply-weighting(c)
080: apply-weighting-of-10 = make-apply-weighting(10)
090: x = apply-weighting-of-3(100)
100: y = apply-weighting-of-10(100)
110: c = x + y

Here there are two different functions with two different closures.

The code at 30 either has the closure {c = 3} or {c = 10} depending on which of the two 'lambda variables' it is being referenced by.

The closure at 110 also contains the two 'lambda variables' each of which in themselves contain a separate closure for 30.
 
Am I right to use the expression 'lambda variables' in this way?
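
For what it's worth, here is my best guess at the lambda example in actual Scheme.  I'm probably mangling the syntax, so corrections welcome:

(define (make-apply-weighting c)
  (lambda (x) (* x c)))            ; the lambda closes over this particular c

(define apply-weighting-of-3  (make-apply-weighting 3))
(define apply-weighting-of-10 (make-apply-weighting 10))

(apply-weighting-of-3 100)   ; => 300
(apply-weighting-of-10 100)  ; => 1000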


Continuations:

I'm really struggling with this one.

As far as I can see, continuations save code along with its closure, so that you can run it again at a later date and get the same results as if you were about to run it.

However, none of the texts I'm reading mention closure in relation to continuations, so I must be wrong.

Please help me.

Ged Byrne
Thursday, December 11, 2003

Am I right in saying the examples given are referring to lexical binding, rather than dynamic?

Ged Byrne
Thursday, December 11, 2003

A closure is a combination of a function and a set of variable bindings.  Here are some examples:

(let ((counter 0))
  (defun new-id () (incf counter))
  (defun reset-id () (setq counter 0)))

Both of these functions share the variable counter.  The same thing could've been done with a global variable, but the closure protects counter from being modified outside of these two functions.
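
Calling them at the REPL looks something like this:

> (new-id)
1
> (new-id)
2
> (reset-id)
0
> (new-id)
1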

A more useful example is what I like to call 'variable capture': you create a function which returns a function that has its own local state (via a set of variables):

(defun make-adderb (n)
  #'(lambda (x &optional change)
      (if change
          (setq n x)
          (+ x n))))

This function creates a modifiable adder function.  Thus you create the adder by calling:

> (setq addx (make-adderb 1))
#<Interpreted-Function CE1D56>
> (funcall addx 3)
4

But by using the optional change argument you can modify the internal state of the adder:

> (funcall addx 100 t)
100
> (funcall addx 3)
103

Continuations are a bit more complicated and unfortunately I'm not as familiar with them.  Basically they are freeze-dried functional objects containing the state of a computation.  Scheme has native support for continuations while ANSI Common Lisp does not.  These examples are taken from Paul Graham's excellent book, "On Lisp", which he has made available on his website (http://www.paulgraham.com):

> (define frozen)
FROZEN
> (append '(the call/cc returned)
          (list (call-with-current-continuation
                  (lambda (cc)
                    (set! frozen cc)
                    'a))))
(THE CALL/CC RETURNED A)
(THE CALL/CC RETURNED A)

The call/cc returns a, but first saves the continuation in the global variable frozen. Calling frozen will restart the old computation at the point of the call/cc. Whatever value we pass to frozen will be returned as the value of the call/cc:

> (frozen 'again)
(THE CALL/CC RETURNED AGAIN)

You'll probably want to read chapter 20 of his book for a more in-depth discussion of continuations.

Regards,
Sean

Sean Mountcastle
Thursday, December 11, 2003

I think this is easier to explain in Python.  Here's a function that returns a "closure":

def weighter(weight):
      def do_weighting(value):
            return value * weight
      return do_weighting

timesTen = weighter(10)
timesTwo = weighter(2)

print timesTwo(3)  # prints 6
print timesTen(12) # prints 120


The inner function references a variable in the lexical scope of the outer function.  So the returned function object (which is created at *runtime*) knows to look for 'weight' in the scope it was created in.

Thus, each time you call 'weighter' you get a *new* function, each with a different "closure".  (By the way, a function object in Python is just a wrapper around a "code object".  The wrapper defines default arguments and the closure, while the code object says what to do.  Thus, you can have multiple "function" objects that do the same thing, but in a different environment or with different defaults for omitted arguments.)

The main difference here between Python and Lisp, IIUC, is that in Python only the outer function may assign values to the closed-over variable.  This is because Python uses assignment to declare variables: if you assign to the variable in the inner function, Python concludes that you want a *local* variable in that function, rather than a reference to the variable in the outer scope.

Despite Paul Graham's protests to the contrary, this isn't really an important limitation of Python, because one rarely uses a pure functional style in Python.  Lisp uses closures to simulate objects, so being able to do assignment in a nested scope is more important there.  Python has objects already, so the natural way to do object-like things is with objects.  Still, Guido has mused about the possibility of allowing variables in outer scopes to be rebound.  The catch is finding a syntax that balances all of the usability requirements.
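
To make that concrete, here's a sketch in Scheme of the counter-as-closure idiom, which relies on assigning to the closed-over variable -- the thing the inner function can't do in Python:

(define (make-counter)
  (let ((n 0))
    (lambda ()
      (set! n (+ n 1))   ; assignment to the variable in the enclosing scope
      n)))

(define c (make-counter))
(c)  ; => 1
(c)  ; => 2 -- the closure acts like a tiny object with state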

Interestingly, once that change and AST manipulation happen, Python will be as close to Lisp (in the ways Paul Graham argues are important) as it will ever get.  I don't see Guido ever yielding on the remaining two differences (statements as expressions, and macros), and I agree with his reasoning for them.

Phillip J. Eby
Thursday, December 11, 2003

Let's pin down the terminology.  When you mention this: {a = 1, b = 2, c=3}, that's called an "environment".  An environment is a table of symbol/value pairs.  A symbol keys into a value, just like in your notation.

A "closure" is built when a function is defined.  That closure is just a bundle of two things:  the function's definition, and a ref to the environment in which it was defined.  So if you have:

010: bar = 1
020: function foo()
030:    return bar
040: end

when foo() is created, not only is the definition stored somewhere, but so is the environment, so Python knows where to look up the symbol 'bar'.  Here, the environment is {bar=1, foo=<closure>}.

That's it.  Closures are a feature of "lexical scoping."  The alternative, dynamic scoping, is weird and interesting, but you can stop reading now if you don't care about it.  With dynamic scoping, you lose that environment and only store the function's definition.  No bundling.  'bar' won't be looked up in the environment foo() was defined in.  Instead, if you call foo(), 'bar' will be looked up in your own calling environment.  Weird stuff, and I can explain it more if you want.  Only sometimes useful, but when it is, it's insanely useful.

Tayssir John Gabbour
Thursday, December 11, 2003

Phillip, Paul Graham went to the recent big lisp conference and claimed _all_ current lisps suck.  That's why he's writing Arc.  And furthermore he criticizes other languages (like Java) without bothering to learn about them.  So while he may seem to be speaking for a community, I would just take him to be a guy with lots of opinions.  Lisp is a community where people have lots of opinions, because there's no real benevolent dictator and because people have lots of control over syntax.  So they're free to say any old stupid thing and I'm glad for that, but no one speaks for everyone.  Even if he wrote a couple books.

As for objects, what lisp are you referring to?  There is more than one lisp, and they're incredibly different despite being built out of symbols rather than characters.  I was a Pythonista way before I was a lisper, and are you claiming that Python's oop is better than CLOS?  Pythonistas /use/ oop more, because Python tries to have only one obvious way to do things, and sometimes that way must use oop.  Whereas lispers just use oop when we want data abstraction.  For syntax abstraction, we use macros.

You must be thinking of Scheme.  There, you're probably on more stable ground.  I love Scheme for learning, but I'd probably never use it as a real language.  My opinion, and it will hopefully change.

I'm not attacking you; it's just that I feel you're always wanting to rag on lisp.  If you look at my usenet posts, I've always objected when lispers trashed Python, explaining that Python's creators have enormous good taste; and while it's way too constraining for the Common Lisper, it should be respected and studied.  And I'd hope you learn about the lisp world too.

Tayssir John Gabbour
Thursday, December 11, 2003

Thanks to everybody, especially Tayssir.

So a closure is the combination of the function's definition and the function's environment.  Is this right?

I know Ruby a lot better than Python and Lisp, as you can probably tell from my pseudo code.

I take it that the environment is equivalent to Ruby's bindings[1].  Is this right?

I take it that closures are unique to the Lisp family because of the special status of the code as data/expressions.

In other languages, like Ruby, a function is a function, distinct from the source used to define it. 

Does this mean that the use of Ruby's eval [2] with a binding to define a function is an effective simulation of Lisp's closures, or does it fall short somehow?  If it does fall short, how?

I'm guessing that this isn't actually a closure because I'm having to take steps to get the same effect.  With Lisp you get the closure for free.

Regarding dynamic binding: I take it that this is what Rebol does.  For example:

Type DESKTOP or SET-USER for settings.
>> a: 10
== 10
>> adder: func[x] [ x + a ]
>> adder 10
== 20
>> a: 5
== 5
>> adder 10
== 15
>>

The latter assignment of 5 to a affects adder because of the dynamic scoping.  The value of a within the function is decided by the value of the variable in memory at the moment the function is called, rather than the value in the text at the moment it was defined.

Again, I can simulate a closure by wrapping the function in an object.

Is this correct?

[1] http://www.rubycentral.com/book/ref_c_binding.html
[2] http://www.rubycentral.com/book/ref_m_kernel.html#Kernel.eval

Ged Byrne
Thursday, December 11, 2003

Tayssir, I'm not aware that I've been "ragging" on Lisp.  I've frequently mentioned what I *like* about Lisp.  Could you please identify what part of my post(s) you took to be critical of Lisp?  I've gone back over what I posted in this thread and I don't get it.

As for what dialects of Lisp I've used, most of my practical experience is with Emacs Lisp.  I've studied some CLOS, enough to find the concepts really cool but the implementation a bad idea.  (Specifically, I find lexicographic precedence of methods to be a bad idea.)  All of this experience is many years out of date at this point.

In any case, I certainly didn't claim Python has better OO than CLOS.  In another thread here, I mentioned that I'd love to have generic functions and multimethods in Python, although I'd prefer Dylan's pointwise ordering or Cecil's predicate implication ordering over CLOS' lexicographic ordering.

Anyway, my comments about Paul's criticism of Python were merely meant to point out that in my experience, Lisp people find functions to be a more fundamental concept than objects, whereas Pythonistas usually see things the other way around.  Which one is "right" was not my point.

Phillip J. Eby
Thursday, December 11, 2003

Ged, I didn't want to give the impression that a closure takes a "snapshot" of the environment.  That environment can change over time as you modify values, since it's a table, not a snapshot.  That's how a function can give different answers if you call it at different times.  With your Rebol example, you entered:
>> a: 10
which bound 'a' to the value 10.  A /snapshot/ of that environment would at that moment look like {a=10}.  Then you put adder into that environment with:
>> adder: func[x] [ x + a ]
so a snapshot would then look like {a=10, adder=<closure>}.  (Remember inside that closure is a reference to this same environment, since you defined this function here.)  When you entered:
>> a: 5
the environment snapshot became {a=5, adder=<closure>}.  So as time went on, you modified the environment which adder refers to.

Again, notice you happened to define adder() in the exact same environment you're calling it from.  So we can't tell whether Rebol is lexically or dynamically scoped, because the two environments happen to be the exact same table.

Now, suppose you are in a totally different Rebol package or module, one that doesn't have 'a' in its environment when you call adder().  Totally different namespace.  If Rebol is lexically scoped, it will look for 'a' in the adder() closure's environment.  Right now, 'a' is 5 there.  However, if Rebol is instead dynamically scoped, Rebol will signal an error, because it looks in the caller's environment, and won't find 'a' there.
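
Here's a sketch of that test in Scheme, which is lexically scoped (the names are just for illustration):

(define a 5)
(define (adder x) (+ x a))

(define (caller)
  (let ((a 1000))   ; a different 'a' in the caller's own scope
    (adder 10)))

(caller)  ; => 15: adder uses the 'a' from its defining environment.
          ; A dynamically scoped lisp would find the caller's 'a'
          ; and give 1010 instead.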

Tayssir John Gabbour
Thursday, December 11, 2003

BTW, lisp doesn't really claim much credit for lexical scoping.  If I know my history, Algol had it, then Scheme did, which influenced Common Lisp into using it by default.  Previously, as kind of a design bug, many early lisps used dynamic scoping by default, which kind of sucks, because you'd often like to have vars defined "lexically" near the functions that use them.

In the past, I think you had to wrap a function within a closure, if you wanted it to behave lexically.  I'm sure a macro could have done this invisibly, but that's not too beginner-friendly.

Emacs lisp still uses dynamic scoping by default; as I hear it, Stallman didn't have many resources, so he decided to support only one of the two.  Using dynamic was probably a good decision, given 99% of what people usually do with a text editor, but I haven't developed much on the Emacs platform so I can't say.

And sorry Phillip, I guess I totally misunderstood your intentions.  Well from what I understand of Common Lisp, sure functions are pretty important, but that doesn't mean a programmer has to use the functional paradigm.  Haskell and Scheme are likely far better at functional programming than Common Lisp, if that's what you like.  Common Lispers often prefer iteration over recursion, and there's a huge amount of support for iteration.

If you want to structure your entire app as oop, go for it.  But I think you can also ease your way up to oop, keeping data in any structure you want, until you think oop is "right."  OOP is also a part of Common Lisp's type system and error-handling system, as I understand it.  But I never felt I had to use it, or anything else.  In Python, I had to, because it's often the Right Answer.  That's an interesting philosophy and I'm not going to argue if a language tries to nudge people into the Right Way.  Because there's a community memory there.

Tayssir John Gabbour
Thursday, December 11, 2003

RMS wrote the initial versions of GNU Emacs before lexical scoping for Lisps had become common (no pun intended).  There was a widespread belief that dynamic scoping was more efficient, and Stallman also considered the ability to override variables an important aid to flexibility.

Continuations...

Think about two canonical functions FOO and BAR.  FOO does some work, generates a temporary value, and then calls BAR with that value.  FOO never does any further work; all it does is return the value that BAR returns.

In that case, we don't need to go back through FOO at all.  The caller can't tell the difference, since the result is the same.  This is called "tail-call optimization"; Scheme requires it, and Common Lisp allows it.  In a typical implementation, the assembly code completely overwrites the current function frame, getting a significant speedup.

So the top-level calls FOO, which calls BAR, which returns directly to the top-level.  (You can call BAR directly, too.  It's a normal function.)

BAR is the continuation of FOO, because it's what FOO passes control to when FOO has calculated its result.
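
In Scheme, a minimal version of that FOO/BAR shape looks like this:

(define (bar n) (* n 2))          ; BAR does the final work
(define (foo n) (bar (+ n 1)))    ; the call to BAR is FOO's last act,
                                  ; so FOO's frame can be thrown away
                                  ; before BAR even starts

(foo 20)  ; => 42, and BAR's result goes straight back to FOO's caller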

In other situations there may not be explicit functions being called, but the concept is the same.  A sub-expression passes control (and its result) to its containing expression, for example.  After it has done this, the sub-expression is *gone*.  The containing expression is the continuation of the sub-expression.

In fact, optimizing compilers will often convert nested expressions into a series of functions, each one tail-calling the next and passing along the results of sub-expressions.  So it's the same implementation, as well as the same concept.

rwh
Friday, December 12, 2003

JScript also supports closures, but it is a bad idea to use them unless you know what you are doing.  See http://blogs.gotdotnet.com/EricLi/PermaLink.aspx/f249659d-a3f1-4f6b-9ca3-3bbcc2df240e  for details.

Eric Lippert
Friday, December 12, 2003

That's a good explanation of continuations, rwh.

The thing about continuations is that if you can wrap your head around what it means to write programs in the "continuation passing style", you can get some powerful results, particularly in purely functional languages. 

Essentially, a continuation encapsulates the concept "here's what I'm going to do next". Consider a little program in a sort of pseudo-JScript language:

function a() { b(); c(); }
function b() { d(); }
a();

So what happens?  We call a(), which calls b(), which calls d(), then we GO BACK to a(), which calls c(), and we're done.  In continuation passing style, every function takes an argument which is its continuation, and when a function is done, it calls its continuation.  We could rewrite a() to look something like this, using a pseudo code syntax for the continuations:

function a(cont) { b(//call c then do cont//) }
a(//do nothing//);

See how this works?  a() calls b(), and then a() is done.  b() is going to take care of calling c(), right?  Well, actually, no.  b() is going to call d(), and pass //call c then do a's cont// as d's continuation, so d() will take care of calling c() -- or d() will tell one of its callees to do it...  As you can see, the continuation keeps on moving around, but _eventually_ we wind our way to that //do nothing// continuation, and we're done.
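
Here's that example transcribed into Scheme, with each continuation as an explicit lambda (I've added display calls, which aren't in the original, so you can watch the order):

(define (c k) (display "c ") (k))
(define (d k) (display "d ") (k))
(define (b k) (d k))                   ; b's only job is to call d
(define (a k) (b (lambda () (c k))))   ; "call c, then do a's cont"

(a (lambda () (display "done")))       ; prints: d c done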

If I recall correctly, this style is primarily motivated by the desire to write efficient and elegant language compilers and interpreters.  Programs that can be transformed into CPS can be very elegantly implemented, because all of that nonsense about maintaining stacks and activation frames and all that goo goes away -- all the necessary state is in the continuation object. 

Furthermore, since the continuation object essentially captures the entire state of "what comes next", you can do really cool things -- like implement coroutines, or write a debugger that can run programs _backwards_ -- just maintain a list of all the continuations you've ever seen, and you can back up your program to any previous state.

Does that make it a little clearer?  Continuations are big magic, and I confess that I do not have my head entirely wrapped around them either.  Getting into that functional headspace can be tricky.

Eric Lippert
Friday, December 12, 2003

Whoops, something I meant to add to that --

of course // call c then do cont // would be more concisely expressed as //c(cont)// -- what I want to emphasize is that c() is not evaluated until d() (or whatever) actually runs that continuation.  I hope that pseudocode syntax clarifies more than it obscures....

Eric Lippert
Friday, December 12, 2003

rwh and Eric,

Thanks for such a great explanation.  Let's see if I've got this.
   
Without continuations a procedural style is implemented using a stack.

Every time a routine jumps into a subroutine the current state of the program is pushed onto the stack.

When the subroutine is finished the state is popped back off the stack so that the routine can continue where it left off.

This can present a couple of problems. 

1 - If the routines are nested too deeply the stack becomes massive, possibly overflowing.
2 - It can be wasteful.  The original routine may have nothing more to do, so the effort of pushing and popping is wasted.
   
The most obvious solution to 2 is tail call optimisation.  If the routine does nothing else after calling the subroutine we throw away the current state, instructing the subroutine to return to its caller.
   
This is rather like when Peter borrows 10 from Paul.  Later Peter lends 10 to Jane.

Jane says "I'll pay you back tomorrow when I get paid"

Peter replies, "Actually, I owe Paul 10 and you're seeing him tomorrow.  Could you pay him back for me?"

Is this right?

Continuations take this one step further.  If the routine has more work to do, simple tail call optimisation isn't possible.
   
However, rather than pushing the program state onto the stack, it is passed to the subroutine along with the remainder of the routine's instructions.

Rather than return to the routine, the subroutine will execute the rest of the routine's instructions.

This is rather like the following, more convoluted story.

Peter was planning to go to the shops on Wednesday, the day he gets paid.

He meets Susan on Monday.  Susan gives Peter 10 so he can buy her a box of chocolates.

On Tuesday Peter borrows 10 from Paul, promising to pay it back when he next sees him.

Early on Wednesday morning Peter meets Jane.  Jane borrows 10 from Peter and promises to pay him back on Thursday.

Peter says, "Actually, I owe Paul 10 and you're seeing him tomorrow.  Could you pay him back for me?"

Thinking further he adds, "And you're going to see Susan.  Here's another 10.  Could you buy her chocolates and give them to her tomorrow along with her change?"

Finally, he concludes, "You couldn't get a few things for me as well, could you?  It saves both of us having to go."

Peter passes Jane his shopping list, bag and money and goes back to bed.

Is this right, or am I missing something?

Ged Byrne
Friday, December 12, 2003

Ged and others:

you may find a good explanation of what continuations are in this tutorial:

http://www.ccs.neu.edu/home/dorai/t-y-scheme/t-y-scheme.html

Giovanni Corriga
Friday, December 12, 2003

I have to start running errands like that.

(And yes, that's a pretty good analogy.)

Eric Lippert
Friday, December 12, 2003

Yeah, that's basically it.  Continuations are effectively structured GOTOs that pass along arguments.  In an environment based around continuations, you never use your chip's CALL or RET instructions.  Every function ends with an unconditional JMP.

That means when you have a single function that

- does setup work
- invokes some other function
- does more work with the result of the call

that single function gets broken up into two: one to do the setup, the other to do the remaining work.  A reference to the second half gets passed along with the invocation.

Normally this conversion (to Continuation Passing Style -- CPS) is done by the compiler.  The code ends up as lots of small, ugly functions doing work and jumping to each other.  But that's a great format for doing aggressive optimizations.

Looking back over your original message, I think you've been trying to comprehend CALL-WITH-CURRENT-CONTINUATION ("call/cc" for short).  Call/cc lets you do two things:

1. Get access to the next function in the CPS chain.  This is done at run-time, so it's quite flexible.  And we're talking about "functions" at the CPS level.  In the code itself, the "next function" may be the second half of your source code's function.

2. At any point in the system, you can say "I want to jump to this CPS chain next, not the one that I'd go to normally."  You may be 100 functions deep in your source code, but in the CPS world there's no difference between jumping to what you were supposed to (functions taking you back up the chain of 100), and jumping to this new function.  Boom.  You're suddenly back near the top of your program.

The mindbending part is that once you have the head of a CPS chain, you can call it as many times as you want, and from anywhere you can get your hands on it (including a global variable).  Each time you do, boom, you're back at that location in your code.
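
Even the one-shot case is handy.  Here's a sketch using call/cc as an early exit from a search -- invoking the captured continuation jumps straight back out:

(define (find-first pred lst)
  (call-with-current-continuation
    (lambda (return)
      ;; 'return' is the rest of the program after find-first;
      ;; calling it abandons the loop immediately
      (for-each (lambda (x) (if (pred x) (return x))) lst)
      #f)))

(find-first even? '(1 3 4 5 6))  ; => 4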

So as Eric Lippert talked about, continuations and the continuation-passing style let you create your own flow-control mechanisms.

You can get Python-style co-routines using functions, rather than special language constructs.

You can implement backtracking mechanisms like Prolog has.

These sorts of CPS call chains are supposed to be very effective as callbacks in GUI environments.

Web pages use CPS.  When you click on a link or post a form, you're telling the system to replace the current page with some other page, passing along some values.  Your back button lets you move your browser to an earlier point in the control chain---each page you've been to is a continuation that you can invoke instead of clicking on a link or posting a form.

In parsers, it's usually important to distinguish ordinary failure (didn't match) from catastrophic failure (unexpected EOF in a string constant).  In a CPS style, you can handle everything in the same way.  There's one function to tail-call on a match, one to tail-call on an ordinary failure, and one to tail-call on a catastrophic failure.  And if you need more options, you can add more functions.
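
A sketch of that parser idea (the names here are made up for illustration, not from any real library):

(define (match-digit chars on-match on-fail on-eof)
  (cond ((null? chars)
         (on-eof))                            ; catastrophic: ran out of input
        ((char-numeric? (car chars))
         (on-match (car chars) (cdr chars)))  ; success: hand over the rest
        (else
         (on-fail chars))))                   ; ordinary failure: no match

(match-digit (string->list "7abc")
             (lambda (d rest) (list 'matched d))
             (lambda (rest)   'no-match)
             (lambda ()       'unexpected-eof))
; => (matched #\7)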

rwh
Friday, December 12, 2003

For understanding continuations, I think it's helpful to know a little about how they can be implemented. So, whenever you grab the "current continuation", imagine that the machine takes a copy of your entire stack and saves it somewhere. Then, when you invoke that continuation, it sticks that copy back where the stack lives and does whatever it would have done to return a value from wherever-you-were-before.

Alternatively: don't use a stack as such at all. Instead, build your "stack frames" on the heap, link them together with pointers, and let the garbage collector take care of them. Then the "current continuation" is just a pointer to the current topmost stack frame, plus a little more information. So when you stash that somewhere, you prevent that frame getting discarded even after you've returned from wherever you were.

These two schemes (pun intended) aren't quite equivalent. Imagine that procedure A calls procedure B, which saves a copy of the current continuation and returns. Then procedure A alters one of its local variables and returns. Then the saved continuation is invoked. What value does that local variable (in procedure A) have? You get different answers with the two approaches. I think the answer from the second approach is the correct one (i.e., the one required by the Scheme standard). -- But now I'm puzzled, because (1) the implementation called SCM copies the stack to save a continuation, but (2) in a very simple test I just did, it appears to do the right thing. Is there a Scheme expert in the house? (I'm more a Common Lisper, myself.)
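
A concrete version of that test, with procedure B inlined as the call/cc itself, might look like this; repeated invocations returning increasing values would indicate the shared-frame behaviour:

(define saved #f)

(define (A)
  (let ((x 1))
    (call-with-current-continuation
      (lambda (k) (set! saved k)))   ; B's role: capture and return
    (let ((seen x))                  ; read x each time we resume here
      (set! x (+ x 1))               ; then alter the local
      seen)))

(A)          ; => 1, and x in A's frame is now 2
(saved 'go)  ; => 2 if the frame is shared; a naive stack snapshot
             ;    would keep answering 1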

Gareth McCaughan
Monday, December 15, 2003
