How are functions curried?


Question


I understand what the concept of currying is, and know how to use it. These are not my questions, rather I am curious as to how this is actually implemented at some lower level than, say, Haskell code.

For example, when (+) 2 4 is curried, is a pointer to the 2 maintained until the 4 is passed in? Does Gandalf bend space-time? What is this magic?

Solution

Short answer: yes, a pointer is maintained to the 2 until the 4 is passed in.


Longer than necessary answer:

Conceptually, you're supposed to think about Haskell being defined in terms of the lambda calculus and term rewriting. Let's say you have the following definition:

f x y = x + y

This definition for f comes out in lambda calculus as something like the following, where I've explicitly put parentheses around the lambda bodies:

\x -> (\y -> (x + y))

If you're not familiar with the lambda calculus, this basically says "a function of an argument x that returns (a function of an argument y that returns (x + y))". In the lambda calculus, when we apply a function like this to some value, we can replace the application of the function by a copy of the body of the function with the value substituted for the function's parameter.

So then the expression f 1 2 is evaluated by the following sequence of rewrites:

(\x -> (\y -> (x + y))) 1 2
(\y -> (1 + y)) 2                 # substituted 1 for x
(1 + 2)                           # substituted 2 for y
3

So you can see here that if we'd only supplied a single argument to f, we would have stopped at \y -> (1 + y). So we've got a whole term that is just a function for adding 1 to something, entirely separate from our original term, which may still be in use somewhere (for other references to f).
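
To see the same point at the level of Haskell source, here is a minimal sketch; the name addOne is invented for this illustration and isn't from the question:

f :: Int -> Int -> Int
f x y = x + y

addOne :: Int -> Int
addOne = f 1                   -- corresponds to the term \y -> (1 + y)

main :: IO ()
main = do
  print (addOne 2)             -- 3
  print (map addOne [10, 20])  -- the partial application reused: [11,21]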

The key point is that if we implement functions like this, every function has only one argument but some return functions (and some return functions which return functions which return ...). Every time we apply a function we create a new term that "hard-codes" the first argument into the body of the function (including the bodies of any functions this one returns). This is how you get currying and closures.
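
As a sketch of that "one argument at a time" reading, the same function can be written with its lambdas spelled out; f' is just an illustrative name for the desugared form, and its behaviour is identical:

f' :: Int -> Int -> Int
f' = \x -> (\y -> x + y)

main :: IO ()
main = do
  print (f' 1 2)      -- parses as ((f' 1) 2)
  print ((f' 1) 2)    -- the same two applications, written explicitly
  let g = f' 1        -- a closure that has "hard-coded" 1 for x
  print (g 2)         -- 3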

Now, that's not how Haskell is directly implemented, obviously. Once upon a time, Haskell (or possibly one of its predecessors; I'm not exactly sure on the history) was implemented by Graph reduction. This is a technique for doing something equivalent to the term reduction I described above, that automatically brings along lazy evaluation and a fair amount of data sharing.

In graph reduction, everything is references to nodes in a graph. I won't go into too much detail, but when the evaluation engine reduces the application of a function to a value, it copies the sub-graph corresponding to the body of the function, with the necessary substitution of the argument value for the function's parameter (but shares references to graph nodes where they are unaffected by the substitution). So essentially, yes, partially applying a function creates a new structure in memory that has a reference to the supplied argument (i.e. "a pointer to the 2"), and your program can pass around references to that structure (and even share it and apply it multiple times), until more arguments are supplied and it can actually be reduced. However it's not like it's just remembering the function and accumulating arguments until it gets all of them; the evaluation engine actually does some of the work each time it's applied to a new argument. In fact the graph reduction engine can't even tell the difference between an application that returns a function and still needs more arguments, and one that has just got its last argument.
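
Here is a contrived sketch of that sharing, using Debug.Trace to make the held argument visible; the exact output of trace depends on the compiler and optimisation settings, so treat it as illustrative rather than a statement about GHC's internals:

import Debug.Trace (trace)

main :: IO ()
main = do
  let x = trace "forcing the held argument" (2 :: Int)
      g = (+) x     -- a partial application holding a reference to the thunk x
  print (g 4)       -- forces x the first time: 6
  print (g 40)      -- reuses the already-evaluated x: 42
  -- with ordinary let-sharing the trace message should appear only once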

I can't tell you much more about the current implementation of Haskell. I believe it's a distant mutant descendant of graph reduction, with loads of clever short-cuts and go-faster stripes. But I might be wrong about that; maybe they've found a completely different execution strategy that isn't anything at all like graph reduction anymore. But I'm 90% sure it'll still end up passing around data structures that hold on to references to the partial arguments, and it probably still does something equivalent to factoring in the arguments partially, as it seems pretty essential to how lazy evaluation works. I'm also fairly sure it'll do lots of optimisations and short cuts, so if you straightforwardly call a function of 5 arguments like f 1 2 3 4 5 it won't go through all the hassle of copying the body of f 5 times with successively more "hard-coding".
