Is it possible to evaluate lambda calculus terms efficiently?


Problem description



I've been writing a lot of programs in the lambda calculus recently and I wish I could run some of them in real time. Yet, as much as the trending functional paradigm is based on the lambda calculus and the rule of β-reduction, I couldn't find a single evaluator that isn't a toy, not meant for efficiency. Functional languages are supposed to be fast, but those I know don't actually provide access to normal forms (see Haskell's lazy evaluator, Scheme's closures, and so on), so they don't work as LC evaluators.

That makes me wonder: is it just impossible to evaluate lambda calculus terms efficiently, is it just a historical accident / lack of interest that nobody decided to create a fast evaluator for it, or am I just missing something?
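To make the question concrete, here is a minimal sketch of the kind of evaluator being asked about: a normal-order (leftmost-outermost) reducer that computes full β-normal forms. The representation (de Bruijn indices) and all names are illustrative choices, not from the original post; this is exactly the sort of "toy" implementation the question says is easy to find but inefficient, since it copies the argument at every β-step.

```python
from dataclasses import dataclass

# Lambda terms with de Bruijn indices (no variable names, so no capture).
@dataclass(frozen=True)
class Var:
    idx: int            # de Bruijn index of a bound variable

@dataclass(frozen=True)
class Lam:
    body: "object"

@dataclass(frozen=True)
class App:
    fn: "object"
    arg: "object"

def shift(t, d, cutoff=0):
    """Add d to every index of t that is >= cutoff (i.e. free at this depth)."""
    if isinstance(t, Var):
        return Var(t.idx + d) if t.idx >= cutoff else t
    if isinstance(t, Lam):
        return Lam(shift(t.body, d, cutoff + 1))
    return App(shift(t.fn, d, cutoff), shift(t.arg, d, cutoff))

def subst(t, j, s):
    """Substitute s for index j in t (s is re-shifted under each binder)."""
    if isinstance(t, Var):
        return s if t.idx == j else t
    if isinstance(t, Lam):
        return Lam(subst(t.body, j + 1, shift(s, 1)))
    return App(subst(t.fn, j, s), subst(t.arg, j, s))

def beta(body, arg):
    # (λ. body) arg  →  body[0 := arg]
    return shift(subst(body, 0, shift(arg, 1)), -1)

def whnf(t):
    """Reduce to weak head normal form."""
    if isinstance(t, App):
        fn = whnf(t.fn)
        if isinstance(fn, Lam):
            return whnf(beta(fn.body, t.arg))
        return App(fn, t.arg)
    return t

def normalize(t):
    """Reduce to full beta-normal form, normal order."""
    t = whnf(t)
    if isinstance(t, Lam):
        return Lam(normalize(t.body))
    if isinstance(t, App):
        return App(normalize(t.fn), normalize(t.arg))
    return t

# Church numeral 2 = λf. λx. f (f x); applying 2 to 2 yields Church 4.
two = Lam(Lam(App(Var(1), App(Var(1), Var(0)))))
four = normalize(App(two, two))
```

Note that `subst` rebuilds (and duplicates) the argument term at every occurrence of the variable, which is precisely the inefficiency the answer below addresses.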

Solution

At the current state of knowledge, the best way of evaluating lambda terms is the so-called optimal graph reduction technique. The technique was introduced in 1989 by J. Lamping in his POPL article "An algorithm for optimal lambda calculus reduction", and was then revised and improved by several authors. You may find a survey in my book with S. Guerrini, "The optimal implementation of functional programming languages", Cambridge Tracts in Theoretical Computer Science n. 45, 1998.

The term "optimal" refers to the management of sharing. In the lambda calculus you have a lot of duplication, and the efficiency of the reduction crucially depends on avoiding replicated work. In a first-order setting, directed acyclic graphs (dags) are enough for managing sharing, but as soon as you enter a higher-order setting you need more complex graph structures comprising sharing and unsharing.
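The first-order case mentioned above can be sketched very simply. The snippet below is an illustrative hash-consing scheme (names like `node` and `_pool` are my own, not part of the answer's machinery): every distinct (operator, children) combination is built exactly once, so a term such as f(g(a), g(a)) is stored as a dag with a single g(a) node.

```python
# Hedged sketch of first-order sharing via hash-consing: because children
# are themselves shared nodes, identity of children identifies subterms,
# and each distinct subterm is allocated exactly once.
_pool = {}

def node(op, *children):
    key = (op,) + tuple(id(c) for c in children)
    if key not in _pool:
        _pool[key] = (op, children)
    return _pool[key]

a = node("a")
t = node("f", node("g", a), node("g", a))   # f(g(a), g(a))
left, right = t[1]
shared = left is right   # both occurrences of g(a) are one dag node
```

This is exactly the structure that stops working once β-reduction can duplicate terms containing bound variables, which is why the higher-order case needs the sharing/unsharing graphs of optimal reduction.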

On pure lambda terms, optimal graph reduction is faster than all other known reduction techniques (environment machines, supercombinators, or whatever). Some benchmarks are given in the above book (see pp. 296-301), where we proved that our implementation outperformed both caml-light (the precursor of ocaml) and Haskell (really slow). So, if you hear people saying that it has never been proved that optimal reduction is faster than other techniques, that is not true: it has been proved both theoretically and experimentally.

The reason functional languages are not yet implemented in this way is that in the practice of functional programming you very seldom use functionals of really high rank, and when you do they are frequently linear. As soon as you raise the rank, the intrinsic complexity of lambda terms can become dangerously high. Having a technique that allows you to reduce a term in time O(2^n) instead of O(2^(2^n)) does not make all that much difference in practice: both computations are simply infeasible.
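The blow-up is easy to see concretely with Church numerals, where applying numeral n to numeral m computes m^n. The following is an illustrative sketch (encoding numerals directly as Python functions, which is an assumption of this example, not part of the answer): a short tower of applications of 2 already produces an astronomically large result, and each further application exponentiates again.

```python
# Church numeral 2: apply f twice. Applying numeral n to numeral m gives m^n.
two = lambda f: lambda x: f(f(x))

def to_int(n):
    """Decode a Church numeral by counting applications of the successor."""
    return n(lambda k: k + 1)(0)

four = two(two)            # 2^2 = 4
sixteen = two(two)(two)    # 2^4 = 16
big = two(two)(two)(two)   # 2^16 = 65536; one more application is 2^65536
```

The normal form of the next term in this tower has on the order of 2^65536 applications, so no reduction strategy, optimal or not, can write it down.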

I recently wrote a short article trying to explain these issues: "About the efficient reduction of lambda terms". In the same article, I also discuss the possibility of having superoptimal reduction techniques.

