What does composability mean in the context of functional programming?


Question



What do functional programmers mean when they say a certain thing is composable or not composable?

Some of the statements of this sort that I've read are:

  • Control structures are not composable.
  • Threads do not compose.
  • Monadic operations are composable.

Solution

Marcelo Cantos gave a pretty good explanation, but I think it can be made slightly more precise.

A type of thing is composable when several instances can be combined in a certain way to produce the same type of thing.
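As a minimal illustration of this definition (ours, not part of the original answer), ordinary functions are the textbook composable thing in Haskell: combining two functions with `(.)` yields another thing of the same type, a function, which can itself be composed further.

```haskell
-- Two Int -> Int functions...
addOne :: Int -> Int
addOne = (+ 1)

double :: Int -> Int
double = (* 2)

-- ...combine via (.) into another Int -> Int function. The composite is
-- the same type of thing as its parts, so composition can be repeated
-- without limit.
doubleThenAddOne :: Int -> Int
doubleThenAddOne = addOne . double
```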

Control structure composability. Languages like C make a distinction between expressions, which can be composed using operators to produce new expressions, and statements, which can be composed using control structures like if, for and the "sequence control structure" that simply performs statements in order. The thing about this arrangement is that these two categories are not on an equal footing -- many control structures make use of expressions (e.g. the expression evaluated by if to choose which branch to execute), but expressions cannot make use of control structures (e.g. you can't return a for loop).

Although it might seem crazy or meaningless to want to "return a for loop", in fact the general idea of treating control structures as first-class objects that can be stored and passed around is not only possible but useful. In lazy functional languages like Haskell, control structures like if and for can be represented as ordinary functions, which can be manipulated in expressions just like any other term, enabling such things as functions that "build" other functions according to the parameters they are passed, and return them to the caller.

So while C (for example) divides "the things that a programmer might want to do" into two categories and limits the ways in which objects from these categories can be combined, Haskell (for example) has just one category and imposes no such limits, so in this sense it provides more composability.
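The "if as an ordinary function" point can be sketched in a few lines. The name `if'` below is our own (the standard library's `Data.Bool.bool` is similar); because Haskell is lazy, the branch not chosen is never evaluated, just as with C's if statement.

```haskell
-- 'if' represented as an ordinary function. It is a first-class value:
-- it can be stored, passed to other functions, and used inside any
-- expression -- there is no statement/expression divide.
if' :: Bool -> a -> a -> a
if' True  t _ = t
if' False _ f = f

-- Used like any other term in an expression, including nested:
describeSign :: Int -> String
describeSign n = if' (n < 0) "negative" (if' (n > 0) "positive" "zero")
```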

Thread composability. I'll assume as Marcelo Cantos did that you are really talking about the composability of threads that use locks/mutexes. This is a slightly trickier case because on the face of it we can have threads that use multiple locks; but the important point is that we can't have threads that use multiple locks with the guarantees that they are intended to have.

We can define a lock as a type of thing that has certain operations that can be performed on it, which come with certain guarantees. One guarantee is: suppose there is a lock object x, then provided that every process that calls lock(x) eventually calls unlock(x), any call to lock(x) will eventually return successfully with x locked by the current thread/process. This guarantee vastly simplifies reasoning about program behaviour.
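A hedged sketch of this protocol in Haskell, using an `MVar ()` as the lock: `takeMVar` plays the role of lock(x) and `putMVar` the role of unlock(x). With a single lock and every take paired with a put, the guarantee above holds: every `takeMVar` eventually returns with the lock held.

```haskell
import Control.Concurrent.MVar

-- An MVar () used as a lock. takeMVar blocks until the lock is free
-- (lock(x)); putMVar releases it (unlock(x)). This sketch omits
-- exception safety -- real code would use bracket or modifyMVar.
withLock :: MVar () -> IO a -> IO a
withLock lock action = do
  takeMVar lock          -- lock(x): blocks until available
  result <- action
  putMVar lock ()        -- unlock(x)
  return result
```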

Unfortunately, if there is more than one lock in the world, it is no longer true. If thread A calls lock(x); lock(y); and thread B calls lock(y); lock(x); then it's possible that A grabs lock x and B grabs lock y and they will both wait indefinitely for the other thread to release the other lock: deadlock. So, locks are not composable, because when you use more than one, you cannot simply claim that that important guarantee still holds -- not without analysing the code in detail to see how it manages locks. In other words, you can no longer afford to treat functions as "black boxes".
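The deadlocking pattern can be written down directly (again with `MVar ()` as the lock; `threadA`/`threadB` are our own names). Run concurrently, these two can deadlock exactly as described, so this is an illustration of the acquisition orders rather than something to fork as-is.

```haskell
import Control.Concurrent.MVar

-- Two critical sections that take the same two locks in opposite
-- orders. If A holds x while B holds y, each blocks forever in its
-- second takeMVar: deadlock.
threadA :: MVar () -> MVar () -> IO ()
threadA x y = do
  takeMVar x    -- lock(x)
  takeMVar y    -- lock(y): blocks forever if B holds y and waits on x
  putMVar y ()  -- unlock(y)
  putMVar x ()  -- unlock(x)

threadB :: MVar () -> MVar () -> IO ()
threadB x y = do
  takeMVar y    -- lock(y)
  takeMVar x    -- lock(x)
  putMVar x ()
  putMVar y ()
```

The usual fix, imposing a single global lock order on all code, is precisely the whole-program analysis the answer describes: you can no longer treat each function as a black box.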

Things that are composable are good because they enable abstractions, meaning that they enable us to reason about code without having to care about all the details, and that reduces the cognitive burden on the programmer.
