Relax ordering constraints in monadic computation


Problem Description

Here is some food for thought.

When I write monadic code, the monad imposes ordering on the operations done. For example, If I write in the IO monad:

do a <- doSomething
   b <- doSomethingElse
   return (a + b)

I know doSomething will be executed before doSomethingElse.

Now, consider the equivalent code in a language like C:

return (doSomething() + doSomethingElse());

The semantics of C don't actually specify what order these two function calls will be evaluated, so the compiler is free to move things around as it pleases.

My question is this: How would I write monadic code in Haskell that also leaves this evaluation order undefined? Ideally, I would reap some benefits when my compiler's optimizer looks at the code and starts moving things around.

Here are some possible techniques that don't get the job done, but are in the right "spirit":

  • Write the code in functorial style, that is, write plus doSomething doSomethingElse and let plus schedule the monadic calls. Drawbacks: You lose sharing on the results of the monadic actions, and plus still makes a decision about when things end up being evaluated.
  • Use lazy IO, that is, unsafeInterleaveIO, which defers the scheduling to the demands of lazy evaluation (see the sketch after this list). But laziness is different from strictness with an undefined order: in particular, I do want all of my monadic actions to get executed.
  • Lazy IO, combined with immediately seq'ing all of the arguments. In particular, seq does not impose ordering constraints.
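To make the second bullet concrete, here is a minimal sketch (my own, with placeholder action bodies standing in for the question's doSomething/doSomethingElse) of how unsafeInterleaveIO defers each action until its result is demanded:

import System.IO.Unsafe (unsafeInterleaveIO)

main :: IO ()
main = do
  -- Each action is wrapped so that it runs only when its result is demanded.
  a <- unsafeInterleaveIO (putStrLn "running doSomething"     >> pure (1 :: Int))
  b <- unsafeInterleaveIO (putStrLn "running doSomethingElse" >> pure (2 :: Int))
  -- Which putStrLn fires first is decided by lazy evaluation of (a + b),
  -- not by the order of the binds above; and a result that is never demanded
  -- never runs its action at all -- the drawback noted in the bullet.
  print (a + b)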

In this sense, I want something more flexible than monadic ordering but less flexible than full-out laziness.

Solution

This problem of over-sequentializing monad code is known as the "commutative monads problem".

Commutative monads are monads for which the order of actions makes no difference (they commute), that is, when the following code:

do a <- f x
   b <- g y
   m a b

is the same as:

do b <- g y
   a <- f x
   m a b

There are many monads that commute (e.g. Maybe, Random). If the monad is commutative, then the operations captured within it can, for example, be computed in parallel. They are very useful!
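As a small illustration (my own example, not from the original answer), here are two Maybe computations bound in opposite orders; because Maybe is commutative, both give the same result:

-- f and g are placeholder computations; neither result feeds into the other.
f :: Int -> Maybe Int
f x = if x > 0 then Just (x * 2) else Nothing

g :: Int -> Maybe Int
g y = if y < 10 then Just (y + 1) else Nothing

ab, ba :: Maybe Int
ab = do { a <- f 3; b <- g 4; Just (a + b) }
ba = do { b <- g 4; a <- f 3; Just (a + b) }

main :: IO ()
main = print (ab == ba)   -- True: the order of the two binds does not matter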

However, we don't have a good syntax for monads that commute, though a lot of people have asked for such a thing -- it is still an open research problem.

As an aside, applicative functors do give us this freedom to reorder computations; however, you have to give up on the notion of bind (as suggestions to use e.g. liftM2 show).
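For instance, a sketch in applicative style (again with placeholder bodies standing in for the question's doSomething/doSomethingElse): both actions are supplied up front and combined with a pure function, so neither can depend on the other's result, the price being that you no longer have bind:

import Control.Applicative (liftA2)

doSomething, doSomethingElse :: IO Int
doSomething     = pure 1          -- placeholder body
doSomethingElse = pure 2          -- placeholder body

-- No bind: the second action cannot inspect the first action's result,
-- which is exactly what leaves an implementation free, in principle,
-- to reorder or parallelise the two. (GHC's IO still runs them left to
-- right in practice.)
total :: IO Int
total = liftA2 (+) doSomething doSomethingElse
-- equivalently: (+) <$> doSomething <*> doSomethingElse

main :: IO ()
main = total >>= print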
