Why monads? How does it resolve side-effects?




I am learning Haskell and trying to understand Monads. I have 2 questions.

From what I understand, Monad is just another typeclass that declares ways to interact with data inside "containers", including Maybes, Lists, and IOs. It seems clever and clean to implement these 3 things with one concept, but really, the point is so there can be clean error handling in a chain of functions, containers, and side effects. Is this a correct interpretation?

Secondly, how exactly is the problem of side-effects solved? With this concept of containers, the language essentially says anything inside the containers is non-deterministic (such as i/o). Because lists and IOs are both containers, lists are equivalence-classed with IO, even though values inside lists seem pretty deterministic to me. So what is deterministic and what has side-effects? I can't wrap my head around the idea that a basic value is deterministic, until you stick it in a container (which is no more special than the same value with some other values next to it, e.g. Nothing) and it can now be random.

Can someone explain how, intuitively, Haskell gets away with changing state with inputs and output? I'm not seeing the magic here.

Solution

The point is so there can be clean error handling in a chain of functions, containers, and side effects. Is this a correct interpretation?

Not really. You've mentioned a lot of concepts that people cite when trying to explain monads, including side effects, error handling and non-determinism, but it sounds like you've gotten the incorrect sense that all of these concepts apply to all monads. But there's one concept you mentioned that does: chaining.

There are two different flavors of this, so I'll explain it two different ways: one without side effects, and one with side effects.

No Side Effects:

Take the following example:

addM :: (Monad m, Num a) => m a -> m a -> m a
addM ma mb = do
    a <- ma
    b <- mb
    return (a + b)

This function adds two numbers, with the twist that they are wrapped in some monad. Which monad? Doesn't matter! In all cases, that special do syntax de-sugars to the following:

addM ma mb =
    ma >>= \a ->
    mb >>= \b ->
    return (a + b)

... or, with operator precedence made explicit:

ma >>= (\a -> mb >>= (\b -> return (a + b)))

Now you can really see that this is a chain of little functions, all composed together, and its behavior will depend on how >>= and return are defined for each monad. If you're familiar with polymorphism in object-oriented languages, this is essentially the same thing: one common interface with multiple implementations. It's slightly more mind-bending than your average OOP interface, since the interface represents a computation policy rather than, say, an animal or a shape or something.
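The common interface here is the Monad typeclass itself. A simplified sketch (names primed to avoid clashing with the Prelude, which factors the real class through Functor and Applicative), with Maybe as one concrete implementation:

```haskell
-- A simplified sketch of the Monad interface; primed names avoid
-- clashing with the Prelude's real Monad class.
class Monad' m where
  return' :: a -> m a                  -- wrap a plain value
  (>>>=)  :: m a -> (a -> m b) -> m b  -- chain the next step onto a wrapped value

-- One implementation among many: Maybe.
instance Monad' Maybe where
  return' = Just
  Just a  >>>= f = f a
  Nothing >>>= _ = Nothing

main :: IO ()
main = print (Just 1 >>>= \a -> return' (a + 1 :: Int))  -- prints Just 2
```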

Okay, let's see some examples of how addM behaves across different monads. The Identity monad is a decent place to start, since its definition is trivial:

instance Monad Identity where
    return a = Identity a  -- create an Identity value
    (Identity a) >>= f = f a  -- apply f to a

So what happens when we say:

addM (Identity 1) (Identity 2)

Expanding this, step by step:

(Identity 1) >>= (\a -> (Identity 2) >>= (\b -> return (a + b)))
(\a -> (Identity 2) >>= (\b -> return (a + b))) 1
(Identity 2) >>= (\b -> return (1 + b))
(\b -> return (1 + b)) 2
return (1 + 2)
Identity 3
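That expansion can be checked directly; Identity lives in Data.Functor.Identity in base, and this sketch just replays the answer's addM:

```haskell
import Data.Functor.Identity (Identity (..))

addM :: (Monad m, Num a) => m a -> m a -> m a
addM ma mb = do
  a <- ma
  b <- mb
  return (a + b)

main :: IO ()
main = print (runIdentity (addM (Identity 1) (Identity 2)))  -- prints 3
```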

Great. Now, since you mentioned clean error handling, let's look at the Maybe monad. Its definition is only slightly trickier than Identity:

instance Monad Maybe where
    return a = Just a  -- same as Identity monad!
    (Just a) >>= f = f a  -- same as Identity monad again!
    Nothing >>= _ = Nothing  -- the only real difference from Identity

So you can imagine that if we say addM (Just 1) (Just 2) we'll get Just 3. But for grins, let's expand addM Nothing (Just 1) instead:

Nothing >>= (\a -> (Just 1) >>= (\b -> return (a + b)))
Nothing

Or the other way around, addM (Just 1) Nothing:

(Just 1) >>= (\a -> Nothing >>= (\b -> return (a + b)))
(\a -> Nothing >>= (\b -> return (a + b))) 1
Nothing >>= (\b -> return (1 + b))
Nothing

So the Maybe monad's definition of >>= was tweaked to account for failure. When a function is applied to a Maybe value using >>=, you get what you'd expect.
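The same short-circuiting shows up with functions that can fail mid-chain. safeDiv here is a made-up helper for illustration, not part of the original answer:

```haskell
-- safeDiv is a made-up helper: division that fails cleanly on zero
-- instead of crashing.
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv a b = Just (a `div` b)

-- Chain two divisions; Nothing at any step short-circuits the rest.
calc :: Int -> Int -> Int -> Maybe Int
calc a b c = safeDiv a b >>= \x -> safeDiv x c

main :: IO ()
main = do
  print (calc 100 5 2)  -- Just 10
  print (calc 100 0 2)  -- Nothing (first step fails)
  print (calc 100 5 0)  -- Nothing (second step fails)
```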

Okay, so you mentioned non-determinism. Yes, the list monad can be thought of as modeling non-determinism in a sense... It's a little weird, but think of the list as representing alternative possible values: [1, 2, 3] is not a collection, it's a single non-deterministic number that could be either one, two or three. That sounds dumb, but it starts to make some sense when you think about how >>= is defined for lists: it applies the given function to each possible value. So addM [1, 2] [3, 4] is actually going to compute all possible sums of those two non-deterministic values: [4, 5, 5, 6].
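For lists, >>= is essentially concatMap: run the rest of the chain once per possible value and concatenate every outcome. Replaying the sum example with the answer's addM:

```haskell
addM :: (Monad m, Num a) => m a -> m a -> m a
addM ma mb = ma >>= \a -> mb >>= \b -> return (a + b)

main :: IO ()
main = do
  -- For lists, (>>= f) behaves like concatMap f:
  print ([1, 2] >>= \x -> [x, x * 10])  -- [1,10,2,20]
  -- All possible sums of two "non-deterministic" numbers:
  print (addM [1, 2] [3, 4])            -- [4,5,5,6]
```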

Okay, now to address your second question...

Side Effects:

Let's say you apply addM to two values in the IO monad, like:

addM (return 1 :: IO Int) (return 2 :: IO Int)

You don't get anything special, just 3 in the IO monad. addM does not read or write any mutable state, so it's kind of no fun. Same goes for the State or ST monads. No fun. So let's use a different function:

fireTheMissiles :: IO Int  -- returns the number of casualties

Clearly the world will be different each time missiles are fired. Clearly. Now let's say you're trying to write some totally innocuous, side effect free, non-missile-firing code. Perhaps you're trying once again to add two numbers, but this time without any monads flying around:

add :: Num a => a -> a -> a
add a b = a + b

and all of a sudden your hand slips, and you accidentally typo:

add a b = a + b + fireTheMissiles

An honest mistake, really. The keys were so close together. Fortunately, because fireTheMissiles was of type IO Int rather than simply Int, the compiler is able to avert disaster.

Okay, totally contrived example, but the point is that in the case of IO, ST and friends, the type system keeps effects isolated to some specific context. It doesn't magically eliminate side effects, making code referentially transparent that shouldn't be, but it does make it clear at compile time what scope the effects are limited to.
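One way to see the isolation: the only fix for the typo is to change add's type and sequence the effect explicitly. The fireTheMissiles stub below is my own, added so the sketch runs:

```haskell
-- Stub standing in for the answer's hypothetical effectful function.
fireTheMissiles :: IO Int
fireTheMissiles = putStrLn "whoosh" >> return 7

-- `a + b + fireTheMissiles` cannot type-check: (+) wants an Int, but
-- fireTheMissiles is an IO Int. Mixing them forces the effect into the
-- type signature, where it must be sequenced explicitly:
addPlusMissiles :: Int -> Int -> IO Int
addPlusMissiles a b = do
  c <- fireTheMissiles   -- the effect is now visible and ordered
  return (a + b + c)

main :: IO ()
main = addPlusMissiles 1 2 >>= print  -- prints "whoosh" then 10
</imports>
```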

So getting back to the original point: what does this have to do with chaining or composition of functions? Well, in this case, it's just a handy way of expressing a sequence of effects:

fireTheMissilesTwice :: IO ()
fireTheMissilesTwice = do
    a <- fireTheMissiles
    print a
    b <- fireTheMissiles
    print b
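Desugared, that do block is the same >>= chaining as before; >> sequences an action whose result is discarded. With fireTheMissiles stubbed (my own stub) so the sketch runs:

```haskell
fireTheMissiles :: IO Int
fireTheMissiles = return 42  -- stub standing in for the real effect

fireTheMissilesTwice :: IO ()
fireTheMissilesTwice =
  fireTheMissiles >>= \a ->
  print a >>
  fireTheMissiles >>= \b ->
  print b

main :: IO ()
main = fireTheMissilesTwice  -- prints 42 twice
```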

Summary:

A monad represents some policy for chaining computations. Identity's policy is pure function composition, Maybe's policy is function composition with failure propagation, IO's policy is impure function composition and so on.
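The State monad mentioned earlier follows the same pattern: its chaining policy is composition of functions that thread a piece of state along. A hand-rolled sketch (the real State lives in the transformers/mtl packages as Control.Monad.State):

```haskell
-- Hand-rolled State monad sketch; not the library version.
newtype State s a = State { runState :: s -> (a, s) }

instance Functor (State s) where
  fmap f (State g) = State $ \s -> let (a, s') = g s in (f a, s')

instance Applicative (State s) where
  pure a = State $ \s -> (a, s)
  State f <*> State g = State $ \s ->
    let (h, s')  = f s
        (a, s'') = g s'
    in (h a, s'')

instance Monad (State s) where
  -- Chaining policy: run the first step, feed its result AND the
  -- updated state into the next step.
  State g >>= f = State $ \s ->
    let (a, s') = g s
    in runState (f a) s'

-- A counter: return the current count, then increment it.
tick :: State Int Int
tick = State $ \n -> (n, n + 1)

main :: IO ()
main = print (runState (tick >>= \a -> tick >>= \b -> pure (a + b)) 0)
-- prints (1,2): the two ticks returned 0 and 1, and the final state is 2
```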

