What is the monomorphism restriction?


Question

  • I'm puzzled by how the haskell compiler sometimes infers types that are less polymorphic than what I'd expect, for example when using point-free definitions.

    It seems like the issue is the "monomorphism restriction", which is on by default on older versions of the compiler.

    Consider the following haskell program:

    {-# LANGUAGE MonomorphismRestriction #-}
    
    import Data.List(sortBy)
    
    plus = (+)
    plus' x = (+ x)
    
    sort = sortBy compare
    
    main = do
      print $ plus' 1.0 2.0
      print $ plus 1.0 2.0
      print $ sort [3, 1, 2]
    

    If I compile this with ghc I obtain no errors and the output of the executable is:

    3.0
    3.0
    [1,2,3]
    

    If I change the main body to:

    main = do
      print $ plus' 1.0 2.0
      print $ plus (1 :: Int) 2
      print $ sort [3, 1, 2]
    

    I get no compile time errors and the output becomes:

    3.0
    3
    [1,2,3]
    

    as expected. However if I try to change it to:

    main = do
      print $ plus' 1.0 2.0
      print $ plus (1 :: Int) 2
      print $ plus 1.0 2.0
      print $ sort [3, 1, 2]
    

    I get a type error:

    test.hs:13:16:
        No instance for (Fractional Int) arising from the literal ‘1.0’
        In the first argument of ‘plus’, namely ‘1.0’
        In the second argument of ‘($)’, namely ‘plus 1.0 2.0’
        In a stmt of a 'do' block: print $ plus 1.0 2.0
    

    The same happens when trying to call sort twice with different types:

    main = do
      print $ plus' 1.0 2.0
      print $ plus 1.0 2.0
      print $ sort [3, 1, 2]
      print $ sort "cba"
    

    produces the following error:

    test.hs:14:17:
        No instance for (Num Char) arising from the literal ‘3’
        In the expression: 3
        In the first argument of ‘sort’, namely ‘[3, 1, 2]’
        In the second argument of ‘($)’, namely ‘sort [3, 1, 2]’
    

    • Why does ghc suddenly think that plus isn't polymorphic and requires an Int argument? The only reference to Int is in an application of plus, how can that matter when the definition is clearly polymorphic?
    • Why does ghc suddenly think that sort requires a Num Char instance?

    Moreover if I try to place the function definitions into their own module, as in:

    {-# LANGUAGE MonomorphismRestriction #-}
    
    module TestMono where
    
    import Data.List(sortBy)
    
    plus = (+)
    plus' x = (+ x)
    
    sort = sortBy compare
    

    I get the following error when compiling:

    TestMono.hs:10:15:
        No instance for (Ord a0) arising from a use of ‘compare’
        The type variable ‘a0’ is ambiguous
        Relevant bindings include
          sort :: [a0] -> [a0] (bound at TestMono.hs:10:1)
        Note: there are several potential instances:
          instance Integral a => Ord (GHC.Real.Ratio a)
            -- Defined in ‘GHC.Real’
          instance Ord () -- Defined in ‘GHC.Classes’
          instance (Ord a, Ord b) => Ord (a, b) -- Defined in ‘GHC.Classes’
          ...plus 23 others
        In the first argument of ‘sortBy’, namely ‘compare’
        In the expression: sortBy compare
        In an equation for ‘sort’: sort = sortBy compare
    

    • Why isn't ghc able to use the polymorphic type Ord a => [a] -> [a] for sort?
    • And why does ghc treat plus and plus' differently? plus should have the polymorphic type Num a => a -> a -> a and I don't really see how this is different from the type of sort and yet only sort raises an error.

    Last thing: if I comment the definition of sort the file compiles. However if I try to load it into ghci and check the types I get:

    *TestMono> :t plus
    plus :: Integer -> Integer -> Integer
    *TestMono> :t plus'
    plus' :: Num a => a -> a -> a
    

    Why isn't the type for plus polymorphic?


    This is the canonical question about the monomorphism restriction in Haskell, as discussed in the meta question.

    Answer

    What is the monomorphism restriction?

    The monomorphism restriction as stated by the Haskell wiki is:

    a counter-intuitive rule in Haskell type inference. If you forget to provide a type signature, sometimes this rule will fill the free type variables with specific types using "type defaulting" rules.

    What this means is that, in some circumstances, if your type is ambiguous (i.e. polymorphic) the compiler will choose to instantiate that type to something not ambiguous.

    How do I fix it?

    First of all you can always explicitly provide a type signature and this will avoid the triggering of the restriction:

    plus :: Num a => a -> a -> a
    plus = (+)    -- Okay!
    
    -- Runs as:
    Prelude> plus 1.0 1
    2.0
    

    Alternatively, if you are defining a function, you can avoid point-free style, and for example write:

    plus x y = x + y
    

    Turning it off

    It is possible to simply turn off the restriction so that you don't have to do anything to your code to fix it. The behaviour is controlled by two extensions: MonomorphismRestriction will enable it (which is the default) while NoMonomorphismRestriction will disable it.

    You can put the following line at the very top of your file:

    {-# LANGUAGE NoMonomorphismRestriction #-}
    

    If you are using GHCi you can enable the extension using the :set command:

    Prelude> :set -XNoMonomorphismRestriction
    

    You can also tell ghc to enable the extension from the command line:

    ghc ... -XNoMonomorphismRestriction
    

    Note: You should really prefer the first option over enabling the extension via command-line options.

    Refer to GHC's page for an explanation of this and other extensions.

    A complete explanation

    I'll try to summarize below everything you need to know to understand what the monomorphism restriction is, why it was introduced and how it behaves.

    An example

    Take the following trivial definition:

    plus = (+)
    

    you'd think to be able to replace every occurrence of + with plus. In particular since (+) :: Num a => a -> a -> a you'd expect to also have plus :: Num a => a -> a -> a.

    Unfortunately this is not the case. For example, if we try the following in GHCi:

    Prelude> let plus = (+)
    Prelude> plus 1.0 1
    

    We get the following output:

    <interactive>:4:6:
        No instance for (Fractional Integer) arising from the literal ‘1.0’
        In the first argument of ‘plus’, namely ‘1.0’
        In the expression: plus 1.0 1
        In an equation for ‘it’: it = plus 1.0 1
    

    You may need to :set -XMonomorphismRestriction in newer GHCi versions.

    And in fact we can see that the type of plus is not what we would expect:

    Prelude> :t plus
    plus :: Integer -> Integer -> Integer
    

    What happened is that the compiler saw that plus had type Num a => a -> a -> a, a polymorphic type. Moreover it happens that the above definition falls under the rules that I'll explain later, and so it decided to make the type monomorphic by defaulting the type variable a. The default is Integer, as we can see.

    Note that if you try to compile the above code using ghc you won't get any errors. This is due to how ghci handles (and must handle) the interactive definitions. Basically every statement entered in ghci must be completely type checked before the following is considered; in other words it's as if every statement was in a separate module. Later I'll explain why this matters.

    Some other examples

    Consider the following definitions:

    f1 x = show x
    
    f2 = \x -> show x
    
    f3 :: (Show a) => a -> String
    f3 = \x -> show x
    
    f4 = show
    
    f5 :: (Show a) => a -> String
    f5 = show
    

    We'd expect all these functions to behave in the same way and have the same type, i.e. the type of show: Show a => a -> String.

    Yet when compiling the above definitions we obtain the following errors:

    test.hs:3:12:
        No instance for (Show a1) arising from a use of ‘show’
        The type variable ‘a1’ is ambiguous
        Relevant bindings include
          x :: a1 (bound at blah.hs:3:7)
          f2 :: a1 -> String (bound at blah.hs:3:1)
        Note: there are several potential instances:
          instance Show Double -- Defined in ‘GHC.Float’
          instance Show Float -- Defined in ‘GHC.Float’
          instance (Integral a, Show a) => Show (GHC.Real.Ratio a)
            -- Defined in ‘GHC.Real’
          ...plus 24 others
        In the expression: show x
        In the expression: \ x -> show x
        In an equation for ‘f2’: f2 = \ x -> show x
    
    test.hs:8:6:
        No instance for (Show a0) arising from a use of ‘show’
        The type variable ‘a0’ is ambiguous
        Relevant bindings include f4 :: a0 -> String (bound at blah.hs:8:1)
        Note: there are several potential instances:
          instance Show Double -- Defined in ‘GHC.Float’
          instance Show Float -- Defined in ‘GHC.Float’
          instance (Integral a, Show a) => Show (GHC.Real.Ratio a)
            -- Defined in ‘GHC.Real’
          ...plus 24 others
        In the expression: show
        In an equation for ‘f4’: f4 = show
    

    So f2 and f4 don't compile. Moreover when trying to define these functions in GHCi we get no errors, but the type for f2 and f4 is () -> String!

    The monomorphism restriction is what makes f2 and f4 require a monomorphic type, and the different behaviour between ghc and ghci is due to different defaulting rules.
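    Note that eta-expanding such a binding turns it back into a function binding, to which the restriction does not apply; a minimal sketch (the name f4' is mine):

    f4' x = show x   -- function binding: keeps the polymorphic type Show a => a -> String

    -- compare with the simple pattern binding f4 = show, which is restricted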

    When does it happen?

    In Haskell, as defined by the report, there are two distinct types of bindings: function bindings and pattern bindings. A function binding is nothing other than the definition of a function:

    f x = x + 1
    

    Note that their syntax is:

    <identifier> arg1 arg2 ... argn = expr
    

    Modulo guards and where declarations (but those don't really matter here), and note that there must be at least one argument.

    A pattern binding is a declaration of the form:

    <pattern> = expr
    

    Again, modulo guards.

    Note that variables are patterns, so the binding:

    plus = (+)
    

    is a pattern binding. It's binding the pattern plus (a variable) to the expression (+).

    When a pattern binding consists of only a variable name it's called a simple pattern binding.

    The monomorphism restriction applies to simple pattern bindings!

    Well, formally we should say that:

    A declaration group is a minimal set of mutually dependent bindings.

    Section 4.5.1 of the report.

    And then (Section 4.5.5 of the report):

    a given declaration group is unrestricted if and only if:

    1. every variable in the group is bound by a function binding (e.g. f x = x) or a simple pattern binding (e.g. plus = (+); Section 4.4.3.2 ), and

    2. an explicit type signature is given for every variable in the group that is bound by simple pattern binding. (e.g. plus :: Num a => a -> a -> a; plus = (+)).

    Examples added by me.

    So a restricted declaration group is a group where, either there are non-simple pattern bindings (e.g. (x:xs) = f something or (f, g) = ((+), (-))) or there is some simple pattern binding without a type signature (as in plus = (+)).

    The monomorphism restriction affects restricted declaration groups.

    Most of the time you don't define mutually recursive functions and hence a declaration group becomes just a single binding.
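    To make the two conditions concrete, here is a sketch (all names are my own) of bindings that are and are not affected:

    f x = x                           -- function binding: unrestricted

    plusSig :: Num a => a -> a -> a
    plusSig = (+)                     -- simple pattern binding with signature: unrestricted

    plusNoSig = (+)                   -- simple pattern binding, no signature: restricted

    (inc, dec) = ((+ 1), subtract 1)  -- non-simple pattern binding: restricted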

    What does it do?

    The monomorphism restriction is described by two rules in Section 4.5.5 of the report.

    First rule

    The usual Hindley-Milner restriction on polymorphism is that only type variables that do not occur free in the environment may be generalized. In addition, the constrained type variables of a restricted declaration group may not be generalized in the generalization step for that group. (Recall that a type variable is constrained if it must belong to some type class; see Section 4.5.2 .)

    The highlighted part is what the monomorphism restriction introduces. It says that if the type is polymorphic (i.e. it contains some type variable) and that type variable is constrained (i.e. it has a class constraint on it: e.g. the type Num a => a -> a -> a is polymorphic because it contains a, and also constrained because a has the Num constraint on it) then it cannot be generalized.

    In simple words not generalizing means that the uses of the function plus may change its type.

    If you had the definitions:

    plus = (+)
    
    x :: Integer
    x = plus 1 2
    
    y :: Double
    y = plus 1.0 2
    

    then you'd get a type error. Because when the compiler sees that plus is called over an Integer in the declaration of x it will unify the type variable a with Integer and hence the type of plus becomes:

    Integer -> Integer -> Integer
    

    but then, when it will type check the definition of y, it will see that plus is applied to a Double argument, and the types don't match.

    Note that you can still use plus without getting an error:

    plus = (+)
    x = plus 1.0 2
    

    In this case the type of plus is first inferred to be Num a => a -> a -> a but then its use in the definition of x, where 1.0 requires a Fractional constraint, will change it to Fractional a => a -> a -> a.
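    With NoMonomorphismRestriction enabled the binding is generalized as usual, so both kinds of use type check independently; a minimal sketch:

    {-# LANGUAGE NoMonomorphismRestriction #-}

    plus = (+)      -- generalized to Num a => a -> a -> a

    x :: Integer
    x = plus 1 2    -- plus instantiated at Integer here

    y :: Double
    y = plus 1.0 2  -- and at Double here: no conflict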

    Rationale

    The report says:

    Rule 1 is required for two reasons, both of which are fairly subtle.

    • Rule 1 prevents computations from being unexpectedly repeated. For example, genericLength is a standard function (in library Data.List) whose type is given by

      genericLength :: Num a => [b] -> a
      

      Now consider the following expression:

      let len = genericLength xs
      in (len, len)
      

      It looks as if len should be computed only once, but without Rule 1 it might be computed twice, once at each of two different overloadings. If the programmer does actually wish the computation to be repeated, an explicit type signature may be added:

      let len :: Num a => a
          len = genericLength xs
      in (len, len)
      

    For this point the example from the wiki is, I believe, clearer. Consider the function:

    f xs = (len, len)
      where
        len = genericLength xs
    

    If len was polymorphic the type of f would be:

    f :: Num a, Num b => [c] -> (a, b)
    

    So the two elements of the tuple (len, len) could actually be different values! But this means that the computation done by genericLength must be repeated to obtain the two different values.

    The rationale here is: the code contains one function call, but not introducing this rule could produce two hidden function calls, which is counter-intuitive.

    With the monomorphism restriction the type of f becomes:

    f :: Num a => [b] -> (a, a)
    

    In this way there is no need to perform the computation multiple times.
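    One way to observe the lost sharing is to make the computation noisy with Debug.Trace; in this sketch (my own names, compiled without optimizations) the explicitly polymorphic len is recomputed once per use:

    import Debug.Trace (trace)
    import Data.List (genericLength)

    f :: (Num a, Num b) => [c] -> (a, b)
    f xs = (len, len)
      where
        len :: Num a => a   -- explicit signature keeps len polymorphic
        len = trace "computing len" (genericLength xs)

    -- forcing both components of f [1, 2, 3] prints "computing len" twice;
    -- without the inner signature (restriction on) len is shared and traced once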

    • Rule 1 prevents ambiguity. For example, consider the declaration group

      [(n,s)] = reads t

      Recall that reads is a standard function whose type is given by the signature

      reads :: (Read a) => String -> [(a,String)]

      Without Rule 1, n would be assigned the type ∀ a. Read a ⇒ a and s the type ∀ a. Read a ⇒ String. The latter is an invalid type, because it is inherently ambiguous. It is not possible to determine at what overloading to use s, nor can this be solved by adding a type signature for s. Hence, when non-simple pattern bindings are used (Section 4.4.3.2 ), the types inferred are always monomorphic in their constrained type variables, irrespective of whether a type signature is provided. In this case, both n and s are monomorphic in a.

    Well, I believe this example is self-explanatory. There are situations when not applying the rule results in type ambiguity.

    If you disable the extension as suggested above you will get a type error when trying to compile the above declaration. However this isn't really a problem: you already know that when using read you have to somehow tell the compiler which type it should try to parse...

    Second rule

    1. Any monomorphic type variables that remain when type inference for an entire module is complete, are considered ambiguous, and are resolved to particular types using the defaulting rules (Section 4.3.4 ).

    This means that if you have your usual definition:

    plus = (+)
    

    This will have a type Num a => a -> a -> a where a is a monomorphic type variable due to rule 1 described above. Once the whole module is inferred the compiler will simply choose a type that will replace that a according to the defaulting rules.

    The final result is: plus :: Integer -> Integer -> Integer.

    Note that this is done after the whole module is inferred.

    This means that if you have the following declarations:

    plus = (+)
    
    x = plus 1.0 2.0
    

    inside a module, before type defaulting the type of plus will be: Fractional a => a -> a -> a (see rule 1 for why this happens). At this point, following the defaulting rules, a will be replaced by Double and so we will have plus :: Double -> Double -> Double and x :: Double.

    Defaulting

    As stated before there exist some defaulting rules, described in Section 4.3.4 of the Report, that the inferencer can adopt and that will replace a polymorphic type with a monomorphic one. This happens whenever a type is ambiguous.

    For example in the expression:

    let x = read "<something>" in show x
    

    here the expression is ambiguous because the types for show and read are:

    show :: Show a => a -> String
    read :: Read a => String -> a
    

    So x has type Read a => a. But this constraint is satisfied by a lot of types: Int, Double or (), for example. Which one to choose? There's nothing that can tell us.

    In this case we can resolve the ambiguity by telling the compiler which type we want, adding a type signature:

    let x = read "<something>" :: Int in show x
    

    Now the problem is: since Haskell uses the Num type class to handle numbers, there are a lot of cases where numerical expressions contain ambiguities.

    Consider:

    show 1
    

    What should the result be?

    As before, 1 has type Num a => a and there are many types of numbers that could be used. Which one to choose?

    Having a compiler error almost every time we use a number isn't a good thing, and hence the defaulting rules were introduced. The rules can be controlled using a default declaration. By specifying default (T1, T2, T3) we can change how the inferencer defaults the different types.

    An ambiguous type variable v is defaultable if:

    • v appears only in constraints of the form C v, where C is a class (i.e. if it appears as in Monad (m v) then it is not defaultable).
    • at least one of these classes is Num or a subclass of Num.
    • all of these classes are defined in the Prelude or a standard library.

    A defaultable type variable is replaced by the first type in the default list that is an instance of all the ambiguous variable’s classes.

    The default default declaration is default (Integer, Double).

    For example:

    plus = (+)
    minus = (-)
    
    x = plus 1.0 1
    y = minus 2 1
    

    The types inferred would be:

    plus :: Fractional a => a -> a -> a
    minus :: Num a => a -> a -> a
    

    which, by defaulting rules, become:

    plus :: Double -> Double -> Double
    minus :: Integer -> Integer -> Integer
    

    Note that this explains why in the example in the question only the sort definition raises an error. The type Ord a => [a] -> [a] cannot be defaulted because Ord isn't a numeric class.
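    A module can change the defaults with its own default declaration; a minimal sketch:

    module Main where

    default (Int, Float)   -- replaces the standard default (Integer, Double)

    x = 2 ^ 10             -- Num constraint: defaults to Int
    y = 1.0 + 1            -- Fractional constraint: Int is skipped, Float is chosen

    main :: IO ()
    main = print (x, y)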

    Extended defaulting

    Note that GHCi comes with extended defaulting rules (described in the GHC user's guide), which can be enabled in files as well using the ExtendedDefaultRules extension.

    Under the extended rules, the defaultable type variables no longer need a numeric constraint: all the classes must still be standard, but it suffices that at least one of them is among Eq, Ord, Show or Num and its subclasses.

    Moreover the default default declaration is default ((), Integer, Double).

    This may produce odd results. Taking the example from the question:

    Prelude> :set -XMonomorphismRestriction
    Prelude> import Data.List(sortBy)
    Prelude Data.List> let sort = sortBy compare
    Prelude Data.List> :t sort
    sort :: [()] -> [()]
    

    in ghci we don't get a type error but the Ord a constraints results in a default of () which is pretty much useless.
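    The same behaviour can be reproduced in a compiled file by enabling the extension there; a sketch:

    {-# LANGUAGE ExtendedDefaultRules #-}

    import Data.List (sortBy)

    sort = sortBy compare   -- the Ord constraint defaults to (): sort :: [()] -> [()]

    main :: IO ()
    main = print (sort [(), ()])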

    Useful links

    There are a lot of resources and discussions about the monomorphism restriction.

    Here are some links that I find useful and that may help you understand or dig deeper into the topic:

    • The Haskell wiki page on the monomorphism restriction
    • Sections 4.5.5 and 4.3.4 of the Haskell report
    • A History of Haskell: Being Lazy with Class, which covers the monomorphism restriction and type defaulting
