What are good reasons for choosing invariance in an API like Stream.reduce()?


Question


Reviewing Java 8 Stream API design, I was surprised by the generic invariance on the Stream.reduce() arguments:

<U> U reduce(U identity,
             BiFunction<U,? super T,U> accumulator,
             BinaryOperator<U> combiner)


A seemingly more versatile version of the same API might have applied covariance / contravariance on individual references to U, such as:

<U> U reduce(U identity,
             BiFunction<? super U, ? super T, ? extends U> accumulator,
             BiFunction<? super U, ? super U, ? extends U> combiner)


This would allow for the following, which isn't possible, currently:

// Assuming we want to reuse these tools all over the place:
BiFunction<Number, Number, Double> numberAdder =
    (t, u) -> t.doubleValue() + u.doubleValue();

// This currently doesn't work, but would work with the suggestion
Stream<Number> stream = Stream.of(1, 2L, 3.0);
double sum = stream.reduce(0.0, numberAdder, numberAdder);


Workaround, use method references to "coerce" the types into the target type:

double sum = stream.reduce(0.0, numberAdder::apply, numberAdder::apply);
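To make the workaround concrete, here is a minimal, self-contained sketch (the class and method names are mine, not part of the original question). The point is that a single method reference, `numberAdder::apply`, is target-typed twice by the compiler: once as the accumulator and once as the combiner:

```java
import java.util.function.BiFunction;
import java.util.stream.Stream;

public class ReduceWorkaround {
    // Reusable adder over any Number subtype; too general to pass to
    // reduce() directly because of the invariant U in reduce's signature.
    static final BiFunction<Number, Number, Double> NUMBER_ADDER =
        (t, u) -> t.doubleValue() + u.doubleValue();

    static double sumNumbers(Stream<Number> stream) {
        // NUMBER_ADDER::apply is re-target-typed here: once as the
        // BiFunction<Double, ? super Number, Double> accumulator and
        // once as the BinaryOperator<Double> combiner.
        return stream.reduce(0.0, NUMBER_ADDER::apply, NUMBER_ADDER::apply);
    }

    public static void main(String[] args) {
        System.out.println(sumNumbers(Stream.of(1, 2L, 3.0))); // prints 6.0
    }
}
```

Passing `NUMBER_ADDER` itself would fail to compile, but the method reference is a poly expression, so the compiler derives a fresh functional-interface type for each use site.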


C# doesn't have this particular problem, as Func(T1, T2, TResult) is defined as follows, using declaration-site variance, which means that any API using Func gets this behaviour for free:

public delegate TResult Func<in T1, in T2, out TResult>(
    T1 arg1,
    T2 arg2
)


What are the advantages (and possibly, the reasons for EG decisions) of the existing design over the suggested design?


Or, asked differently, what are the caveats of the suggested design that I might be overlooking (e.g. type inference difficulties, parallelisation constraints, or constraints specific to the reduction operation such as e.g. associativity, anticipation of a future Java's declaration-site variance on BiFunction<in T, in U, out R>, ...)?

Answer


Crawling through the history of the lambda development and isolating "THE" reason for this decision is difficult - so eventually, one will have to wait for one of the developers to answer this question.

一些提示可能如下:


  • The stream interfaces have undergone several iterations and refactorings. In one of the earliest versions of the Stream interface, there were dedicated reduce methods, and the one closest to the reduce method in the question was still called Stream#fold back then. It already received a BinaryOperator as the combiner parameter.


Interestingly, for quite a while, the lambda proposal included a dedicated interface Combiner<T,U,R>. Counterintuitively, this was not used as the combiner in the Stream#reduce function. Instead, it was used as the reducer, which seems to be what nowadays is referred to as the accumulator. However, the Combiner interface was replaced with BiFunction in a later revision.

The most striking similarity to the question here is found in a thread about the Stream#flatMap signature on the mailing list, which then turns into a general question about the variances of the stream method signatures. They fixed these in some places, for example:


As Brian corrected me:

<R> Stream<R> flatMap(Function<? super T, ? extends Stream<? extends R>> mapper);

instead of:

<R> Stream<R> flatMap(Function<T, Stream<? extends R>> mapper);


But noticed that in some places, this was not possible:


T reduce(T identity, BinaryOperator<T> accumulator);

Optional<T> reduce(BinaryOperator<T> accumulator);

Can't fix these because they use 'BinaryOperator', but if 'BiFunction' is supported then we have more flexibility

<U> U reduce(U identity, BiFunction<? super U, ? super T, ? extends U> accumulator, BinaryOperator<U> combiner)

instead of:

<U> U reduce(U identity, BiFunction<U, ? super T, U> accumulator, BinaryOperator<U> combiner);

Same comment regarding 'BinaryOperator'

(emphasis by me).
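The flatMap fix quoted above can be checked with a short program (the class name is mine). The `? super T` bound is exactly what lets a mapper declared on a supertype of the element type be reused across streams:

```java
import java.util.function.Function;
import java.util.stream.Stream;

public class FlatMapVariance {
    static long countTokens() {
        // A mapper declared on Object, a supertype of the element type String...
        Function<Object, Stream<Integer>> lengths =
            o -> Stream.of(o.toString().length());

        // ...is accepted because flatMap declares Function<? super T, ...>;
        // with the invariant Function<T, ...> signature, this would not compile.
        return Stream.of("a", "bb", "ccc")
                     .flatMap(lengths)
                     .count();
    }

    public static void main(String[] args) {
        System.out.println(countTokens()); // prints 3
    }
}
```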


The only justification that I found for not replacing the BinaryOperator with a BiFunction was eventually given in the response to this statement, in the same thread:


BinaryOperator will not be replaced by BiFunction even if, as you said, it introduces more flexibility; a BinaryOperator requires the two parameters and the return type to be the same, so it has conceptually more weight (the EG has already voted on that).

Maybe someone can dig out a particular reference of the vote of the Expert Group that governed this decision, but maybe this quote already sufficiently answers the question of why it is the way it is...
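The distinction drawn in the quoted response can be illustrated in a few lines (the class name is mine): BinaryOperator<T> is declared as a subinterface of BiFunction<T,T,T>, so the specialisation only works in one direction, and adapting a more general BiFunction back into a BinaryOperator requires the method-reference trick:

```java
import java.util.function.BiFunction;
import java.util.function.BinaryOperator;

public class OperatorVsFunction {
    public static void main(String[] args) {
        BinaryOperator<Double> op = (a, b) -> a + b;

        // Widening is free: BinaryOperator<T> extends BiFunction<T, T, T>.
        BiFunction<Double, Double, Double> fn = op;
        System.out.println(fn.apply(1.5, 2.5)); // prints 4.0

        // The reverse needs an adapter: a BiFunction<Number, Number, Double>
        // is not a BinaryOperator<Double>, but its ::apply can become one.
        BiFunction<Number, Number, Double> adder =
            (t, u) -> t.doubleValue() + u.doubleValue();
        BinaryOperator<Double> adapted = adder::apply;
        System.out.println(adapted.apply(1.0, 2.0)); // prints 3.0
    }
}
```

This one-way relationship is the "conceptual weight" the quote refers to: the type itself documents that both operands and the result agree.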

