Transducer flatten and uniq
Problem description
I'm wondering if there is a way to use a transducer to flatten a list and filter it to unique values.
By chaining, it is very easy:
import {uniq, flattenDeep} from 'lodash';

const arr = [1, 2, [2, 3], [1, [4, 5]]];
uniq(flattenDeep(arr)); // -> [1, 2, 3, 4, 5]
But here we loop twice over the list (plus n more times for the depth layers). Not ideal.
What I'm trying to achieve is to use a transducer for this case. I've read Ramda documentation about it https://ramdajs.com/docs/#transduce, but I still can't find a way to write it correctly.
Currently, I use a reduce function with a recursive function inside it:
import {isArray} from 'lodash';

const arr = [1, 2, [2, 3], [1, [4, 5]]];

const flattenDeepUniq = (p, c) => {
  if (isArray(c)) {
    c.forEach(o => p = flattenDeepUniq(p, o));
  } else {
    p = !p.includes(c) ? [...p, c] : p;
  }
  return p;
};

arr.reduce(flattenDeepUniq, []); // -> [1, 2, 3, 4, 5]
We have a single loop over the elements (plus n nested loops for the depth layers), which seems better and more optimized.
Is it even possible to use a transducer and an iterator in this case? For more information about Ramda's transduce function: https://gist.github.com/craigdallimore/8b5b9d9e445bfa1e383c569e458c3e26
Transducers don't make much sense here. Your data structure is recursive. The best code to deal with recursive structures usually requires recursive algorithms.
How transducers work
(Roman Liutikov wrote a nice introduction to transducers.)
Transducers are all about replacing multiple trips through the same data with a single one, combining the atomic operations of the steps into a single operation.
A transducer would be a good fit to turn this code:
xs.map(x => x * 7).map(x => x + 3).filter(isOdd).take(5)
// ^               ^               ^             ^
//  \               \               \             `------ Iteration 4
//   \               \               `------------------- Iteration 3
//    \               `---------------------------------- Iteration 2
//     `------------------------------------------------- Iteration 1
into something like this:
xs.reduce((r, x) => r.length >= 5 ? r : isOdd(x * 7 + 3) ? r.concat(x * 7 + 3) : r, [])
// ^
// `------------------------------------------------------- Just one iteration
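As a runnable check that the single-pass reduce matches the chained version, here is a sketch in plain JavaScript. Note that isOdd is defined locally (it is not a built-in), and Array.prototype.slice stands in for take, which is not a native array method:

```javascript
// isOdd is assumed here; it is not a built-in.
const isOdd = n => n % 2 !== 0;

const xs = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];

// Four passes over the data (slice stands in for take).
const chained = xs.map(x => x * 7).map(x => x + 3).filter(isOdd).slice(0, 5);

// One pass: the same steps fused into a single reducer.
const fused = xs.reduce(
  (r, x) =>
    r.length >= 5      ? r                    // take(5): stop accumulating
    : isOdd(x * 7 + 3) ? r.concat(x * 7 + 3)  // map, map, filter fused
    : r,
  []
);

console.log(chained); // [17, 31, 45, 59, 73]
console.log(fused);   // [17, 31, 45, 59, 73]
```

Both produce the same five values, but the fused reducer touches each element at most once and stops accumulating after the fifth match.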
In Ramda, because map, filter, and take are transducer-enabled, we can convert
const foo = pipe(
map(multiply(7)),
map(add(3)),
filter(isOdd),
take(3)
)
foo([1, 2, 3, 4, 5, 6, 7, 8, 9, 10]) //=> [17, 31, 45]
(which iterates four times through the data) into
const bar = compose(
map(multiply(7)),
map(add(3)),
filter(isOdd),
take(3)
)
into([], bar, [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]) //=> [17, 31, 45]
which only iterates it once. (Note the switch from pipe to compose. Transducers compose in an order opposite that of plain functions.)
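To see why the order flips, it helps to hand-roll a tiny transducer. A transducer is a function from reducer to reducer, so the transducer applied outermost is the one that touches each element first. A minimal sketch (these mapT/filterT helpers are illustrative, not Ramda's actual implementation):

```javascript
// A transducer takes a reducer and returns a new reducer.
const mapT    = f    => reducer => (acc, x) => reducer(acc, f(x));
const filterT = pred => reducer => (acc, x) => (pred(x) ? reducer(acc, x) : acc);

// The innermost reducer just collects values into an array.
const push = (acc, x) => { acc.push(x); return acc; };

// Wrapping right-to-left: mapT(x => x * 7) ends up outermost, so it
// runs first on each element -- the same order compose() produces.
const xform = mapT(x => x * 7)(mapT(x => x + 3)(filterT(x => x % 2 !== 0)(push)));

const result = [1, 2, 3, 4, 5].reduce(xform, []);
console.log(result); // [17, 31]
```

Each element flows through multiply, add, and filter in left-to-right order, even though the functions were applied right-to-left, which is exactly why compose rather than pipe is used for transducers.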
The key point of such transducers is that they all operate similarly. map converts a list to another list, as do filter and take. While you could have transducers that operate on different types, and map and filter might also work on such types polymorphically, they will only work together if you're combining functions which operate on the same type.
Flatten is a weak fit for transducers
Your structure is more complex. While we could certainly create a function that will crawl it in some manner (preorder, postorder), and could thus probably start off a transducer pipeline with it, the logical way to deal with a recursive structure is with a recursive algorithm.
A simple way to flatten such a nested structure is something like this:
const flatten = xs => xs.reduce(
(a, x) => concat(a, isArray(x) ? flatten(x) : [x]),
[]
);
(For various technical reasons, Ramda's code is significantly more complex.)
This recursive version, though, is not well-suited to work with transducers, which essentially have to work step-by-step.
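For reference, here is a self-contained rendering of that flatten, substituting the built-ins Array.isArray and Array.prototype.concat for the Ramda/lodash helpers:

```javascript
// Recursive flatten using only built-ins.
const flatten = xs => xs.reduce(
  (a, x) => a.concat(Array.isArray(x) ? flatten(x) : [x]),
  []
);

console.log(flatten([1, 2, [2, 3], [1, [4, 5]]])); // [1, 2, 2, 3, 1, 4, 5]
```

Note that the duplicates (2 and 1) survive; removing them is the job of uniq, discussed next.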
Uniq is poorly suited for transducers
uniq, on the other hand, makes less sense with such transducers. The problem is that the container used by uniq, if you're going to get any benefit from transducers, has to be one which has quick inserts and quick lookups, a Set or an Object most likely. Let's say we use a Set. Then we have a problem, since our flatten operates on lists.
A different approach
Since we can't easily fold existing functions into one that does what you're looking for, we probably need to write a one-off.
The structure of the earlier solution makes it fairly easy to add the uniqueness constraint. Again, that was:
const flatten = xs => xs.reduce(
(a, x) => concat(a, isArray(x) ? flatten(x) : [x]),
[]
);
With a helper function for adding all elements to a Set:
const addAll = (set, xs) => xs.reduce((s, x) => s.add(x), set)
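A quick illustration of what the helper does (duplicates are silently absorbed by the Set):

```javascript
const addAll = (set, xs) => xs.reduce((s, x) => s.add(x), set);

// Starting from {1, 2}, adding [2, 3, 4] skips the duplicate 2.
console.log([...addAll(new Set([1, 2]), [2, 3, 4])]); // [1, 2, 3, 4]
```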
We can write a function that flattens, keeping only the unique values:
const flattenUniq = xs => xs.reduce(
(s, x) => addAll(s, isArray(x) ? flattenUniq(x) : [x]),
new Set()
)
Note that this has much the structure of the above, switching only to use a Set and therefore switching from concat to our addAll.
Of course you might want an array at the end. We can do that just by wrapping this with a Set -> Array function, like this:
const flattenUniq = xs => Array.from(xs.reduce(
(s, x) => addAll(s, isArray(x) ? flattenUniq(x) : [x]),
new Set()
))
You also might consider keeping this result as a Set. If you really want a collection of unique values, a Set is the logical choice.
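Putting the pieces together, here is a self-contained runnable version on the question's own sample data (Array.isArray standing in for lodash's isArray):

```javascript
// Add every element of xs to the given Set, returning the Set.
const addAll = (set, xs) => xs.reduce((s, x) => s.add(x), set);

// Flatten recursively, accumulating into a Set, then convert to an array.
const flattenUniq = xs => Array.from(xs.reduce(
  (s, x) => addAll(s, Array.isArray(x) ? flattenUniq(x) : [x]),
  new Set()
));

console.log(flattenUniq([1, 2, [2, 3], [1, [4, 5]]])); // [1, 2, 3, 4, 5]
```

Because JavaScript Sets preserve insertion order, the result comes out in first-seen order, matching the chained lodash version from the question.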
Such a function does not have the elegance of a points-free transduced function, but it works, and the exposed plumbing makes the relationships with the original data structure and with the plain flatten function much more clear.
I guess you can think of this entire long answer as just a long-winded way of pointing out what user633183 said in the comments: "neither flatten nor uniq are good use cases for transducers."