Given edges, how can I find routes that consist of two edges in a vectorised way?


Question


I have an array of towns and their neighbours. I want to get a set of all the pairs of towns that have at least one route consisting of exactly two different edges. Is there a vectorized way to do this? If not, why? For example: the edges [3,0], [0,4], [5,0] all share the incident node 0, so it's guaranteed that [3,4], [4,5], [3,5] are pairs of towns that can be connected by routes like 3-0-4, 4-0-5 and 3-0-5. Each of these routes consists of two edges.

Example input: np.array([[3,0],[0,4],[5,0],[2,1],[1,4],[2,3],[5,2]])


Expected output: array([4,3], [4,5], [3,5], [4,2], [1,3], [1,5], [3,5], [0,2], [0,1], [0,2]) (no worries if the order is different, some edge directions are reversed, or there are duplicates).
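For a concrete reference point, here is a small non-vectorised baseline (my own sketch, not part of the question) that produces the same set of pairs by grouping each node's neighbours in a plain dict and pairing them:

import numpy as np
from collections import defaultdict
from itertools import combinations

def two_edge_pairs_naive(edges):
    # Collect every node's neighbours, then emit all pairs of neighbours that
    # meet at that node: each such pair is joined by a two-edge route.
    neighbours = defaultdict(list)
    for a, b in edges:
        neighbours[a].append(b)
        neighbours[b].append(a)
    return [pair for nbrs in neighbours.values() for pair in combinations(nbrs, 2)]

edges = np.array([[3, 0], [0, 4], [5, 0], [2, 1], [1, 4], [2, 3], [5, 2]])
print(two_edge_pairs_naive(edges))
# ten pairs, matching the expected output up to order and edge direction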


Here is what I have done so far:

import numpy as np
from itertools import chain, combinations

def get_incidences(roads):
    roads = np.vstack([roads, roads[:,::-1]])
    roads_sorted = roads[np.argsort(roads[:,0])]
    marker_idx = np.flatnonzero(np.diff(roads_sorted[:,0]))+1
    source = roads_sorted[np.r_[marker_idx-1,-1],0]
    target = np.split(roads_sorted[:,1], marker_idx)
    return source, target

def get_combinations_chain(target):
    #I know this could be improved with `np.fromiter`
    return np.array(list(chain(*[combinations(n,2) for n in target])))

def get_combinations_triu(target):
    def combs(t):
        x, y = np.triu_indices(len(t),1)
        return np.transpose(np.array([t[x], t[y]]))
    return np.concatenate([combs(n) for n in target])

roads = np.array([[3,0], [0,4], [5,0], [2,1], [1,4], [2,3], [5,2]])

>>> get_incidences(roads)
(array([0, 1, 2, 3, 4, 5]),
 [array([4, 3, 5]),
  array([4, 2]),
  array([1, 3, 5]),
  array([0, 2]),
  array([0, 1]),
  array([0, 2])])
>>> get_combinations_chain(get_incidences(roads)[1])
array([[4, 3], [4, 5], [3, 5], [4, 2], [1, 3], [1, 5], [3, 5], [0, 2], [0, 1], [0, 2]])
>>> get_combinations_triu(get_incidences(roads)[1])
array([[4, 3], [4, 5], [3, 5], [4, 2], [1, 3], [1, 5], [3, 5], [0, 2], [0, 1], [0, 2]])


The last two give the expected output, but they still rely on a Python-level list comprehension. Is it possible to vectorize this calculation:

np.concatenate([combs(n) for n in target])


Update: I ended up with a possible way of vectorizing it, but I needed to reorganize the input data (the output of get_incidences):

INPUT:
target: [array([4, 3, 5]), array([4, 2]), array([1, 3, 5]), array([0, 2]), array([0, 1]), array([0, 2])]
stream: [4 3 5 4 2 1 3 5 0 2 0 1 0 2]
lengths: [3 2 3 2 2 2]
OUTPUT:
array([[3, 4], [4, 5], [3, 5], [2, 4], [1, 3], [1, 5], [3, 5], [0, 2], [0, 1], [0, 2]])
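To make this layout concrete, here is a minimal hand-trace (illustrative names only, not part of the functions below) of how the repeat trick in get_combinations_vectorized, shown below, builds the pairs for node 0 alone:

import numpy as np

# Node 0's neighbour list from `target` has length 3.
neigh = np.array([4, 3, 5])
k = len(neigh)

left = np.tile(neigh, k)      # [4 3 5 4 3 5 4 3 5]  (whole list repeated k times)
right = np.repeat(neigh, k)   # [4 4 4 3 3 3 5 5 5]  (each element repeated k times)

# The cross product contains every ordered pair; keeping left < right leaves
# each unordered pair once: the routes 3-0-4, 4-0-5 and 3-0-5.
print(np.stack([left, right], axis=1)[left < right])
# [[3 4]
#  [4 5]
#  [3 5]]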


It also appears to be faster than straightforward concatenation of all the combinations:

def get_incidences(roads):
    roads = np.vstack([roads, roads[:,::-1]])
    roads_sorted = roads[np.argsort(roads[:,0])]
    marker_idx = np.flatnonzero(np.diff(roads_sorted[:,0]))+1
    lengths = np.diff(marker_idx, prepend=0, append=len(roads_sorted))
    stream = roads_sorted[:,1]
    target = np.split(stream, marker_idx)
    return target, stream, lengths

def get_combinations_vectorized(data):
    target, stream, lengths = data
    # repeat each node's whole neighbour list once per neighbour it has ...
    idx1 = np.concatenate(np.repeat(target, lengths))
    # ... and repeat each single neighbour once per member of its group,
    # giving the full cross product of every neighbour list with itself
    idx2 = np.repeat(stream, np.repeat(lengths, lengths))
    # keep each unordered pair exactly once
    return np.array([idx1, idx2]).T[idx1 < idx2]

def get_combinations_triu(data):
    target, stream, lengths = data
    def combs(t):
        x, y = np.triu_indices(len(t),1)
        return np.transpose(np.array([t[x], t[y]]))
    return np.concatenate([combs(n) for n in target])

def get_combinations_chain(data):
    target, stream, lengths = data
    return np.array(list(chain(*[combinations(n,2) for n in target])))

def get_combinations_scott(data):
    target, stream, lengths = data
    return np.array([x for i in target for x in combinations(i,2)])

def get_combinations_index(data):
    target, stream, lengths = data
    index = np.fromiter(chain.from_iterable(chain.from_iterable(combinations(n,2) for n in target)), 
                        dtype=int, count=np.sum(lengths*(lengths-1)))
    return index.reshape(-1,2)

roads = np.array([[64, 53], [94, 90], [24, 60], [45, 44], [83, 17], [10, 88], [14, 6], [56, 93], [98, 93], [86, 77], [12, 85], [58, 2], [19, 80], [48, 26], [11, 51], [16, 83], [45, 96], [35, 54], [47, 23], [81, 57], [52, 34], [88, 11], [18, 4], [35, 90], [41, 45], [2, 7], [58, 68], [58, 11], [46, 38], [32, 93], [44, 41], [26, 39], [20, 58], [44, 4], [8, 96], [74, 71], [34, 35], [91, 72], [28, 58], [53, 73], [66, 5], [84, 97], [24, 29], [43, 63], [96, 63], [20, 57], [1, 74], [4, 89], [10, 89], [98, 22]])
data = get_incidences(roads)

%timeit get_combinations_vectorized(data)
%timeit get_combinations_chain(data)
%timeit get_combinations_triu(data)
%timeit get_combinations_scott(data)
%timeit get_combinations_index(data)

92 µs ± 18.3 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
123 µs ± 3.67 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
1.8 ms ± 9.44 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
126 µs ± 2.45 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
140 µs ± 4.48 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)


However, it depends a lot on the data. Timings for roads = np.array(list(combinations(range(100),2))):

44.2 ms ± 4.36 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
277 ms ± 8.26 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
21.2 ms ± 1.84 ms per loop (mean ± std. dev. of 7 runs, 100 loops each)
369 ms ± 17.4 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
43.2 ms ± 911 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)

Answer


You can use the networkx library:

import numpy as np
import networkx as nx
import matplotlib.pyplot as plt
from itertools import combinations

a = np.array([[3,0], [0,4], [5,0], [2,1], [1,4], [2,3], [5,2]])

G = nx.Graph()

G.add_edges_from(a)

# Draw the resulting network
nx.draw_networkx(G)

# Create pairs of all nodes in network
c = combinations(G.nodes, 2)

# Find the first simple path (at most two edges) between each pair
routes = [list(nx.all_simple_paths(G, i, j, cutoff=2))[0] for i, j in c]

# Keep only routes with three nodes / two edges, then take the first and last node
paths_2_edges = [(i[0], i[-1]) for i in routes if len(i) == 3]
print(paths_2_edges)

Output:

[(3, 4), (3, 5), (3, 1), (0, 2), (0, 1), (4, 5), (4, 2), (5, 1)]
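Note that the [0] above keeps only the first simple path networkx finds for each pair; in graphs containing triangles, or pairs farther apart than the cutoff, this can miss a two-edge route or raise an IndexError. A small variant (my own sketch, not from the original answer) that iterates every path up to the cutoff would be:

# Sketch: keep a pair whenever ANY simple path between the two nodes
# has exactly three nodes, i.e. two edges (duplicates are acceptable here).
pairs = [(p[0], p[-1])
         for i, j in combinations(G.nodes, 2)
         for p in nx.all_simple_paths(G, i, j, cutoff=2)
         if len(p) == 3]
print(pairs)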


Per the comments:

Vectorizing this statement: np.concatenate([combs(n) for n in target]):

For t = get_incidences(roads)[1]:

s2 = get_combinations_triu(t)

Output s2:

array([[4, 3],
       [4, 5],
       [3, 5],
       [4, 2],
       [1, 3],
       [1, 5],
       [3, 5],
       [0, 2],
       [0, 1],
       [0, 2]])

%timeit get_combinations_triu(t)


96.9 µs ± 3.44 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)


Then

s1 = np.array([x for i in t for x in combinations(i,2)])

Output s1:

array([[4, 3],
       [4, 5],
       [3, 5],
       [4, 2],
       [1, 3],
       [1, 5],
       [3, 5],
       [0, 2],
       [0, 1],
       [0, 2]])


And (s1 == s2).all() gives:

True

Timeit:

%timeit np.array([x for i in t for x in list(combinations(i,2))])


14.7 µs ± 577 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
