efficient algorithm instead of looping


Problem description

I have a data set where each samples has a structure similar to this

X=[ [[],[],[],[]], [[],[]], [[],[],[]], [[],[]] ]

For example:

X=np.array([ [ [1,2,3], [2,4,5] ,[2,3,4] ] , [ [5,6], [6,6] ] , [[2,3,1],[2,3,10],[23,1,2],[1,4,5]]  ] ,"object")

Y=np.array([ [ [12,14,15] ,[12,13,14] ] , [ [15,16], [16,16] ] , [[22,23,21],[32,33,11],[12,44,55]]  ] ,"object")

So for every sample I need to calculate the dot product between every element of X and the corresponding element of Y at the same index, and sum the results, i.e.:

import itertools
import numpy as np

result = 0

for i in range(3):
    for n, m in itertools.product(X[i], Y[i]):
        print("%s, %s" % (n, m))
        result += np.dot(n, m)
[1, 2, 3], [12, 14, 15]
[1, 2, 3], [12, 13, 14]
[2, 4, 5], [12, 14, 15]
[2, 4, 5], [12, 13, 14]
[2, 3, 4], [12, 14, 15]
[2, 3, 4], [12, 13, 14]
[5, 6], [15, 16]
[5, 6], [16, 16]
[6, 6], [15, 16]
[6, 6], [16, 16]
[2, 3, 1], [22, 23, 21]
[2, 3, 1], [32, 33, 11]
[2, 3, 1], [12, 44, 55]
[2, 3, 10], [22, 23, 21]
[2, 3, 10], [32, 33, 11]
[2, 3, 10], [12, 44, 55]
[23, 1, 2], [22, 23, 21]
[23, 1, 2], [32, 33, 11]
[23, 1, 2], [12, 44, 55]
[1, 4, 5], [22, 23, 21]
[1, 4, 5], [32, 33, 11]
[1, 4, 5], [12, 44, 55] 

This is my whole code:

print("***build kernel***")
K = np.zeros((n_samples, n_samples))
for i in range(n_samples):
    for j in range(n_samples):
        K[i, j] = self.kernel(X[i], X[j])

def kernel(x1, x2):
    N = 8  # number of inner lists per sample
    result = 0
    for i in range(N):
        for n, m in itertools.product(x1[i], x2[i]):
            result += np.dot(n, m)
    return result

As you can see, the complexity of this algorithm is too high, and my real samples are much bigger than this, so even for a small data set of 400 samples I have to wait about 4 hours for the result. I am looking for a better way to implement this algorithm. P.S.: I was also thinking about multithreading or multiprocessing, but I am not sure whether it would help.

I appreciate any suggestion!
I appreciate any suggestion!

Answer

Instead of summing the dot product of each pair, which requires n * m operations, you can sum all of the vectors in each list and just do the final dot product, which will only take n + m operations.

import itertools
import numpy as np

def calc_slow(L1, L2):
    # O(n * m): one dot product per pair of vectors
    result = 0
    for n, m in itertools.product(L1, L2):
        result += np.dot(n, m)
    return result

def calc_fast(L1, L2):
    L1_sums = np.zeros(len(L1[0]))
    L2_sums = np.zeros(len(L2[0]))
    for vec in L1:
        L1_sums += vec
    for vec in L2:
        L2_sums += vec
    return np.dot(L1_sums, L2_sums)

An even faster version, taking advantage of NumPy:

def calc_superfast(L1, L2):
    return np.dot(np.array(L1).sum(0),
                  np.array(L2).sum(0))
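The same factoring idea carries over to building the whole kernel matrix: each sample's summed vectors can be precomputed once and reused for every pair, and the symmetry K[i,j] == K[j,i] halves the remaining work. A minimal sketch, assuming each sample is a list of inner lists of equal-length vectors at each index; the names `precompute_sums` and `build_kernel` are illustrative, not from the original code:

```python
import numpy as np

def precompute_sums(sample):
    # One summed vector per inner list. Vector lengths may differ
    # between indices, so keep the sums in a plain Python list.
    return [np.asarray(vectors).sum(axis=0) for vectors in sample]

def build_kernel(samples):
    # Precompute each sample's summed vectors once, instead of
    # redoing that work inside every kernel(X[i], X[j]) call.
    sums = [precompute_sums(sample) for sample in samples]
    n = len(samples)
    K = np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):  # the kernel is symmetric
            K[i, j] = sum(np.dot(a, b) for a, b in zip(sums[i], sums[j]))
            K[j, i] = K[i, j]
    return K
```

With the sums cached, each kernel entry costs one dot product per index instead of one per pair of vectors.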

The output is the same:

print(X[0], Y[0], calc_slow(X[0], Y[0]))
print(X[0], Y[0], calc_fast(X[0], Y[0]))

This prints:

[[1, 2, 3], [2, 4, 5], [2, 3, 4]] [[12, 14, 15], [12, 13, 14]] 711
[[1, 2, 3], [2, 4, 5], [2, 3, 4]] [[12, 14, 15], [12, 13, 14]] 711.0

Timing it shows a significant improvement. Timing code:

import random
import time

def rand_vector(size=3):
    return [random.randint(1, 100) for _ in range(size)]

def rand_list(length=200):
    return [rand_vector() for _ in range(length)]

print("Generating lists...")
L1 = rand_list(200)
L2 = rand_list(200)

print("Running slow...")
s = time.time()
print(calc_slow(L1, L2))
print("Slow for (%d, %d) took %.2fs" % (len(L1), len(L2), time.time() - s))

print("Running fast...")
s = time.time()
print(calc_fast(L1, L2))
print("Fast for (%d, %d) took %.2fs" % (len(L1), len(L2), time.time() - s))

Sample output:

Generating lists...
Running slow...
75715569
Slow for (100, 100) took 1.48s
Running fast...
75715569.0
Fast for (100, 100) took 0.03s

Generating lists...
Running slow...
309169971
Slow for (200, 200) took 5.29s
Running fast...
309169971.0
Fast for (200, 200) took 0.04s

Running fast...
3.05185703539e+12
Fast for (20000, 20000) took 1.94s

The operation on two lists of 20,000 elements each took only ~2 seconds, whereas I predict it would take several hours with the old method.

The reason this works is that you can group the operations together. Assuming you have the following lists:

L1 = [a, b, c], [d, e, f], [g, h, i] 
L2 = [u, v, w], [x, y, z]

Then summing the dot product of each pair would look like this:

a*u + b*v + c*w + a*x + b*y + c*z +
d*u + e*v + f*w + d*x + e*y + f*z +
g*u + h*v + i*w + g*x + h*y + i*z

You can factor out the u, v, w, x, y, and z elements:

u*(a + d + g) + v*(b + e + h) + w*(c + f + i) +
x*(a + d + g) + y*(b + e + h) + z*(c + f + i)

Then you can further factor out those sums:

(u + x)*(a + d + g) + (v + y)*(b + e + h) + (w + z)*(c + f + i)

Which is just the dot product of the summed vectors from each initial list.
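The identity is also easy to verify numerically; a small sketch with random integer vectors (the shapes here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
L1 = rng.integers(1, 10, size=(3, 4))  # three vectors of length 4
L2 = rng.integers(1, 10, size=(2, 4))  # two vectors of length 4

# Sum of all pairwise dot products...
pairwise = sum(np.dot(n, m) for n in L1 for m in L2)

# ...equals the single dot product of the summed vectors.
factored = np.dot(L1.sum(axis=0), L2.sum(axis=0))

assert pairwise == factored
```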
