Dynamically normalise 2D numpy array
Question
I have a 2D numpy array "signals" of shape (100000, 1024). Each row contains the traces of amplitude of a signal, which I want to normalise to be within 0-1.
The signals each have different amplitudes, so I can't just divide by one common factor. Is there a way to normalise each signal so that every value within it is between 0 and 1?
Let's say that the signals look something like [[0,1,2,3,5,8,2,1],[0,2,5,10,7,4,2,1]] and I want them to become [[0,0.125,0.25,0.375,0.625,1,0.25,0.125],[0,0.2,0.5,1,0.7,0.4,0.2,0.1]].
Is there a way to do it without looping over all 100,000 signals, as this will surely be slow?
Thanks!
An easy way to do this is to take the maximum of each row (axis 1) and divide by it, letting broadcasting apply the division row-wise:
import numpy as np

a = np.array([[0, 1, 2, 3, 5, 8, 2, 1],
              [0, 2, 5, 10, 7, 4, 2, 1]])
b = np.max(a, axis=1)        # per-row maximum, shape (2,)
print(a / b[:, np.newaxis])  # newaxis turns b into a column so it broadcasts across each row
output:
[[0. 0.125 0.25 0.375 0.625 1. 0.25 0.125]
[0. 0.2 0.5 1. 0.7 0.4 0.2 0.1 ]]
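Note that dividing by the row maximum only maps values into 0–1 when every row's minimum is 0 (as in the example above). If rows can contain negative values or a nonzero baseline, a per-row min-max rescaling is needed instead. A minimal sketch of that variant, with illustrative variable names (not from the original answer):

```python
import numpy as np

signals = np.array([[0, 1, 2, 3, 5, 8, 2, 1],
                    [0, 2, 5, 10, 7, 4, 2, 1]], dtype=float)

# Per-row minimum and maximum; keepdims=True keeps shape (n, 1)
# so the results broadcast against the (n, 1024) array directly.
lo = signals.min(axis=1, keepdims=True)
hi = signals.max(axis=1, keepdims=True)

# Rescale each row to [0, 1]. A constant row would give hi == lo
# and divide by zero; guard with np.where if that can occur.
normalised = (signals - lo) / (hi - lo)
print(normalised)
```

Since both example rows already start at 0, this produces the same result as dividing by the maximum; it only differs for rows with a nonzero minimum.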