Hilbert Transform using CUDA


Question


In order to do a Hilbert transform on a 1D array, one must (see the sketch right after this list):

  • FFT the array
  • Double the positive-frequency half of the spectrum and zero the negative-frequency half
  • Inverse-FFT the result
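A minimal CPU-only NumPy sketch of those three steps, assuming an even-length real input (the function name is illustrative, not from the question):

import numpy as np

def hilbert_envelope_numpy(x):
    N = len(x)
    X = np.fft.fft(x)               # 1) FFT the array
    X[1:N//2] *= 2.0                # 2) double the positive frequencies...
    X[N//2 + 1:] = 0.0              #    ...and zero the negative ones
    return np.abs(np.fft.ifft(X))   # 3) inverse FFT; the magnitude is the envelope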

I'm using PyCuLib for the FFTs. My code so far:

import numpy as np
import numba as nb
import pyculib

def htransforms(data):
    N = data.shape[0]
    transforms       = nb.cuda.device_array_like(data)          # Allocate memory on GPU with size/dimensions of signal
    transforms.dtype = np.complex64                             # Change GPU array type to complex for FFT

    pyculib.fft.fft(data.astype(np.complex64), transforms)      # Do FFT on GPU

    transforms[1:N//2]      *= 2.0      # THIS STEP DOESN'T WORK
    transforms[N//2 + 1: N]  = 0+0j     # NEITHER DOES THIS ONE

    pyculib.fft.ifft_inplace(transforms)                        # Do IFFT on GPU: in place (same memory)
    envelope_function      = transforms.copy_to_host()          # Copy results to host (computer) memory
    return abs(envelope_function)

I have a feeling it may have something to do with Numba's CUDA interface itself... does it allow individual elements of an array (or array slices) to be modified like this? I assumed it might, as the variable transforms is a numba.cuda.cudadrv.devicearray.DeviceNDArray, so I thought maybe it had some of the same operations as numpy's ndarray.

In short, using Numba's device_arrays, what's the easiest way to do a simple operation on a slice? The error I get is

unsupported operand type(s) for *=: 'DeviceNDArray' and 'float'
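For context, the error means Numba's DeviceNDArray does not implement broadcast arithmetic on slices at all; on the Numba side, element-wise updates like these are normally written as a small CUDA kernel. A minimal sketch, assuming numba.cuda is available (the kernel name and launch sizes are illustrative):

from numba import cuda

@cuda.jit
def scale_slice(arr, start, stop, factor):
    i = cuda.grid(1)                 # absolute thread index
    if start + i < stop:
        arr[start + i] *= factor

# Hypothetical launch, doubling bins 1 .. N//2 - 1 in place on the device:
#   threads = 128
#   blocks  = (N + threads - 1) // threads
#   scale_slice[blocks, threads](transforms, 1, N // 2, 2.0)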

Solution

I would do it using PyTorch.

Your function, using PyTorch (I removed the abs so that it returns the complex values):

import torch

def htransforms(data):
    N = data.shape[-1]
    # Allocate memory on GPU with the size/dimensions of the signal
    transforms = torch.tensor(data).cuda()

    transforms = torch.fft.fft(transforms, dim=-1)
    transforms[:, 1:N//2]      *= 2.0     # double the positive frequencies
    transforms[:, N//2 + 1: N]  = 0+0j    # zero the negative frequencies

    # IFFT on GPU, then copy the result back to host memory
    return torch.fft.ifft(transforms, dim=-1).cpu()

But your transform is actually different from the one I find on Wikipedia.

The Wikipedia version:

def htransforms_wikipedia(data):
    N = data.shape[-1]
    # Allocate memory on GPU with the size/dimensions of the signal
    transforms = torch.tensor(data).cuda()

    transforms = torch.fft.fft(transforms, dim=-1)
    transforms[:, 1:(N+1)//2] *= -1j    # positive frequencies
    transforms[:, (N+2)//2:]  *= +1j    # negative frequencies
    transforms[:, 0] = 0                # DC component
    if N % 2 == 0:
        transforms[:, N//2] = 0         # Nyquist bin (the (-1)**n term)

    # IFFT on GPU, then copy the result back to host memory
    return torch.fft.ifft(transforms, dim=-1).cpu()

import matplotlib.pyplot as plt

data = torch.zeros((1, 2**10))
data[:, 2**9] = 1
tdata = htransforms(data).data
plt.plot(tdata.real.T, '-')
plt.plot(tdata.imag.T, '-')
plt.xlim([500, 525])
plt.legend(['real', 'imaginary'])
plt.title('impulse response of your version')

data = torch.zeros((1, 2**10))
data[:, 2**9] = 1
tdata = htransforms_wikipedia(data).data
plt.plot(tdata.real.T, '-')
plt.plot(tdata.imag.T, '-')
plt.xlim([500, 525])
plt.legend(['real', 'imaginary'])
plt.title('impulse response of Wikipedia version')
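As a sanity check (my addition, not part of the original answer): scipy.signal.hilbert returns the analytic signal x + 1j*H(x), so its imaginary part should match the real part of what htransforms_wikipedia returns:

import numpy as np
from scipy.signal import hilbert

x = np.random.randn(1, 1024)
h_torch = htransforms_wikipedia(torch.tensor(x)).real.numpy()
h_scipy = hilbert(x, axis=-1).imag
print(np.allclose(h_torch, h_scipy, atol=1e-8))  # expect: True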

The impulse response of your version is 1 + 1j * h[k], where h[k] is the impulse response of the Wikipedia version. If you are working with real data, the Wikipedia version is nice because you can use rfft and irfft, which gives an essentially one-line version:

def real_htransforms_wikipedia(data):
    N = data.shape[-1]
    # Allocate memory on GPU with the size/dimensions of the signal
    transforms = torch.tensor(data).cuda()

    transforms = -1j * torch.fft.rfft(transforms, dim=-1)  # -1j on all positive frequencies
    transforms[:, 0] = 0          # DC component
    if N % 2 == 0:
        transforms[:, -1] = 0     # Nyquist bin, as in the full version
    return torch.fft.irfft(transforms, n=N, dim=-1)
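Tying this back to the envelope the question was computing with abs, a short sketch under the same assumptions (the envelope helper is mine, not from the answer): the envelope is the magnitude of the analytic signal data + 1j*H(data):

def envelope(data):
    h = real_htransforms_wikipedia(data).cpu()  # H(data), back on the host
    return torch.sqrt(data**2 + h**2)           # |data + 1j*H(data)|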
