Is it possible to achieve Huffman decoding in GPU?


Problem description

We have a database encoded with Huffman coding. The aim is to copy it, together with its associated decoder, onto the GPU; then decode the database on the GPU and process the decoded data there, without copying it back to the CPU.

I am far from being a Huffman specialist, but the little I know suggests that it is an algorithm essentially based on control structures. With the basic algorithm, I am afraid there will be a lot of serialized operations.

My two questions are:


  • Do you know whether an efficient GPU version of Huffman coding exists?

  • If not, do you think there exists a Huffman algorithm that is adapted to the GPU (i.e. with fewer control structures)? Or perhaps you know (and could provide a reference showing) that efficient Huffman decoding simply cannot be done on the GPU?

I see other constraints, but they are not critical:

  • The GPU may not be very efficient at handling trees: a binary tree can be stored in a classical array (see the sketch below).

  • The workload could be difficult to balance: we'll see about that later.
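To make the array point concrete, here is a minimal sketch (my own illustration, not code from the post) of a Huffman tree flattened into an index-based array. With no pointers, the whole table is one contiguous block that a single cudaMemcpy can move to the device:

```cuda
#include <cuda_runtime.h>

// Pointer-free Huffman tree node: children are array indices, so the table
// can be copied to the GPU as one contiguous block.
struct HuffNode {
    int left;    // index of the 0-branch child, or -1 if this node is a leaf
    int right;   // index of the 1-branch child, or -1 if this node is a leaf
    int symbol;  // decoded byte when the node is a leaf; unused otherwise
};

int main() {
    // Tiny hand-built code for the alphabet {a, b, c}:
    // 'a' = 0, 'b' = 10, 'c' = 11 (node 0 is the root).
    HuffNode host[5] = {
        { 1,  2,  0 },   // root: bit 0 -> node 1, bit 1 -> node 2
        {-1, -1, 'a'},   // leaf for code 0
        { 3,  4,  0 },   // internal node for codes starting with 1
        {-1, -1, 'b'},   // leaf for code 10
        {-1, -1, 'c'},   // leaf for code 11
    };
    HuffNode* dev = nullptr;
    cudaMalloc(&dev, sizeof(host));
    cudaMemcpy(dev, host, sizeof(host), cudaMemcpyHostToDevice);
    // ... launch decoding kernels that walk the table ...
    cudaFree(dev);
    return 0;
}
```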

Answer

The problem with Huffman coding is that you can't fast-forward: you have to decode bit by bit, linearly.
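Here is a sketch of that constraint (my own, reusing the array-based node layout from the question's sketch above). The bit position of symbol k depends on the code lengths of all k-1 symbols before it, so each iteration needs the tree position left by the previous one and the loop cannot simply be split across threads:

```cuda
#include <stddef.h>

struct HuffNode { int left, right, symbol; };  // -1 children mark a leaf

// Plain sequential decoding of an MSB-first bit stream; returns the number
// of symbols written to out. The loop-carried dependency on `node` and on
// the bit position is what makes the basic algorithm serial.
size_t decodeSequential(const unsigned char* bits, size_t nbits,
                        const HuffNode* tree, int root, unsigned char* out)
{
    size_t n = 0;
    int node = root;
    for (size_t i = 0; i < nbits; ++i) {
        int b = (bits[i >> 3] >> (7 - (i & 7))) & 1;  // bit i, MSB first
        node = b ? tree[node].right : tree[node].left;
        if (tree[node].left < 0) {  // reached a leaf: emit, restart at root
            out[n++] = (unsigned char)tree[node].symbol;
            node = root;
        }
    }
    return n;
}
```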

As such, it's not ideal for parallelism.

If you can decide on the encoding, you could encode chunk by chunk so as to be able to decode each chunk independently.
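A sketch of what that could look like as a CUDA kernel (the names and the per-chunk side-table layout are my own assumptions, not part of the answer): the encoder additionally records, for each chunk, the bit offset where it starts, how many symbols it holds, and where its output goes, so one thread can decode one chunk without ever touching another chunk's state.

```cuda
struct HuffNode { int left, right, symbol; };  // -1 children mark a leaf

// One thread per chunk: every chunk is a self-contained bit stream, so the
// bit-serial inner loop runs independently in each thread.
__global__ void decodeChunks(const unsigned char* bits,          // whole encoded stream
                             const unsigned long long* bitStart, // per-chunk start bit
                             const int* symbolCount,             // per-chunk symbol count
                             const long long* outStart,          // per-chunk output offset
                             const HuffNode* tree, int root,
                             unsigned char* out, int numChunks)
{
    int c = blockIdx.x * blockDim.x + threadIdx.x;
    if (c >= numChunks) return;

    unsigned long long bit = bitStart[c];
    unsigned char* dst = out + outStart[c];
    int node = root;

    for (int written = 0; written < symbolCount[c]; ) {
        int b = (bits[bit >> 3] >> (7 - (bit & 7))) & 1;  // next bit, MSB first
        ++bit;
        node = b ? tree[node].right : tree[node].left;
        if (tree[node].left < 0) {  // leaf: emit symbol, restart at the root
            dst[written++] = (unsigned char)tree[node].symbol;
            node = root;
        }
    }
}
```

The price is a small side table per chunk and some lost compression at chunk boundaries; the chunk size trades that overhead against how many threads you can keep busy.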
