"dat"协议可以有效地支持视频的实时流式传输吗? [英] Can `dat` protocol efficiently support live streaming of video?


Question

I would like to be able to live stream video (or any other file that is large and continuously modified/appended) via dat.

It says here:

The dat:// protocol doesn't support partial updates at the file-level, which means that with multiple records in a single file, every time a user adds a record, anyone who follows that user must sync and re-download the entire file. As the file continues to grow, performance will degrade. Putting each record in an individual file is much more efficient: when a record is created, peers in the network will only download the newly-created file.

However, it also says here that dat uses Rabin fingerprinting to create deterministic chunks of files, so presumably a dat client would be able to easily identify the chunks that it has already downloaded by their hash, and should therefore be able to only download the latest final chunk of the file, if that is the only part that has changed.

In the FAQ, it says:

The type of Merkle tree used by Dat lets peers compare which pieces of a specific version of a dataset they each have and efficiently exchange the deltas to complete a full sync.
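This delta exchange can be sketched with a toy model (purely illustrative, not the actual Hypercore wire format; a real Merkle tree additionally lets peers skip whole identical subtrees without comparing every chunk):

```python
import hashlib

def chunk_hashes(chunks):
    """Hash each chunk so peers can compare holdings cheaply."""
    return [hashlib.sha256(c).hexdigest() for c in chunks]

def missing_chunks(local_hashes, remote_hashes):
    """Return indices of chunks the local peer still needs.

    This flat comparison shows only the end result of the Merkle
    comparison: identical chunks are never transferred again.
    """
    have = set(local_hashes)
    return [i for i, h in enumerate(remote_hashes) if h not in have]

# Version 2 of a dataset appends one chunk to version 1.
v1 = [b"chunk-0", b"chunk-1"]
v2 = v1 + [b"chunk-2"]

needed = missing_chunks(chunk_hashes(v1), chunk_hashes(v2))
print(needed)  # [2] -- only the newly appended chunk is fetched
```

The point is that syncing cost scales with the delta, not with the total dataset size.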

There is hypervision, but from my rudimentary understanding of how it works, it looks like it saves its own "bundle.js" file for the video data. I'm not sure how it achieves streaming, but this is not exactly what I'm trying to achieve, which is being able to efficiently stream an arbitrarily large and expanding file, for example a .ts or .mkv video stream.

So, my question is: is efficient live-streaming of video (i.e. without re-downloading already-downloaded chunks) something that is simply not currently supported and could be added in future, or is it inherently unachievable using the dat protocol?

Answer

In short, the low-level hypercore protocol that Dat is built on top of should work well for video and other "soft realtime" streaming uses. However, the hyperdrive file/directory abstraction that Dat (the application) builds upon does not currently work well for these use cases. There is nothing blocking hyperdrive from working well with a single "arbitrary large and expanding file", but it has not been optimized for that specific use case.

As far as I know, all the current video streaming prototypes work by encoding video content directly into hypercore, not in a hyperdrive "files and directories" abstraction. This is sort of like the difference between writing raw bytes to a hard disk instead of using a file system. P2P video and audio streaming were explicit design goals for hypercore. Note that there may or may not be direct mappings to existing file formats or streaming protocols; the hypercore abstraction is presented as a stream of byte chunks, each capped at about a megabyte.
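As a rough sketch of that abstraction (hypothetical names, not the actual hypercore API), an append-only feed of byte chunks lets a live follower request only entries past the length it already has:

```python
class AppendOnlyFeed:
    """Toy append-only log; real Hypercore adds signatures and Merkle proofs."""
    def __init__(self):
        self.entries = []

    def append(self, block: bytes) -> int:
        self.entries.append(block)
        return len(self.entries) - 1

def sync(follower_len, feed):
    """A follower asks only for entries beyond what it already holds."""
    new = feed.entries[follower_len:]
    return follower_len + len(new), new

feed = AppendOnlyFeed()
feed.append(b"video-segment-0")
feed.append(b"video-segment-1")

pos, got = sync(0, feed)          # initial sync fetches everything so far
feed.append(b"video-segment-2")   # broadcaster keeps appending
pos, got = sync(pos, feed)        # live follower fetches only the new segment
print(got)  # [b'video-segment-2']
```

Encoding video segments directly as feed entries like this is what makes "soft realtime" streaming a natural fit for an append-only log.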

As a small detail, the dat/hypercore protocol and on-disk formats do not specify any particular "chunking" mechanism. Rabin chunking has been experimented with, but by default almost all clients use fixed-size chunking instead, for simplicity and speed (which isn't to say that performant locality-sensitive chunking couldn't be implemented in the future). In theory, clients could detect duplicate chunks by hash in any case and avoid re-downloading them (and avoid duplicate storage on disk), but this optimization hasn't been implemented as of Summer 2018.
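A minimal illustration of fixed-size chunking and the hash-based duplicate detection described above (names and the tiny chunk size are illustrative; real clients use chunks in the tens of kilobytes or larger):

```python
import hashlib

CHUNK_SIZE = 4  # tiny for illustration only

def fixed_chunks(data: bytes):
    """Split data into fixed-size chunks; the last chunk may be partial."""
    return [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]

def new_chunk_hashes(old: bytes, new: bytes):
    """Hashes present in `new` but absent from `old` -- what a
    dedup-aware client would actually need to download."""
    old_hashes = {hashlib.sha256(c).digest() for c in fixed_chunks(old)}
    return [hashlib.sha256(c).digest() for c in fixed_chunks(new)
            if hashlib.sha256(c).digest() not in old_hashes]

v1 = b"aaaabbbbcc"       # chunks: b"aaaa", b"bbbb", b"cc" (partial tail)
v2 = v1 + b"ccdddd"      # append-only growth

# Appending completes the partial tail chunk and adds one more, so only
# the two trailing chunks differ; everything earlier is unchanged.
print(len(new_chunk_hashes(v1, v2)))  # 2
```

Note the tail effect: with fixed-size chunking, appending to a file whose last chunk was partial changes that chunk's hash, so the follower re-fetches it plus the genuinely new data; content-defined (Rabin-style) chunking trades some speed for more stable boundaries.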

Hyperdrive currently requires all files to be stored as contiguous chunks in the "content" hypercore feed. This is very performant, but makes de-duplication difficult. As a special case, it should be possible to support appending to the most recent file (which appends directly to the content feed) without copying the entire file. Any time any other file in the feed gets updated or created, that breaks the contiguity, but for your use case it might be good enough (if this optimization were to be implemented).
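The contiguous layout and the proposed append-only special case can be modeled roughly like this (a toy sketch under stated assumptions, not hyperdrive's actual data structures):

```python
class ToyHyperdrive:
    """Toy model: files are contiguous (start, length) ranges in an
    append-only content feed, as in hyperdrive's current layout."""
    def __init__(self):
        self.content = []   # the append-only "content" feed
        self.files = {}     # name -> (start, length) range into the feed

    def write(self, name, chunks):
        start = len(self.content)
        self.content.extend(chunks)
        self.files[name] = (start, len(chunks))

    def append(self, name, chunks):
        start, length = self.files[name]
        if start + length == len(self.content):
            # File is the most recent one: extend in place, no copying.
            self.content.extend(chunks)
            self.files[name] = (start, length + len(chunks))
        else:
            # Another file interleaved: staying contiguous means
            # re-writing the whole file at the feed's tail (costly).
            self.write(name, self.content[start:start + length] + chunks)

drive = ToyHyperdrive()
drive.write("live.ts", [b"seg0", b"seg1"])
drive.append("live.ts", [b"seg2"])   # cheap: just appends to the feed
print(len(drive.content))            # 3 -- nothing was copied
drive.write("other.txt", [b"hi"])
drive.append("live.ts", [b"seg3"])   # now the whole file must be re-written
print(len(drive.content))            # 8 -- 3 old + 1 other + 4 copied
```

This shows why a single ever-growing stream file works well only as long as nothing else is written to the same archive in between appends.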

这篇关于"dat"协议可以有效地支持视频的实时流式传输吗?的文章就介绍到这了,希望我们推荐的答案对大家有所帮助,也希望大家多多支持IT屋!
