Linux: Screen desktop video capture over network, and VNC framerate


Problem Description


    Sorry for the wall of text - TL;DR:

    • What is the framerate of a VNC connection (in frames/sec) - or rather, who determines it: the client or the server?
    • Any other suggestions for desktop screen capture - but "correctly timecoded", with an unjittered framerate (a stable period), and with the possibility of obtaining it as an uncompressed (or lossless) image sequence?

    Briefly - I have a typical problem that I am faced with: I sometimes develop hardware, and want to record a video that shows both commands entered on the PC ('desktop capture'), and responses of the hardware ('live video'). A chunk of an intro follows, before I get to the specific detail(s).  
     

    Intro/Context

    My strategy, for now, is to use a video camera to record the process of hardware testing (as 'live' video) - and do a desktop capture at the same time. The video camera produces a 29.97 (30) FPS MPEG-2 .AVI video; and I want to get the desktop capture as an image sequence of PNGs at the same frame rate as the video. The idea, then, would be: if the frame rate of the two videos is the same, then I could simply

    • Align the start time of the desktop capture with the matching point in the 'live' video
    • Set up a picture-in-picture, where a scaled down version of the desktop capture is put - as overlay - on top of the 'live' video
      • (where a portion of the screen on the 'live' video, serves as a visual sync source with the 'desktop capture' overlay)
    • Export a 'final' combined video, compressed appropriately for the Internet

    In principle, I guess one could use a command line tool like ffmpeg for this process; however I would prefer to use a GUI for finding the alignment start point for the two videos.
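
    For illustration, here is a minimal sketch of what that single command line could look like - assuming a newer ffmpeg with the filter_complex/overlay filters (Lucid's build may well predate them), and with the 12.3 s offset, the scaling, and the file names purely as placeholder assumptions:

    # overlay the PNG-sequence desktop capture (shifted to its sync point) onto the 'live' video
    ffmpeg -i live.avi \
           -itsoffset 12.3 -framerate 29.97 -i desktop_%04d.png \
           -filter_complex "[1:v]scale=320:240[pip];[0:v][pip]overlay=W-w-10:H-h-10" \
           -c:v libtheora -q:v 7 final.ogv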

    Eventually, what I also want to achieve, is to preserve maximum quality when exporting the 'final' video: the 'live' video is already compressed when out of the camera, which means additional degradation when it passes through the Theora .ogv codec - which is why I'd like to keep the original videos, and use something like a command line to generate a 'final' video anew, if a different compression/resolution is required. This is also why I like to have the 'desktop capture' video as a PNG sequence (although I guess any uncompressed format would do): I take measures to 'adjust' the desktop, so there aren't many gradients, and lossless encoding (i.e. PNG) would be appropriate.  
     

    Desktop capture options

    Well, there are many troubles in this process under Ubuntu Lucid, which I currently use (and you can read about some of my ordeals in 10.04: Video overlay/composite editing with Theora ogv - Ubuntu Forums). However, one of the crucial problems is the assumption that the frame rate of the two incoming videos is equal - in reality, the desktop capture is usually of a lower framerate; and even worse, the frames are very often out of sync.

    This, then, requires the hassle of sitting in front of a video editor, and manually cutting and editing less-than-a-second clips on frame level - requiring hours of work for what will be in the end a 5 minute video. On the other hand, if the two videos ('live' and 'capture') did have the same framerate and sync: in principle, you wouldn't need more than a couple of minutes for finding the start sync point in a video editor - and the rest of the 'merged' video processing could be handled by a single command line. Which is why, in this post, I would like to focus on the desktop capture part.

    As far as I can see, there are only a few viable (as opposed to 5 Ways to Screencast Your Linux Desktop) alternatives for desktop capture in Linux / Ubuntu (note, I typically use a laptop as the target for desktop capturing):

    1. Have your target PC (laptop) clone the desktop on its VGA output; use VGA-to-composite or VGA-to-S-video hardware to obtain a video signal from the VGA; use a video capture card on a different PC to grab the video
    2. Use recordMyDesktop on the target PC
    3. Set up a VNC server (vino on Ubuntu; or vncserver) on the target PC to be captured; use VNC capture software (such as vncrec) on a different PC to grab/record the VNC stream (which can, subsequently, be converted to video).
    4. Use ffmpeg with x11grab option
    5. *(use some tool on the target PC, that would do a DMA transfer of a desktop image frame directly - from the graphics card frame buffer memory, to the network adapter memory)

    Please note that the usefulness of the above approaches is limited by my context of use: the target PC that I want to capture typically runs software (utilizing the tested hardware) that moves around massive amounts of data; the best you could say in describing such a system is "barely stable" :) I'd guess this is similar to the problems gamers face when wanting to obtain a video capture of a demanding game. And as soon as I start using something like recordMyDesktop, which also uses quite a bit of resources and wants to capture on the local hard disk - I immediately get severe kernel crashes (often with no vmcore generated).

    So, in my context, I typically do assume involvement of a second computer - to run the capture and recording of the 'target' PC desktop. Other than that, the pros and cons I can see so far with the above options, are included below.

    (Desktop preparation)

    For all of the methods discussed below, I tend to "prepare" the desktop beforehand:

    • Remove desktop backgrounds and icons
    • Set the resolution down to 800x600 via System/Preferences/Monitors (gnome-desktop-properties)
    • Change color depth down to 16 bpp (using xdpyinfo | grep "of root" to check)

    ... in order to minimize the load on desktop capture software. Note that changing color depth on Ubuntu requires changes to xorg.conf; however, "No xorg.conf (is) found in /etc/X11 (Ubuntu 10.04)" - so you may need to run sudo Xorg -configure first.
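
    As a hedged sketch of that procedure (the spare display number and file paths are the usual defaults, not verified on every setup):

    # generate a skeleton xorg.conf while X runs on :0 (uses a spare display number)
    sudo Xorg :1 -configure              # writes xorg.conf.new into root's home directory
    sudo cp /root/xorg.conf.new /etc/X11/xorg.conf
    # then, in the "Screen" section of /etc/X11/xorg.conf, set:  DefaultDepth 16
    # after restarting X, verify the depth:
    xdpyinfo | grep "of root"            # e.g. "depth of root window:    16 planes"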

    In order to keep graphics resource use low, I also usually had compiz disabled - or rather, I'd have 'System/Preferences/Appearance/Visual Effects' set to "None". However, after I tried enabling compiz by setting 'Visual Effects' to "Normal" (which doesn't get saved), I noticed that windows on the LCD screen are redrawn much faster; so I keep it like this, also for desktop capture. I find this a bit strange: how could more effects cause a faster screen refresh? It doesn't look like it's due to a proprietary driver (the card is an "Intel Corporation N10 Family Integrated Graphics Controller", and no proprietary driver option is given by Ubuntu upon switching to compiz) - although, it could be that all the blurring and effects just cheat my eyes :)

    Cloning VGA

    Well, this is the most expensive option (as it requires the additional purchase of not just one, but two pieces of hardware: a VGA converter, and a video capture card); and it is applicable mostly to laptops (which have both a screen and an additional VGA output - for desktops, one may also have to invest in an additional graphics card, or VGA cloning hardware).

    However, it is also the only option that requires no additional software on the target PC whatsoever (and thus uses 0% of the target CPU's processing power) - AND the only one that will give a video with a true, unjittered framerate of 30 fps (as the capture is performed by separate hardware - although with the assumption that any clock-domain misalignment between the individual pieces of hardware is negligible).

    Actually, as I already own something like a capture card, I have already invested in a VGA converter - in expectation that it will eventually allow me to produce final "merged" videos with only 5 mins of looking for alignment point, and a single command line; but I am yet to see whether this process will work as intended. I'm also wandering how possible it will be to capture desktop as uncompressed video @ 800x600, 30 fps.
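
    As a side note: if the capture card registers itself as a Video4Linux2 device, a hedged sketch of grabbing from it uncompressed could look as follows (the device node, geometry and output name are assumptions):

    # grab raw 800x600 @ 30 fps from an assumed /dev/video0 capture device
    ffmpeg -f video4linux2 -r 30 -s 800x600 -i /dev/video0 -f rawvideo capture.raw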

    recordMyDesktop

    Well, if you run recordMyDesktop without any arguments, it first starts capturing (what looks like) raw image data into a folder like /tmp/rMD-session-7247; and after you press Ctrl-C to interrupt it, it will encode this raw image data into an .ogv. Obviously, grabbing large image data onto the same hard disk as my test software (which also moves large amounts of data) is usually a cause for an instacrash :)

    Hence, what I tried doing is to set up Samba to share a drive on the network; then on the target PC, I'd connect to this drive - and instruct recordMyDesktop to use this network drive (via gvfs) as its temporary files location:

    recordmydesktop --workdir /home/user/.gvfs/test\ on\ 192.168.1.100/capture/ --no-sound --quick-subsampling --fps 30 --overwrite -o capture.ogv 
    

    Note that, while this command will use the network location for temporary files (and thus makes it possible for recordMyDesktop to run in parallel with my software) - as soon as you hit Ctrl-C, it will start encoding and saving capture.ogv directly on the local hard drive of the target (though, at that point, I don't really care :) )
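
    For reference, a hedged sketch of how such a gvfs location gets mounted in the first place (the share name and address are taken from the example path above):

    gvfs-mount smb://192.168.1.100/test    # appears as /home/user/.gvfs/test on 192.168.1.100
    ls ~/.gvfs/                            # confirm the mount before pointing --workdir at it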

    The first of my nags with recordMyDesktop is that you cannot instruct it to keep the temporary files and skip encoding them at the end: you can use Ctrl+Alt+p for pause - or you can hit Ctrl-C quickly after the first one, to cause it to crash, which will then leave the temporary files behind (if you don't hit Ctrl-C quickly enough the second time, the program will proceed with "Cleanning up cache..." [sic]). You can then run, say:

    recordmydesktop --rescue /home/user/.gvfs/test\ on\ 192.168.1.100/capture/rMD-session-7247/
    

    ... in order to convert the raw temporary data. However, more often than not, recordMyDesktop will itself segfault in the midst of performing this "rescue". Still, the reason why I want to keep the temp files is to have an uncompressed source for the picture-in-picture montage. Note that "--on-the-fly-encoding" would avoid using temp files altogether - at the expense of using more CPU processing power (which, for me, again is a cause for crashes).

    Then, there is the framerate - obviously, you can set requested framerate using the '--fps N' option; however, that is no guarantee that you will actually obtain that framerate; for instance, I'd get:

    recordmydesktop --fps 25
    ...
    Saved 2983 frames in a total of 6023 requests
    ...
    

    ... for a capture with my test software running; which means that the actually achieved rate is more like 25*2983/6023 ≈ 12.38 fps!

    Obviously, frames are dropped - and mostly that shows up as video playback that is too fast. However, if I lower the requested fps to 12, then according to the saved/total reports, I achieve something like 11 fps; and in this case, the video playback doesn't look 'sped up'. And I still haven't tried aligning such a capture with a live video - so I have no idea whether the frames that actually were saved also have an accurate timestamp.

    VNC capture

    The VNC capture, for me, consists of running a VNC server on the 'target' PC, and running vncrec (twibright edition) on the 'recorder' PC. As the VNC server, I use vino, which is "System/Preferences/Remote Desktop (Preferences)". And apparently, even if vino's configuration may not be the easiest thing to manage, vino as a server seems not too taxing for the 'target' PC, as I haven't experienced crashes when it runs in parallel with my test software.

    On the other hand, when vncrec is capturing on the 'recorder' PC, it also raises a window showing you the 'target' desktop as it is seen in 'realtime'; when there are large updates (i.e. whole windows moving) on the 'target' - one can, quite visibly, see problems with the update/refresh rate on the 'recorder'. But, for only small updates (i.e. just a cursor moving on a static background), things seem OK.

    This makes me wonder about one of my primary questions with this post - what is it, that sets the framerate in a VNC connection?

    I haven't found a clear answer to this, but from bits and pieces of info (see refs below), I gather that:

    • The VNC server simply sends changes (screen changes, clicks, etc.) as fast as it can when it registers them; limited by the maximum network bandwidth available to the server
    • The VNC client receives those change events delayed and jittered by the network connection, and attempts to reconstruct the desktop "video" stream, again as fast as it can

    ... which means, one cannot state anything in terms of a stable, periodic frame rate (as in video).

    As far as vncrec as a client goes, the end videos I get usually are declared as 10 fps, although frames can be rather displaced/jittered (which then requires the cutting in video editors). Note that the vncrec-twibright/README states: "The sample rate of the movie is 10 by default or overriden by VNCREC_MOVIE_FRAMERATE environment variable, or 10 if not specified."; however, the manpage also states "VNCREC_MOVIE_FRAMERATE - Specifies frame rate of the output movie. Has an effect only in -movie mode. Defaults to 10. Try 24 when your transcoder vomits from 10.". And if one looks into "vncrec/sockets.c" source, one can see:

    void print_movie_frames_up_to_time(struct timeval tv)
    {
      static double framerate;
      ....
      memcpy(out, bufoutptr, buffered);
      if (appData.record)
        {
          /* writeLogHeader() itself calls gettimeofday(), so this timestamp
             is taken locally, on the 'recorder' PC (see the EDIT below) */
          writeLogHeader (); /* Writes the timestamp */
          fwrite (bufoutptr, 1, buffered, vncLog);
        }
      ....
    }
    

    ... which shows that some timestamps are written - but whether those timestamps originate from the "original" 'target' PC, or the 'recorder' one, I cannot tell. EDIT: thanks to the answer by @kanaka, I checked through vncrec/sockets.c again, and can see that it is the writeLogHeader function itself calling gettimeofday; so the timestamps it writes are local - that is, they originate from the 'recorder' PC (and hence, these timestamps do not accurately describe when the frames originated on the 'target' PC).

    In any case, it still seems to me that the server sends - and vncrec as client receives - whenever; and it is only in the process of encoding a video file from the raw capture afterwards that some form of a frame rate is set/interpolated.
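
    For concreteness, a hedged sketch of that record-then-encode flow, using the -record/-movie modes and the environment variable quoted above (the host name and file names are illustrative, and I'm assuming -movie writes its raw frames to stdout, as the README's transcoder remark suggests):

    # on the 'recorder' PC: capture the RFB stream of the 'target' display
    vncrec -record session.vnc target-pc:0
    # afterwards: re-emit the capture declared as 24 fps instead of the default 10
    VNCREC_MOVIE_FRAMERATE=24 vncrec -movie session.vnc > movie.raw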

    I'd also like to state that, on my 'target' laptop, the wired network connection is broken; so wireless is my only option for access to the router and the local network - at a far lower speed than the 100 MB/s that the router could handle on wired connections. However, if the jitter in captured frames is caused by wrong timestamps due to load on the 'target' PC, then I don't think good network bandwidth will help too much.

    Finally, as far as VNC goes, there could be other alternatives to try - such as VNCast server (promising, but requires some time to build from source, and is in "early experimental version"); or MultiVNC (although, it just seems like a client/viewer, without options for recording).

    ffmpeg with x11grab

    I haven't played with this much, but I've tried it in connection with netcat; this:

    # 'target'
    ffmpeg -f x11grab -b 8000k -r 30 -s 800x600 -i :0.0 -f rawvideo - | nc 192.168.1.100 5678
    # 'recorder'
    nc -l 0.0.0.0 5678 > raw.video
    

    ... does capture a file, but ffplay cannot read the captured file properly; while:

    # 'target'
    ffmpeg -f x11grab -b 500k -r 30 -s 800x600 -i :0.0 -f yuv4mpegpipe -pix_fmt yuv444p - | nc 192.168.1.100 5678
    # 'recorder'
    nc -l 0.0.0.0 5678 | ffmpeg -i - /path/to/samplimg%03d.png
    

    does produce .png images - but with compression artifacts (result of the compression involved with yuv4mpegpipe, I guess).

    Thus, I'm not liking ffmpeg+x11grab too much currently - but maybe I simply don't know how to set it up for my needs.
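
    Two hedged observations on the above, though (untested on the exact Lucid-era builds): raw video carries no container header, so on the receiving side ffmpeg/ffplay must be told the pixel format, frame size and rate explicitly; and yuv4mpegpipe itself is uncompressed, so the artifacts more likely come from the RGB-to-YUV conversion. Something along these lines might avoid both problems:

    # 'target' - send raw RGB frames, so no colorspace conversion occurs
    ffmpeg -f x11grab -r 30 -s 800x600 -i :0.0 -f rawvideo -pix_fmt rgb24 - | nc 192.168.1.100 5678
    # 'recorder' - declare the raw stream's format when decoding it
    nc -l 0.0.0.0 5678 > raw.rgb
    ffmpeg -f rawvideo -pix_fmt rgb24 -s 800x600 -r 30 -i raw.rgb /path/to/img%03d.png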

    *( graphics card -> DMA -> network )

    I am, admittedly, not sure something like this exists - in fact, I would wager it doesn't :) And I'm no expert here, but I speculate:

    if DMA memory transfer can be initiated from the graphics card (or its buffer that keeps the current desktop bitmap) as source, and the network adapter as destination - then in principle, it should be possible to obtain an uncompressed desktop capture with a correct (and decent) framerate. The point in using DMA transfer would be, of course, to relieve the processor from the task of copying the desktop image to the network interface (and thus, reduce the influence the capturing software can have on the processes running on the 'target' PC - especially those dealing with RAM or hard-disk).

    A suggestion like this, of course, assumes that there are massive amounts of network bandwidth (for 800x600 at 30 fps, at least 800*600*3*30 = 43,200,000 bytes/s ≈ 41 MiB/s, which should be OK for a local 100 MB/s network); plenty of hard disk space on the other PC that does the 'recording'; and, finally, software that can afterwards read that raw data and generate image sequences or videos based on it :)
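
    Sanity-checking that back-of-the-envelope figure in a shell:

    echo $((800*600*3*30)) bytes/s            # 43200000
    echo $((800*600*3*30/1048576)) MiB/s      # ~41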

    The bandwidth and hard disk demands I could live with - as long as there is guarantee both for a stable framerate and uncompressed data; which is why I'd love to hear if something like this already exists.

    -- -- -- -- -- 

    Well, I guess that was it - as brief as I could put it :) Any suggestions for tools - or process(es) - that can result in a desktop capture

    • in uncompressed format (ultimately convertible to uncompressed/lossless PNG image sequence), and
    • with a "correctly timecoded", stable framerate

    ..., that will ultimately lend itself to 'easy', single command-line processing for generating 'picture-in-picture' overlay videos - will be greatly appreciated!

    Thanks in advance for any comments,
    Cheers!


    References

    1. Experiences Producing a Screencast on Linux for CryptoTE - idlebox.net
    2. The VideoLAN Forums • View topic - VNC Client input support (like screen://)
    3. VNCServer throttles user input for slow client - Kyprianou, Mark - com.realvnc.vnc-list - MarkMail
    4. Linux FAQ - X Windows: How do I Display and Control a Remote Desktop using VNC
    5. How much bandwidth does VNC require? RealVNC - Frequently asked questions
    6. x11vnc: a VNC server for real X displays
    7. HowtoRecordVNC (an X11 session) - Debian Wiki
    8. Alternative To gtk-RecordMyDesktop in Ubuntu
    9. (Ffmpeg-user) How do I use pipes in ffmpeg
    10. (ffmpeg-devel) (PATCH) Fix segfault in x11grab when drawing Cursor on Xservers that don't support the XFixes extension

    Solution

    You should get a badge for such a long, well-thought-out question. ;-)

    In answer to your primary question: VNC uses the RFB protocol, which is a remote frame buffer protocol (thus the acronym), not a streaming video protocol. The VNC client sends a FrameBufferUpdateRequest message to the server, which contains a viewport region that the client is interested in, plus an incremental flag. If the incremental flag is not set, then the server will respond with a FrameBufferUpdate message that contains the content of the region requested. If the incremental flag is set, then the server may respond with a FrameBufferUpdate message that contains whatever parts of the requested region have changed since the last time that region was sent to the client.

    The definition of how requests and updates interact is not crisply defined. The server won't necessarily respond to every request with an update if nothing has changed. If the server has multiple requests queued from the client it is also allowed to send a single update in response. In addition, the client really needs to be able to respond to an asynchronous update message from the server (not in response to a request) otherwise the client will fall out of sync (because RFB is not a framed protocol).

    Often clients are simply implemented to send incremental update requests for the entire frame buffer viewport at a periodic interval and handle any server update messages as they arrive (i.e. no attempt is made to tie requests and updates together).
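
    To make that concrete, here is a hedged illustration of the 10-byte FrameBufferUpdateRequest wire format (per the RFB specification: message type 3, an incremental flag, then x, y, width and height as big-endian U16 values; sending it raw like this is purely illustrative, since a real client must first complete the RFB handshake):

    # incremental update request for a full 800x600 viewport at (0,0):
    # type=3, incremental=1, x=0, y=0, w=0x0320 (800), h=0x0258 (600)
    printf '\x03\x01\x00\x00\x00\x00\x03\x20\x02\x58' | nc target-pc 5900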

    Here is a description of FrameBufferUpdateRequest messages.
