C# Screen streaming program


Question

Lately, I have been working on a simple screen sharing program.

Actually, the program works over the TCP protocol and uses the Desktop Duplication API - a cool service that supports very fast screen capturing and also provides information about MovedRegions (areas that only changed their position on the screen but still exist) and UpdatedRegions (changed areas).

Desktop Duplication has 2 important properties - 2 byte arrays: an array for the previouspixels and a NewPixels array. Every 4 bytes represent a pixel in RGBA form, so for example if my screen is 1920 x 1080 the buffer size is 1920 x 1080 * 4.

Below are the important highlights of my strategy:

  1. In the initial state (the first time) I send the entire pixel buffer (in my case it's 1920 x 1080 * 3) - the alpha component is always 255 on screens :)

  2. From then on, I iterate over the UpdatedRegions (a rectangle array), send each region's bounds and XOR the pixels in it, something like this:

      writer.Position = 0;
      var n = frame._newPixels;
      var w = 1920 * 4; //frame stride in bytes.
      var p = frame._previousPixels;
      foreach (var region in frame.UpdatedRegions)
      {
          writer.WriteInt(region.Top);
          writer.WriteInt(region.Height);
          writer.WriteInt(region.Left);
          writer.WriteInt(region.Width);
          for (int y = region.Top, yOffset = y * w; y < region.Bottom; y++, yOffset += w)
          {
              for (int x = region.Left, xOffset = x * 4, i = yOffset + xOffset; x < region.Right; x++, i += 4)
              {
                  writer.WriteByte(n[i] ^ p[i]); //'n' is the newpixels buffer and 'p' is the previous; XORing for differences.
                  writer.WriteByte(n[i + 1] ^ p[i + 1]);
                  writer.WriteByte(n[i + 2] ^ p[i + 2]);
              }
          }
      }


  3. I compress the buffer using an LZ4 wrapper written in C# (refer to lz4.NET@github).
  4. Then, I write the data to the NetworkStream.
  5. I merge the areas on the receiver side to get the updated image - this is not our problem today :)

'writer' is an instance of the 'QuickBinaryWriter' class I wrote (simply to reuse the same buffer again).

    public class QuickBinaryWriter
    {
        private readonly byte[] _buffer;
        private int _position;

        public QuickBinaryWriter(byte[] buffer)
        {
            _buffer = buffer;
        }

        public int Position
        {
            get { return _position; }
            set { _position = value; }
        }

        public void WriteByte(byte value)
        {
            _buffer[_position++] = value;
        }

        public void WriteInt(int value)
        {
            byte[] arr = BitConverter.GetBytes(value);
            for (int i = 0; i < arr.Length; i++)
                WriteByte(arr[i]);
        }
    }
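The merge on the receiver side (step 5) is declared out of scope in the question, but for completeness, here is a minimal sketch of what it could look like. `QuickBinaryReader` and `RegionMerger` are hypothetical names mirroring the writer above, and assume the same wire layout (Top, Height, Left, Width as ints, then 3 XORed bytes per pixel):

```csharp
using System;

// Hypothetical counterpart to QuickBinaryWriter: reads the region header and
// XORs the received bytes back onto the previous frame to rebuild the new one.
public class QuickBinaryReader
{
    private readonly byte[] _buffer;
    private int _position;

    public QuickBinaryReader(byte[] buffer)
    {
        _buffer = buffer;
    }

    public int Position
    {
        get { return _position; }
        set { _position = value; }
    }

    public byte ReadByte()
    {
        return _buffer[_position++];
    }

    public int ReadInt()
    {
        int value = BitConverter.ToInt32(_buffer, _position);
        _position += 4;
        return value;
    }
}

public static class RegionMerger
{
    // 'previous' is the last reconstructed frame (stride = screen width * 4);
    // after the call it holds the new frame for the decoded region.
    public static void ApplyRegion(QuickBinaryReader reader, byte[] previous, int stride)
    {
        int top = reader.ReadInt();
        int height = reader.ReadInt();
        int left = reader.ReadInt();
        int width = reader.ReadInt();

        for (int y = top; y < top + height; y++)
        {
            int i = y * stride + left * 4;
            for (int x = left; x < left + width; x++, i += 4)
            {
                previous[i]     ^= reader.ReadByte(); // diff ^ previous == new
                previous[i + 1] ^= reader.ReadByte();
                previous[i + 2] ^= reader.ReadByte();
                // alpha untouched - the sender never transmits it (always 255).
            }
        }
    }
}
```

Since `diff ^ previous == new`, XORing the received bytes into the previous frame in place reconstructs the new frame without needing a second buffer.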

From many measurements, I've seen that the data sent is really huge, and sometimes for a single frame update the data can get up to 200 KB (after compression!). Let's be honest - 200 KB is really nothing, but if I want to stream the screen smoothly and be able to watch at a high FPS rate, I will have to work on this a little bit - to minimize the network traffic and the bandwidth usage.

I'm looking for suggestions and creative ideas to improve the efficiency of the program - mainly the data sent over the network (by packing it in other ways or any other idea). I'll appreciate any help and ideas. Thanks.


Solution

For your screen of 1920 x 1080, with 4-byte color, you are looking at approximately 8 MB per frame. With 20 FPS, you have 160 MB/s. So getting from 8 MB to 200 KB (4 MB/s @ 20 FPS) is a great improvement.
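As a quick sanity check of these numbers (a throwaway snippet, not part of the answer):

```csharp
using System;

// Raw, uncompressed cost of a 1920 x 1080, 32-bit frame stream at 20 FPS.
int width = 1920, height = 1080, bytesPerPixel = 4, fps = 20;
long bytesPerFrame = (long)width * height * bytesPerPixel; // 8,294,400 bytes (~7.9 MB)
long bytesPerSecond = bytesPerFrame * fps;                 // ~158 MB/s
Console.WriteLine($"{bytesPerFrame / (1024.0 * 1024.0):F1} MB/frame, {bytesPerSecond / (1024.0 * 1024.0):F0} MB/s");
```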



I would like to draw your attention to certain aspects that I am not sure you are focusing on, and hopefully it helps.




  1. The more you compress your screen image, the more processing it might need.

  2. You actually need to focus on compression mechanisms designed for series of continuously changing images, similar to video codecs (sans audio though). For example: H.264.

  3. Remember, you need to use some kind of real-time protocol for transferring your data. The idea behind that is, if one of your frames makes it to the destination machine with a lag, you might as well drop the next few frames to play catch-up. Otherwise you will be in a perennially lagging situation, which I doubt the users are going to enjoy.

  4. You can always sacrifice quality for performance. The simplest such mechanism that you see in similar technologies (like MS Remote Desktop, VNC, etc.) is to send 8-bit color (ARGB, 2 bits each) instead of the 3-byte color that you are using.

  5. Another way to improve your situation would be to focus on a specific rectangle on the screen that you want to stream, instead of streaming the whole desktop. This will reduce the size of the frame itself.

  6. Another way would be to scale your screen image to a smaller image before transmitting and then scale it back to normal before displaying.

  7. After sending the initial screen, you can always send the diff between newpixels and previouspixels. Needless to say, the original screen and the diff screens will all be LZ4 compressed/decompressed. Every so often you should send the full array instead of the diff, if you use some lossy algorithm to compress the diff.

  8. Does UpdatedRegions have overlapping areas? Can that be optimized to not send duplicate pixel information?

The ideas above can be applied one on top of the other to get a better user experience. Ultimately, it depends on the specifics of your application and end-users.
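Point 4 can be sketched like this - a hypothetical packer that squeezes a 32-bit pixel into one byte with 2 bits per ARGB channel (the type and method names are my own, and the exact bit layout is an assumption):

```csharp
using System;

public static class ColorQuantizer
{
    // Packs a 32-bit BGRA pixel into one byte with 2 bits per channel
    // (the A-R-G-B 2:2:2:2 split suggested in point 4).
    public static byte Pack(byte b, byte g, byte r, byte a)
    {
        return (byte)(((a >> 6) << 6) | ((r >> 6) << 4) | ((g >> 6) << 2) | (b >> 6));
    }

    // Expands each 2-bit channel back to 8 bits (levels 0, 85, 170, 255).
    public static void Unpack(byte packed, out byte b, out byte g, out byte r, out byte a)
    {
        a = Expand((packed >> 6) & 3);
        r = Expand((packed >> 4) & 3);
        g = Expand((packed >> 2) & 3);
        b = Expand(packed & 3);
    }

    private static byte Expand(int twoBits)
    {
        return (byte)(twoBits * 85);
    }
}
```

This trades heavy color banding for a fixed 4x size reduction before the LZ4 pass even runs.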
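Point 7's "every so often send the full array" can be driven by a trivial scheduler like the following (a hypothetical sketch; the interval is a tuning knob, not a value from the answer):

```csharp
using System;

// Every Nth frame is sent whole (a keyframe), the rest as diffs, so a lossy
// diff compressor cannot accumulate drift forever on the receiver.
public class KeyframeScheduler
{
    private readonly int _interval;
    private int _counter;

    public KeyframeScheduler(int interval)
    {
        _interval = interval;
    }

    // Returns true when the next frame should be a full keyframe.
    public bool NextFrameIsKeyframe()
    {
        bool key = _counter == 0;
        _counter = (_counter + 1) % _interval;
        return key;
    }
}
```

A real implementation might also force a keyframe when a new viewer connects, since a diff is useless without the frame it is relative to.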
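For point 8, one simple (hypothetical) approach is to repeatedly union any two intersecting regions before encoding, so no pixel is sent twice. The union may cover a few extra unchanged pixels, but the XOR diff turns those into zeros that LZ4 compresses almost for free:

```csharp
using System;
using System.Collections.Generic;

public struct Region
{
    public int Left, Top, Right, Bottom; // Right/Bottom are exclusive

    public bool Intersects(Region o) =>
        Left < o.Right && o.Left < Right && Top < o.Bottom && o.Top < Bottom;

    public Region Union(Region o) => new Region
    {
        Left = Math.Min(Left, o.Left),
        Top = Math.Min(Top, o.Top),
        Right = Math.Max(Right, o.Right),
        Bottom = Math.Max(Bottom, o.Bottom)
    };
}

public static class RegionDeduplicator
{
    // Merges regions until no two intersect; guarantees no duplicate pixels.
    public static List<Region> MergeOverlaps(List<Region> regions)
    {
        var result = new List<Region>(regions);
        bool merged = true;
        while (merged)
        {
            merged = false;
            for (int i = 0; i < result.Count && !merged; i++)
                for (int j = i + 1; j < result.Count && !merged; j++)
                    if (result[i].Intersects(result[j]))
                    {
                        result[i] = result[i].Union(result[j]);
                        result.RemoveAt(j);
                        merged = true;
                    }
        }
        return result;
    }
}
```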



