Why the difference in native camera resolution -vs- getUserMedia on iPad / iOS?


Question


I've built this web app for iPads that uses getUserMedia and streams the resulting video through to a video element on a website. The model I'm using is an iPad Air with a rear-camera resolution of 1936x2592. Currently the constraints for the getUserMedia method are:

video: {
    facingMode: 'environment',
    width: { ideal: 1936 },
    height: { ideal: 2592 }
}


However, when I pull in the video it looks fairly grainy. Digging through the console log to grab the stream, the video track, and then the settings of that track, it appears that the video's resolution has been scaled down to 720x1280. Is there any particular reason for this? Is there a max resolution that webRTC/getUserMedia can handle?

Answer

Edit - ImageCapture


If 60FPS video isn't a hard requirement and you have leeway on compatibility, you can poll ImageCapture to emulate a video feed and receive much clearer images from the camera.


You would have to check for client-side support and then potentially fall back on MediaCapture.



The API enables control over camera features such as zoom, brightness, contrast, ISO and white balance. Best of all, Image Capture allows you to access the full resolution capabilities of any available device camera or webcam. Previous techniques for taking photos on the Web have used video snapshots (MediaCapture rendered to a Canvas), which are lower resolution than that available for still images.

https://developers.google.com/web/updates/2016/12/imagecapture


and its polyfill:

https://github.com/GoogleChromeLabs/imagecapture-polyfill
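The polling approach described above can be sketched roughly as follows. This is an illustrative sketch, not code from the original answer: the function names (`hasImageCapture`, `startFrameLoop`) and the 100 ms interval are assumptions, and `win` is injected (normally `window`) purely to make the feature detection testable.

```javascript
// Sketch: poll ImageCapture for full-resolution frames, falling back to
// drawing the <video> element onto a canvas when ImageCapture is missing.
// Illustrative names; `win` is normally `window`.

function hasImageCapture(win) {
  // Feature-detect the ImageCapture constructor.
  return typeof win.ImageCapture === 'function';
}

async function startFrameLoop(video, canvas, win) {
  const stream = await win.navigator.mediaDevices.getUserMedia({
    video: { facingMode: 'environment' },
  });
  const track = stream.getVideoTracks()[0];
  const ctx = canvas.getContext('2d');

  if (hasImageCapture(win)) {
    const capture = new win.ImageCapture(track);
    // grabFrame() resolves with an ImageBitmap at the track's native size.
    setInterval(async () => {
      const bitmap = await capture.grabFrame();
      canvas.width = bitmap.width;
      canvas.height = bitmap.height;
      ctx.drawImage(bitmap, 0, 0);
    }, 100); // ~10 FPS; polling ImageCapture is not suited to 60 FPS video
  } else {
    // Fallback: MediaCapture rendered to a canvas (lower resolution).
    video.srcObject = stream;
    setInterval(() => ctx.drawImage(video, 0, 0), 100);
  }
}
```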

MediaCapture


Bit of a long answer... mostly learned from looking at AR web and native apps over the last few years.


If you have a camera which allows only 1920x1080, 1280x720, and 640x480 resolutions, the browser's Media Capture implementation can emulate a 480x640 feed from the 1280x720 one. From testing (primarily Chrome), the browser typically scales the 720 dimension down to 640 and then crops the center. Sometimes, when I have used virtual camera software, I have seen Chrome add artificial black padding around an unsupported resolution. The client sees a success message and a feed of the right dimensions, but a person would see a degradation in quality. Because of this emulation you cannot guarantee the feed is correct or unscaled; however, it will typically have the dimensions requested.
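The scale-then-center-crop behaviour described above can be sketched as arithmetic. This is illustrative only, not Chrome's actual implementation:

```javascript
// Illustrative: emulate a requested WxH from a native feed by scaling so
// the native frame covers the requested frame, then center-cropping.
function emulateResolution(native, requested) {
  // Scale factor that makes the native feed cover the requested frame.
  const scale = Math.max(
    requested.width / native.width,
    requested.height / native.height,
  );
  const scaledW = Math.round(native.width * scale);
  const scaledH = Math.round(native.height * scale);
  return {
    scaled: { width: scaledW, height: scaledH },
    // Center-crop offsets into the scaled frame.
    cropX: Math.floor((scaledW - requested.width) / 2),
    cropY: Math.floor((scaledH - requested.height) / 2),
  };
}
```

For the example above, a 480x640 request against a 1280x720 feed scales the 720 side down to exactly 640 and crops the excess width from the center.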


You can read about constraints here. It basically boils down to: give me a resolution as close to x as possible. The browser then determines, by its own implementation, whether to reject the constraints and throw an error, deliver the resolution, or emulate it.
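You can see what the browser actually chose by reading the track's settings back after the request, which is what the questioner did via the console. A minimal sketch, with `md` injected (normally `navigator.mediaDevices`) so the logic is testable outside a browser:

```javascript
// Sketch: request an "ideal" resolution, then inspect what the browser
// actually delivered via MediaStreamTrack.getSettings().
async function getActualResolution(md) {
  const stream = await md.getUserMedia({
    video: { width: { ideal: 1936 }, height: { ideal: 2592 } },
  });
  // These are the settings the browser settled on, which may be a
  // scaled/emulated feed (e.g. 720x1280) rather than what was asked for.
  const { width, height } = stream.getVideoTracks()[0].getSettings();
  return { width, height };
}
```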


More information on this design is detailed in the mediacapture specification. In particular:



The RTCPeerConnection is an interesting object because it acts simultaneously as both a sink and a source for over-the-network streams. As a sink, it has source transformational capabilities (e.g., lowering bit-rates, scaling-up / down resolutions, and adjusting frame-rates), and as a source it could have its own settings changed by a track source.


The main reason for this is to allow n clients access to the same media source even though each may require a different resolution, bit rate, etc.; the emulation/scaling/transformation attempts to solve this problem. The downside is that you never truly know what the source resolution is.


Thus, to answer your specific question: Apple has determined within Safari which resolutions should be scaled, where and when. If you are not specific enough, you may encounter this grainy appearance. I have found that if you use constraints with min, max, and exact, you get a clearer iOS camera feed. If the resolution is not supported, the browser will either try to emulate it or reject it.
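A sketch of the stricter constraints the answer suggests. The helper name is illustrative; the key point is that an `exact` constraint makes getUserMedia reject with an OverconstrainedError rather than silently emulate, so you can fall back deliberately:

```javascript
// Sketch: build strict constraints (min, max, exact) to discourage the
// browser from silently scaling the feed. Illustrative helper name.
function buildStrictConstraints(width, height) {
  return {
    video: {
      facingMode: 'environment',
      width: { min: width, max: width, exact: width },
      height: { min: height, max: height, exact: height },
    },
  };
}

// Usage (in a browser):
// navigator.mediaDevices.getUserMedia(buildStrictConstraints(1936, 2592))
//   .catch((err) => {
//     // OverconstrainedError: the camera can't do this resolution;
//     // retry here with looser "ideal" constraints.
//   });
```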

