Scaling AR pictures based on the distance from the camera


Problem Description

I’m developing an augmented reality iPhone app.

What it should basically do is display pictures assigned to geographical locations when you look at them through the camera. Each such picture may be understood as a billboard which has its geographical position and a heading (understood as the angle between its plane and the north direction axis).

The goal is to make these billboards display more or less as if they were physical objects. They should be larger if you are close to them and smaller when you are farther away. They should also appear in a proper perspective when you don't stand directly in front of them.

I think I have achieved that goal more or less. By measuring the initial heading from the iPhone to a picture I can decide about the rotation angle of the pictures as viewed by the camera (to see them in a proper perspective).

However, when it comes to scaling them based on the distance from the phone, I think I screwed up my approach. I made an assumption that the maximum view distance is, let's say, 200 m. Then billboards 100 m from the phone are displayed at 50% of their original size. That's it. A linear scaling based on the maximum distance.

What I miss with this approach is the size of the billboards understood as physical objects. How they are displayed on the screen depends only on their size in pixels. That means the display's resolution becomes a factor in how you see them. So I assume that if you had two phones with the same screen size but different resolutions, the same picture would be displayed at different sizes on each. Am I right?

Then finally, my question is how to approach scaling pictures to make them look good on the AR view?

I think I should take some camera parameters into consideration. When a 10x10 cm object is just in front of the camera it may cover the whole screen. But when you put it a few metres farther, it becomes a minor detail. Then how to approach scaling? If I decide to assign physical dimensions to my virtual billboards, then how to scale them based on the distance from the camera?

Am I right that I should assign each picture physical dimensions in metres (regardless of its size in pixels) and display it based on those dimensions and some camera-related scaling factor?

Could you please help me on that? Any clues will be helpful. Thank you!

Answer

I think I managed to solve my problem. Let me explain how I did it, in case it might be of use to others. If you find this approach wrong, I will be grateful for your feedback or any further clues.

I decided to assign physical dimensions in metres to my virtual billboards. This discussion helped me find out the parameters of the iPhone 4 camera: focal length and the dimensions of the CCD sensor. What's more, these values also helped me calculate a proper FOV for my AR app (see Calculating a camera's angle of view).

This website helped me calculate the size in millimetres of the image a physical object produces on a CCD sensor. So if my billboards have a width and height in metres, and their distance from the camera and the camera's focal length are known, I can calculate their size on the sensor:

(Focal Length * Object Dimension) / Lens-to-object distance = Image Size (on the sensor)

// Image size on the sensor from the formula above:
// (focal length * object dimension) / lens-to-object distance.
double CalculatePhysicalObjectImageDimensionOnCCD(double cameraFocalLength_mm, double physicalObjectDimension_m, double distanceFromPhysicalObject_m)
{
    // Convert metres to millimetres so all terms share the same unit.
    double physicalObjectDimension_mm = physicalObjectDimension_m * 1000;
    double distanceFromPhysicalObject_mm = distanceFromPhysicalObject_m * 1000;
    return (cameraFocalLength_mm * physicalObjectDimension_mm) / distanceFromPhysicalObject_mm;
}

I have little knowledge on that matter so I’m not sure if the approach I took then is OK, but I just decided to calculate how much larger the iPhone screen is compared to the dimensions of the CCD sensor. So by a simple mathematical operation I get a sensor-to-screen size ratio. Because the width-to-height ratio of the sensor and the screen seem to be different, I calculated the ratio in a kind of cranky way:

// Ratio of the geometric means of the screen and sensor dimensions --
// a compromise, since the two have different aspect ratios.
double GetCCDToScreenSizeRatio(double sensorWidth, double sensorHeight, double screenWidth, double screenHeight)
{
    return sqrt(screenWidth * screenHeight) / sqrt(sensorWidth * sensorHeight);
}

Then the ratio I get can be treated as a multiplier. First I calculate a dimension of my virtual billboard on the sensor and then multiply it by the ratio. This way I get the actual size of the billboard in pixels. That’s it. So when I call the function below just by providing the width of my billboard and the distance from it, it returns the width in pixels of the billboard as viewed on the screen. Same for the height of the billboard to get both dimensions.

const double CCD_DIM_LONGER_IPHONE4 = 4.592; //mm
const double CCD_DIM_SHORTER_IPHONE4 = 3.450; //mm
const double FOCAL_LENGTH_IPHONE4 = 4.28; //mm

// General version: size on the sensor scaled by the sensor-to-screen ratio
// gives the size in pixels as viewed on the screen.
double CalculatePhysicalObjectImageDimensionOnScreen(double cameraFocalLength_mm, double physicalObjectDimension_m, double distanceFromPhysicalObject_m, double ccdSensorWidth, double ccdSensorHeight, double screenWidth, double screenHeight)
{
    double ccdToScreenSizeRatio = GetCCDToScreenSizeRatio(ccdSensorWidth, ccdSensorHeight, screenWidth, screenHeight);
    double dimensionOnCcd = CalculatePhysicalObjectImageDimensionOnCCD(cameraFocalLength_mm, physicalObjectDimension_m, distanceFromPhysicalObject_m);

    return dimensionOnCcd * ccdToScreenSizeRatio;
}

// iPhone 4 convenience wrapper using the hard-coded camera constants above.
// (Defined after the general function so the call is declared before use.)
double CalculatePhysicalObjectImageDimensionOnScreen_iPhone4(double physicalObjectDimension_m, double distanceFromPhysicalObject_m)
{
    double screenWidth = [UIScreen mainScreen].bounds.size.width;
    double screenHeight = [UIScreen mainScreen].bounds.size.height;

    return CalculatePhysicalObjectImageDimensionOnScreen(FOCAL_LENGTH_IPHONE4, physicalObjectDimension_m, distanceFromPhysicalObject_m, CCD_DIM_LONGER_IPHONE4, CCD_DIM_SHORTER_IPHONE4, screenWidth, screenHeight);
}

It seems that it works perfectly compared to my previous, stupid approach of linear scaling. I also noticed, by the way, that it is really important to know the FOV of your camera when registering virtual objects on an AR view. Here's how to calculate the FOV based on the CCD sensor dimensions and the focal length.

It’s so difficult to find these values anywhere! I wonder why they are not accessible programmatically (at least my research showed me that they are not). It seems that it is necessary to prepare hard-coded values and then check the model of the device the app is running on to decide which of the values to choose when doing all the calculations above :-/.
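A minimal sketch of such a hard-coded per-model lookup (the struct, the function name, and returning NULL for unknown devices are my illustrative choices; only the iPhone 4 row uses values from this answer):

```c
#include <stddef.h>
#include <string.h>

// Hard-coded camera parameters per device model, since they are not
// accessible programmatically.
typedef struct {
    const char *model;
    double sensorWidth_mm;
    double sensorHeight_mm;
    double focalLength_mm;
} CameraSpec;

static const CameraSpec kCameraSpecs[] = {
    { "iPhone4", 4.592, 3.450, 4.28 },
    // add one row per supported device model
};

// Look up the spec for a device model string; returns NULL for unknown
// devices so the caller can fall back to a sensible default.
const CameraSpec *CameraSpecForModel(const char *model)
{
    for (size_t i = 0; i < sizeof(kCameraSpecs) / sizeof(kCameraSpecs[0]); i++) {
        if (strcmp(kCameraSpecs[i].model, model) == 0) {
            return &kCameraSpecs[i];
        }
    }
    return NULL;
}
```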
