ARKit mirror/flip camera layer


Problem Description


I've made an iPad ARKit application that shows a 3D object on the floor, and it works great — love ARKit! However, the iPad is going to be connected to a TV through HDMI so that people can see themselves on the TV next to the object. The issue I have now is that the video is mirrored on the TV, and I'm not able to find the specific layer or setting to flip the video stream and/or the ARConfig... Any help would be greatly appreciated. Is it even possible to do so without too much performance impact?

P.S. It's in Swift :)

Update 1: I've mirrored the sceneView like so: sceneView.transform = CGAffineTransform(scaleX: -1, y: 1), but this seems to have quite a performance impact...
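For context, that workaround amounts to something like the following sketch (the sceneView outlet name and view controller setup are assumptions, not part of the original question):

```swift
import UIKit
import ARKit

class ViewController: UIViewController {
    @IBOutlet var sceneView: ARSCNView!

    override func viewDidLoad() {
        super.viewDidLoad()
        // A negative x scale mirrors the whole view horizontally.
        // Note this also mirrors the on-device display, and the extra
        // per-frame compositing may explain the performance hit.
        sceneView.transform = CGAffineTransform(scaleX: -1, y: 1)
    }
}
```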

Thanks in advance.

Solution

You can achieve this using SCNTechnique and Metal shaders.

The gist of it is to create a vertex shader that creates a fullscreen quad and mirrors the u coordinate:

vertex VertexOut mirrorVertex(VertexIn in [[stage_in]])
{
    VertexOut out;
    out.position = in.position;
    // Mirror the u coordinate (1.0 - ..) and flip v for Metal's
    // top-left texture coordinate origin.
    out.uv = float2(1.0 - (in.position.x + 1.0) * 0.5, 1.0 - (in.position.y + 1.0) * 0.5);
    return out;
}

The fragment shader is a simple passthrough shader:

fragment float4 mirrorFragment(VertexOut vert [[stage_in]],
                                texture2d<float, access::sample> colorSampler [[texture(0)]])
{
    constexpr sampler s = sampler(coord::normalized,
                                  address::clamp_to_edge,
                                  filter::linear);
    return colorSampler.sample(s, vert.uv);
}

You can create an SCNTechnique by first creating a technique definition in a .plist file. Here you specify the passes, shaders, input and output targets, and a sequence. In this case the definition is quite simple:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
    <dict>
        <key>passes</key>
        <dict>
            <key>pass_mirror_camera</key>
            <dict>
                <key>metalVertexShader</key>
                <string>mirrorVertex</string>
                <key>metalFragmentShader</key>
                <string>mirrorFragment</string>
                <key>inputs</key>
                <dict>
                    <key>colorSampler</key>
                    <string>COLOR</string>
                </dict>
                <key>outputs</key>
                <dict>
                    <key>color</key>
                    <string>COLOR</string>
                </dict>
                <key>draw</key>
                <string>DRAW_QUAD</string>
            </dict>
        </dict>
        <key>sequence</key>
        <array>
            <string>pass_mirror_camera</string>
        </array>
    </dict>
</plist>

You also need to create a Metal shader (.metal) file containing the vertex and fragment shader:

//
//  MirrorShaders.metal
//  MirrorCamera
//
//  Created by Dennis Ippel on 14/05/2019.
//  Copyright © 2019 Dennis Ippel. All rights reserved.
//
#include <metal_stdlib>
using namespace metal;
#include <SceneKit/scn_metal>

struct VertexIn
{
    float4 position [[attribute(SCNVertexSemanticPosition)]];
};

struct VertexOut
{
    float4 position [[position]];
    float2 uv;
};

vertex VertexOut mirrorVertex(VertexIn in [[stage_in]])
{
    VertexOut out;
    out.position = in.position;
    // Mirror the u coordinate (1.0 - ..) and flip v for Metal's
    // top-left texture coordinate origin.
    out.uv = float2(1.0 - (in.position.x + 1.0) * 0.5, 1.0 - (in.position.y + 1.0) * 0.5);
    return out;
}

fragment float4 mirrorFragment(VertexOut vert [[stage_in]],
                                texture2d<float, access::sample> colorSampler [[texture(0)]])
{
    constexpr sampler s = sampler(coord::normalized,
                                  address::clamp_to_edge,
                                  filter::linear);
    return colorSampler.sample(s, vert.uv);
}

You can then put everything together in your view controller and assign the technique to your SCNView instance:

if let path = Bundle.main.path(forResource: "MirrorCamera", ofType: "plist"),
   let dict = NSDictionary(contentsOfFile: path) as? [String: AnyObject] {
    let technique = SCNTechnique(dictionary: dict)
    sceneView.technique = technique
}
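Alternatively, the same definition can be built directly in code rather than loaded from a plist — a sketch assuming the pass and shader names used above:

```swift
// Technique dictionary equivalent to the MirrorCamera.plist definition.
let techniqueDef: [String: Any] = [
    "passes": [
        "pass_mirror_camera": [
            "draw": "DRAW_QUAD",
            "metalVertexShader": "mirrorVertex",
            "metalFragmentShader": "mirrorFragment",
            "inputs": ["colorSampler": "COLOR"],
            "outputs": ["color": "COLOR"]
        ]
    ],
    "sequence": ["pass_mirror_camera"]
]
sceneView.technique = SCNTechnique(dictionary: techniqueDef)
```

This avoids the bundle lookup and keeps the technique definition next to the shader code; which form is preferable is a matter of taste.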

Resulting in a horizontally mirrored camera feed.
