ARKit mirror/flip camera layer


Problem Description


I've made an iPad ARKit application that shows a 3D object on the floor, and it works great (love ARKit!). However, the iPad is going to be connected to a TV through HDMI so that people can see themselves on the TV next to the object. The issue I have now is that the video is mirrored on the TV, and I'm not able to find the specific layer or setting to flip the video stream and/or the ARConfig... any help would be greatly appreciated. Is it even possible to do so without too much performance impact?

P.S it's in Swift :)

Update 1: I've mirrored the sceneView like so: sceneView.transform = CGAffineTransform(scaleX: -1, y: 1), but this seems to have quite a performance impact...
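For reference, the transform approach described in the update would look something like this (a sketch; sceneView is assumed to be the ARSCNView outlet of the view controller):

```swift
import ARKit

class ViewController: UIViewController {
    @IBOutlet var sceneView: ARSCNView!

    override func viewDidLoad() {
        super.viewDidLoad()
        // Flip the whole view horizontally around its center.
        // This mirrors the rendered output, but forces an extra
        // offscreen compositing step, which is the likely source
        // of the performance hit mentioned above.
        sceneView.transform = CGAffineTransform(scaleX: -1, y: 1)
    }
}
```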

Thanks in advance.

Solution

You can achieve this using SCNTechnique and Metal shaders.

The gist of it is to create a vertex shader that creates a fullscreen quad and mirrors the u coordinate:

vertex VertexOut mirrorVertex(VertexIn in [[stage_in]])
{
    VertexOut out;
    out.position = in.position;
    // Mirror the U coordinate: (1.0 - ..)
    out.uv = float2(1.0 - (in.position.x + 1.0) * 0.5, 1.0 - (in.position.y + 1.0) * 0.5);
    return out;
}

The fragment shader is a simple passthrough:

fragment float4 mirrorFragment(VertexOut vert [[stage_in]],
                                texture2d<float, access::sample> colorSampler [[texture(0)]])
{
    constexpr sampler s = sampler(coord::normalized,
                                  address::clamp_to_edge,
                                  filter::linear);
    return colorSampler.sample(s, vert.uv);
}

You can create an SCNTechnique by first creating a technique definition in a .plist file. Here you specify the passes, shaders, input and output targets, and a sequence. In this case the definition is quite simple:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
    <dict>
        <key>passes</key>
        <dict>
            <key>pass_mirror_camera</key>
            <dict>
                <key>metalVertexShader</key>
                <string>mirrorVertex</string>
                <key>metalFragmentShader</key>
                <string>mirrorFragment</string>
                <key>inputs</key>
                <dict>
                    <key>colorSampler</key>
                    <string>COLOR</string>
                </dict>
                <key>outputs</key>
                <dict>
                    <key>color</key>
                    <string>COLOR</string>
                </dict>
                <key>draw</key>
                <string>DRAW_QUAD</string>
            </dict>
        </dict>
        <key>sequence</key>
        <array>
            <string>pass_mirror_camera</string>
        </array>
    </dict>
</plist>
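If you'd rather not ship a .plist, the same definition can be expressed as an in-code dictionary and passed to SCNTechnique(dictionary:) directly (a sketch using the same pass and shader names as above):

```swift
import SceneKit

// The same technique definition as the plist above, built in code.
let techniqueDef: [String: Any] = [
    "passes": [
        "pass_mirror_camera": [
            "draw": "DRAW_QUAD",                    // fullscreen quad pass
            "metalVertexShader": "mirrorVertex",
            "metalFragmentShader": "mirrorFragment",
            "inputs": ["colorSampler": "COLOR"],    // scene color as the input texture
            "outputs": ["color": "COLOR"]           // write back to the color target
        ]
    ],
    "sequence": ["pass_mirror_camera"]
]
let technique = SCNTechnique(dictionary: techniqueDef)
```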

You also need to create a Metal shader (.metal) file containing the vertex and fragment shader:

//
//  MirrorShaders.metal
//  MirrorCamera
//
//  Created by Dennis Ippel on 14/05/2019.
//  Copyright © 2019 Dennis Ippel. All rights reserved.
//
#include <metal_stdlib>
using namespace metal;
#include <SceneKit/scn_metal>

struct VertexIn
{
    float4 position [[attribute(SCNVertexSemanticPosition)]];
};

struct VertexOut
{
    float4 position [[position]];
    float2 uv;
};

vertex VertexOut mirrorVertex(VertexIn in [[stage_in]])
{
    VertexOut out;
    out.position = in.position;
    // Mirror the U coordinate: (1.0 - ..)
    out.uv = float2(1.0 - (in.position.x + 1.0) * 0.5, 1.0 - (in.position.y + 1.0) * 0.5);
    return out;
}

fragment float4 mirrorFragment(VertexOut vert [[stage_in]],
                                texture2d<float, access::sample> colorSampler [[texture(0)]])
{
    constexpr sampler s = sampler(coord::normalized,
                                  address::clamp_to_edge,
                                  filter::linear);
    return colorSampler.sample(s, vert.uv);
}

You can then put everything together in your view controller and assign the technique to your SCNView instance:

if let path = Bundle.main.path(forResource: "MirrorCamera", ofType: "plist"),
   let dict = NSDictionary(contentsOfFile: path) as? [String: AnyObject] {
    sceneView.technique = SCNTechnique(dictionary: dict)
}
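Put together, a minimal view controller might look like this (a sketch; it assumes the MirrorCamera.plist and .metal files above are part of the app target, and that sceneView is an ARSCNView outlet):

```swift
import ARKit
import SceneKit

class ViewController: UIViewController {
    @IBOutlet var sceneView: ARSCNView!

    override func viewDidLoad() {
        super.viewDidLoad()
        // Load the technique definition and assign it; SceneKit resolves
        // mirrorVertex/mirrorFragment from the app's default Metal library.
        if let path = Bundle.main.path(forResource: "MirrorCamera", ofType: "plist"),
           let dict = NSDictionary(contentsOfFile: path) as? [String: AnyObject] {
            sceneView.technique = SCNTechnique(dictionary: dict)
        }
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        // Standard ARKit session setup.
        let configuration = ARWorldTrackingConfiguration()
        sceneView.session.run(configuration)
    }
}
```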

The result is a horizontally mirrored camera image on the external display.
