How to Set Up Video Frame Capture Using BitmapData


Question


I'm implementing an Augmented Reality application for Android using Flash. In order to get the application working on my Android phone (a Nexus One), the phone camera must be activated as well. So I need two layers: one for the background, which is the feed from my phone camera, and another one on top of it, which in this case is the view from Away3D.


So, by setting up a BitmapData object to hold the most recent webcam still-frame, I can make this work.
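A minimal sketch of that idea (the variable names `cam`, `vid`, and `bmd` are assumptions, mirroring the code later in the question): the BitmapData is redrawn from the Video object on every frame, so it always holds the latest still.

```actionscript
// Hypothetical sketch: keep a BitmapData in sync with the camera feed.
import flash.display.BitmapData;
import flash.events.Event;
import flash.media.Camera;
import flash.media.Video;

var cam:Camera = Camera.getCamera();
cam.setMode(320, 240, 10);

var vid:Video = new Video(640, 480);
vid.attachCamera(cam);
addChild(vid);

var bmd:BitmapData = new BitmapData(640, 480);

// Redraw once per frame so bmd always holds the most recent still.
addEventListener(Event.ENTER_FRAME, function(e:Event):void {
    bmd.draw(vid);
});
```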


If I use the Papervision3D library and FLARToolkit, the BitmapData is set up using the following code, found in this video tutorial:

//import libraries
 import org.libspark.flartoolkit.core.raster.rgb.FLARRgbRaster_BitmapData;   
 import org.libspark.flartoolkit.detector.FLARSingleMarkerDetector;


private function setupCamera():void
{
    vid = new Video(640, 480);
    cam = Camera.getCamera();
    cam.setMode(320, 240, 10);
    vid.attachCamera(cam);
    addChild(vid);
}

private function setupBitmap():void
{
    bmd = new BitmapData(640, 480);
    bmd.draw(vid);
    raster = new FLARRgbRaster_BitmapData(bmd);
    detector = new FLARSingleMarkerDetector(fparams, mpattern, 80);
}



private function loop(e:Event):void
{
    try
    {
        if (detector.detectMarkerLite(raster, 80) && detector.getConfidence() > 0.5)
        {
            vp.visible = true;
            detector.getTransformMatrix(trans);
            container.setTransformMatrix(trans);
            bre.renderScene(scene, camera, vp);
        }
        else
        {
            vp.visible = false;
        }
    }
    catch (err:Error) {}
}
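Note that, as quoted, `loop()` never refreshes the BitmapData, so the detector would keep analysing the first captured still. A hedged sketch of the refresh step (assuming the `bmd` and `vid` variables from `setupCamera()`/`setupBitmap()` above):

```actionscript
// Sketch, not verbatim tutorial code: pull the latest camera frame
// into the raster's BitmapData before each detection pass.
private function loop(e:Event):void
{
    bmd.draw(vid); // refresh the still-frame the raster wraps
    try
    {
        if (detector.detectMarkerLite(raster, 80) && detector.getConfidence() > 0.5)
        {
            // ...marker handling as above...
        }
    }
    catch (err:Error) {}
}
```

Because `raster` was constructed around `bmd`, redrawing `bmd` is enough; the raster does not need to be recreated each frame.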


However, to implement my application I'm using the Away3D engine and FLARManager, and the way of doing that is very different, as far as I can understand. I have implemented the following code, but the only thing it does is show the Flash camera in front of the 3D view, and I can't check whether my application works or not, since it doesn't show me any 3D object when I place the marker in front of the screen.

My code is:

//Setting Up Away3DLite Camera3D
import com.transmote.flar.camera.FLARCamera_Away3DLite;

private var camera3D:FLARCamera_Away3DLite;

this.camera3D = new FLARCamera_Away3DLite(this.flarManager, new Rectangle(0, 0, this.stage.stageWidth, this.stage.stageHeight));

//Setting Up the BitmapData
private function bitmap():void
{
    c = Camera.getCamera();
    c.setMode(320, 240, 10);
    this.v.attachCamera(c);
    addChild(this.v);
    bmd = new BitmapData(640, 480);
    bmd.draw(this.v);
}


Can you please help me find out how I can combine those two?


I will really appreciate any advice I can get from you.

Thank you.

Answer


To isolate your problem, I'd try to break these two things up and make sure each part works first. It sounds like you've got the camera part working, so try doing just some 3D (no AR): draw a cube or something. Then try implementing the AR, but have it do something simple, like tracing something out or making an object visible or invisible. Then start combining them.
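For the "trace something out" step, one way to test FLARManager in isolation is to listen for its marker lifecycle events before wiring up any 3D. This is a hedged sketch: the event and class names below are taken from the FLARManager examples, so verify them against your library version.

```actionscript
// Sketch: confirm marker detection works before adding 3D content.
// Assumes a configured FLARManager instance named flarManager.
import com.transmote.flar.FLARManager;
import com.transmote.flar.marker.FLARMarker;
import com.transmote.flar.marker.FLARMarkerEvent;

flarManager.addEventListener(FLARMarkerEvent.MARKER_ADDED, function(e:FLARMarkerEvent):void {
    trace("marker added, pattern id: " + e.marker.patternId);
});
flarManager.addEventListener(FLARMarkerEvent.MARKER_REMOVED, function(e:FLARMarkerEvent):void {
    trace("marker removed");
});
```

If these traces fire when you show the marker to the camera, detection is fine and the problem is in the Away3D side; if they never fire, the problem is in the FLARManager/camera setup.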

