Media Recorder with Google Vision API


Problem description


I am using the FaceTracker sample from the Android vision API. However, I am experiencing difficulty in recording videos while the overlays are drawn on them.


One way is to store bitmaps as images and process them using FFmpeg or Xuggler to merge them as videos, but I am wondering if there is a better solution to this problem if we can record video at runtime as the preview is projected.
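For completeness, the offline route mentioned above can be sketched as a single FFmpeg invocation (the frame naming pattern and output name below are illustrative assumptions, not from the original post):

```shell
# Assumes the overlay-composited frames were saved as frame_0001.png, frame_0002.png, ...
# -framerate sets the input rate; -pix_fmt yuv420p keeps the output playable on most devices.
ffmpeg -framerate 30 -i frame_%04d.png -c:v libx264 -pix_fmt yuv420p overlay_video.mp4
```

This works but requires writing every frame to disk first, which is exactly the overhead the question is trying to avoid.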


Update 1: I updated the class below to use MediaRecorder, but recording is still not working. It throws the following error when I call the triggerRecording() function:

MediaRecorder: start called in an invalid state: 4


and I have external storage permission in the Manifest file.
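As background on that error: MediaRecorder enforces a strict call order, and in the framework sources state 4 corresponds to DataSourceConfigured, which typically means start() was reached before prepare() completed successfully. A minimal sketch of the required ordering (the output path is a placeholder):

```java
MediaRecorder recorder = new MediaRecorder();
recorder.setAudioSource(MediaRecorder.AudioSource.MIC);      // Initial -> Initialized
recorder.setVideoSource(MediaRecorder.VideoSource.SURFACE);
recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4); // -> DataSourceConfigured
recorder.setVideoEncoder(MediaRecorder.VideoEncoder.H264);   // encoders only after output format
recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AAC);
recorder.setOutputFile("/sdcard/DCIM/test.mp4");             // placeholder path
recorder.prepare();                                          // -> Prepared
recorder.start();                                            // -> Recording
// ... record ...
recorder.stop();   // throws RuntimeException if no valid data was ever written
recorder.reset();  // back to Initial; must be fully reconfigured before reuse
```

Calling any of these out of order (or skipping prepare()) raises an IllegalStateException or the "invalid state" log above.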

Update 2:


I have fixed the above issue in the code and moved setUpMediaRecorder() into the onSurfaceCreated callback. However, when I stop recording it throws a RuntimeException. According to the documentation, stop() throws a RuntimeException when no valid audio/video data has been received.


So, what am I missing here?

public class CameraSourcePreview extends ViewGroup {
    private static final String TAG = "CameraSourcePreview";

    private static final SparseIntArray ORIENTATIONS = new SparseIntArray();

    static {
        ORIENTATIONS.append(Surface.ROTATION_0, 90);
        ORIENTATIONS.append(Surface.ROTATION_90, 0);
        ORIENTATIONS.append(Surface.ROTATION_180, 270);
        ORIENTATIONS.append(Surface.ROTATION_270, 180);
    }

    private MediaRecorder mMediaRecorder;
    /**
     * Whether the app is recording video now
     */
    private boolean mIsRecordingVideo;

    private Context mContext;
    private SurfaceView mSurfaceView;
    private boolean mStartRequested;
    private boolean mSurfaceAvailable;
    private CameraSource mCameraSource;

    private GraphicOverlay mOverlay;

    public CameraSourcePreview(Context context, AttributeSet attrs) {
        super(context, attrs);
        mContext = context;
        mStartRequested = false;
        mSurfaceAvailable = false;

        mSurfaceView = new SurfaceView(context);

        mSurfaceView.getHolder().addCallback(new SurfaceCallback());

        addView(mSurfaceView);

        mMediaRecorder = new MediaRecorder();
    }

    private void setUpMediaRecorder() throws IOException {
        mMediaRecorder.setPreviewDisplay(mSurfaceView.getHolder().getSurface());
        mMediaRecorder.setAudioSource(MediaRecorder.AudioSource.MIC);
        mMediaRecorder.setVideoSource(MediaRecorder.VideoSource.SURFACE);
        mMediaRecorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);

        mMediaRecorder.setOutputFile(Environment.getExternalStorageDirectory() + File.separator + Environment.DIRECTORY_DCIM + File.separator + System.currentTimeMillis() + ".mp4");
        mMediaRecorder.setVideoEncodingBitRate(10000000);
        mMediaRecorder.setVideoFrameRate(30);
        mMediaRecorder.setVideoSize(480, 640);
        mMediaRecorder.setVideoEncoder(MediaRecorder.VideoEncoder.H264);
        mMediaRecorder.setAudioEncoder(MediaRecorder.AudioEncoder.AAC);
        //int rotation = mContext.getWindowManager().getDefaultDisplay().getRotation();
        //int orientation = ORIENTATIONS.get(rotation);
        mMediaRecorder.setOrientationHint(ORIENTATIONS.get(0));
        // Set the error listener before prepare() so that errors raised during
        // preparation are reported as well.
        mMediaRecorder.setOnErrorListener(new MediaRecorder.OnErrorListener() {
            @Override
            public void onError(MediaRecorder mr, int what, int extra) {
                Timber.d(mr.toString() + " : what[" + what + "] Extras[" + extra + "]");
            }
        });

        mMediaRecorder.prepare();
    }

    public void start(CameraSource cameraSource) throws IOException {
        if (cameraSource == null) {
            stop();
        }

        mCameraSource = cameraSource;

        if (mCameraSource != null) {
            mStartRequested = true;
            startIfReady();
        }
    }

    public void start(CameraSource cameraSource, GraphicOverlay overlay) throws IOException {
        mOverlay = overlay;
        start(cameraSource);
    }

    public void stop() {
        if (mCameraSource != null) {
            mCameraSource.stop();
        }
    }

    public void release() {
        if (mCameraSource != null) {
            mCameraSource.release();
            mCameraSource = null;
        }
    }

    private void startIfReady() throws IOException {
        if (mStartRequested && mSurfaceAvailable) {
            mCameraSource.start(mSurfaceView.getHolder());
            if (mOverlay != null) {
                Size size = mCameraSource.getPreviewSize();
                int min = Math.min(size.getWidth(), size.getHeight());
                int max = Math.max(size.getWidth(), size.getHeight());
                if (isPortraitMode()) {
                    // Swap width and height sizes when in portrait, since it will be rotated by
                    // 90 degrees
                    mOverlay.setCameraInfo(min, max, mCameraSource.getCameraFacing());
                } else {
                    mOverlay.setCameraInfo(max, min, mCameraSource.getCameraFacing());
                }
                mOverlay.clear();
            }

            mStartRequested = false;
        }
    }

    private class SurfaceCallback implements SurfaceHolder.Callback {
        @Override
        public void surfaceCreated(SurfaceHolder surface) {
            mSurfaceAvailable = true;
            // Note: SurfaceHolder.setType() is deprecated and a no-op since API 11,
            // so the old SURFACE_TYPE_PUSH_BUFFERS call has been removed.

            // setup the media recorder
            try {
                setUpMediaRecorder();
            } catch (IOException e) {
                Timber.e(e, "Could not prepare media recorder.");
            }

            try {
                startIfReady();
            } catch (IOException e) {
                Timber.e(e, "Could not start camera source.");
            }
        }

        @Override
        public void surfaceDestroyed(SurfaceHolder surface) {
            mSurfaceAvailable = false;
        }

        @Override
        public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
        }
    }

    @Override
    protected void onLayout(boolean changed, int left, int top, int right, int bottom) {
        int width = 320;
        int height = 240;
        if (mCameraSource != null) {
            Size size = mCameraSource.getPreviewSize();
            if (size != null) {
                width = size.getWidth();
                height = size.getHeight();
            }
        }

        // Swap width and height sizes when in portrait, since it will be rotated 90 degrees
        if (isPortraitMode()) {
            int tmp = width;
            width = height;
            height = tmp;
        }

        final int layoutWidth = right - left;
        final int layoutHeight = bottom - top;

        // Computes height and width for potentially doing fit width.
        int childWidth = layoutWidth;
        int childHeight = (int) (((float) layoutWidth / (float) width) * height);

        // If height is too tall using fit width, does fit height instead.
        if (childHeight > layoutHeight) {
            childHeight = layoutHeight;
            childWidth = (int) (((float) layoutHeight / (float) height) * width);
        }

        for (int i = 0; i < getChildCount(); ++i) {
            getChildAt(i).layout(0, 0, childWidth, childHeight);
        }

        try {
            startIfReady();
        } catch (IOException e) {
            Timber.e(e, "Could not start camera source.");
        }
    }

    private boolean isPortraitMode() {
        int orientation = mContext.getResources().getConfiguration().orientation;
        if (orientation == Configuration.ORIENTATION_LANDSCAPE) {
            return false;
        }
        if (orientation == Configuration.ORIENTATION_PORTRAIT) {
            return true;
        }

        Timber.d("isPortraitMode returning false by default");
        return false;
    }

    private void startRecordingVideo() {
        try {
            // Start recording
            mMediaRecorder.start();
            mIsRecordingVideo = true;
        } catch (IllegalStateException e) {
            Timber.e(e, "Could not start media recorder; was prepare() called?");
        }
    }

    private void stopRecordingVideo() {
        // UI
        mIsRecordingVideo = false;
        // Stop recording
        mMediaRecorder.stop();
        mMediaRecorder.reset();
    }

    public void triggerRecording() {
        if (mIsRecordingVideo) {
            stopRecordingVideo();
            Timber.d("Recording stopped");
        } else {
            startRecordingVideo();
            Timber.d("Recording starting");
        }
    }
}


Recommended answer


Solution 1: Starting with Android Lollipop (API 21), the MediaProjection API was introduced, which in conjunction with MediaRecorder can be used to save the contents of a SurfaceView to a video file. This example shows how to output a SurfaceView to a video file.
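A rough sketch of the Lollipop approach follows; permission handling is compressed, and names such as REQUEST_CODE_CAPTURE, width, height, and screenDensityDpi are illustrative placeholders rather than values from the original post:

```java
// Inside an Activity: ask the user for screen-capture permission.
MediaProjectionManager mpm =
        (MediaProjectionManager) getSystemService(Context.MEDIA_PROJECTION_SERVICE);
startActivityForResult(mpm.createScreenCaptureIntent(), REQUEST_CODE_CAPTURE);

// In onActivityResult, once permission is granted:
MediaProjection projection = mpm.getMediaProjection(resultCode, data);

// Route the captured frames into an already prepared MediaRecorder.
// MediaRecorder.getSurface() is only valid after prepare() has been called.
projection.createVirtualDisplay("preview-capture",
        width, height, screenDensityDpi,
        DisplayManager.VIRTUAL_DISPLAY_FLAG_AUTO_MIRROR,
        mMediaRecorder.getSurface(), /* callback */ null, /* handler */ null);
mMediaRecorder.start();
```

Because the virtual display mirrors what is on screen, the overlays drawn on top of the preview are captured along with it, which is exactly what the bitmap-per-frame approach was trying to achieve.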


Solution 2: Alternatively, you can use one of the neat Encoder classes provided in the Grafika repository. Note that this will require you to port the FaceTracker application so that it is using OpenGL to perform all rendering. This is because Grafika samples utilise the OpenGL pipeline for fast read and write of texture data.


There is a minimal example which achieves exactly what you want using a CircularEncoder in the ContinuousCaptureActivity class. This provides an example of Frame Blitting, simultaneously displaying frame buffer data to the screen and outputting to a video.


The major change would be to use a Grafika WindowSurface instead of a SurfaceView for the FaceTracker application; this sets up the EGL context, allowing you to save frame-buffer data to a file via the encoder. Once you can render everything to the WindowSurface, it is trivial to set up recording in the same way as the ContinuousCaptureActivity class.
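The frame-blitting pattern described above looks roughly like the following. This is a sketch reconstructed from memory of the Grafika sources (EglCore, WindowSurface, and CircularEncoder are real Grafika classes, but the exact signatures should be verified against the repository; drawFrame() stands in for your own GL rendering of the preview plus face overlays):

```java
// EGL setup, roughly as in Grafika's ContinuousCaptureActivity.
EglCore eglCore = new EglCore(null, EglCore.FLAG_RECORDABLE);
WindowSurface displaySurface = new WindowSurface(eglCore, holder.getSurface(), false);
displaySurface.makeCurrent();

// CircularEncoder keeps the last few seconds of encoded H.264 in a ring buffer.
CircularEncoder encoder = new CircularEncoder(
        VIDEO_WIDTH, VIDEO_HEIGHT, 6_000_000 /* bit rate */,
        30 /* fps */, 7 /* seconds buffered */, encoderCallback);
WindowSurface encoderSurface = new WindowSurface(eglCore, encoder.getInputSurface(), true);

// Per frame: draw the preview + overlays once for the display...
displaySurface.makeCurrent();
drawFrame();                      // your GL rendering (placeholder)
displaySurface.swapBuffers();

// ...then "blit" the same frame again into the encoder's input surface.
encoderSurface.makeCurrent();
drawFrame();
encoder.frameAvailableSoon();
encoderSurface.swapBuffers();
```

Rendering the scene twice per frame is cheap in GL, and it is what lets the same content appear on screen and in the recorded file simultaneously.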
