Issue in recording video


Problem description

I am trying to record video at 480*480 resolution, like in Vine, using javacv. As a starting point I used the sample provided at https://github.com/bytedeco/javacv/blob/master/samples/RecordActivity.java. The video is getting recorded (but not at the desired resolution) and saved.

But the issue is that 480*480 resolution is not supported natively in Android, so some pre-processing needs to be done to get the video at the desired resolution.
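For reference, the Android camera preview delivers frames in NV21 format: a full-resolution Y plane followed by a half-resolution interleaved V/U plane, so one frame occupies width*height*3/2 bytes. A minimal sketch of that arithmetic (the class and method names here are mine, not from the sample):

```java
// Size in bytes of one NV21 preview frame: a full-resolution Y plane
// followed by a half-resolution interleaved V/U plane.
public class Nv21 {
    public static int frameSize(int width, int height) {
        return width * height        // Y plane: one byte per pixel
             + width * height / 2;   // VU plane: two chroma bytes per 2x2 pixel block
    }

    public static void main(String[] args) {
        // For the 640x480 preview used in this question:
        System.out.println(Nv21.frameSize(640, 480)); // 460800
        // For the desired 480x480 output:
        System.out.println(Nv21.frameSize(480, 480)); // 345600
    }
}
```

Any crop has to preserve this 3/2 layout, otherwise the recorder misinterprets where the chroma plane starts.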

So once I was able to record video using the code sample provided by javacv, the next challenge was how to pre-process the video. On researching, I found that efficient cropping is possible when the required final image width is the same as the recorded image width. Such a solution was provided in the SO question Recording video on Android using JavaCV (Updated 2014 02 17). I changed the onPreviewFrame method as suggested in that answer.

    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        if (audioRecord == null || audioRecord.getRecordingState() != AudioRecord.RECORDSTATE_RECORDING) {
            startTime = System.currentTimeMillis();
            return;
        }
        if (RECORD_LENGTH > 0) {
            int i = imagesIndex++ % images.length;
            yuvImage = images[i];
            timestamps[i] = 1000 * (System.currentTimeMillis() - startTime);
        }
        /* get video data */
        imageWidth = 640;
        imageHeight = 480;
        int finalImageHeight = 360;
        if (yuvImage != null && recording) {
            ByteBuffer bb = (ByteBuffer)yuvImage.image[0].position(0); // resets the buffer
            final int startY = imageWidth*(imageHeight-finalImageHeight)/2;
            final int lenY = imageWidth*finalImageHeight;
            bb.put(data, startY, lenY);
            final int startVU = imageWidth*imageHeight + imageWidth*(imageHeight-finalImageHeight)/4;
            final int lenVU = imageWidth* finalImageHeight/2;
            bb.put(data, startVU, lenVU);
            try {
                long t = 1000 * (System.currentTimeMillis() - startTime);
                if (t > recorder.getTimestamp()) {
                    recorder.setTimestamp(t);
                }
                recorder.record(yuvImage);
            } catch (FFmpegFrameRecorder.Exception e) {
                Log.e(LOG_TAG, "problem with recorder():", e);
            }
        }


    }
}
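The two put() calls in that version rely on four precomputed offsets into the NV21 buffer. They can be checked off-device; this is just the arithmetic from the code above, wrapped in an illustrative helper class of my own:

```java
// Offsets used by the width-preserving (vertical) crop: keep the middle
// finalHeight rows of the Y plane, then the matching rows of the VU plane.
public class VerticalCrop {
    public static int[] offsets(int w, int h, int finalH) {
        int startY  = w * (h - finalH) / 2;         // skip (h-finalH)/2 full Y rows
        int lenY    = w * finalH;                   // finalH full-width Y rows
        int startVU = w * h + w * (h - finalH) / 4; // VU plane begins at w*h; its rows are half-height
        int lenVU   = w * finalH / 2;               // finalH/2 full-width VU rows
        return new int[] { startY, lenY, startVU, lenVU };
    }

    public static void main(String[] args) {
        int[] o = offsets(640, 480, 360); // the 640x480 -> 640x360 case above
        System.out.println(o[0] + " " + o[1] + " " + o[2] + " " + o[3]);
        // 38400 230400 326400 115200
    }
}
```

Because both copies are single contiguous put() calls, this crop needs no per-row loop, which is why it only works when the output width equals the input width.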

Please also note that this solution was provided for an older version of javacv. The resulting video had a yellowish overlay covering two-thirds of the frame, and there was an empty section on the left side because the video was not cropped correctly.

So my question is: what is the most appropriate solution for cropping videos using the latest version of javacv?

Code after making the change suggested by Alex Cohn:

    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        if (audioRecord == null || audioRecord.getRecordingState() != AudioRecord.RECORDSTATE_RECORDING) {
            startTime = System.currentTimeMillis();
            return;
        }
        if (RECORD_LENGTH > 0) {
            int i = imagesIndex++ % images.length;
            yuvImage = images[i];
            timestamps[i] = 1000 * (System.currentTimeMillis() - startTime);
        }
        /* get video data */
        imageWidth = 640;
        imageHeight = 480;       
        destWidth = 480;

        if (yuvImage != null && recording) {
            ByteBuffer bb = (ByteBuffer)yuvImage.image[0].position(0); // resets the buffer
            int start = 2*((imageWidth-destWidth)/4); // this must be even
            for (int row=0; row<imageHeight*3/2; row++) {
                bb.put(data, start, destWidth);
                start += imageWidth;
            }
            try {
                long t = 1000 * (System.currentTimeMillis() - startTime);
                if (t > recorder.getTimestamp()) {
                    recorder.setTimestamp(t);
                }
                recorder.record(yuvImage);
            } catch (FFmpegFrameRecorder.Exception e) {
                Log.e(LOG_TAG, "problem with recorder():", e);
            }
        }


    }

Screenshot from the video generated with this code (destWidth = 480):

Next I tried capturing a video with destWidth specified as 639. The result is:

When destWidth is 639, the video repeats its contents twice. When it is 480, the contents are repeated 5 times and the green overlay and distortion are worse.

Also, when destWidth = imageWidth, the video is captured properly; i.e., for 640*480 there is no repetition of video contents and no green overlay.
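These symptoms are consistent with NV21 chroma alignment: the chroma plane stores interleaved V,U byte pairs, so any horizontal offset into it must be even or the V and U channels swap, which shows up as a strong green/purple cast. That is why the code above computes the start offset as 2*((imageWidth-destWidth)/4). A small sketch of that rounding (the helper name is mine):

```java
// Why the crop code computes start as 2*((imageWidth-destWidth)/4):
// dividing by 4 and doubling rounds the centered column offset
// (imageWidth-destWidth)/2 down to an even number, so each chroma row
// is always entered on a V,U pair boundary.
public class CropOffset {
    public static int evenStart(int imageWidth, int destWidth) {
        return 2 * ((imageWidth - destWidth) / 4);
    }

    public static void main(String[] args) {
        System.out.println(evenStart(640, 480)); // 80: (640-480)/2 is already even
        System.out.println(evenStart(640, 639)); // 0: a 1-pixel crop rounds down to 0
    }
}
```

An even start offset keeps the chroma channels in order, but it does not by itself fix a destWidth that changes the row stride the recorder expects, which is one plausible reason odd widths like 639 still distort.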

Converting Frame to IplImage

When this question was first asked, I failed to mention that the record method in FFmpegFrameRecorder now accepts an object of type Frame, whereas earlier it took an IplImage object. So I tried to apply Alex Cohn's solution by converting the Frame to an IplImage.

//---------------------------------------
// initialize ffmpeg_recorder
//---------------------------------------
private void initRecorder() {

    Log.w(LOG_TAG,"init recorder");

    imageWidth = 640;
    imageHeight = 480; 

    if (RECORD_LENGTH > 0) {
        imagesIndex = 0;
        images = new Frame[RECORD_LENGTH * frameRate];
        timestamps = new long[images.length];
        for (int i = 0; i < images.length; i++) {
            images[i] = new Frame(imageWidth, imageHeight, Frame.DEPTH_UBYTE, 2);
            timestamps[i] = -1;
        }
    } else if (yuvImage == null) {
        yuvImage = new Frame(imageWidth, imageHeight, Frame.DEPTH_UBYTE, 2);
        Log.i(LOG_TAG, "create yuvImage");
        OpenCVFrameConverter.ToIplImage converter = new OpenCVFrameConverter.ToIplImage();
        yuvIplimage = converter.convert(yuvImage);

    }

    Log.i(LOG_TAG, "ffmpeg_url: " + ffmpeg_link);
    recorder = new FFmpegFrameRecorder(ffmpeg_link, imageWidth, imageHeight, 1);
    recorder.setFormat("flv");
    recorder.setSampleRate(sampleAudioRateInHz);
    // Set in the surface changed method
    recorder.setFrameRate(frameRate);

    Log.i(LOG_TAG, "recorder initialize success");

    audioRecordRunnable = new AudioRecordRunnable();
    audioThread = new Thread(audioRecordRunnable);
    runAudioThread = true;
}



@Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        if (audioRecord == null || audioRecord.getRecordingState() != AudioRecord.RECORDSTATE_RECORDING) {
            startTime = System.currentTimeMillis();
            return;
        }
        if (RECORD_LENGTH > 0) {
            int i = imagesIndex++ % images.length;
            yuvImage = images[i];
            timestamps[i] = 1000 * (System.currentTimeMillis() - startTime);
        }
        /* get video data */
        int destWidth = 640;

        if (yuvIplimage != null && recording) {
            ByteBuffer bb = yuvIplimage.getByteBuffer(); // resets the buffer
            int start = 2*((imageWidth-destWidth)/4); // this must be even
            for (int row=0; row<imageHeight*3/2; row++) {
                bb.put(data, start, destWidth);
                start += imageWidth;
            }
            try {
                long t = 1000 * (System.currentTimeMillis() - startTime);
                if (t > recorder.getTimestamp()) {
                    recorder.setTimestamp(t);
                }
                recorder.record(yuvImage);
            } catch (FFmpegFrameRecorder.Exception e) {
                Log.e(LOG_TAG, "problem with recorder():", e);
            }
        }


    }

But the videos generated with this method contained only green frames.

Solution

To begin with, it's pre-processing, not post-processing the video.

I don't know what changes you need to tune this solution for the new version of javacv; I hope they keep the library backwards compatible.

Your buffer is 640 pixels wide and 480 pixels high; you want to crop out a 480x480 square.

This means that you need a loop that will copy every line to the IplImage, something like this:

private int imageWidth = 640;
private int imageHeight = 480;
private int destWidth = 480;

@Override
public void onPreviewFrame(byte[] data, Camera camera) {

    if (data.length != imageWidth*imageHeight) {
        Camera.Size sz = camera.getPreviewSize();
        imageWidth = sz.width;
        imageHeight = sz.height;
        destWidth = imageHeight;
    }

    ByteBuffer bb = (ByteBuffer)yuvImage.image[0].position(0); // resets the buffer
    int start = 2*((imageWidth-destWidth)/4); // this must be even
    for (int row=0; row<imageHeight*3/2; row++) {
        bb.put(data, start, destWidth);
        start += imageWidth;
    }
    recorder.record(yuvImage);
}
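The row-copy loop in this answer can be sanity-checked off-device by running it against a plain byte array: the output must be destWidth*imageHeight*3/2 bytes, with each row taken from the same column window of the source. A desktop sketch with no Android dependencies (class and method names are illustrative):

```java
import java.nio.ByteBuffer;

// A desktop-testable version of the answer's crop loop: copy destWidth
// bytes out of every imageWidth-byte row, across both the Y plane and
// the half-height VU plane (imageHeight*3/2 rows in total).
public class CropSim {
    public static byte[] cropNv21(byte[] src, int imageWidth, int imageHeight, int destWidth) {
        ByteBuffer bb = ByteBuffer.allocate(destWidth * imageHeight * 3 / 2);
        int start = 2 * ((imageWidth - destWidth) / 4); // must be even (V,U pairs)
        for (int row = 0; row < imageHeight * 3 / 2; row++) {
            bb.put(src, start, destWidth);  // one cropped row
            start += imageWidth;            // jump to the same column of the next row
        }
        return bb.array();
    }

    public static void main(String[] args) {
        byte[] src = new byte[640 * 480 * 3 / 2];
        byte[] out = cropNv21(src, 640, 480, 480);
        System.out.println(out.length); // 345600 = 480*480*3/2
    }
}
```

Note that this reproduces the loop exactly as given in the answer, including its assumption that the preview rows are tightly packed with no padding between them.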
