processing - how can I record, save and play the video in the same processing sketch?


Problem Description

I'm already able to record and save the video by pressing a button, and pressing the button again to stop recording and export the video. I can play the video if I stop my Processing sketch and start it up again. That is because when I record the video in Processing and stop the recording, the video file is created in the data folder but it isn't complete yet: the file is around 50 bytes and no thumbnail is visible while my Processing sketch is still running. But as soon as I stop my Processing sketch the video gets finished, a thumbnail shows up in my folder, the size increases to around 600 KB and the file is playable. So I need to stop and restart my sketch to finish the video. Is there any other way to finish the video and be able to play it back as soon as I'm done recording? In short, I want my sketch to open the webcam image, record the video, and play the video back when I push a button or click the mouse. Would that be possible?

This is the code I have so far:

import com.hamoid.*;
import processing.video.*;
import ddf.minim.*;

Minim minim;
AudioPlayer player;
AudioInput in;
AudioRecorder recorder;

Movie myMovie;
Movie myMovie1;
Movie myMovie2;
Movie myMovie3;

int currentScreen;
int videoCounter = 0;

VideoExport videoExport;
boolean recording = false;

Capture theCap; 

Capture cam;

int i = 0;

int countname; //change the name
int name = 000000; //set the number in key's' function

// change the file name
void newFile()
{      
 countname =( name + 1);
 recorder = minim.createRecorder(in, "file/Sound" + countname + ".wav", true);
 // println("file/" + countname + ".wav");
}

void setup() {
   size(500,500);
   frameRate(30);
   noStroke();
   smooth();

   //myMovie = new Movie(this, "video0.mp4");
   //myMovie.loop();

   //myMovie1 = new Movie(this, "video1.mp4");
   //myMovie1.loop();

   //myMovie2 = new Movie(this, "video2.mp4");
   //myMovie1.loop();

   //myMovie3 = new Movie(this, "video3.mp4");
   //myMovie1.loop();

   //if (videoCounter >= 1){
   //myMovie = new Movie(this, "video0.mp4");
   //myMovie.loop();
   //}

   String[] cameras = Capture.list();

  if (cameras.length == 0) {
    println("There are no cameras available for capture.");
    exit();
  } else {
    println("Available cameras:");
    for (int i = 0; i < cameras.length; i++) {
      println(cameras[i]);
    }

    // The camera can be initialized directly using an 
    // element from the array returned by list():
    //cam = new Capture(this, cameras[3]); //built in mac cam "isight"
    cam = new Capture(this, 1280, 960, "USB-camera"); //externe camera Lex, linker USB
    cam.start();
  }

  println("Druk op R om geluid en video op te nemen.Druk nog een keer op R om het opnemen te stoppen en druk op S om het op te slaan Druk vervolgens op Z om verder te gaan.");

  videoExport = new VideoExport(this, "data/video" + i + ".mp4");

   minim = new Minim(this);
   player = minim.loadFile("file/Sound1.wav");

 // get a stereo line-in: sample buffer length of 2048
 // default sample rate is 44100, default bit depth is 16
 in = minim.getLineIn(Minim.STEREO, 2048);
 // create a recorder that  will record from the input to the filename specified, using buffered recording
 // buffered recording means that all captured audio will be written into a sample buffer
 // then when save() is called, the contents of the buffer will actually be written to a file
 // the file will be located in the sketch's root folder.

 newFile();//go to change file name
 textFont(createFont("SanSerif", 12));
}

void draw() {
   switch(currentScreen){
   case 0: drawScreenZero(); break; //camera
   case 1: drawScreenOne(); break; //1 video
   case 2: drawScreenZero(); break; //camera
   case 3: drawScreenTwo(); break; // 2 video's
   case 4: drawScreenZero(); break; //camera
   case 5: drawScreenThree(); break; //3 video's
   case 6: drawScreenZero(); break; //camera
   case 7: drawScreenFour(); break; //4 video's
   default: background(0); break;
   }
}

void mousePressed() {
   currentScreen++;
   if (currentScreen > 7) { currentScreen = 0; }
}

void drawScreenZero() {
 //println("drawScreenZero camera");

 if (cam.available() == true) {
    cam.read();
  }
  image(cam, 0,0,width, height);
  // The following does the same, and is faster when just drawing the image
  // without any additional resizing, transformations, or tint.
  //set(0, 0, cam);

  if (recording) {
    videoExport.saveFrame();
  }

  for(int i = 0; i < in.bufferSize() - 1; i++)
 {
   line(i, 50 + in.left.get(i)*50, i+1, 50 + in.left.get(i+1)*50);
   line(i, 150 + in.right.get(i)*50, i+1, 150 + in.right.get(i+1)*50);
 }

 if ( recorder.isRecording() )
 {
   text("Aan het opnemen...", 5, 15);
   text("Druk op R als je klaar bent met opnemen en druk op S om het op te slaan.", 5, 30);
 }
 else
 {
   text("Gestopt met opnemen. Druk op R om op te nemen, druk op S om op te slaan.", 5, 15);
 }
}

void drawScreenOne() {
 background(0,255,0);
 //fill(0);
 //rect(250,40,250,400);
 //println("drawScreenOne 1 video");
   if (videoCounter >= 1){
   myMovie = new Movie(this, "video0.mp4");
   myMovie.loop();

   image(myMovie, 0,0, (width/2),(height/2));
   player.play();

   } else if (videoCounter == 0) {
      text("geen video", 5, 15); 
   }

}


void drawScreenTwo(){
 background(0,0,255);
 //println("drawScreenTwo 2 videos");
 //triangle(150,100,150,400,450,250);
 //image(myMovie, 0,0, (width/2),(height/2));
 //image(myMovie1, (width/2),(height/2),(width/2),(height/2));
}

void drawScreenThree(){
  //fill(0);
 //rect(250,40,250,400);
  background(255,0,0);
 println("drawScreenThree 3 videos");
  //image(myMovie, 0,0, (width/2),(height/2));
  //image(myMovie1, (width/2),(height/2),(width/2),(height/2));
  //image(myMovie, (width/2),0, (width/2),(height/2));
}

void drawScreenFour(){
  //triangle(150,100,150,400,450,250);
  background(0,0,255);
 //println("drawScreenFour 4 videos");
  //image(myMovie, 0,0, (width/2),(height/2));
  //image(myMovie1, (width/2),(height/2),(width/2),(height/2));
  //image(myMovie, (width/2),0, (width/2),(height/2));
  //image(myMovie1, 0,(height/2),(width/2),(height/2));
}

void keyPressed() {
  if (key == 'r' || key == 'R') {
    recording = !recording;
    println("Recording is " + (recording ? "ON" : "OFF"));
  } else   if (key == 's' || key == 'S') {
    i++;
    videoExport = new VideoExport(this, "video" + i + ".mp4");
    videoCounter++;
    println(videoCounter);
    //currentScreen++;
    //if (currentScreen > 7) { currentScreen = 0; } 

  } else if (key == 'z' || key == 'Z') {
    currentScreen++;
    if (currentScreen > 7) { currentScreen = 0; } 
  }
}

void movieEvent(Movie m) {
 m.read();
}

void keyReleased()
{
 if ( key == 'r' ) 
 {
   // to indicate that you want to start or stop capturing audio data, you must call
   // beginRecord() and endRecord() on the AudioRecorder object. You can start and stop
   // as many times as you like, the audio data will be appended to the end of the buffer 
   // (in the case of buffered recording) or to the end of the file (in the case of streamed recording). 
   if ( recorder.isRecording() ) 
   {
     recorder.endRecord();
   }
   else 
   {
     /*#######################################*/
     newFile();
     /*#######################################*/
     recorder.beginRecord();
   }
 }
 if ( key == 's' )
 {
   // we've filled the file out buffer, 
   // now write it to the file we specified in createRecorder
   // in the case of buffered recording, if the buffer is large, 
   // this will appear to freeze the sketch for sometime
   // in the case of streamed recording, 
   // it will not freeze as the data is already in the file and all that is being done
   // is closing the file.
   // the method returns the recorded audio as an AudioRecording, 
   // see the example  AudioRecorder >> RecordAndPlayback for more about that

   name++; //change the file name, everytime +1
   recorder.save();
   println("Done saving.");
   println(name);//check the name
 }
}

void stop()
{
 // always close Minim audio classes when you are done with them
 in.close();
 minim.stop();

 super.stop();
}

Solution

Take a look at the reference for the VideoExport library (http://funprogramming.org/VideoExport-for-Processing/reference/index.html), which is really just one class.

That reference shows us this function:

dispose()

Called automatically by Processing to clean up before shut down

We can then take a look at the source of the VideoExport class to see what that function does:

public void dispose() {
  if (ffmpeg != null) {
    try {
      ffmpeg.flush();
      ffmpeg.close();
    } catch (Exception e) {
      e.printStackTrace();
    }
  }
  if (process != null) {
    process.destroy();
  }
}

So now we know that the dispose() function is calling flush() on ffmpeg, which is an OutputStream. We also know that the dispose() function is only called at the end of the sketch.

So the first thing I would try is simply calling the dispose() function when you want to finalize the video.
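
For example (just a sketch of that idea, and assuming the version of VideoExport you are using exposes dispose() as a public method, as in the source shown above), your 's' branch in keyPressed() could finalize the current file before creating the exporter for the next one. This assumes you have already toggled recording off with R so saveFrame() is no longer being called:

} else if (key == 's' || key == 'S') {
  videoExport.dispose();   // flush and close the ffmpeg stream so "video" + i + ".mp4" becomes playable
  i++;
  videoExport = new VideoExport(this, "video" + i + ".mp4");  // fresh exporter for the next clip
  videoCounter++;
  println(videoCounter);
}

After that the file should be finished on disk, so drawScreenOne() could load it with new Movie(...) without restarting the sketch.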

If that doesn't work, or if it causes other Exceptions, then you might want to find a different video library that allows you to save them on command, or you could even create your own using the source of VideoExport as inspiration. There really isn't much to it.
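
If you do go the do-it-yourself route, the core of it is just piping raw frames into an ffmpeg process and closing its stdin when you are done, which is what forces the file to be finalized. Below is a rough, untested outline in Processing/Java; it assumes ffmpeg is available on your PATH, and the names startExport, writeFrame and finishExport are only placeholders:

import java.io.OutputStream;

Process ffmpegProcess;
OutputStream ffmpegInput;

void startExport(String path, int w, int h, int fps) {
  // ffmpeg reads raw RGB frames from stdin ("-i -") and encodes them into an MP4 at `path`
  ProcessBuilder pb = new ProcessBuilder("ffmpeg", "-y",
    "-f", "rawvideo", "-pix_fmt", "rgb24", "-s", w + "x" + h, "-r", str(fps),
    "-i", "-", "-pix_fmt", "yuv420p", path);
  try {
    ffmpegProcess = pb.start();
    ffmpegInput = ffmpegProcess.getOutputStream();
  } catch (Exception e) {
    e.printStackTrace();
  }
}

void writeFrame(PImage img) {
  // convert Processing's ARGB pixels to packed RGB bytes and send them to ffmpeg
  img.loadPixels();
  byte[] rgb = new byte[img.pixels.length * 3];
  int n = 0;
  for (int c : img.pixels) {
    rgb[n++] = (byte) (c >> 16); // red
    rgb[n++] = (byte) (c >> 8);  // green
    rgb[n++] = (byte) c;         // blue
  }
  try {
    ffmpegInput.write(rgb);
  } catch (Exception e) {
    e.printStackTrace();
  }
}

void finishExport() {
  try {
    ffmpegInput.flush();
    ffmpegInput.close();     // closing stdin tells ffmpeg there are no more frames
    ffmpegProcess.waitFor(); // wait until ffmpeg has written a complete, playable file
  } catch (Exception e) {
    e.printStackTrace();
  }
}

In drawScreenZero() you would call writeFrame(cam) while recording (the frame size has to match what you passed to startExport), and finishExport() where you currently press S; once finishExport() returns, the MP4 can be loaded as a Movie right away.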
