Mixing two audio streams into a single audio stream in Android?


Problem description

I am trying to mix two audio streams to get a single output stream; is this possible in Android? In my case, I have one input stream coming from the microphone, i.e., I am recording the user's speech using AudioRecord. I want to mix this recording with a short sound clip, creating a new stream that is a mix of both, and then send it over a datagram socket. I have researched a lot, and here is what I have found.

Firstly, SoundPool might help me achieve my goal, but I think I cannot provide the microphone as an input source.

Currently I am saving the recording from the mic in a buffer and then streaming it over the datagram socket. I thought I could save the sound clip in another buffer and then add the two buffers together (which I know is a naive idea, as there are various properties of the sound I would have to manage).
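For reference, "adding the two buffers" for 16-bit PCM usually means summing the samples and clamping the result. Below is a minimal sketch of that idea, assuming both buffers hold 16-bit little-endian PCM at the same sample rate and channel count; the class, the method name, and the clipBuffer argument are illustrative, not part of the original code.

    // Minimal mixing sketch (illustrative, not from the original post):
    // sum two 16-bit little-endian PCM buffers sample by sample and clamp
    // the result to the short range. Assumes both buffers use the same
    // sample rate and channel count; clipBuffer would hold the decoded clip.
    public final class PcmMixer {

        public static byte[] mixPcm16(byte[] micBuffer, byte[] clipBuffer) {
            int len = Math.min(micBuffer.length, clipBuffer.length) & ~1; // whole 16-bit samples only
            byte[] mixed = new byte[len];
            for (int i = 0; i < len; i += 2) {
                // Reassemble each little-endian 16-bit sample, sum, then clamp.
                int micSample  = (short) ((micBuffer[i + 1] << 8) | (micBuffer[i] & 0xFF));
                int clipSample = (short) ((clipBuffer[i + 1] << 8) | (clipBuffer[i] & 0xFF));
                int sum = micSample + clipSample;
                if (sum > Short.MAX_VALUE) sum = Short.MAX_VALUE; // clip instead of wrapping around
                if (sum < Short.MIN_VALUE) sum = Short.MIN_VALUE;
                mixed[i]     = (byte) (sum & 0xFF);
                mixed[i + 1] = (byte) ((sum >> 8) & 0xFF);
            }
            return mixed;
        }
    }

Hard clipping like this can distort loud passages; attenuating each input first (for example multiplying by 0.5) or using a soft limiter usually sounds better, but the basic idea is the same.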

Alternatively, maybe I could save the microphone recording to one file and the sound clip to another file and then mix them, but I don't think I can do this, as I am trying to stream the recording live over the datagram socket.

I think what I am trying to achieve might be possible using the Java Sound API, but that is not supported on Android.

To summarize, my end goal is to inject a sound effect into a VoIP (SIP) call (for example, the sound of crickets playing along with my voice).

I hope I have given a clear explanation of my problem.

Question 1: How can I achieve this?
Question 2: Can I create a JAR file using the Java Sound API and use it in my project? (I suspect this is not possible.)

Here is some code for my audio recording and audio playback.

This is my code for audio recording:

            public void run() {
                // TODO Auto-generated method stub
                try{
                    int minbuffer = AudioRecord.getMinBufferSize(sample, config, format);
                    DatagramSocket socket = new DatagramSocket();
                    Log.d(TAG, "Socket Created");
                    socket.setBroadcast(true);
                    byte[] ubuff = new byte[minbuffer];

                    DatagramPacket packet;
                    Log.d(TAG, "Packet Created");
                    InetAddress dest = InetAddress.getByName("10.10.1.126");
                    //InetAddress dest = InetAddress.
                            //InetSocketAddress dest= new InetSocketAddress(host, port);
                    Log.d(TAG, "Address"+dest);

                    rec = new AudioRecord(MediaRecorder.AudioSource.MIC,sample,
                                config,format,minbuffer);

                    rec.startRecording();
                    while(status == true){

                        minbuffer = rec.read(ubuff, 0,ubuff.length);
                        Log.d(TAG, "Reading While");
                        packet = new DatagramPacket(ubuff, ubuff.length,dest,port);
                        socket.send(packet);
                    }
                }catch(Exception e){
                    Log.d(TAG, "Bad Datagram");
                }
            }
        });
        stream.start();     

This is my code for audio playback:

        @Override
        public void run() {
            // TODO Auto-generated method stub
            try{

                android.os.Process.setThreadPriority(Process.THREAD_PRIORITY_URGENT_AUDIO);
                AudioManager mm = (AudioManager)getSystemService(AUDIO_SERVICE);
                DatagramSocket rSocket = new DatagramSocket(8080);
                Log.d(TAG, "Recive Socket");

                int m_buf = AudioRecord.getMinBufferSize(sample, config, format);
                byte[] _buff = new byte[m_buf];
                AudioTrack rSpeaker = new AudioTrack(mm.STREAM_MUSIC,sample,config,
                            format,m_buf,AudioTrack.MODE_STREAM);
                mm.setSpeakerphoneOn(false);
                mm.setStreamVolume(AudioManager.STREAM_MUSIC, 100, AudioManager.MODE_IN_COMMUNICATION);
                Log.d(TAG, "zzRecorder");
                rSpeaker.setPlaybackRate(sample);
                rSpeaker.play();
                while(true){
                    try{
                        DatagramPacket rPacket = new DatagramPacket(_buff, _buff.length);
                        rSocket.receive(rPacket);
                        _buff = rPacket.getData();
                        rSpeaker.write(_buff, 0, m_buf);
                        Log.d(TAG, "Yo Start Write");
                    }catch(Exception e){

                    }
                }
            }catch(Exception e){

            }
        }
    });
    rvStrm.start();

Solution

I think this should be helpful to you:

http://www.jsresources.org/examples/AudioConcat.html

The link is an open-source example of how to concatenate/mix audio files. I think the source you will be most interested in is

MixingAudioInputStream.java

http://www.jsresources.org/examples/MixingAudioInputStream.java.html

I do not know about support for the Java Sound API on Android, but AFAIK the Java Sound API only covers basic audio capture and playback. You would still have to do the mixing yourself, wouldn't you?
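As a rough illustration of doing that mixing inside the recording loop from the question: keep the decoded clip in memory, pull a slice of it for every mic buffer, mix the two with a sum-and-clamp helper like the mixPcm16 sketch shown earlier, and send the mixed bytes instead of ubuff. Everything below (ClipLooper, clipPcm, mixPcm16) is a hypothetical sketch, not part of the original code and not part of MixingAudioInputStream.

    // Illustrative helper (an assumption, not from the original post):
    // hands out successive slices of a looping, pre-decoded PCM clip so each
    // mic buffer can be mixed with the matching portion of the clip.
    // Assumes clipPcm holds raw 16-bit PCM with an even length, in the same
    // format and sample rate as the AudioRecord.
    public class ClipLooper {
        private final byte[] clipPcm;
        private int pos = 0;

        public ClipLooper(byte[] clipPcm) {
            this.clipPcm = clipPcm;
        }

        // Return the next "length" bytes of the clip, wrapping around so the
        // clip repeats for as long as the call lasts.
        public byte[] nextSlice(int length) {
            byte[] slice = new byte[length];
            for (int i = 0; i < length; i++) {
                slice[i] = clipPcm[pos];
                pos = (pos + 1) % clipPcm.length;
            }
            return slice;
        }
    }

Inside the while(status == true) loop, after minbuffer = rec.read(ubuff, 0, ubuff.length), something like byte[] mixed = mixPcm16(ubuff, looper.nextSlice(minbuffer)); could then be placed into the DatagramPacket instead of ubuff, so the receiver hears the voice and the clip together.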

ATB
