Peer to Peer Audio Calling on Android: Voice breaks and lag (delay in receiving packets) increases

Problem description

I am trying to establish peer-to-peer audio calling on Android. I used an Android phone and a tablet for communication, but after receiving around 40 packets the phone almost stops receiving packets, then suddenly receives a few, plays them, and so on, and this waiting time keeps increasing. Similarly, the tablet initially receives the packets and plays them, but the lag increases and the voice also starts to break up after some time, as if packets were being lost. Any idea what's causing this problem?

This is the code for the app. I am just setting the sender's and receiver's IP addresses in the RecordAudio class while running it on the two devices.

import java.io.File;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.net.SocketException;

import android.app.Activity;
import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioRecord;
import android.media.AudioTrack;
import android.media.MediaRecorder;
import android.os.AsyncTask;
import android.os.Bundle;
import android.util.Log;
import android.view.View;
import android.view.View.OnClickListener;
import android.widget.Button;
import android.widget.TextView;

public class AudioRPActivity extends Activity implements OnClickListener {

    DatagramSocket socketS,socketR;
    DatagramPacket recvP,sendP;
    RecordAudio rt;
    PlayAudio pt;

    Button sr,stop,sp;
    TextView tv,tv1;
    File rf;

    boolean isRecording = false;
    boolean isPlaying = false;

    int frequency = 44100;
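    // CHANNEL_CONFIGURATION_MONO is deprecated; newer APIs use CHANNEL_IN_MONO (AudioRecord) and CHANNEL_OUT_MONO (AudioTrack)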
    int channelConfiguration = AudioFormat.CHANNEL_CONFIGURATION_MONO;
    int audioEncoding = AudioFormat.ENCODING_PCM_16BIT;

    @Override
    public void onCreate(Bundle savedInstanceState)
    {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);

        tv = (TextView)findViewById(R.id.text1);
        tv1 = (TextView)findViewById(R.id.text2);

        sr = (Button)findViewById(R.id.sr);
        sp = (Button)findViewById(R.id.sp);
        stop = (Button)findViewById(R.id.stop);

        sr.setOnClickListener(this);
        sp.setOnClickListener(this);
        stop.setOnClickListener(this);

        stop.setEnabled(false);

        try
        {
        socketS=new DatagramSocket();
        socketR=new DatagramSocket(6000);
        }
        catch(SocketException se)
        {
            tv.setText(se.toString());
            finish();
        }
    }

    public void onClick(View v) {

        if(v == sr)
            record();
        else if(v == sp)
            play();
        else if(v == stop)
            stopPlaying();
    }

    public void play()
    {
        stop.setEnabled(true);
        sp.setEnabled(false);
        pt = new PlayAudio();
        pt.execute();
    }

    public void stopPlaying()
    {
        isRecording=false;
        isPlaying = false;
        stop.setEnabled(false);
    }

    public void record()
    {
        stop.setEnabled(true);
        sr.setEnabled(false);
        rt = new RecordAudio();
        rt.execute();
    }



    private class PlayAudio extends AsyncTask<Void,String,Void>
    {

        @Override
        protected Void doInBackground(Void... arg0)
        {
            isPlaying = true;
            int bufferSize = AudioTrack.getMinBufferSize(frequency, channelConfiguration, audioEncoding);

            byte[] audiodata = new byte[bufferSize];

            try
            {
                AudioTrack audioTrack = new AudioTrack(AudioManager.STREAM_MUSIC,frequency,channelConfiguration,
                                                        audioEncoding,4*bufferSize,AudioTrack.MODE_STREAM);
                audioTrack.setPlaybackRate(frequency);
                audioTrack.play();

                while(isPlaying)
                {
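                    // receive() blocks until a datagram arrives; its raw PCM payload is written straight to the AudioTrack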
                    recvP=new DatagramPacket(audiodata,audiodata.length);
                    socketR.receive(recvP);
                    audioTrack.write(recvP.getData(), 0, recvP.getLength());
                }
                audioTrack.stop();
                audioTrack.release();
            }
            catch(Throwable t)
            {
                Log.e("Audio Track","Playback Failed");
            }
            return null;
        }
        protected void onProgressUpdate(String... progress)
        {
            tv1.setText(progress[0].toString());
        }

        protected void onPostExecute(Void result)
        {
            sr.setEnabled(true);
            sp.setEnabled(true);
        }

    }

    private class RecordAudio extends AsyncTask<Void,String,Void>
    {

        @Override
        protected Void doInBackground(Void... arg0)
        {
            isRecording = true;

            try
            {
                // use AudioRecord's own minimum buffer size for the recording path
                int bufferSize = AudioRecord.getMinBufferSize(frequency, channelConfiguration, audioEncoding);

                AudioRecord audioRecord = new AudioRecord(MediaRecorder.AudioSource.MIC, frequency, channelConfiguration,
                                                            audioEncoding, 4 * bufferSize);
                byte[] buffer = new byte[bufferSize];
                audioRecord.startRecording();
                int r=0;
                while(isRecording)
                {
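                    // read one buffer of raw PCM from the mic and send it as a single UDP datagram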
                    int brr = audioRecord.read(buffer,0,bufferSize);

                    sendP=new DatagramPacket(buffer,brr,InetAddress.getByName("sender's/receiver's ip"),6000);
                    socketS.send(sendP);
                    publishProgress(String.valueOf(r));

                    r++;
                }

                audioRecord.stop();
                audioRecord.release();

            }
            catch(Throwable t)
            {
                Log.e("AudioRecord","Recording Failed....");
            }


            return null;
        }

        protected void onProgressUpdate(String... progress)
        {
            tv.setText(progress[0].toString());
        }

        protected void onPostExecute(Void result)
        {
            sr.setEnabled(true);
            sp.setEnabled(true);
        }
    }
}

Solution

When sending voice over the network, I had trouble with anything other than 8000 Hz for the sample rate; 44100 Hz sounded horrible. That could just have been my situation.
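For reference, here is a minimal sketch of what that lower-rate setup could look like with the same AudioRecord/AudioTrack streaming approach as in the question (the variable names are illustrative, not from the original code):

// Lower the sample rate and size the buffers with the matching API calls.
int sampleRate = 8000;   // instead of 44100

int recordBufferSize = AudioRecord.getMinBufferSize(sampleRate,
        AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
int playBufferSize = AudioTrack.getMinBufferSize(sampleRate,
        AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT);

AudioRecord recorder = new AudioRecord(MediaRecorder.AudioSource.MIC, sampleRate,
        AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, 4 * recordBufferSize);

AudioTrack player = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
        AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT, 4 * playBufferSize,
        AudioTrack.MODE_STREAM);

At 16-bit mono, dropping from 44100 Hz to 8000 Hz also cuts the raw PCM bandwidth from roughly 88 KB/s to 16 KB/s, so each UDP datagram carries far less data.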

Another difficulty is that with UDP it's hard to say which order the packets arrive in. I have seen an implementation that puts them back in the right order, but I can't find it right now.
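I can't vouch for that exact implementation, but one common approach (sketched below, assuming you control both ends of the call) is to prepend a sequence number to every datagram on the sending side and keep a small jitter buffer on the receiving side. The SequencedAudio class and all of its names are purely illustrative:

import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Illustrative helper: adds a 4-byte sequence number on the sender and
// hands packets back in order on the receiver using a small jitter buffer.
class SequencedAudio {

    // Sender side: prepend the sequence number to one recorded PCM buffer.
    static byte[] wrap(int seq, byte[] pcm, int length) {
        return ByteBuffer.allocate(4 + length).putInt(seq).put(pcm, 0, length).array();
    }

    // Receiver side state: packets waiting for their turn, keyed by sequence number.
    private final TreeMap<Integer, byte[]> pending = new TreeMap<Integer, byte[]>();
    private int expected = 0;

    // Feed one received datagram; returns the PCM chunks that can be played now, in order.
    List<byte[]> onPacket(byte[] data, int length) {
        ByteBuffer bb = ByteBuffer.wrap(data, 0, length);
        int seq = bb.getInt();
        byte[] pcm = new byte[bb.remaining()];
        bb.get(pcm);
        if (seq >= expected) {        // ignore stale duplicates
            pending.put(seq, pcm);
        }
        List<byte[]> ready = new ArrayList<byte[]>();
        // Play packets in sequence; if the buffer grows too large, assume the
        // missing packet is lost and skip ahead to the oldest one we still have.
        while (pending.containsKey(expected) || pending.size() > 8) {
            Map.Entry<Integer, byte[]> e = pending.pollFirstEntry();
            expected = e.getKey() + 1;
            ready.add(e.getValue());
        }
        return ready;
    }
}

On the sending side you would call SequencedAudio.wrap(seq++, buffer, brr) before building the DatagramPacket; on the receiving side you would feed recvP.getData() and recvP.getLength() into onPacket() and write each returned chunk to the AudioTrack.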
