Playing multiple byte arrays simultaneously in Java


Question

How can you play multiple (audio) byte arrays simultaneously? Each byte array is recorded by a TargetDataLine and transferred via a server.

What I have tried so far

Using SourceDataLine:

There is no way to play multiple streams using SourceDataLine, because the write method blocks until the buffer is written. This problem cannot be fixed with threads, because only one SourceDataLine can write concurrently.
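For reference, the blocking pattern being referred to looks roughly like this; write() returns only once the data has been handed to the line's internal buffer, so a single loop like this can service only one stream at a time (the class and method names here are illustrative, not from the question's code):

```java
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.SourceDataLine;

public class BlockingPlayback {

    // Illustrative sketch: write() blocks while the line's internal
    // buffer is full, so one thread cannot interleave two streams here.
    static void playBlocking(byte[] data, AudioFormat fmt) throws Exception {
        try (SourceDataLine line = AudioSystem.getSourceDataLine(fmt)) {
            line.open(fmt);
            line.start();
            line.write(data, 0, data.length); // blocks until consumed
            line.drain();
        }
    }

    public static void main(String[] args) {
        // No audio hardware needed to inspect the frame layout:
        // 16-bit samples x 2 channels = 4 bytes per frame.
        AudioFormat fmt = new AudioFormat(48000, 16, 2, true, false);
        System.out.println(fmt.getFrameSize()); // prints 4
    }
}
```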

Using the AudioPlayer class:

ByteInputStream stream2 = new ByteInputStream(data, 0, data.length);
AudioInputStream stream = new AudioInputStream(stream2, VoiceChat.format, data.length);
AudioPlayer.player.start(stream);

This just plays noise on the clients.

EDIT: I don't receive the voice packets at the same time; it's not simultaneous, more "overlapping".

Answer

Alright, I put something together which should get you started. I'll post the full code below, but first I'll try to explain the steps involved.

The interesting part here is to create your own audio "mixer" class which allows consumers of that class to schedule audio blocks at specific points in the (near) future. The specific-point-in-time part is important here: I'm assuming you receive network voices in packets, where each packet needs to start exactly at the end of the previous one in order to play back a continuous sound for a single voice. Also, since you say voices can overlap, I'm assuming (yes, lots of assumptions) a new one can come in over the network while one or more old ones are still playing. So it seems reasonable to allow audio blocks to be scheduled from any thread. Note that only one thread actually writes to the data line; it's just that any thread can submit audio packets to the mixer.

So for the submit-audio-packet part we now have this:

private final ConcurrentLinkedQueue<QueuedBlock> scheduledBlocks;
public void mix(long when, short[] block) {
    scheduledBlocks.add(new QueuedBlock(when, Arrays.copyOf(block, block.length)));
}

The QueuedBlock class is just used to tag an audio buffer (a short[] of samples) with the "when": the point in time where the block should be played.
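Since the question's packets arrive as byte arrays while mix() takes a short[], incoming data first needs to be converted. Assuming 16-bit little-endian signed PCM (matching the AudioFormat used in the full sample below), a sketch could look like this; bytesToShorts is a hypothetical helper, not part of the answer's code:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class PacketConversion {

    // Hypothetical helper: convert a 16-bit little-endian PCM packet
    // (as received over the network) into the short[] the mixer expects.
    static short[] bytesToShorts(byte[] packet) {
        short[] samples = new short[packet.length / 2];
        ByteBuffer.wrap(packet)
                  .order(ByteOrder.LITTLE_ENDIAN)
                  .asShortBuffer()
                  .get(samples);
        return samples;
    }

    public static void main(String[] args) {
        // 0x1234 = 4660 and 0xFFCE = -50 as signed 16-bit values
        byte[] packet = {0x34, 0x12, (byte) 0xCE, (byte) 0xFF};
        short[] samples = bytesToShorts(packet);
        System.out.println(samples[0] + " " + samples[1]); // prints 4660 -50
    }
}
```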

Points in time are expressed relative to the current position of the audio stream. The position is set to zero when the stream is created and is advanced by the buffer size each time an audio buffer is written to the data line:

private final AtomicLong position = new AtomicLong();
public long position() {
    return position.get();
}

Apart from all the hassle of setting up the data line, the interesting part of the mixer class is obviously where the mixdown happens. Each scheduled audio block falls into one of three cases:


  • The block has already been played in its entirety: remove it from the scheduledBlocks list.

  • The block is scheduled to start at some point in time after the current buffer: do nothing.

  • (Part of) the block should be mixed into the current buffer. Note that the beginning of the block may (or may not) have already been played in previous buffer(s). Likewise, the end of a scheduled block may extend beyond the end of the current buffer, in which case we mix down its first part and leave the rest for the next round; the whole block is removed only once all of it has been played.

Also note that there's no reliable way to start playing audio data immediately. When you submit packets to the mixer, be sure to always have them start at least the duration of one audio buffer from now, otherwise you'll risk losing the beginning of your sound. Here's the mixdown code:
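To make that safety margin concrete with the constants used in the full sample below (4800 frames at 48 kHz), a quick check of the numbers:

```java
public class SchedulingLatency {

    // Constants from the full sample below.
    static final int SAMPLE_RATE = 48000;
    static final int BUFFER_SIZE_FRAMES = 4800;

    public static void main(String[] args) {
        // One buffer's duration is the minimum lead time when
        // scheduling a new block relative to position().
        double bufferMillis = BUFFER_SIZE_FRAMES * 1000.0 / SAMPLE_RATE;
        System.out.println(bufferMillis); // prints 100.0

        // i.e. the earliest safe start for a new sound would be
        // consumer.position() + BUFFER_SIZE_FRAMES (100 ms from "now").
    }
}
```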

    private static final double MIXDOWN_VOLUME = 1.0 / NUM_PRODUCERS;

    private final List<QueuedBlock> finished = new ArrayList<>();
    private final short[] mixBuffer = new short[BUFFER_SIZE_FRAMES * CHANNELS];
    private final byte[] audioBuffer = new byte[BUFFER_SIZE_FRAMES * CHANNELS * 2];
    private final AtomicLong position = new AtomicLong();

    Arrays.fill(mixBuffer, (short) 0);
    long bufferStartAt = position.get();
    for (QueuedBlock block : scheduledBlocks) {
        int blockFrames = block.data.length / CHANNELS;

        // block fully played - mark for deletion
        if (block.when + blockFrames <= bufferStartAt) {
            finished.add(block);
            continue;
        }

        // block starts after end of current buffer
        if (bufferStartAt + BUFFER_SIZE_FRAMES <= block.when)
            continue;

        // mix in part of the block which overlaps current buffer
        int blockOffset = Math.max(0, (int) (bufferStartAt - block.when));
        int blockMaxFrames = blockFrames - blockOffset;
        int bufferOffset = Math.max(0, (int) (block.when - bufferStartAt));
        int bufferMaxFrames = BUFFER_SIZE_FRAMES - bufferOffset;
        for (int f = 0; f < blockMaxFrames && f < bufferMaxFrames; f++)
            for (int c = 0; c < CHANNELS; c++) {
                int bufferIndex = (bufferOffset + f) * CHANNELS + c;
                int blockIndex = (blockOffset + f) * CHANNELS + c;
                mixBuffer[bufferIndex] += (short)
                    (block.data[blockIndex]*MIXDOWN_VOLUME);
            }
    }

    scheduledBlocks.removeAll(finished);
    finished.clear();
    ByteBuffer
        .wrap(audioBuffer)
        .order(ByteOrder.LITTLE_ENDIAN)
        .asShortBuffer()
        .put(mixBuffer);
    line.write(audioBuffer, 0, audioBuffer.length);
    position.addAndGet(BUFFER_SIZE_FRAMES);

And finally, a complete, self-contained sample which spawns a number of threads submitting audio blocks representing sine waves of random duration and frequency to the mixer (called AudioConsumer in this sample). Replace the sine waves with incoming network packets and you should be halfway to a solution.

package test;

import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicLong;
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.Line;
import javax.sound.sampled.Mixer;
import javax.sound.sampled.SourceDataLine;

public class Test {

public static final int CHANNELS = 2;
public static final int SAMPLE_RATE = 48000;
public static final int NUM_PRODUCERS = 10;
public static final int BUFFER_SIZE_FRAMES = 4800;

// generates some random sine wave
public static class ToneGenerator {

    private static final double[] NOTES = {261.63, 311.13, 392.00};
    private static final double[] OCTAVES = {1.0, 2.0, 4.0, 8.0};
    private static final double[] LENGTHS = {0.05, 0.25, 1.0, 2.5, 5.0};

    private double phase;
    private int framesProcessed;
    private final double length;
    private final double frequency;

    public ToneGenerator() {
        ThreadLocalRandom rand = ThreadLocalRandom.current();
        length = LENGTHS[rand.nextInt(LENGTHS.length)];
        frequency = NOTES[rand.nextInt(NOTES.length)] * OCTAVES[rand.nextInt(OCTAVES.length)];
    }

    // make sound
    public void fill(short[] block) {
        for (int f = 0; f < block.length / CHANNELS; f++) {
            double sample = Math.sin(phase * 2.0 * Math.PI);
            for (int c = 0; c < CHANNELS; c++)
                block[f * CHANNELS + c] = (short) (sample * Short.MAX_VALUE);
            phase += frequency / SAMPLE_RATE;
        }
        framesProcessed += block.length / CHANNELS;
    }

    // true if length of tone has been generated
    public boolean done() {
        return framesProcessed >= length * SAMPLE_RATE;
    }
}

// dummy audio producer, based on sinewave generator
// above but could also be incoming network packets
public static class AudioProducer {

    final Thread thread;
    final AudioConsumer consumer;
    final short[] buffer = new short[BUFFER_SIZE_FRAMES * CHANNELS];

    public AudioProducer(AudioConsumer consumer) {
        this.consumer = consumer;
        thread = new Thread(() -> run());
        thread.setDaemon(true);
    }

    public void start() {
        thread.start();
    }

    // repeatedly play random sine and sleep for some time
    void run() {
        try {
            ThreadLocalRandom rand = ThreadLocalRandom.current();
            while (true) {
                long pos = consumer.position();
                ToneGenerator g = new ToneGenerator();

                // if we schedule at current buffer position, first part of the tone will be
                // missed so have tone start somewhere in the middle of the next buffer
                pos += BUFFER_SIZE_FRAMES + rand.nextInt(BUFFER_SIZE_FRAMES);
                while (!g.done()) {
                    g.fill(buffer);
                    consumer.mix(pos, buffer);
                    pos += BUFFER_SIZE_FRAMES;

                    // we can generate audio faster than it's played
                    // sleep a while to compensate - this more closely
                    // corresponds to playing audio coming in over the network
                    double bufferLengthMillis = BUFFER_SIZE_FRAMES * 1000.0 / SAMPLE_RATE;
                    Thread.sleep((int) (bufferLengthMillis * 0.9));
                }

                // sleep a while in between tones
                Thread.sleep(1000 + rand.nextInt(2000));
            }
        } catch (Throwable t) {
            System.out.println(t.getMessage());
            t.printStackTrace();
        }
    }
}

// audio consumer - plays continuously on a background
// thread, allows audio to be mixed in from arbitrary threads
public static class AudioConsumer {

    // audio block with "when to play" tag
    private static class QueuedBlock {

        final long when;
        final short[] data;

        public QueuedBlock(long when, short[] data) {
            this.when = when;
            this.data = data;
        }
    }

    // need not normally be so low but in this example
    // we're mixing down a bunch of full scale sinewaves
    private static final double MIXDOWN_VOLUME = 1.0 / NUM_PRODUCERS;

    private final List<QueuedBlock> finished = new ArrayList<>();
    private final short[] mixBuffer = new short[BUFFER_SIZE_FRAMES * CHANNELS];
    private final byte[] audioBuffer = new byte[BUFFER_SIZE_FRAMES * CHANNELS * 2];

    private final Thread thread;
    private final AtomicLong position = new AtomicLong();
    private final AtomicBoolean running = new AtomicBoolean(true);
    private final ConcurrentLinkedQueue<QueuedBlock> scheduledBlocks = new ConcurrentLinkedQueue<>();


    public AudioConsumer() {
        thread = new Thread(() -> run());
    }

    public void start() {
        thread.start();
    }

    public void stop() {
        running.set(false);
    }

    // gets the play cursor. note - this is not accurate and 
    // must only be used to schedule blocks relative to other blocks
    // (e.g., for splitting up continuous sounds into multiple blocks)
    public long position() {
        return position.get();
    }

    // put copy of audio block into queue so we don't
    // have to worry about caller messing with it afterwards
    public void mix(long when, short[] block) {
        scheduledBlocks.add(new QueuedBlock(when, Arrays.copyOf(block, block.length)));
    }

    // better hope mixer 0, line 0 is output
    private void run() {
        Mixer.Info[] mixerInfo = AudioSystem.getMixerInfo();
        try (Mixer mixer = AudioSystem.getMixer(mixerInfo[0])) {
            Line.Info[] lineInfo = mixer.getSourceLineInfo();
            try (SourceDataLine line = (SourceDataLine) mixer.getLine(lineInfo[0])) {
                line.open(new AudioFormat(SAMPLE_RATE, 16, CHANNELS, true, false), BUFFER_SIZE_FRAMES);
                line.start();
                while (running.get())
                    processSingleBuffer(line);
                line.stop();
            }
        } catch (Throwable t) {
            System.out.println(t.getMessage());
            t.printStackTrace();
        }
    }

    // mix down single buffer and offer to the audio device
    private void processSingleBuffer(SourceDataLine line) {

        Arrays.fill(mixBuffer, (short) 0);
        long bufferStartAt = position.get();

        // mixdown audio blocks
        for (QueuedBlock block : scheduledBlocks) {

            int blockFrames = block.data.length / CHANNELS;

            // block fully played - mark for deletion
            if (block.when + blockFrames <= bufferStartAt) {
                finished.add(block);
                continue;
            }

            // block starts after end of current buffer
            if (bufferStartAt + BUFFER_SIZE_FRAMES <= block.when)
                continue;

            // mix in part of the block which overlaps current buffer
            // note that block may have already started in the past
            // but extends into the current buffer, or that it starts
            // in the future but before the end of the current buffer
            int blockOffset = Math.max(0, (int) (bufferStartAt - block.when));
            int blockMaxFrames = blockFrames - blockOffset;
            int bufferOffset = Math.max(0, (int) (block.when - bufferStartAt));
            int bufferMaxFrames = BUFFER_SIZE_FRAMES - bufferOffset;
            for (int f = 0; f < blockMaxFrames && f < bufferMaxFrames; f++)
                for (int c = 0; c < CHANNELS; c++) {
                    int bufferIndex = (bufferOffset + f) * CHANNELS + c;
                    int blockIndex = (blockOffset + f) * CHANNELS + c;
                    mixBuffer[bufferIndex] += (short) (block.data[blockIndex] * MIXDOWN_VOLUME);
                }
        }

        scheduledBlocks.removeAll(finished);
        finished.clear();
        ByteBuffer.wrap(audioBuffer).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer().put(mixBuffer);
        line.write(audioBuffer, 0, audioBuffer.length);
        position.addAndGet(BUFFER_SIZE_FRAMES);
    }
}

public static void main(String[] args) {

    System.out.print("Press return to exit...");
    AudioConsumer consumer = new AudioConsumer();
    consumer.start();
    for (int i = 0; i < NUM_PRODUCERS; i++)
        new AudioProducer(consumer).start();
    System.console().readLine();
    consumer.stop();
}
}
