POST Streaming Audio over HTTP/2 in Android

Problem Description

Some background:

I am trying to develop a voice-related feature in an Android app where a user can search using voice, and the server sends intermediate results while the user is speaking (which in turn update the UI) and the final result when the query is complete. Since the server accepts only a single HTTP/2 socket connection and Android's HttpURLConnection doesn't support HTTP/2 yet, I am using Retrofit2.

I have looked at this, this, and this, but each example has fixed-length data, or the size can be determined beforehand... which is not the case for audio search.

Here's what my method for the POST looks like:

public interface Service {
    @Streaming
    @Multipart
    @POST("/api/1.0/voice/audio")
    Call<ResponseBody> post(
            @Part("configuration") RequestBody configuration,
            @Part("audio") RequestBody audio);
}
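
For context, a minimal sketch of wiring this interface to an OkHttp-backed Retrofit instance that can negotiate HTTP/2; the base URL below is a placeholder, not the real endpoint.

import java.util.Arrays;

import okhttp3.OkHttpClient;
import okhttp3.Protocol;
import retrofit2.Retrofit;

public final class VoiceApi {
    public static Service create() {
        // OkHttp negotiates HTTP/2 over TLS via ALPN; listing the protocols just makes the intent explicit.
        OkHttpClient client = new OkHttpClient.Builder()
                .protocols(Arrays.asList(Protocol.HTTP_2, Protocol.HTTP_1_1))
                .build();

        Retrofit retrofit = new Retrofit.Builder()
                .baseUrl("https://example.com")   // placeholder host
                .client(client)
                .build();

        // RequestBody/ResponseBody are handled by Retrofit's built-in converters, so none is registered here.
        return retrofit.create(Service.class);
    }
}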

The method sends a configuration file (containing audio parameters as a JSON structure) and streaming audio in the following manner (expected POST request):

Content-Type = multipart/form-data;boundary=----------------------------41464684449247792368259
//HEADERS
----------------------------414646844492477923682591
Content-Type: application/json; charset=utf-8
Content-Disposition: form-data; name="configuration"
//JSON data structure with different audio parameters.
----------------------------414646844492477923682591
Content-Type: audio/wav; charset=utf-8
Content-Disposition: form-data; name="audio"
<audio_data>
----------------------------414646844492477923682591--
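
For the fixed-size "configuration" part of that request, an ordinary JSON RequestBody should be enough. A minimal sketch follows; the JSON field names are invented for illustration, since the actual parameter schema isn't shown.

import okhttp3.MediaType;
import okhttp3.RequestBody;

public final class ConfigurationPart {
    public static RequestBody create() {
        // Hypothetical parameters; the real keys depend on the server's API.
        String configJson = "{\"encoding\":\"wav\",\"sampleRateHz\":16000,\"language\":\"en-US\"}";
        return RequestBody.create(
                MediaType.parse("application/json; charset=utf-8"), configJson);
    }
}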

I'm not really sure how to send the streaming(!!) <audio_data>. I tried using Okio to create the multipart for audio in this way (from https://github.com/square/okhttp/wiki/Recipes#post-streaming):

public RequestBody createPartForAudio(final byte[] samples) {
    return new RequestBody() {
        @Override
        public MediaType contentType() {
            return MediaType.parse("audio/wav; charset=utf-8");
        }

        @Override
        public void writeTo(BufferedSink sink) throws IOException {
            // Writes the whole buffer in one go; there is no way to keep appending new samples.
            sink.write(samples);
        }
    };
}

This didn't work, of course. Is this the right way to keep writing audio samples into a RequestBody? And where exactly should I call the Service.post(config, audio) method so that I don't end up posting the configuration file every time there is something in the audio buffer?

Also, since I have to keep sending streaming audio, how can I keep the same POST connection open and not close it until the user has stopped speaking?

I am basically new to OkHttp and Okio. If I have missed anything or part of the code is not clear, please let me know and I'll upload that snippet. Thank you.

Recommended Answer

You might be able to use a Pipe to produce data from your audio thread and consume it on your networking thread.

From a newly created OkHttp recipe:

/**
 * This request body makes it possible for another
 * thread to stream data to the uploading request.
 * This is potentially useful for posting live event
 * streams like video capture. Callers should write
 * to {@code sink()} and close it to complete the post.
 */
static final class PipeBody extends RequestBody {
  private final Pipe pipe = new Pipe(8192);
  private final BufferedSink sink = Okio.buffer(pipe.sink());

  public BufferedSink sink() {
    return sink;
  }

  @Override public MediaType contentType() {
    // The recipe leaves this unspecified; audio/wav fits the audio part in this question.
    return MediaType.parse("audio/wav");
  }

  @Override public void writeTo(BufferedSink sink) throws IOException {
    sink.writeAll(pipe.source());
  }
}
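
As a rough sketch (not from the answer) of how that could plug into the Service interface above: the call is enqueued once, and the audio thread keeps writing into sink() until the user stops speaking. Names like readAudioChunk() are placeholders for the real audio-capture code.

import java.io.IOException;

import okhttp3.ResponseBody;
import okio.BufferedSink;
import retrofit2.Call;
import retrofit2.Callback;
import retrofit2.Response;

public class VoiceUploader {
    void upload(Service service, okhttp3.RequestBody configuration) throws IOException {
        PipeBody audioBody = new PipeBody();

        // Enqueue the POST once; OkHttp streams whatever is written into the pipe.
        service.post(configuration, audioBody).enqueue(new Callback<ResponseBody>() {
            @Override public void onResponse(Call<ResponseBody> call, Response<ResponseBody> response) {
                // Read intermediate and final results from response.body() here.
            }

            @Override public void onFailure(Call<ResponseBody> call, Throwable t) {
                t.printStackTrace();
            }
        });

        // On the audio thread: keep feeding captured samples until the user stops speaking.
        BufferedSink sink = audioBody.sink();
        byte[] chunk;
        while ((chunk = readAudioChunk()) != null) {   // placeholder capture loop
            sink.write(chunk);
            sink.flush();
        }
        sink.close();   // closing the sink ends the request body and completes the upload
    }

    // Placeholder: returns the next chunk of recorded audio, or null once the user stops.
    byte[] readAudioChunk() {
        return null;
    }
}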

This approach will work best if your data can be written as a continuous stream. If it can't, you might be better off doing something similar with a BlockingQueue<byte[]> or similar.
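
If the data can't be produced as one continuous stream, a hedged sketch of that BlockingQueue<byte[]> variant could look like the following; the class name and the empty-array end-of-stream sentinel are assumptions of this sketch, not part of the answer.

import java.io.IOException;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

import okhttp3.MediaType;
import okhttp3.RequestBody;
import okio.BufferedSink;

final class QueueBody extends RequestBody {
    // An empty array is used as a sentinel that ends the stream (an assumption of this sketch).
    private static final byte[] END_OF_STREAM = new byte[0];
    private final BlockingQueue<byte[]> queue = new LinkedBlockingQueue<>();

    void offer(byte[] samples) { queue.add(samples); }

    void finish() { queue.add(END_OF_STREAM); }

    @Override public MediaType contentType() {
        return MediaType.parse("audio/wav");
    }

    @Override public void writeTo(BufferedSink sink) throws IOException {
        try {
            while (true) {
                byte[] chunk = queue.take();   // blocks until the audio thread offers data
                if (chunk.length == 0) break;  // sentinel: user stopped speaking
                sink.write(chunk);
                sink.flush();
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new IOException("Interrupted while streaming audio", e);
        }
    }
}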
