Android speech recognition: passing data back to Xamarin Forms


Problem description

I'm really stuck right now and I'm quite new to Xamarin. I use Xamarin Forms to develop an app with speech recognition.

I created just a simple UI with a button and an entry box.

Working:

  • Pressing the button and showing the popup with speech recognition
  • Reading the spoken words into a variable

Not working:

  • Passing the recognized text back to the Xamarin Forms UI (the Entry)

StartPage.xaml.cs:

    // Button click handler: fire off the speech recognition.
    private void BtnRecord_OnClicked(object sender, EventArgs e)
    {
        WaitForSpeechToText();
    }

    // Awaits the platform implementation resolved via DependencyService
    // and writes the recognized text into the Entry.
    private async void WaitForSpeechToText()
    {
        EntrySpeech.Text = await DependencyService.Get<Listener.ISpeechToText>().SpeechToTextAsync();
    }

ISpeechToText.cs:

public interface ISpeechToText
{
    Task<string> SpeechToTextAsync();
}

This calls into the native code (the DependencyService registration for the Android class is sketched after the implementation below).

SpeechToText_Android.cs:

    public class SpeechToText_Android : ISpeechToText
    {
        private const int VOICE = 10;

        public SpeechToText_Android() { }

        public Task<string> SpeechToTextAsync()
        {
            var tcs = new TaskCompletionSource<string>();

            try
            {
                // Build the recognizer intent (German prompt, single best result).
                var voiceIntent = new Intent(RecognizerIntent.ActionRecognizeSpeech);
                voiceIntent.PutExtra(RecognizerIntent.ExtraLanguageModel, RecognizerIntent.LanguageModelFreeForm);
                voiceIntent.PutExtra(RecognizerIntent.ExtraPrompt, "Sprechen Sie jetzt");
                voiceIntent.PutExtra(RecognizerIntent.ExtraSpeechInputCompleteSilenceLengthMillis, 1500);
                voiceIntent.PutExtra(RecognizerIntent.ExtraSpeechInputPossiblyCompleteSilenceLengthMillis, 1500);
                voiceIntent.PutExtra(RecognizerIntent.ExtraSpeechInputMinimumLengthMillis, 15000);
                voiceIntent.PutExtra(RecognizerIntent.ExtraMaxResults, 1);
                voiceIntent.PutExtra(RecognizerIntent.ExtraLanguage, Java.Util.Locale.Default);

                try
                {
                    // Launches the recognizer UI; the result is delivered to
                    // MainActivity.OnActivityResult, not back to this method.
                    ((Activity)Forms.Context).StartActivityForResult(voiceIntent, VOICE);
                }
                catch (ActivityNotFoundException)
                {
                    tcs.SetResult("Device doesn't support speech to text");
                }
            }
            catch (Exception ex)
            {
                tcs.SetException(ex);
            }

            return tcs.Task;
        }
    }
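
For DependencyService.Get<Listener.ISpeechToText>() to find this class at runtime, the Android implementation normally has to be registered with Xamarin.Forms. The question does not show that step, so the following one-liner is only an assumed sketch of the usual attribute-based registration (placed at file scope in the Android project, with the Listener namespace taken from the question's code):

    // Assumed registration: exposes the Android implementation to the
    // Xamarin.Forms DependencyService.
    [assembly: Xamarin.Forms.Dependency(typeof(Listener.SpeechToText_Android))]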

MainActivity.cs:

    protected override void OnActivityResult(int requestCode, Result resultVal, Intent data)
    {  
        if (requestCode == VOICE)
        {
            if (resultVal == Result.Ok)
            {
                var matches = data.GetStringArrayListExtra(RecognizerIntent.ExtraResults);
                if (matches.Count != 0)
                {
                    string textInput = matches[0].ToString();
                    if (textInput.Length > 500)
                        textInput = textInput.Substring(0, 500);
                }
                // RETURN 
            }
        }
        base.OnActivityResult(requestCode, resultVal, data);
    }

At first I thought I could pass it back via

return tcs.Task;

to the UI, but then I noticed that this return happens as soon as the speech-recognition popup has finished rendering; at that moment not a single word has been spoken yet (StartActivityForResult returns immediately, so the still-incomplete task is handed back before any result exists).

The spoken words end up in the string "textInput" in the OnActivityResult function, but how can I pass this string back to the Xamarin.Forms UI?

Thanks guys!

Answer

I would use an AutoResetEvent to pause the return until OnActivityResult is called, which will be when the user either records something, cancels, or the wait on the AutoResetEvent times out.
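
For readers who have not used it before, here is a minimal, standalone sketch (not part of the answer's code) of the signalling pattern this relies on: one thread blocks in WaitOne until another thread calls Set, or until the timeout expires.

    // Toy example only: shows how WaitOne blocks until Set is called,
    // which is how the answer pauses SpeechToTextAsync until
    // OnActivityResult fires.
    using System;
    using System.Threading;
    using System.Threading.Tasks;

    class AutoResetEventDemo
    {
        static readonly AutoResetEvent Gate = new AutoResetEvent(false);

        static async Task Main()
        {
            var waiter = Task.Run(() =>
            {
                // Blocks here until Gate.Set() is called or 2 minutes pass.
                var signalled = Gate.WaitOne(TimeSpan.FromMinutes(2));
                Console.WriteLine(signalled ? "Released by Set()" : "Timed out");
            });

            await Task.Delay(1000);   // stand-in for the user finishing speaking
            Gate.Set();               // releases the single waiting thread
            await waiter;
        }
    }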

public interface ISpeechToText
{
    Task<string> SpeechToTextAsync();
}

Add an AutoResetEvent to pause execution:

Note: the AutoResetEvent.WaitOne call is wrapped in a Task.Run to prevent hanging the application looper.

public class SpeechToText_Android : Listener.ISpeechToText
{
    // Shared with MainActivity: OnActivityResult stores the recognized text
    // in SpeechText and then signals autoEvent to release the waiting task.
    public static AutoResetEvent autoEvent = new AutoResetEvent(false);
    public static string SpeechText;
    const int VOICE = 10;

    public async Task<string> SpeechToTextAsync()
    {
        var voiceIntent = new Intent(RecognizerIntent.ActionRecognizeSpeech);
        voiceIntent.PutExtra(RecognizerIntent.ExtraLanguageModel, RecognizerIntent.LanguageModelFreeForm);
        voiceIntent.PutExtra(RecognizerIntent.ExtraPrompt, "Sprechen Sie jetzt");
        voiceIntent.PutExtra(RecognizerIntent.ExtraSpeechInputCompleteSilenceLengthMillis, 1500);
        voiceIntent.PutExtra(RecognizerIntent.ExtraSpeechInputPossiblyCompleteSilenceLengthMillis, 1500);
        voiceIntent.PutExtra(RecognizerIntent.ExtraSpeechInputMinimumLengthMillis, 15000);
        voiceIntent.PutExtra(RecognizerIntent.ExtraMaxResults, 1);
        voiceIntent.PutExtra(RecognizerIntent.ExtraLanguage, Java.Util.Locale.Default);

        SpeechText = "";
        autoEvent.Reset();
        ((Activity)Forms.Context).StartActivityForResult(voiceIntent, VOICE);
        // Wait off the UI thread (up to two minutes) until OnActivityResult
        // signals that the user finished speaking or cancelled.
        await Task.Run(() => { autoEvent.WaitOne(new TimeSpan(0, 2, 0)); });
        return SpeechText;
    }
}

MainActivity OnActivityResult:

    const int VOICE = 10;

    protected override void OnActivityResult(int requestCode, Result resultCode, Intent data)
    {
        base.OnActivityResult(requestCode, resultCode, data);
        if (requestCode == VOICE)
        {
            if (resultCode == Result.Ok)
            {
                var matches = data.GetStringArrayListExtra(RecognizerIntent.ExtraResults);
                if (matches.Count != 0)
                {
                    var textInput = matches[0];
                    if (textInput.Length > 500)
                        textInput = textInput.Substring(0, 500);
                    SpeechToText_Android.SpeechText = textInput;
                }
            }
            // Release the task waiting in SpeechToTextAsync (also fires on cancel).
            SpeechToText_Android.autoEvent.Set();
        }
    }

Note: This makes use of a couple of static variables to simplify the implementation of this example... Some developers would say this is a code smell, and I semi-agree, but you can never have more than one Google Speech Recognizer running at a time...
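
If you want to protect against a second tap while a recognition session is still open, a small guard can serialize the calls. The helper below is a hypothetical sketch, not part of the original answer; the class name SingleSessionGuard and the empty-string fallback are my own assumptions:

    // Hypothetical helper (not in the original answer): rejects a second
    // recognition request while one is still in progress.
    using System;
    using System.Threading;
    using System.Threading.Tasks;

    public static class SingleSessionGuard
    {
        static int _busy;   // 0 = idle, 1 = a recognizer session is running

        public static async Task<string> RunAsync(Func<Task<string>> recognize)
        {
            // Atomically claim the recognizer; bail out if it is already in use.
            if (Interlocked.CompareExchange(ref _busy, 1, 0) != 0)
                return string.Empty;
            try
            {
                return await recognize();
            }
            finally
            {
                Interlocked.Exchange(ref _busy, 0);   // free it for the next call
            }
        }
    }

The button handler could then call SingleSessionGuard.RunAsync(() => DependencyService.Get<Listener.ISpeechToText>().SpeechToTextAsync()) instead of calling the service directly.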

public class App : Application
{
    public App()
    {
        var speechTextLabel = new Label
        {
            HorizontalTextAlignment = TextAlignment.Center,
            Text = "Waiting for text"
        };

        var speechButton = new Button();
        speechButton.Text = "Fetch Speech To Text Results";
        speechButton.Clicked += async (object sender, EventArgs e) =>
        {
            var speechText = await WaitForSpeechToText();
            speechTextLabel.Text = string.IsNullOrEmpty(speechText) ? "Nothing Recorded" : speechText;
        };

        var content = new ContentPage
        {
            Title = "Speech",
            Content = new StackLayout
            {
                VerticalOptions = LayoutOptions.Center,
                Children = {
                    new Label {
                        HorizontalTextAlignment = TextAlignment.Center,
                        Text = "Welcome to Xamarin Forms!"
                    },
                    speechButton,
                    speechTextLabel
                }
            }
        };
        MainPage = new NavigationPage(content);
    }

    async Task<string> WaitForSpeechToText()
    {
        return await DependencyService.Get<Listener.ISpeechToText>().SpeechToTextAsync();
    }
}

