How to implement an Assistant with Google Assist API


Problem description

I have been checking out and reading about Google Now on Tap (from http://developer.android.com/training/articles/assistant.html).

It was very interesting to find from that article that Now on Tap is based on Google's Assist API, bundled with Marshmallow, and it seems possible for us to develop our own assistant (the term Google uses in the article to refer to apps like Now on Tap) using the API.

However, the article discusses the Assist API only very briefly, and I couldn't find any additional information about how to use it to develop a custom assistant, even after spending a few days searching the Internet. No documentation and no examples.

I was wondering if any of you have experience with the Assist API that you could share? Any help is appreciated.

Thanks

Recommended answer

You can definitely implement a personal assistant, just like Google Now on Tap, using the Assist API, starting with Android 6.0. The official developer guide (http://developer.android.com/training/articles/assistant.html) explains exactly how you should implement it.

Some developers may wish to implement their own assistant. As shown in Figure 2, the Android user can select the active assistant app. The assistant app must provide an implementation of VoiceInteractionSessionService and VoiceInteractionSession, as shown in this example, and it requires the BIND_VOICE_INTERACTION permission. It can then receive the text and view hierarchy, represented as an instance of AssistStructure, in onHandleAssist(). The assistant receives the screenshot through onHandleScreenshot().
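As a rough sketch of that service pair (the class names are my own invention; the callbacks come from the android.service.voice framework classes named above, and the service must be declared in the manifest with the BIND_VOICE_INTERACTION permission mentioned in the guide):

```java
import android.app.assist.AssistContent;
import android.app.assist.AssistStructure;
import android.content.Context;
import android.graphics.Bitmap;
import android.os.Bundle;
import android.service.voice.VoiceInteractionSession;
import android.service.voice.VoiceInteractionSessionService;

// Hypothetical assistant service; declare it in the manifest with
// android:permission="android.permission.BIND_VOICE_INTERACTION".
public class MyAssistSessionService extends VoiceInteractionSessionService {
    @Override
    public VoiceInteractionSession onNewSession(Bundle args) {
        return new MyAssistSession(this);
    }
}

class MyAssistSession extends VoiceInteractionSession {
    MyAssistSession(Context context) {
        super(context);
    }

    @Override
    public void onHandleAssist(Bundle data, AssistStructure structure, AssistContent content) {
        // structure describes the view hierarchy of the foreground activity.
    }

    @Override
    public void onHandleScreenshot(Bitmap screenshot) {
        // Optional: a screenshot of the foreground activity.
    }
}
```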

CommonsWare has four demos of basic Assist API usage. The TapOffNow demo (https://github.com/commonsguy/cw-omnibus/tree/master/Assist/TapOffNow) should be enough to get you started.

You don't have to use onHandleScreenshot() to get the relevant textual data; the AssistStructure passed to onHandleAssist() gives you a root ViewNode that usually contains everything you can see on the screen.
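For example, inside the onHandleAssist() callback, each window's root ViewNode can be reached like this (a sketch; getWindowNodeCount(), getWindowNodeAt() and getRootViewNode() are the relevant AssistStructure methods):

```java
// Inside a VoiceInteractionSession subclass (sketch).
@Override
public void onHandleAssist(Bundle data, AssistStructure structure, AssistContent content) {
    if (structure == null) {
        return;
    }
    // There is usually a single window node: the foreground activity.
    for (int i = 0; i < structure.getWindowNodeCount(); i++) {
        AssistStructure.ViewNode root = structure.getWindowNodeAt(i).getRootViewNode();
        // Walk root's children to pull out the on-screen text.
    }
}
```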

You will probably also need to implement some sort of function to quickly locate the specific ViewNode you want to focus on, using a recursive search over the children of this root ViewNode.
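A minimal depth-first traversal might look like this (the helper and class names are mine; getText(), getChildCount() and getChildAt() are ViewNode methods):

```java
import android.app.assist.AssistStructure;
import java.util.List;

public class ViewNodeSearch {
    // Recursively collect all visible text under a node (hypothetical helper).
    public static void collectText(AssistStructure.ViewNode node, List<CharSequence> out) {
        if (node.getText() != null) {
            out.add(node.getText());
        }
        for (int i = 0; i < node.getChildCount(); i++) {
            collectText(node.getChildAt(i), out);
        }
    }
}
```

The same pattern can be adapted to stop at the first node matching a class name or view ID instead of collecting everything.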

