End-to-End Reactive Streaming RESTful service (a.k.a. Back-Pressure over HTTP)


Question


I have been trying to clarify this question online for a while without success, so I will try to ask it here.


I would like to find some resource or example where it shows how I can build an end-to-end fully back-pressured REST service + client. What I mean is that I would like to see that, given a REST client that implements Reactive Streams (whether in Akka, JS, or whatever), I will have (and be able to "visualise") the back-pressure handled throughout a REST server built, e.g. with Akka-Http.


To be clear, I am searching for something like the following talk (but I could not find slides or videos to confirm it): http://oredev.org/2014/sessions/reactive-streaming-restful-applications-with-akka-http


My doubts with most examples I see are about the fact that I can find plenty of cases where the REST service (server) uses Akka HTTP and Akka Streams on the back end, but I am not sure whether the back-pressure is "communicated" over HTTP and REST when the client implements Reactive Streams. In such a situation, would I have a single "stream" bridged over TCP/HTTP, or just two independent streams? That is my main doubt and confusion.


Hopefully I was clear enough and someone will be able to shed some light on the matter.
In any case, thank you!

Answer


You’ve arrived in the right place to ask Akka questions :-)


There are two talks I’m aware of which demonstrate how the back-pressure mechanism really works when working over HTTP.


1) One is Roland Kuhn’s talk at ScalaDays SF 2015: the back-pressure-over-HTTP demo starts around the 44th minute of that talk.


2) My talk from ScalarConf Warsaw 2015: the streams part begins around the 18th minute, and the back-pressure demo can be seen around the 24th minute. It shows a "fast processing" and a "slow processing" server, in which you can see the curl client being back-pressured while the file is being uploaded (I use a file as an example because it makes a nice "big request").
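To give a rough idea (this is not the actual demo code from the talk; the /upload path and the 64 KiB/s throttle rate are made-up values for illustration), a minimal sketch of such a "slow processing" upload endpoint with Akka HTTP (10.2+ API) could look like this:

```scala
import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.server.Directives._
import akka.stream.ThrottleMode
import scala.concurrent.duration._

object SlowUploadServer extends App {
  implicit val system: ActorSystem = ActorSystem("slow-upload")

  // A deliberately slow consumer: the uploaded bytes are drained through a
  // throttled stage, so the stream only signals demand at ~64 KiB/s. Akka HTTP
  // stops reading from the TCP socket while there is no demand, the socket
  // buffers fill up, and the uploading client (e.g. curl) is slowed down by
  // TCP flow control.
  val route =
    path("upload") {
      put {
        extractDataBytes { bytes =>
          val counted = bytes
            .throttle(64 * 1024, 1.second, 64 * 1024, _.size, ThrottleMode.Shaping)
            .runFold(0L)(_ + _.size)
          onSuccess(counted) { total =>
            complete(s"received $total bytes\n")
          }
        }
      }
    }

  Http().newServerAt("localhost", 8080).bind(route)
}
```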


The back-pressure is propagated to the client thanks to TCP's built-in flow-control mechanisms: on the server side we simply do not read from the socket until demand is available, which causes the back-pressure to be propagated properly.
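The same holds for a Reactive Streams client, as long as the request entity is streamed rather than loaded into memory up front. As a sketch (the file name and URL are placeholders, not anything from the talks), with the Akka HTTP client:

```scala
import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.model._
import akka.stream.scaladsl.FileIO
import java.nio.file.Paths

object StreamingUploadClient extends App {
  implicit val system: ActorSystem = ActorSystem("upload-client")
  import system.dispatcher

  // The entity is a Source of bytes, not a pre-loaded array: chunks are only
  // read from the file as the TCP send buffer drains, so a slow server
  // transparently back-pressures this client, just like the curl example above.
  val entity = HttpEntity(
    ContentTypes.`application/octet-stream`,
    FileIO.fromPath(Paths.get("big-file.bin")))

  Http()
    .singleRequest(HttpRequest(HttpMethods.PUT, uri = "http://localhost:8080/upload", entity = entity))
    .foreach(response => println(s"server responded: ${response.status}"))
}
```

So in that sense it is one logical stream bridged over TCP/HTTP: the client's source, the network, and the server's sink all slow down together.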

I hope this helps!

