Pipelining in Tomcat - parallel?


Problem Description

I am writing a service using Tomcat and am trying to understand the pipelining feature of HTTP/1.1 and its implementation in Tomcat.

Here are my questions:

1] Is pipelining in Tomcat parallel? That is, after it gets a pipelined request, does it break it down into individual requests and invoke them all in parallel? Here is a small test I did; from my results it looks like it does, but I am trying to find an authoritative document.

import java.io.DataInputStream;
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class PipelineTest   // wrapper class added so the snippet compiles; the name is arbitrary
{
    public static void main(String[] args) throws IOException, InterruptedException
    {
        Socket socket = new Socket();
        socket.connect(new InetSocketAddress("ServerHost", 2080));
        int bufferSize = 166;
        byte[] reply = new byte[bufferSize];
        DataInputStream dis = null;

        // first without pipelining - TEST1
//        socket.getOutputStream().write(
//            ("GET URI HTTP/1.1\r\n" +
//            "Host: ServerHost:2080\r\n" +
//            "\r\n").getBytes());
//
//        final long before = System.currentTimeMillis();
//        dis = new DataInputStream(socket.getInputStream());
//        Thread.sleep(20);
//        final long after = System.currentTimeMillis();
//
//        dis.readFully(reply);
//        System.out.println(new String(reply));

        // now pipeline 3 requests on the same connection - TEST2
        byte[] request = ("GET URI HTTP/1.1\r\n" +
            "Host: ServerHost:2080\r\n" +
            "\r\n" +
            "GET URI HTTP/1.1\r\n" +
            "Host: ServerHost:2080\r\n" +
            "\r\n" +
            "GET URI HTTP/1.1\r\n" +
            "Host: ServerHost:2080\r\n" +
            "\r\n").getBytes();
        socket.getOutputStream().write(request);
        bufferSize = 1000 * 1;
        reply = new byte[bufferSize];

        final long before = System.currentTimeMillis();
        dis = new DataInputStream(socket.getInputStream());
        Thread.sleep(20);   // give the server a moment before reading the replies
        final long after = System.currentTimeMillis();

        dis.readFully(reply);
        System.out.println(new String(reply));

        long time = after - before;
        System.out.println("Request took: " + time + " milli secs");
    }
}

In the above test, the TEST2 response time is not [20*3 = 60+ ms]; the actual GET requests return very fast. This hints that they are getting parallelized, unless I am missing something?

2] What is the default pipeline depth in Tomcat? How can I control it?
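One connector attribute that bears on this question is maxKeepAliveRequests, which the Tomcat HTTP connector documentation describes as the maximum number of HTTP requests that can be pipelined or kept alive on one connection before the server closes it (default 100, -1 for unlimited, 1 to disable keep-alive and pipelining). A minimal conf/server.xml sketch, with the port and values as placeholders taken from this question:

    <!-- HTTP/1.1 connector; maxKeepAliveRequests caps requests per connection:
         -1 = unlimited, 1 = disable keep-alive and pipelining, default = 100 -->
    <Connector port="2080" protocol="HTTP/1.1"
               connectionTimeout="20000"
               maxKeepAliveRequests="100" />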

3] When allowing pipelining on the server side for my service, do I need to consider anything else, assuming that the client follows the http://www.w3.org/Protocols/rfc2616/rfc2616-sec8.html#sec8.1.4 spec while handling pipelining? Any experiences are welcome.

Recommended Answer

The concept of pipelining says that the server must be able to accept requests at any point in time, but the processing of the requests takes place in the order in which they were received. That is, parallel processing does not take place.
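A minimal client-side sketch of what that ordering means (the host, port, and the /slow and /fast paths are placeholders, and the response parsing is deliberately naive): even if the second resource is cheaper to produce, its status line can only arrive after the complete response to the first request, because HTTP/1.1 requires responses on one connection to be returned in the order the requests were received.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class PipelineOrderSketch
{
    public static void main(String[] args) throws Exception
    {
        try (Socket socket = new Socket())
        {
            socket.connect(new InetSocketAddress("ServerHost", 2080)); // placeholder host/port

            // Write two requests back to back without waiting for the first response.
            // "Connection: close" on the second request makes the server close the
            // socket after the final response, so the read loop below terminates.
            String pipelined =
                "GET /slow HTTP/1.1\r\nHost: ServerHost:2080\r\n\r\n" +
                "GET /fast HTTP/1.1\r\nHost: ServerHost:2080\r\nConnection: close\r\n\r\n";
            OutputStream out = socket.getOutputStream();
            out.write(pipelined.getBytes(StandardCharsets.US_ASCII));
            out.flush();

            // Naive scan of the response stream: the status line answering /slow
            // must appear before the one answering /fast, because HTTP/1.1 requires
            // responses on a connection to come back in request order.
            BufferedReader in = new BufferedReader(
                new InputStreamReader(socket.getInputStream(), StandardCharsets.US_ASCII));
            String line;
            while ((line = in.readLine()) != null)
            {
                if (line.startsWith("HTTP/1.1"))
                {
                    System.out.println("status line: " + line);
                }
            }
        }
    }
}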
