Go client program generates a lot of sockets in TIME_WAIT state
Problem Description
I have a Go program that generates a lot of HTTP requests from multiple goroutines. After running for a while, the program fails with the error: connect: cannot assign requested address.
When checking with netstat, I see a high number (28229) of connections in TIME_WAIT.
The high number of TIME_WAIT sockets appears when the number of goroutines is 3, and it is severe enough to crash the program when it is 5.
I run Ubuntu 14.04 under Docker with Go 1.7.
This is the Go program:
package main

import (
	"io/ioutil"
	"log"
	"net/http"
	"sync"
)

var wg sync.WaitGroup
var url = "http://172.17.0.9:3000/"

const num_coroutines = 5
const num_request_per_coroutine = 100000

func get_page() {
	response, err := http.Get(url)
	if err != nil {
		log.Fatal(err)
	} else {
		defer response.Body.Close()
		_, err = ioutil.ReadAll(response.Body)
		if err != nil {
			log.Fatal(err)
		}
	}
}

func get_pages() {
	defer wg.Done()
	for i := 0; i < num_request_per_coroutine; i++ {
		get_page()
	}
}

func main() {
	for i := 0; i < num_coroutines; i++ {
		wg.Add(1)
		go get_pages()
	}
	wg.Wait()
}
This is the server program:
package main

import (
	"fmt"
	"log"
	"net/http"
)

var count int

func sayhelloName(w http.ResponseWriter, r *http.Request) {
	count++ // note: handlers run concurrently, so this unsynchronized increment is racy
	fmt.Fprintf(w, "Hello World, count is %d", count) // send data to the client side
}

func main() {
	http.HandleFunc("/", sayhelloName)       // set router
	err := http.ListenAndServe(":3000", nil) // set listen port
	if err != nil {
		log.Fatal("ListenAndServe: ", err)
	}
}
The default http.Transport is opening and closing connections too quickly. Since all connections go to the same host:port combination, you need to increase MaxIdleConnsPerHost to match your value of num_coroutines. Otherwise, the transport will frequently close the extra connections, only to have them reopened immediately.
You can set this globally on the default transport:
http.DefaultTransport.(*http.Transport).MaxIdleConnsPerHost = numCoroutines
Or when creating your own transport:
t := &http.Transport{
Proxy: http.ProxyFromEnvironment,
DialContext: (&net.Dialer{
Timeout: 30 * time.Second,
KeepAlive: 30 * time.Second,
}).DialContext,
MaxIdleConnsPerHost: numCoroutines,
MaxIdleConns: 100,
IdleConnTimeout: 90 * time.Second,
TLSHandshakeTimeout: 10 * time.Second,
ExpectContinueTimeout: 1 * time.Second,
}
Similar question: Go http.Get, concurrency, and "Connection reset by peer"