How frequently can I send data using Socket.IO?


Question


I'm creating a web application that would require small amounts of data (3 integer values per socket) to be sent from the server to the client very frequently and I wanted to see if there's a maximum frequency for updating the client using Socket.IO.


I was hoping to achieve at least 50 socket connections sending 20 updates per second each. With the ideal number being 200 socket connections sending 50 updates per second.

Question: Is there a limit to how frequently new data can be sent using Socket.IO?

Note: I recognize this is also a question of server-to-client connection speed, so any information on the connection speed I would need is appreciated. I calculated that if each packet sent is roughly 500 bytes, I would be able to send 20 updates per second to 100 connections on a 1 MB/s connection.
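That back-of-the-envelope estimate can be checked in a few lines (the 500-byte packet size is the question's own assumption; real Socket.IO frames add some protocol overhead on top of the payload):

```javascript
// Rough bandwidth estimate for the scenario in the question.
const bytesPerPacket = 500      // assumed packet size from the question
const updatesPerSecond = 20
const connections = 100

const bytesPerSecond = bytesPerPacket * updatesPerSecond * connections
console.log(bytesPerSecond)     // 1000000 bytes/s, i.e. about 1 MB/s
```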

Answer


That's a very system, network and code dependent question.


Here is a small test harness I've used for similar plain socket.io testing before, I've plugged in some bits to fit your question.

const io = require('socket.io')(8082)
const connections = []

io.on('connection', function(socket){

  connections.push(socket);
  const slog = (msg, ...args) => console.log('%s %s '+msg, Date.now(), socket.id, ...args)
  slog('Client connected. Total: %s', connections.length)

  socket.on('disconnect', function(data){
    connections.splice(connections.indexOf(socket), 1);
    slog('Client disconnected. Total: %s', connections.length)
  })

  // Request/response round trip: reply to each 'single' with a counter + timestamps
  socket.on('single', function(data){
    const now = Date.now()
    socket.emit('single', [ 0, now, now, now ])
  })

  socket.on('start', function(data = {}){
    slog('Start stream', data)
    sendBatch(1, data.count, data.delay)
  })

  socket.on('start dump', function(data = {}){
    slog('Start dump', data)
    sendBatch(1, data.count)
  })

  // Emit `max` messages, either throttled by `delay` ms or as fast as possible
  function sendBatch(i, max, delay){
    if ( i > max ) return slog('Done batch %s %s', max, delay)
    const now = Date.now()
    socket.emit('batch', [ i, now, now, now ])
    if (delay) {
      setTimeout(() => sendBatch(i + 1, max, delay), delay)
    } else {
      setImmediate(() => sendBatch(i + 1, max))
    }
  }

})

Client

const io = require('socket.io-client')
const socket = io('http://localhost:8082', {transports: ['websocket']})

socket.on('connect_error', err => console.error('Socket connect error:', err))
socket.on('connect_timeout', err => console.error('Socket connect timeout:', err))
socket.on('reconnect', err => console.error('Socket reconnect:', err))
socket.on('reconnect_attempt', err => console.error('Socket reconnect attempt:', err))
socket.on('reconnecting', err => console.error('Socket reconnecting', err))
socket.on('reconnect_error', err => console.error('Socket reconnect error:', err))
socket.on('reconnect_failed', err => console.error('Socket reconnect failed:', err))

function batch(n){
  socket.on('batch', function(data){
    if ( data[0] >= n ) {
      let end = Date.now()
      let persec = n / (( end - start ) / 1000)
      console.log('Took %s ms for %s at %s/s', end - start, n, persec.toFixed(1))
      return socket.close()
    }
  })
}

function startDump(count = 500000){
  socket.emit('start dump', { count: count })
  console.log('Start dump', count)
  batch(count)
}
function startStream(count = 50, delay = 1000){
  socket.emit('start', { count: count, delay: delay })
  console.log('Start stream', count, delay)
  batch(count)
}

function pingIt(i, max = 50){
  socket.on('single', function(data){
    console.log('Got a single with:', data)
    if (i >= max) {
      let end = Date.now()
      let persec = i / (end - start) * 1000
      console.log('Took %s ms %s/s', end - start, persec.toFixed(2))
      return socket.close()
    }
    socket.emit('single', i+=1)
  })
  socket.emit('single', i)
}

let start = Date.now()

//console.log('args command: %s  count: %s  delay: %s',process.argv[2], process.argv[3], process.argv[4])
switch(process.argv[2]){
  case 'ping':   pingIt(0, process.argv[3]); break
  case 'stream': startStream(process.argv[3], process.argv[4]); break
  case 'dump':   startDump(process.argv[3]); break
  default:       console.log('ping stream dump'); socket.close()
}

To test request/response round trips:

 node socketio-client.js ping 4

To test throughput, dump messages as fast as the server can send them:

 node socketio-client.js dump 100000

To test a stream of 1000 messages with an 18 ms delay between each, which is about 50 messages per second:

 node socketio-client.js stream 1000 18
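The 18 ms figure sits slightly under the naive 1000 / 50 = 20 ms per message, leaving a little headroom for event-loop scheduling overhead. A trivial helper to derive the delay for a target rate (the 2 ms overhead margin is a guessed value, not something from the answer):

```javascript
// Per-message delay in ms for a target messages-per-second rate,
// minus a small margin for scheduling overhead (the margin is a guess).
function delayForRate(perSecond, overheadMs = 2){
  return Math.max(0, Math.round(1000 / perSecond) - overheadMs)
}

console.log(delayForRate(50))  // 18, the delay used in the command above
```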


On my dev machine I can dump about 40000 messages per second to a single localhost client with the 4 integers as a payload (counter + 3 timestamps) on a 2 GHz CPU. Both server and client node processes use 95-100% of a CPU core each. So pure throughput looks ok.


I can emit 100 messages per second to 100 local clients at 55% CPU usage on the server process.


I can't get more than 130-140 messages per second to 100 clients out of a single node process on my dev machine.


A new, high-frequency Intel Skylake CPU server might demolish those numbers locally. Add a possibly flaky network connection and it will bring them right back down. Anything beyond local-network latency is likely to erode whatever you hope to gain from such high message rates: latency jitter on anything slower will play havoc with the "frame rate" of the messages on the client end. Timestamping messages and tracking them on the client would probably be required.
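A minimal sketch of that kind of client-side tracking, assuming the server stamps a `Date.now()` into each message (as the harness above does); the function and field names here are illustrative, not part of Socket.IO:

```javascript
// Track inter-arrival jitter of timestamped messages on the client.
// `serverTs` is the Date.now() the server stamped into the message.
function makeLatencyTracker(){
  const gaps = []
  let lastArrival = null
  return {
    record(serverTs, arrival = Date.now()){
      if (lastArrival !== null) gaps.push(arrival - lastArrival)
      lastArrival = arrival
      return arrival - serverTs   // one-way delay; only meaningful if clocks agree
    },
    jitter(){                     // max deviation from the mean gap, in ms
      if (gaps.length === 0) return 0
      const mean = gaps.reduce((a, b) => a + b, 0) / gaps.length
      return Math.max(...gaps.map(g => Math.abs(g - mean)))
    }
  }
}
```

Feeding `record()` from the `'batch'` handler and logging `jitter()` periodically would show how steady the effective frame rate actually is.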


If you do run into problems, there are also lower-level websocket libraries like ws that will require more implementation work from you but give you more control over the socket connections, and you can probably eke more performance out of them.


The more connections you have, the more contention you will get between the socket code and the rest of your code. You will probably end up needing multiple Node processes to keep things smooth. The cluster module can split the app across multiple Node.js processes. You may need something like Redis, ZeroMQ or Nanomsg to manage IPC. V8 in Node 9 supports SharedArrayBuffer and Atomics, but not much has landed in Node yet to use them with workers.
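One way fan-out could be partitioned across those worker processes is a stable hash on the socket id, so every process agrees on which worker owns which client without coordination. This is an illustrative sketch under that assumption, not something from the answer:

```javascript
// Deterministically assign a socket id to one of `n` worker processes.
// Any stable string hash works; this one is djb2 with xor.
function workerFor(socketId, n){
  let h = 5381
  for (const ch of socketId) h = ((h * 33) ^ ch.charCodeAt(0)) >>> 0
  return h % n
}

// Every process computes the same assignment, so a message broadcast over
// IPC (Redis pub/sub, ZeroMQ, ...) only needs to be emitted by the owner.
```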

