What is the maximum bandwidth a GCP instance can use with a single network interface?

Question

  • I am uploading data to a Google Cloud Storage bucket using n processes in parallel; the egress data rate is ~16 Gbps.
  • When only downloading data from a GCS bucket with n parallel processes, ingress traffic is ~26 Gbps.
  • But when I execute upload and download at the same time with the same number of processes, the ingress rate drops to ~7 Gbps and egress to ~11-12 Gbps.
  • I am using different buckets for upload and download. The instance and buckets are in the same region. The OS is Windows Server 2012 R2 and the machine type is n1-standard-32 (32 vCPUs, 120 GB memory). Uploads and downloads are done through the GCS REST API from C# using HTTP requests (a rough sketch of this setup is shown after this list). The GCP documentation doesn't mention any bandwidth cap: https://cloud.google.com/vpc/docs/quota#per_instance.
  • The LinkSpeed reported for the adapter is 100 Gbps.
  • If upload alone and download alone can each use the full bandwidth, I would expect running both at the same time to scale each to ~16 Gbps.
  • Is there any cap on overall traffic per GCP instance? What is the maximum bandwidth a GCP instance can use with a single network interface?
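
For illustration, here is a minimal sketch of that kind of parallel transfer against the Cloud Storage JSON API with HttpClient. It is not the asker's actual code: the bucket names, object names, payload size, process count, and the OAuth access token below are placeholder assumptions.

// Minimal sketch: n parallel uploads and n parallel downloads against the
// Cloud Storage JSON API. Bucket/object names and the access token are placeholders.
using System;
using System.Linq;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class ParallelGcsTransfer
{
    static readonly HttpClient Client = new HttpClient();

    static async Task UploadAsync(string bucket, string name, byte[] data, string token)
    {
        // Simple (non-resumable) media upload:
        // POST /upload/storage/v1/b/{bucket}/o?uploadType=media&name={object}
        var url = $"https://storage.googleapis.com/upload/storage/v1/b/{bucket}/o" +
                  $"?uploadType=media&name={Uri.EscapeDataString(name)}";
        var request = new HttpRequestMessage(HttpMethod.Post, url) { Content = new ByteArrayContent(data) };
        request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", token);
        (await Client.SendAsync(request)).EnsureSuccessStatusCode();
    }

    static async Task DownloadAsync(string bucket, string name, string token)
    {
        // Media download: GET /storage/v1/b/{bucket}/o/{object}?alt=media
        var url = $"https://storage.googleapis.com/storage/v1/b/{bucket}/o/" +
                  $"{Uri.EscapeDataString(name)}?alt=media";
        var request = new HttpRequestMessage(HttpMethod.Get, url);
        request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", token);
        var response = await Client.SendAsync(request);
        response.EnsureSuccessStatusCode();
        await response.Content.ReadAsByteArrayAsync(); // body discarded; only throughput matters here
    }

    static async Task Main()
    {
        string token = Environment.GetEnvironmentVariable("GCS_ACCESS_TOKEN"); // placeholder
        var payload = new byte[64 * 1024 * 1024];                              // 64 MiB per object

        // n parallel uploads to one bucket and n parallel downloads from another,
        // mirroring the "upload and download at the same time" scenario.
        var uploads   = Enumerable.Range(0, 16).Select(i => UploadAsync("upload-bucket", $"obj-{i}", payload, token));
        var downloads = Enumerable.Range(0, 16).Select(i => DownloadAsync("download-bucket", $"obj-{i}", token));
        await Task.WhenAll(uploads.Concat(downloads));
    }
}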

Answer

GCP does not cap the ingress or egress traffic as such - it all depends on what the machine can handle and how much the network can take (which will vary with network conditions).

The documentation you linked to doesn't state a hard cap; what it does say is that

egress and ingress bandwidth depends on machine type

It also states (under "Maximum ingress data rate" in the "Notes" column) that you should plan for only 10 Gbps per machine:

For purposes of capacity planning, you should assume that each VM instance can handle no more than 10 Gbps of external Internet traffic.

You can look up the max bandwidth for the various machine types in the documentation. You have an N1 with 32 vCPUs, for which the limit is 32 Gbps:

32 Gbps for Skylake or later CPU platforms. 16 Gbps for all other platforms
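
If you are not sure which CPU platform your instance landed on (and therefore which of the two figures applies), one way to check from inside the VM is to query the instance metadata server. Below is a small sketch assuming the standard cpu-platform metadata path.

// Query the GCE metadata server for the instance's CPU platform (e.g. "Intel Skylake"),
// which determines whether the 32 Gbps or 16 Gbps figure applies to an n1-standard-32.
using System;
using System.Net.Http;
using System.Threading.Tasks;

class CpuPlatformCheck
{
    static async Task Main()
    {
        using (var client = new HttpClient())
        {
            client.DefaultRequestHeaders.Add("Metadata-Flavor", "Google");
            string platform = await client.GetStringAsync(
                "http://metadata.google.internal/computeMetadata/v1/instance/cpu-platform");
            Console.WriteLine(platform);
        }
    }
}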

Considering that you effectively achieved about 26 Gbps, I would say that's a very good result - and under non-ideal network conditions at that.

There's more:

Network bandwidth is up to the specified limit. Actual performance depends on factors such as network congestion or protocol overhead.

Depending on the measuring method, in my opinion you have almost reached the bandwidth limit of a single GCP VM.

If I were you, I would just stick with that number and plan accordingly. If you want more speed, you can write to GCP support and ask.
