Flow-based routing and OpenFlow


Problem description


This may not be the typical stackoverflow question.

A colleague of mine has been speculating that flow-based routing is going to be the next big thing in networking. OpenFlow provides the technology to use low-cost switches in large applications, IT data centers, etc., replacing Cisco, HP, etc. switches and routers. The theory is that you can create a hierarchy of these OpenFlow switches with simple configuration, e.g. no spanning tree. OpenFlow will route each flow to the appropriate switch/switch-port, using only the knowledge of the hierarchy of switches (no routers). The solution is supposed to save enterprises money and simplify networking.

Q. He is speculating that this may dramatically change enterprise networking. For many reasons, I am skeptical. I would like to hear your thoughts.
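To make the mechanism the question describes concrete, here is a minimal sketch of per-flow forwarding: a controller with a global view installs flow entries, and each switch forwards by exact-match lookup instead of running spanning tree or IP routing. This is a toy model, not the real OpenFlow wire protocol; the `Flow` fields and the `FlowTable` class are illustrative names of my own.

```python
# Toy model of controller-installed, per-flow forwarding (not the real
# OpenFlow protocol; field and class names are illustrative).
from typing import Dict, NamedTuple, Optional

class Flow(NamedTuple):
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    proto: str

class FlowTable:
    def __init__(self) -> None:
        self.entries: Dict[Flow, int] = {}  # flow -> output switch port

    def install(self, flow: Flow, out_port: int) -> None:
        # In a real deployment the controller pushes this entry down
        # to the switch over the control channel.
        self.entries[flow] = out_port

    def forward(self, flow: Flow) -> Optional[int]:
        # A miss would be punted to the controller, which computes a
        # path from its knowledge of the switch hierarchy.
        return self.entries.get(flow)

table = FlowTable()
f = Flow("10.0.0.1", "10.0.1.7", 49152, 80, "tcp")
table.install(f, out_port=3)
assert table.forward(f) == 3                             # known flow: forwarded locally
assert table.forward(f._replace(dst_port=443)) is None   # unknown flow: ask the controller
```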

Solution

In order to assess the future of flow-based networking and OpenFlow, here’s the way to think about it.

  1. It starts with the silicon trends: Moore’s Law (2X transistors per 18-24 months), and a correlated but slower improvement in the I/O bandwidth available on a single chip (roughly 2X every 30-36 months). You can now buy full-featured 10GbE single chip switches with 64 ports, and chips which have a mix of 40GbE and 10GbE ports with comparable total I/O bandwidth.

  2. There are a variety of ways to physically connect these in a mesh (ignoring the loop-free constraints of spanning tree and the way Ethernet learns MAC addresses). In the high performance computing (HPC) world, a lot of work has been done building clusters with InfiniBand and other protocols, using meshes of small switches to network the compute servers. This is now being applied to Ethernet meshes. The geometry of a Clos or fat-tree topology enables a two-stage mesh with a large number of ports. The math is thus: where n is the number of ports per chip, the number of devices you can connect in a two-stage mesh is n²/2, and the number you can connect in a three-stage mesh is n³/4 (see the sketch after this list). While with standard spanning tree and learning, the spanning tree protocol will disable the multi-path links to the second stage, most of the Ethernet switch vendors have some sort of multi-chassis Link Aggregation protocol which gets around the multi-pathing limitation. There is also standards work in this area. Although it might not be obvious, the vast majority of Link Aggregation schemes allocate traffic so that all the frames of any given flow take the same path. This is done to minimize out-of-order frames so they don't get dropped by some higher-level protocol. They could have chosen to call this "flow-based multiplexing", but instead they call it "link aggregation".

  3. Although the devil is in the details, there are a variety of data center operators and vendors that have concluded they don’t need to have large multi-slot chassis switches in the aggregation/core layer for server connect, instead using meshes of inexpensive 1U or 2U switches.
  4. People have also concluded that eventually you need some kind of management station to set up the configuration of all the switches. Again, drawing from the experience with HPC and InfiniBand, they use what is called an InfiniBand Controller. In the telecom world, most telecom networks have evolved to separate the management and part of the control plane from the boxes that carry the data traffic.
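To check the arithmetic in point 2 and show why link aggregation keeps flows in order, here is a short Python sketch. The function names, the choice of CRC32, and the 5-tuple hash inputs are my own illustrative assumptions; real switches hash in silicon on vendor-specific header fields.

```python
# Capacity of two- and three-stage meshes built from n-port switch chips,
# plus the flow-hashing trick that keeps each flow on a single path.
import zlib

def two_stage_hosts(n: int) -> int:
    # Leaf-spine: each leaf splits its n ports half down (hosts) and half
    # up (spines); an n-port spine can tie together n leaves -> n * n/2.
    return n * n // 2

def three_stage_hosts(n: int) -> int:
    # Classic fat-tree built from n-port switches supports n^3/4 hosts.
    return n ** 3 // 4

print(two_stage_hosts(64))    # 2048 devices from 64-port chips
print(three_stage_hosts(64))  # 65536 devices

def pick_uplink(src_ip: str, dst_ip: str, sport: int, dport: int,
                n_links: int) -> int:
    # Hash the flow identifiers so every frame of a given flow takes the
    # same member link -- the "flow-based multiplexing" behaviour above.
    key = f"{src_ip}|{dst_ip}|{sport}|{dport}".encode()
    return zlib.crc32(key) % n_links

# All frames of one flow map to one link; another flow may land on a
# different link, giving multipath without reordering within a flow.
assert pick_uplink("10.0.0.1", "10.0.1.7", 49152, 80, 4) == \
       pick_uplink("10.0.0.1", "10.0.1.7", 49152, 80, 4)
```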

Summarizing the points above: a mesh of Ethernet switches with an external management plane, carrying multipath traffic with flows kept in order, is evolutionary, not revolutionary, and is likely to become mainstream. At least one major company, Juniper, has made a big public statement endorsing this approach. I'd call all of this "flow-based routing".

Juniper and other vendors' proprietary approaches notwithstanding, this is an area that cries out for standards. The Open Networking Foundation (ONF) was founded to promote standards in this area, starting with OpenFlow. Within a couple of months, the sixty-plus members of the ONF will be celebrating their first anniversary. Each member has, I am led to believe, paid tens of thousands of dollars to join. While the OpenFlow protocol has a way to go before it is widely adopted, it has real momentum.

