Avago Technologies' Articles

Networks to Converge at 10 Gigabits: SANs to Seamlessly Connect with LANs and MANs

Today, general networking, storage networking, and inter-processor communications have each developed different standards. As we move toward a 10 gigabit-per-second world, total cost of ownership and interoperability will drive the industry toward a single networking interconnect standard.

Networking Today
In general networking, which spans all Internet applications (email, web services, downloads, and multimedia, including voice and streaming video), the interconnect technology is IP over Ethernet at 10 Mb/s, 100 Mb/s, and 1 Gb/s. Although IP over Ethernet scales across the LAN, MAN, and WAN, and dominates the industry in port count, interoperability, and vendor support, the IP over Ethernet protocol stack (usually TCP/IP) is executed in the host CPU. This makes the architecture unsuitable for latency-sensitive applications such as database storage and inter-processor communications.

Storage networking incorporates two applications: Network Attached Storage (NAS), which uses IP over Ethernet to transport data in file formats between storage servers and their clients, and Storage Area Networks (SANs), which transport blocks of data over Fibre Channel. Fibre Channel is the performance leader today at 1 Gb/s and 2 Gb/s link speeds, and offers excellent latency characteristics resulting from a fully offloaded protocol stack. This is one reason Fibre Channel-based SANs are often applied in performance-sensitive applications, while Ethernet-based NAS is used where cost and ease of use are more important.
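The file-versus-block distinction above can be sketched in a few lines of Python. This is purely illustrative: an ordinary temporary file stands in for a raw block device, and the function names are hypothetical, not part of any NAS or SAN API.

```python
import os
import tempfile

# File-level access (NAS model): the client names a file and a byte range;
# the server's filesystem resolves names, permissions, and on-disk layout.
def read_file_range(path, offset, length):
    with open(path, "rb") as f:
        f.seek(offset)
        return f.read(length)

# Block-level access (SAN model): the client addresses fixed-size blocks on
# what it sees as a raw disk; any filesystem logic runs on the client side.
BLOCK_SIZE = 512

def read_blocks(device_path, first_block, num_blocks):
    fd = os.open(device_path, os.O_RDONLY)
    try:
        os.lseek(fd, first_block * BLOCK_SIZE, os.SEEK_SET)
        return os.read(fd, num_blocks * BLOCK_SIZE)
    finally:
        os.close(fd)

# Demo: a 2048-byte temporary file stands in for the storage target.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(bytes(2048))
    path = tmp.name

file_bytes = read_file_range(path, 0, 100)   # 100 bytes, file semantics
block_bytes = read_blocks(path, 1, 2)        # blocks 1-2: 1024 bytes
print(len(file_bytes), len(block_bytes))
os.remove(path)
```

The point of the sketch is that block access carries no notion of files at all, which is why SAN clients must run their own filesystem over the raw blocks, while NAS clients delegate that work to the server.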

There are also inter-processor communications (IPC) networks, which are used for server clustering. Although generally limited to high-availability (HA) clusters, Ethernet is the most widely used IPC interconnect technology today. High-performance parallel-processing clusters tend to use proprietary interconnects designed for very low latency. IPC is one of the most important applications targeted by InfiniBand. With an architecture optimized for low latency and high bandwidth, InfiniBand appears ideally suited for this application.

Starting Down the Road to 10 Gb/s

The Internet’s insatiable appetite for performance will continue to drive Ethernet development to faster link rates at a quicker pace than Fibre Channel. Where 1-Gigabit Ethernet development leveraged mature Fibre Channel technology, Fibre Channel development will now leverage 10 Gb/s Ethernet standards. InfiniBand will be introduced at 2.5 Gb/s, but will quickly move up to 10 Gb/s. See Figure 1.

Figure 1: 10 Gb/s Ethernet products will be available first, with 10 Gb/s Fibre Channel, iSCSI, and InfiniBand all expected to begin shipping in the late-2002 to early-2003 timeframe. Note, though, that the PCI-X host interface will limit Ethernet and Fibre Channel to sustained throughputs of 5 Gb/s to 8 Gb/s.


For general networking, there are no serious challengers to Ethernet. CPU utilization will improve dramatically with the application of TCP Offload Engine (TOE) technology, which moves protocol stack execution from the host processor to the I/O card.

The choice of interconnect technology for inter-processor communications also seems fairly clear. Designed from the ground up for IPC and destined to be a standard I/O port on many chipsets, InfiniBand is well positioned to dominate large segments of this application. Ethernet with VI (Virtual Interface)/TCP/IP in offloaded hardware at 10 Gb/s will have excellent latency and throughput characteristics and could challenge InfiniBand in certain segments of the IPC market.
