InfiniBand

InfiniBand (IB) is a computer networking communications standard used in high-performance computing that features very high throughput and very low latency. It is used for data interconnect both among and within computers. InfiniBand is also used as either a direct or switched interconnect between servers and storage systems, as well as an interconnect between storage systems. It is designed to be scalable and uses a switched fabric network topology. From 2014 until about 2016, it was the most commonly used interconnect in the TOP500 list of supercomputers.[1]

InfiniBand Trade Association
Formation: 1999
Type: Industry trade group
Purpose: Promoting InfiniBand
Headquarters: Beaverton, Oregon, U.S.
Website: infinibandta.org

Mellanox (acquired by Nvidia) manufactures InfiniBand host bus adapters and network switches, which are used by large computer system and database vendors in their product lines.[2] As a computer cluster interconnect, IB competes with Ethernet, Fibre Channel, and Intel Omni-Path. The technology is promoted by the InfiniBand Trade Association.

History

InfiniBand originated in 1999 from the merger of two competing designs: Future I/O and Next Generation I/O (NGIO). NGIO was led by Intel, with a specification released in 1998,[3] and was joined by Sun Microsystems and Dell. Future I/O was backed by Compaq, IBM, and Hewlett-Packard.[4] This led to the formation of the InfiniBand Trade Association (IBTA), which included both sets of hardware vendors as well as software vendors such as Microsoft. At the time it was thought that some of the more powerful computers were approaching the interconnect bottleneck of the PCI bus, in spite of upgrades like PCI-X.[5] Version 1.0 of the InfiniBand Architecture Specification was released in 2000. Initially, the IBTA's vision was for IB to be simultaneously a replacement for PCI in I/O, for Ethernet in the machine room, for cluster interconnects, and for Fibre Channel. The IBTA also envisaged decomposing server hardware on an IB fabric.

Mellanox had been founded in 1999 to develop NGIO technology, and by 2001 it shipped an InfiniBand product line called InfiniBridge running at 10 Gbit/s.[6] Following the burst of the dot-com bubble there was hesitation in the industry to invest in such a far-reaching technology jump.[7] In 2002, Intel announced that instead of shipping IB integrated circuits ("chips") it would focus on developing PCI Express, and Microsoft discontinued IB development in favor of extending Ethernet. Sun Microsystems and Hitachi continued to support IB.[8]

In 2003, the System X supercomputer built at Virginia Tech used InfiniBand in what was estimated to be the third largest computer in the world at the time.[9] The OpenIB Alliance (later renamed the OpenFabrics Alliance) was founded in 2004 to develop an open set of software for the Linux kernel. In February 2005, that support was accepted into version 2.6.11 of the Linux kernel.[10][11] In November 2005, storage devices using InfiniBand were finally released by vendors such as Engenio.[12]

Of the top 500 supercomputers in 2009, Gigabit Ethernet was the internal interconnect technology in 259 installations, compared with 181 using InfiniBand.[13] In 2010, market leaders Mellanox and Voltaire merged, leaving just one other IB vendor, QLogic, primarily a Fibre Channel vendor.[14] At the 2011 International Supercomputing Conference, links running at about 56 gigabits per second (known as FDR, see below) were announced and demonstrated by connecting booths in the trade show.[15] In 2012, Intel acquired QLogic's InfiniBand technology, leaving only one independent supplier.[16]

By 2014, InfiniBand was the most popular internal connection technology for supercomputers, although within two years, 10 Gigabit Ethernet started displacing it.[1] In 2016, it was reported that Oracle Corporation (an investor in Mellanox) might engineer its own InfiniBand hardware.[2] In 2019 Nvidia acquired Mellanox, the last independent supplier of InfiniBand products.[17]

Specification

Specifications are published by the InfiniBand Trade Association.

Performance

The original names for the speeds were single data rate (SDR), double data rate (DDR), and quad data rate (QDR), as given below.[12] Subsequently, other three-letter acronyms were added for even higher data rates.[18]

Characteristics

| Characteristic                             | SDR        | DDR    | QDR    | FDR10   | FDR         | EDR      | HDR      | NDR      | XDR    | GDR    |
|--------------------------------------------|------------|--------|--------|---------|-------------|----------|----------|----------|--------|--------|
| Signaling rate (Gbit/s)                    | 2.5        | 5      | 10     | 10.3125 | 14.0625[18] | 25.78125 | 50       | 100      | 200    | 400    |
| Effective throughput, 1 link (Gbit/s)[19]  | 2          | 4      | 8      | 10      | 13.64       | 25       | 50       | 100      | 200    | 400    |
| Effective throughput, 4 links (Gbit/s)     | 8          | 16     | 32     | 40      | 54.54       | 100      | 200      | 400      | 800    | 1600   |
| Effective throughput, 8 links (Gbit/s)     | 16         | 32     | 64     | 80      | 109.08      | 200      | 400      | 800      | 1600   | 3200   |
| Effective throughput, 12 links (Gbit/s)    | 24         | 48     | 96     | 120     | 163.64      | 300      | 600      | 1200     | 2400   | 4800   |
| Encoding (bits)                            | 8b/10b[20] | 8b/10b | 8b/10b | 64b/66b | 64b/66b     | 64b/66b  | 64b/66b  | 64b/66b  | t.b.d. | t.b.d. |
| Modulation                                 | NRZ        | NRZ    | NRZ    | NRZ     | NRZ         | NRZ      | PAM4     | PAM4     | t.b.d. | t.b.d. |
| Adapter latency (µs)[21]                   | 5          | 2.5    | 1.3    | 0.7     | 0.7         | 0.5      | <0.6[22] | t.b.d.   | t.b.d. | t.b.d. |
| Year[23]                                   | 2001, 2003 | 2005   | 2007   | 2011    | 2011        | 2014[24] | 2018[24] | 2022[24] | t.b.d. | t.b.d. |

Each link is duplex. Links can be aggregated: most systems use a connector carrying 4 links/lanes (QSFP). HDR often makes use of 2x links (also known as HDR100: a 100 Gbit/s link using 2 lanes of HDR, while still using a QSFP connector). 8x is called for by NDR switch ports, which use OSFP (Octal Small Form Factor Pluggable) connectors.
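As a worked check of the throughput figures above: the effective rate is the signaling rate reduced by the encoding overhead, then multiplied by the number of aggregated lanes. For an FDR link, which signals at 14.0625 Gbit/s per lane with 64b/66b encoding,

\[
14.0625 \times \tfrac{64}{66} \approx 13.64\ \text{Gbit/s per lane},
\qquad
4 \times 13.64 \approx 54.54\ \text{Gbit/s for a 4x link},
\]

matching the 1-link and 4-link entries in the FDR column of the table.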

InfiniBand provides remote direct memory access (RDMA) capabilities for low CPU overhead.

Topology

InfiniBand uses a switched fabric topology, as opposed to the shared medium used by early Ethernet. All transmissions begin or end at a channel adapter. Each processor contains a host channel adapter (HCA) and each peripheral has a target channel adapter (TCA). These adapters can also exchange information for security or quality of service (QoS).

Messages

InfiniBand transmits data in packets of up to 4 KB that are taken together to form a message. A message can be one of the following (see the sketch after this list):

  • a remote direct memory access read or write
  • a channel send or receive
  • a transaction-based operation (that can be reversed)
  • a multicast transmission
  • an atomic operation
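These message types surface in software as work-request opcodes of the verbs interface described under Software interfaces below. The following is a minimal sketch in C, assuming the libibverbs development header from OFED is installed; the array exists only to illustrate the mapping:

    #include <infiniband/verbs.h>

    /* Illustrative mapping from the message types above to libibverbs
       work-request opcodes. Multicast has no opcode of its own: a queue
       pair joins a multicast group with ibv_attach_mcast() and then
       receives ordinary sends addressed to the group. */
    static const enum ibv_wr_opcode message_type_opcodes[] = {
        IBV_WR_RDMA_READ,            /* remote direct memory access read  */
        IBV_WR_RDMA_WRITE,           /* remote direct memory access write */
        IBV_WR_SEND,                 /* channel send; the receiver posts a
                                        matching buffer via ibv_post_recv */
        IBV_WR_ATOMIC_CMP_AND_SWP,   /* atomic compare-and-swap           */
        IBV_WR_ATOMIC_FETCH_AND_ADD, /* atomic fetch-and-add              */
    };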

Physical interconnection

InfiniBand switch with CX4/SFF-8470 connectors

In addition to a board form factor connection, InfiniBand can use both active and passive copper cable (up to 10 meters) and optical fiber cable (up to 10 km).[25] QSFP connectors are used.

The InfiniBand Trade Association also specified the CXP connector system for speeds up to 120 Gbit/s over copper, active optical cables, and optical transceivers using parallel multi-mode fiber cables with 24-fiber MPO connectors.

Software interfaces

Operating system support for Mellanox hardware is available for Solaris, FreeBSD,[26][27] Red Hat Enterprise Linux, SUSE Linux Enterprise Server (SLES), Windows, HP-UX, VMware ESX,[28] and AIX.[29]

InfiniBand has no specific standard application programming interface (API). The standard only lists a set of verbs such as ibv_open_device or ibv_post_send, which are abstract representations of functions or methods that must exist. The syntax of these functions is left to the vendors. Sometimes for reference this is called the verbs API. The de facto standard software is developed by the OpenFabrics Alliance and called the Open Fabrics Enterprise Distribution (OFED). It is released under a choice of two licenses, GPL2 or BSD, for Linux and FreeBSD, and as Mellanox OFED for Windows (product names: WinOF / WinOF-2; attributed as the host controller driver for matching specific ConnectX 3 to 5 devices)[30] under a BSD license. It has been adopted by most of the InfiniBand vendors for Linux, FreeBSD, and Microsoft Windows. IBM refers to a software library called libibverbs for its AIX operating system, as well as "AIX InfiniBand verbs".[31] The Linux kernel support was integrated in 2005 into kernel version 2.6.11.[32]
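As an illustration of the verbs style, here is a minimal sketch in C that enumerates the available devices and opens the first one through libibverbs (the user-space library distributed with OFED); error handling is abbreviated:

    #include <stdio.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int num = 0;
        /* Enumerate the InfiniBand devices visible to the verbs library. */
        struct ibv_device **list = ibv_get_device_list(&num);
        if (!list) {
            fprintf(stderr, "ibv_get_device_list failed\n");
            return 1;
        }
        if (num == 0) {
            fprintf(stderr, "no InfiniBand devices found\n");
            ibv_free_device_list(list);
            return 1;
        }

        /* Open the first device; subsequent verbs calls use this context. */
        struct ibv_context *ctx = ibv_open_device(list[0]);
        if (!ctx) {
            fprintf(stderr, "ibv_open_device failed\n");
            ibv_free_device_list(list);
            return 1;
        }
        printf("opened %s\n", ibv_get_device_name(list[0]));

        /* A real application would continue with ibv_alloc_pd (protection
           domain), ibv_reg_mr (memory registration), ibv_create_cq and
           ibv_create_qp (completion queue and queue pair), and finally
           post work requests with ibv_post_send / ibv_post_recv. */
        ibv_close_device(ctx);
        ibv_free_device_list(list);
        return 0;
    }

Such a program is linked against the verbs library (for example, cc example.c -libverbs, where the file name is arbitrary).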

Ethernet over InfiniBand

Ethernet over InfiniBand, abbreviated to EoIB, is an Ethernet implementation over the InfiniBand protocol and connector technology. EoIB enables multiple Ethernet bandwidths, which vary with the InfiniBand (IB) version.[33] Ethernet's implementation of the Internet Protocol Suite, usually referred to as TCP/IP, differs in some details from the direct InfiniBand protocol in IP over IB (IPoIB).

Ethernet over InfiniBand performance
| Type | Lanes | Bandwidth (Gbit/s) | Compatible Ethernet type(s) | Compatible Ethernet quantity       |
|------|-------|--------------------|-----------------------------|------------------------------------|
| SDR  | 1     | 2.5                | GbE to 2.5 GbE              | 2 × GbE to 1 × 2.5 GbE             |
| SDR  | 4     | 10                 | GbE to 10 GbE               | 10 × GbE to 1 × 10 GbE             |
| SDR  | 8     | 20                 | GbE to 10 GbE               | 20 × GbE to 2 × 10 GbE             |
| SDR  | 12    | 30                 | GbE to 25 GbE               | 30 × GbE to 1 × 25 GbE + 1 × 5 GbE |
| DDR  | 1     | 5                  | GbE to 5 GbE                | 5 × GbE to 1 × 5 GbE               |
| DDR  | 4     | 20                 | GbE to 10 GbE               | 20 × GbE to 2 × 10 GbE             |
| DDR  | 8     | 40                 | GbE to 40 GbE               | 40 × GbE to 1 × 40 GbE             |
| DDR  | 12    | 60                 | GbE to 50 GbE               | 60 × GbE to 1 × 50 GbE + 1 × 10 GbE |
| QDR  | 1     | 10                 | GbE to 10 GbE               | 10 × GbE to 1 × 10 GbE             |
| QDR  | 4     | 40                 | GbE to 40 GbE               | 40 × GbE to 1 × 40 GbE             |
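To read the table: the bandwidth column is the raw signaling rate of the aggregated IB link, which is then matched against standard Ethernet rates. A 4-lane SDR link, for example, provides

\[
4 \times 2.5\ \text{Gbit/s} = 10\ \text{Gbit/s},
\]

enough to carry ten 1 GbE links or one 10 GbE link, as the corresponding SDR row shows.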

References

  1. "Highlights– June 2016". Top500.Org. June 2016. Retrieved September 26, 2021. InfiniBand technology is now found on 205 systems, down from 235 systems, and is now the second most-used internal system interconnect technology. Gigabit Ethernet has risen to 218 systems up from 182 systems, in large part thanks to 176 systems now using 10G interfaces.
  2. Timothy Prickett Morgan (February 23, 2016). "Oracle Engineers Its Own InfiniBand Interconnects". The Next Platform. Retrieved September 26, 2021.
  3. Scott Bekker (November 11, 1998). "Intel Introduces Next Generation I/O for Computing Servers". Redmond Channel Partner. Retrieved September 28, 2021.
  4. Will Wade (August 31, 1999). "Warring NGIO and Future I/O groups to merge". EE Times. Retrieved September 26, 2021.
  5. Pentakalos, Odysseas. "An Introduction to the InfiniBand Architecture". O'Reilly. Retrieved 28 July 2014.
  6. "Timeline". Mellanox Technologies. Retrieved September 26, 2021.
  7. Kim, Ted. "Brief History of InfiniBand: Hype to Pragmatism". Oracle. Archived from the original on 8 August 2014. Retrieved September 28, 2021.
  8. Computerwire (December 2, 2002). "Sun confirms commitment to InfiniBand". The Register. Retrieved September 26, 2021.
  9. "Virginia Tech Builds 10 TeraFlop Computer". R&D World. November 30, 2003. Retrieved September 28, 2021.
  10. Sean Michael Kerner (February 24, 2005). "Linux Kernel 2.6.11 Supports InfiniBand". Internet News. Retrieved September 28, 2021.
  11. OpenIB Alliance (January 21, 2005). "OpenIB Alliance Achieves Acceptance By Kernel.org". Press release. Retrieved September 28, 2021.
  12. Ann Silverthorn (January 12, 2006), "Is InfiniBand poised for a comeback?", Infostor, 10 (2), retrieved September 28, 2021
  13. Lawson, Stephen (November 16, 2009). "Two rival supercomputers duke it out for top spot". Computerworld. Retrieved September 29, 2021.
  14. Raffo, Dave. "Largest InfiniBand vendors merge; eye converged networks". Archived from the original on 1 July 2017. Retrieved 29 July 2014.
  15. Mikael Ricknäs (June 20, 2011). "Mellanox Demos Souped-Up Version of InfiniBand". CIO. Archived from the original on April 6, 2012. Retrieved September 30, 2021.
  16. Michael Feldman (January 23, 2012). "Intel Snaps Up InfiniBand Technology, Product Line from QLogic". HPCwire. Retrieved September 29, 2021.
  17. "Nvidia to Acquire Mellanox for $6.9 Billion". Press release. March 11, 2019. Retrieved September 26, 2021.
  18. "FDR InfiniBand Fact Sheet". InfiniBand Trade Association. November 11, 2021. Retrieved September 30, 2021.
  19. "InfiniBand Roadmap: IBTA - InfiniBand Trade Association". Archived from the original on 2011-09-29. Retrieved 2009-10-27.
  20. "InfiniBand Types and Speeds".
  23. Mellanox (2014). Presentation, HPC Advisory Council Swiss Workshop (PDF). http://www.hpcadvisorycouncil.com/events/2014/swiss-workshop/presos/Day_1/1_Mellanox.pdf
  24. "ConnectX-6 VPI Card" (PDF product brief). Mellanox. 2020. https://www.mellanox.com/files/doc-2020/pb-connectx-6-vpi-card.pdf
  23. Panda, Dhabaleswar K.; Sayantan Sur (2011). "Network Speed Acceleration with IB and HSE" (PDF). Designing Cloud and Grid Computing Systems with InfiniBand and High-Speed Ethernet. Newport Beach, CA, USA: CCGrid 2011. p. 23. Retrieved 13 September 2014.
  24. "InfiniBand Roadmap - Advancing InfiniBand". InfiniBand Trade Association.
  25. "Specification FAQ". ITA. Archived from the original on 24 November 2016. Retrieved 30 July 2014.
  26. "Mellanox OFED for FreeBSD". Mellanox. Retrieved 19 September 2018.
  27. Mellanox Technologies (3 December 2015). "FreeBSD Kernel Interfaces Manual, mlx5en". FreeBSD Man Pages. FreeBSD. Retrieved 19 September 2018.
  28. "InfiniBand Cards - Overview". Mellanox. Retrieved 30 July 2014.
  29. "Implementing InfiniBand on IBM System p (IBM Redbook SG24-7351-00)" (PDF).
  30. "Mellanox OFED for Windows - WinOF / WinOF-2". Mellanox.
  31. "Verbs API". IBM AIX 7.1 documentation. 2020. Retrieved September 26, 2021.
  32. Dotan Barak (March 11, 2014). "Verbs programming tutorial" (PDF). OpenSHEM, 2014. Mellanox. Retrieved September 26, 2021.
  33. "10 Advantages of InfiniBand". NADDOD. Retrieved January 28, 2023.