Lenovo Networking

List of Latest Lenovo Networking Models

Abstract
The Flex System IB6131 InfiniBand Switch is designed to offer the performance you need to support clustered databases, parallel processing, transactional services, and high-performance embedded I/O applications, helping to reduce task completion time and lower the cost per operation. The switch supports 40 Gbps QDR InfiniBand and can be upgraded to 56 Gbps FDR InfiniBand.

Introduction
  • The Flex System IB6131 InfiniBand Switch is designed to offer the performance you need to support clustered databases, parallel processing, transactional services, and high-performance embedded I/O applications, helping to reduce task completion time and lower the cost per operation. The switch supports 40 Gbps QDR InfiniBand and can be upgraded to 56 Gbps FDR InfiniBand.
  • The Flex System IB6131 InfiniBand Switch can be installed in the Flex System chassis, which provides a high bandwidth, low latency fabric for Enterprise Data Centers (EDC), high-performance computing (HPC), and embedded environments. When used in conjunction with IB6132 InfiniBand QDR and FDR dual-port mezzanine I/O cards, these switches will achieve significant performance improvements resulting in reduced completion time and lower cost per operation.

Benefits:
  • Ultra high performance with full bisectional bandwidth at both Fourteen Data Rate (FDR) and Quad Data Rate (QDR) speeds
  • Up to 18 uplink ports for 14 servers, allowing high-speed throughput with zero oversubscription
  • Suited for clients running InfiniBand infrastructure in High Performance Computing and Financial Services
  • When operating at FDR speed, less than 170 nanoseconds measured latency node to node — nearly half of the typical QDR InfiniBand latency
  • Forward Error Correction–resilient
  • Low power consumption
  • Capability to scale to larger node counts to create a low latency clustered solution and reduce packet hops
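
As a back-of-the-envelope illustration of the QDR and FDR figures above, the following Python sketch computes the usable data rate of a 4x InfiniBand port from standard lane rates and line encodings, and checks the port ratio behind the zero-oversubscription claim. The lane rates and encoding efficiencies are generic InfiniBand values, not figures taken from this guide.

    # Back-of-the-envelope InfiniBand math for a 4x port (QDR vs. FDR).
    # Lane rates and encodings are generic InfiniBand figures, not values
    # taken from this product guide.
    LANES = 4  # a standard 4x InfiniBand port

    def usable_gbps(lane_rate_gbps, encoding_efficiency):
        """Usable data rate of a 4x port after line-encoding overhead."""
        return lane_rate_gbps * LANES * encoding_efficiency

    qdr = usable_gbps(10.0, 8 / 10)      # QDR: 10 Gbps/lane, 8b/10b encoding
    fdr = usable_gbps(14.0625, 64 / 66)  # FDR: ~14 Gbps/lane, 64b/66b encoding
    print(f"QDR 4x: 40 Gbps signaled, ~{qdr:.0f} Gbps usable")
    print(f"FDR 4x: ~56 Gbps signaled, ~{fdr:.0f} Gbps usable")

    # Zero oversubscription: uplink capacity at least matches the 14 server-facing ports.
    internal_ports, uplink_ports = 14, 18
    print("non-blocking" if uplink_ports >= internal_ports else "oversubscribed")
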
Abstract
The Emulex 16Gb Fibre Channel Adapter family for Lenovo Flex System enables the highest FC speed access for Flex System compute nodes to an external storage area network (SAN). The adapters described here are the Emulex LPm16002B-L and LPm16004B-L Mezz Adapters and the Flex System FC5052 and FC5054 Adapters. These adapters are based on the proven Emulex Fibre Channel stack, and work with 16 Gb Flex System Fibre Channel switch modules.

Change History
Changes in the November 26 update:
Updated the list of supported operating systems

Introduction
  • The Emulex 16Gb Fibre Channel Adapter family for Lenovo Flex System enables the highest FC speed access for Flex System compute nodes to an external storage area network (SAN). These adapters are based on the proven Emulex Fibre Channel stack, and work with 16 Gb Flex System Fibre Channel switch modules.
  • As the only 4-port HBAs for Flex System and ThinkSystem respectively, the FC5054 and LPm16004B-L provide unmatched scalability and redundancy. In addition, these two 4-port adapters have two separate ASICs with no bridge chip, so data flows directly to an independent PCIe bus for high availability without a single point of failure.

Did you know?
You can deploy faster and manage less when you combine Host Bus Adapters (HBAs) and Virtual Fabric Adapters (VFAs) that are developed by Emulex. Lenovo HBAs and VFAs use the same installation and configuration process, streamlining the effort to get your system up and running, and saving you valuable time. They also use the same Fibre Channel drivers, reducing the time to qualify and manage storage connectivity. And with the Emulex OneCommand Manager, you can manage Lenovo HBAs and VFAs that are developed by Emulex through the data center from a single console.

I/O module support
  • The adapters support the I/O modules that are listed in the following table. One or two compatible switches must be installed in the corresponding I/O bays in the chassis. Installing two switches means that all ports of the adapter are enabled.
  • The FC5022 switches include a base number of port licenses, 12 or 24, depending on the part number that is ordered. Switch port licenses for the FC5022 switches can be used for internal or external ports. Each two-port adapter requires one internal switch port for each of the two switches that are installed. Each four-port adapter requires two internal switch ports for each of the two switches that are installed. Additional ports might be needed depending on your configuration.
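
The following Python sketch works through the licensing arithmetic described above for a hypothetical chassis configuration; the adapter counts, external port count, and the 12-port base license are invented inputs for illustration, not a sizing recommendation.

    # Hypothetical FC5022 port-license sizing sketch (per switch).
    # Rules from the text above: each two-port adapter consumes 1 internal
    # port on each switch; each four-port adapter consumes 2. Licenses are
    # shared between internal and external ports.

    def ports_needed_per_switch(two_port_adapters, four_port_adapters, external_ports):
        internal = two_port_adapters * 1 + four_port_adapters * 2
        return internal + external_ports

    licensed = 12  # base license on the part number ordered (12 or 24)
    needed = ports_needed_per_switch(two_port_adapters=10,
                                     four_port_adapters=2,
                                     external_ports=4)
    print(f"Ports needed per switch: {needed}, licensed: {licensed}")
    if needed > licensed:
        print("Additional port licenses would be required.")
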
Abstract
  • The Flex System FC5172 2-port 16Gb FC Adapter enables high-speed access for Flex System compute nodes to connect to a Fibre Channel storage area network (SAN). This adapter is based on the proven QLogic 16Gb ASIC design and works with the 8 Gb and 16 Gb Flex System Fibre Channel switches and pass-thru modules.
  • This product guide provides essential presales information to understand the QLogic 16Gb FC adapter and its key features, specifications, and compatibility. This guide is intended for technical specialists, sales specialists, sales engineers, IT architects, and other IT professionals who want to learn more about the FC5172 adapter and consider its use in IT solutions.

Change History
Changes in the November 26 update:
Updated the list of supported operating systems

Introduction
The Flex System FC5172 2-port 16Gb FC Adapter enables high-speed access for Flex System compute nodes to connect to a Fibre Channel storage area network (SAN). This adapter is based on the proven QLogic 16Gb ASIC design and works with the 8 Gb and 16 Gb Flex System Fibre Channel switches and pass-thru modules.

Did you know?
This QLogic adapter supports nearly twice the throughput and 2.5 times the I/O operations per second (IOPS) per port compared to 8 Gb adapters. The adapter is ideal for high bandwidth and I/O-intensive applications, such as media streaming, backup/recovery, data warehousing, OLTP, Microsoft Exchange Server, and server virtualization. Through rigorous testing carried out using the ServerProven program, you can maintain a high degree of confidence that your storage subsystem is compatible and functions reliably when using this Flex System adapter.
Abstract
The network architecture on the Flex System platform has been specifically designed to address network challenges, giving you a very scalable way to integrate, optimize, and automate your data center. The Flex System™ FC3052 2-port 8Gb Fibre Channel Adapter enables high-speed access for Flex System compute nodes to an external storage area network (SAN). This adapter is based on the proven Emulex Fibre Channel stack, and works with any of the 8 Gb or 16 Gb Flex System Fibre Channel switch and pass-thru modules.

Change History
Changes in the November 26 update:
Updated the list of supported operating systems

Introduction
The network architecture on the Flex System platform has been specifically designed to address network challenges, giving you a scalable way to integrate, optimize, and automate your data center. The Flex System™ FC3052 2-port 8Gb Fibre Channel Adapter enables high-speed access for Flex System compute nodes to an external storage area network (SAN). This adapter is based on the proven Emulex Fibre Channel stack, and works with any of the 8 Gb or 16 Gb Flex System Fibre Channel switch and pass-thru modules.

Features
  • The Flex System FC3052 2-port 8Gb FC Adapter has the following features and specifications:
  • Based on the Emulex "Saturn" 8Gb Fibre Channel I/O Controller (IOC) chip
  • Multifunction PCIe 2.0 device with two independent FC ports
  • Auto-negotiation between 2-Gbps, 4-Gbps, or 8-Gbps FC link attachments
  • Complies with the PCIe base and CEM 2.0 specifications
  • Enablement of high-speed and dual-port connection to a Fibre Channel SAN
  • Comprehensive virtualization capabilities with support for N_Port ID Virtualization (NPIV) and Virtual Fabric
  • Simplified installation and configuration using common HBA drivers
  • Common driver model that eases management and enables upgrades independent of HBA firmware

Warranty
The adapter has a 1-year limited warranty. When installed in a Flex System Compute Node, the adapter assumes the system’s base warranty and any Lenovo warranty service upgrade.
Abstract
The Flex System FC3172 2-port 8Gb FC Adapter enables high-speed access for Flex System compute nodes to connect to a Fibre Channel storage area network (SAN). This adapter is based on the proven QLogic 2532 8Gb ASIC design and works with the 8 Gb and 16 Gb Flex System Fibre Channel switches and pass-thru modules.

Change History
Changes in the November 26 update:
Updated the list of supported operating systems

Introduction
The Flex System FC3172 2-port 8Gb FC Adapter enables high-speed access for Flex System compute nodes to connect to a Fibre Channel storage area network (SAN). This adapter is based on the proven QLogic 2532 8Gb ASIC design and works with the 8 Gb and 16 Gb Flex System Fibre Channel switches and pass-thru modules.

Features
  • The Flex System FC3172 2-port 8Gb FC Adapter has the following features and specifications:
  • QLogic ISP2532 controller
  • PCI Express 2.0 x4 host interface
  • Bandwidth: 8 Gb per second maximum at half-duplex and 16 Gb per second maximum at full-duplex per port
  • 8/4/2 Gbps auto-negotiation
  • Support for FCP SCSI initiator and target operation
  • Support for NPIV
  • Support for full-duplex operation
  • Support for Fibre Channel protocol SCSI (FCP-SCSI) and Fibre Channel Internet protocol (FCP-IP)
  • Support for point-to-point fabric connection (F-port fabric login)
  • Support for Fibre Channel Arbitrated Loop (FCAL) public loop profile: Fibre Loop Port (FL_Port) login
  • Support for Fibre Channel services class 2 and 3
  • Configuration and boot support in UEFI
  • APIs supported: SNIA HBA API V2, SMI-S, FDMI
  • Support for Fabric Manager
  • Power usage: 3.7 W typical
  • RoHS 6 compliant
Abstract
The Flex System  EN2024 4-port 1Gb Ethernet Adapter is a quad-port Gigabit Ethernet network adapter. When it is combined with the Flex System EN2092 1Gb Ethernet Scalable Switch, clients can leverage an end-to-end 1 Gb solution on the Flex System Enterprise Chassis. The EN2024 adapter is based on the Broadcom 5718 controller and offers a PCIe 2.0 x1 host interface with MSI/MSI-X. It also supports I/O virtualization features like VMware NetQueue and Microsoft VMQ technologies.

Introduction
The Flex System  EN2024 4-port 1Gb Ethernet Adapter is a quad-port Gigabit Ethernet network adapter. When it is combined with the Flex System EN2092 1Gb Ethernet Scalable Switch, clients can leverage an end-to-end 1 Gb solution on the Flex System Enterprise Chassis. The EN2024 adapter is based on the Broadcom 5718 controller and offers a PCIe 2.0 x1 host interface with MSI/MSI-X. It also supports I/O virtualization features like VMware NetQueue and Microsoft VMQ technologies.

Features
  • The Flex System EN2024 4-port 1Gb Ethernet Adapter has these features:
  • Dual Broadcom BCM5718 ASICs
  • Quad-port Gigabit 1000BASE-X interface
  • Two PCI Express 2.0 x1 host interfaces, one per ASIC
  • Full-duplex (FDX) capability, enabling simultaneous transmission and reception of data on the Ethernet network
  • MSI and MSI-X capabilities, up to 17 MSI-X vectors
  • I/O virtualization support for VMware NetQueue, and Microsoft VMQ
  • Seventeen receive queues and 16 transmit queues
  • Seventeen MSI-X vectors supporting per-queue interrupt to host
  • Function Level Reset (FLR)
  • ECC error detection and correction on internal SRAM
  • TCP, IP, and UDP checksum offload
  • Large Send offload, TCP segmentation offload
  • Receive-side scaling
  • Virtual LANs (VLANs): IEEE 802.1q VLAN tagging
  • Jumbo frames (9 KB)
  • IEEE 802.3x flow control
  • Statistic gathering (SNMP MIB II, Ethernet-like MIB [IEEE 802.3x, Clause 30])
  • Comprehensive diagnostic and configuration software suite
  • ACPI 1.1a-compliant: multiple power modes
  • Wake-on-LAN (WOL) support
  • Preboot Execution Environment (PXE) support
  • RoHS-compliant
Abstract
The Flex System  EN4132 2-port 10Gb Ethernet Adapter delivers high-bandwidth and industry-leading Ethernet connectivity for performance-driven server applications in enterprise data centers, high-performance computing (HPC), and embedded environments. Clustered databases, web infrastructure, and high frequency trading are just a few applications that achieve significant throughput and latency improvements, resulting in faster access, real-time response, and more users per server. Based on Mellanox ConnectX-3 EN technology, this adapter improves network performance by increasing available bandwidth while decreasing the associated transport load on the processor.

Change History
Changes in the August 31 update:
  • Connectivity with the ThinkSystem NE2552E Flex Switch is now supported - Supported I/O modules section
  • Updated the list of supported operating systems - Operating system support section

Introduction
The Flex System EN4132 2-port 10Gb Ethernet Adapter delivers high-bandwidth and industry-leading Ethernet connectivity for performance-driven server applications in enterprise data centers, high-performance computing (HPC), and embedded environments. Clustered databases, web infrastructure, and high frequency trading are just a few applications that achieve significant throughput and latency improvements, resulting in faster access, real-time response, and more users per server. Based on Mellanox ConnectX-3 EN technology, this adapter improves network performance by increasing available bandwidth while decreasing the associated transport load on the processor.

Features
  • The Flex System EN4132 2-port 10Gb Ethernet Adapter has the following features:
  • RDMA over Ethernet
  • ConnectX-3 provides efficient RDMA services, delivering low latency and high performance to bandwidth- and latency-sensitive applications.
  • Sockets acceleration
  • Applications using TCP/UDP/IP transport can achieve industry-leading throughput over 10 GbE. The hardware-based stateless offload and flow steering engines in ConnectX-3 reduce the processor overhead of IP packet transport, freeing more processor cycles to work on the application. Sockets acceleration software further increases performance for latency-sensitive applications.
Description:
As transaction volumes rise, your existing compute and storage clustering interconnects may have trouble keeping up. Yet you know that response time matters—you have to keep up with the competition, and new regulations demand real-time risk analysis. You need to be able to scale your network and storage capabilities to meet the demands of your applications.

Meet Demands
The Lenovo Flex System EN6131 40 Gigabit Ethernet (GbE) Switch, in conjunction with the Flex System EN6132 40 GbE Adapter, is designed to offer the performance you need to support clustered databases, parallel processing, transactional services, and high-performance embedded I/O applications, reducing task completion time and lowering cost per operation. The switch offers 14 internal 40 Gb ports and up to 18 external 40 Gb QSFP ports, enabling a non-blocking network design. It supports all Layer 2 functions so servers can communicate within the chassis without going to a top-of-rack switch. This feature helps improve performance and latency.
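
To make the non-blocking claim concrete, here is a simplified Python comparison of server-facing and uplink capacity using the port counts quoted above; it ignores intra-chassis (east-west) traffic, which never leaves the switch.

    # Simplified oversubscription check for the EN6131 40 GbE switch,
    # using the port counts quoted above (14 internal, up to 18 external).
    PORT_GBPS = 40
    internal_ports, external_ports = 14, 18

    downlink = internal_ports * PORT_GBPS  # toward the compute nodes
    uplink = external_ports * PORT_GBPS    # toward the upstream network

    print(f"Downlink {downlink} Gbps vs. uplink {uplink} Gbps "
          f"-> oversubscription {downlink / uplink:.2f}:1")
    print("Non-blocking" if uplink >= downlink else "Oversubscribed")
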

Efficient Computing
This switch and adapter are designed for low latency, high bandwidth, and computing efficiency for performance-driven server and storage clustering applications. With this combination, your organization can achieve efficient computing by offloading protocol processing and data movement overhead from the CPU through RDMA and Send/Receive semantics, leaving more processor power for the application.
As clients look for higher performance, power usage has emerged as a key concern in data centers. The Flex System 40 Gb solution offers the highest bandwidth without adding any significant power overhead.

More Workloads
Clients are also looking for higher utilization of their existing hardware by leveraging virtualization and cloud computing models. As workload density per server increases, it needs to be balanced by appropriate I/O throughput. The 40 Gb solution offered by Flex System can deploy more workloads per server without running into I/O bottlenecks. In case of failures or server maintenance, clients can also move their virtual machines much faster using 40 Gb interconnects within the chassis.
Abstract
The CN4054S 4-port and CN4052S 2-port 10Gb Virtual Fabric Adapters are VFA5.2 adapters that are supported on ThinkSystem and Flex System compute nodes.
This product guide provides essential presales information to understand the CN4054S and CN4052S adapters and their key features, specifications, and compatibility. This guide is intended for technical specialists, sales specialists, sales engineers, IT architects, and other IT professionals who want to learn more about the adapters and consider their use in a Flex System solution.

Change History
Changes in the November 8 & 9 update:
  • Updated the server support table - Server support section
  • Updated the list of supported operating systems - Operating system support section
  • Clarified that INTD1 ports are not available on Lenovo switches currently available - I/O module support section

Introduction
The CN4054S 4-port and CN4052S 2-port 10Gb Virtual Fabric Adapters are VFA5.2 adapters that are supported on ThinkSystem and Flex System compute nodes.
The CN4052S can be divided into up to eight virtual NIC (vNIC) devices per port (for a total of 16 vNICs), and the CN4054S can be divided into up to four vNICs per port (for a total of 16 vNICs). Each vNIC can have flexible bandwidth allocation. These adapters also feature RDMA over Converged Ethernet (RoCE) capability, and support iSCSI and FCoE protocols, either as standard or with the addition of a Features on Demand (FoD) license upgrade.

Features
  • The CN4054S 4-port 10Gb Virtual Fabric Adapter and CN4052S 2-port 10Gb Virtual Fabric Adapter, which are part of the VFA5.2 family of System x and Flex System adapters, reduce cost by enabling a converged infrastructure and improve performance with powerful offload engines. The adapters have the following features and benefits:
  • Multiprotocol support for 10 GbE
  • The adapters offer two (CN4052S) or four (CN4054S) 10 GbE connections and are cost- and performance-optimized for integrated and converged infrastructures. They offer a “triple play” of converged data, storage, and low latency RDMA networking on a common Ethernet fabric. The adapters provide customers with a flexible storage protocol option for running heterogeneous workloads on their increasingly converged infrastructures.

Virtual NIC emulation
  • The Emulex VFA5.2 family supports three NIC virtualization modes as standard: Virtual Fabric mode (vNIC1), switch independent mode (vNIC2), and Unified Fabric Port (UFP). With NIC virtualization, each of the physical ports on the adapter can be logically configured to emulate up to four or eight virtual NIC (vNIC) functions with user-definable bandwidth settings. With UFP or vNIC2, the CN4052S supports eight vNICs per port. Both adapters support four vNICs per port with vNIC1 and vNIC2. Additionally, each physical port can simultaneously support a storage protocol (FCoE or iSCSI).
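
As an illustration of the per-port bandwidth planning described above, the Python sketch below validates a hypothetical vNIC allocation; the vNIC names, shares, and limits shown are invented for the example, and this is not the adapter's actual configuration interface.

    # Hypothetical vNIC bandwidth plan for one 10 Gb port on a VFA5.2 adapter
    # (up to 8 vNICs per port on the CN4052S with UFP or vNIC2, per the text above).
    # Names and percentages are illustrative only.
    PORT_SPEED_GBPS = 10
    MAX_VNICS_PER_PORT = 8

    vnic_plan = {          # vNIC name -> minimum bandwidth share (%)
        "mgmt": 10,
        "vmotion": 20,
        "storage-fcoe": 40,
        "vm-data": 30,
    }

    assert len(vnic_plan) <= MAX_VNICS_PER_PORT, "too many vNICs on one port"
    assert sum(vnic_plan.values()) <= 100, "minimum shares exceed the physical port"

    for name, share in vnic_plan.items():
        print(f"{name}: guaranteed {share}% = {PORT_SPEED_GBPS * share / 100:.1f} Gbps")
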
Abstract
  • The Flex System CN4022 2-port 10Gb Converged Adapter is a dual-port 10 Gigabit Ethernet network adapter that supports Ethernet, FCoE, and iSCSI protocols as standard. The EN4172 2-port 10Gb Ethernet Adapter is a similar adapter that supports Ethernet protocols. Both adapters also support virtual network interface controller (vNIC) capability, which helps clients to reduce cost and complexity. These adapters are based on the Broadcom BCM57840 controller by QLogic.

Change History
Changes in the September 14 update:
Updated the list of supported operating systems - Operating system support section

Introduction
The Flex System CN4022 2-port 10Gb Converged Adapter is a dual-port 10 Gigabit Ethernet network adapter that supports Ethernet, FCoE, and iSCSI protocols as standard. The EN4172 2-port 10Gb Ethernet Adapter is a similar adapter that supports Ethernet protocols. Both adapters also support virtual network interface controller (vNIC) capability, which helps clients to reduce cost and complexity. These adapters are based on the Broadcom BCM57840 controller.

Features
  • The CN4022 and EN4172 adapters have these features:
  • One Broadcom BCM57840 ASIC
  • Connection to either 1 Gb or 10 Gb data center infrastructure (1 Gb and 10 Gb auto-negotiation).
  • PCIe 2.0 x8 (CN4022) or PCIe 3.0 x8 (EN4172) host interface
  • Full line-rate performance
  • Support for 10 Gb Ethernet (CN4022 and EN4172)
  • Support for FCoE and iSCSI (CN4022 only)
  • IBM Flex System Manager support (Tier 2 support only, no alerting)
Abstract
The Flex System CN4058S 8-port 10Gb Virtual Fabric Adapter and CN4052 2-port 10Gb Virtual Fabric Adapter are part of the VFA5 family of System x and Flex System adapters. These adapters support up to four virtual NIC (vNIC) devices per port (for a total of 32 for the CN4058S and 8 for the CN4052), where each physical 10 GbE port can be divided into four virtual ports with flexible bandwidth allocation. These adapters also feature RDMA over Converged Ethernet (RoCE) capability, and support iSCSI and FCoE protocols with the addition of a Features on Demand (FoD) license upgrade.

Introduction
  • The Flex System CN4058S 8-port 10Gb Virtual Fabric Adapter and CN4052 2-port 10Gb Virtual Fabric Adapter are part of the VFA5 family of System x and Flex System adapters. These adapters support up to four virtual NIC (vNIC) devices per port (for a total of 32 for the CN4058S and 8 for the CN4052), where each physical 10 GbE port can be divided into four virtual ports with flexible bandwidth allocation. These adapters also feature RDMA over Converged Ethernet (RoCE) capability, and support iSCSI and FCoE protocols with the addition of a Features on Demand (FoD) license upgrade.
  • With hardware protocol offloads for TCP/IP and FCoE, the CN4058S and CN4052 provide maximum bandwidth with minimum use of CPU resources and enable more VMs per server, which provides greater cost savings to optimize return on investment. With up to eight ports, the CN4058S in particular makes full use of the capabilities of all supported Ethernet switches in the Flex System portfolio.

Specifications
  • The Flex System CN4058S 8-port 10Gb Virtual Fabric Adapter features the following specifications:
  • Eight-port 10 Gb Ethernet adapter
  • Dual ASIC controller using the Emulex XE104 design
  • Two PCI Express 3.0 x8 host interfaces (8 GT/s; see the bandwidth sketch after these lists)
  • MSI-X support
  • Fabric Manager support
  • Power consumption: 25 W maximum

The CN4052 2-port 10Gb Virtual Fabric Adapter has these specifications:
  • Two-port 10 Gb Ethernet adapter
  • Single-ASIC controller using the Emulex XE104 design
  • One PCI Express 3.0 x8 host interface (8 GT/s)
  • MSI-X support
  • Fabric Manager support
  • Power consumption: 25 W maximum
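
As a sanity check on the host-interface figures above (referenced in the first specification list), this Python sketch compares usable PCIe 3.0 x8 bandwidth per ASIC with the aggregate Ethernet bandwidth behind it, assuming the CN4058S splits its eight ports evenly across its two ASICs; the PCIe lane rate and 128b/130b encoding are standard PCIe 3.0 values rather than numbers from this guide.

    # Rough PCIe-vs-Ethernet bandwidth comparison for the CN4058S and CN4052.
    # PCIe 3.0: 8 GT/s per lane with 128b/130b encoding (standard figures).
    # Assumes 4 of the CN4058S's 8 ports sit behind each of its two ASICs.
    PCIE3_GTPS = 8.0
    PCIE3_ENCODING = 128 / 130
    LANES = 8

    pcie_gbps = PCIE3_GTPS * PCIE3_ENCODING * LANES  # ~63 Gbps usable, per direction

    for name, ports_behind_asic in (("CN4058S, per ASIC", 4), ("CN4052", 2)):
        ethernet_gbps = ports_behind_asic * 10
        print(f"{name}: PCIe ~{pcie_gbps:.0f} Gbps vs. Ethernet {ethernet_gbps} Gbps "
              f"(headroom ~{pcie_gbps - ethernet_gbps:.0f} Gbps)")
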
Abstract
  • The Flex System IB6132 2-port FDR InfiniBand Adapter and Mellanox ConnectX-3 Mezz FDR 2-Port InfiniBand Adapter deliver low latency and high bandwidth for performance-driven server clustering applications in enterprise data centers, high-performance computing (HPC), and embedded environments. The adapters are designed to operate at InfiniBand FDR speeds (56 Gbps or 14 Gbps per lane).
  • This product guide provides essential presales information to understand the Mellanox adapters and their key features, specifications and compatibility. This guide is intended for technical specialists, sales specialists, sales engineers, IT architects, and other IT professionals who want to learn more about the adapters and consider their use in IT solutions.

Introduction
  • The Flex System IB6132 2-port FDR InfiniBand Adapter and Mellanox ConnectX-3 Mezz FDR 2-Port InfiniBand Adapter deliver low latency and high bandwidth for performance-driven server clustering applications in enterprise data centers, high-performance computing (HPC), and embedded environments. The adapters are designed to operate at InfiniBand FDR speeds (56 Gbps, or 14 Gbps per lane).
  • Clustered databases, parallelized applications, transactional services, and high-performance embedded I/O applications potentially achieve significant performance improvements, helping to reduce completion time and lower the cost per operation. These Mellanox ConnectX-3 adapters simplify network deployment by consolidating clustering, communications, and management I/O, and help provide enhanced performance in virtualized server environments.

Performance

  • Mellanox ConnectX-3 technology provides a high level of throughput performance for all network environments by removing I/O bottlenecks in mainstream servers that limit application performance. Servers can achieve up to 56 Gbps transmit and receive bandwidth. Hardware-based InfiniBand transport and IP over InfiniBand (IPoIB) stateless offload engines handle the segmentation, reassembly, and checksum calculations that otherwise burden the host processor.
  • RDMA over the InfiniBand fabric further accelerates application run time while reducing CPU utilization. RDMA allows the very high-volume, transaction-intensive applications typical of HPC and financial market firms, as well as other industries where speed of data delivery is paramount, to take full advantage of the fabric. With the ConnectX-3-based adapter, highly compute-intensive tasks running on hundreds or thousands of multiprocessor nodes, such as climate research, molecular modeling, and physical simulations, can share data and synchronize faster, resulting in shorter run times. High-frequency transaction applications are able to access trading information more quickly, making sure that the trading servers are able to respond first to any new market data and market inefficiencies, while the higher throughput enables higher volume trading, maximizing liquidity and profitability.
  • In data mining or web crawl applications, RDMA provides the needed boost in performance to search faster by solving the network latency bottleneck associated with I/O cards and the corresponding transport technology in the cloud. Various other applications that benefit from RDMA with ConnectX-3 include Web 2.0 (Content Delivery Network), business intelligence, database transactions, and various Cloud computing applications. Mellanox ConnectX-3's low power consumption provides clients with high bandwidth and low latency at the lowest cost of ownership.
Introduction
  • The ThinkSystem QLogic QML2692 Mezz 16Gb 2-Port Fibre Channel Adapter is an Enhanced Generation 5 (Gen 5) 16 Gb FC adapter for ThinkSystem blade servers. The adapter, based on Cavium technology, offers industry-leading native FC performance with extremely low CPU usage, thanks to full hardware offloads. Enhanced Gen 5 FC technology provides advanced storage networking features capable of supporting the most demanding virtualized and private cloud environments, while fully leveraging the capabilities of high-performance 16 Gb FC (16GFC) and all-flash arrays (AFAs).

Key features
The ThinkSystem QLogic QML2692 Mezz 16Gb 2-Port Fibre Channel Adapter has the following features:
  • Maximum performance with up to 1.3 million input/output operations per second (IOPS) to support larger server virtualization deployments and scalable cloud initiatives, and performance to match new multicore processors, SSDs/flash storage, and faster server host bus architectures.
  • Independent function, transmit and receive buffers, an on-chip CPU, DMA channels, and a firmware image for each port enable complete port-level isolation, prevent errors and firmware crashes from propagating across all ports, and provide predictable and scalable performance across all ports.
  • Support for forward error correction (FEC) to enhance reliability of transmission and thereby performance.
  • Industry-standard class-specific control (CS_CTL)-based frame prioritization Quality of Service (QoS) helps alleviate network congestion by prioritizing traffic for time-sensitive mission critical workloads for optimized performance.
  • T10-PI data integrity with high-performance offload provides end-to-end data corruption protection (see the sketch after this list).
  • Support for Message Signaled Interrupts eXtended (MSI-X) improves host utilization and enhances application performance.
  • Fabric-assigned port worldwide name (FA-WWN) and fabric-based boot LUN discovery (F-BLD) pre-provisioning services allow servers to be quickly deployed, replaced, and moved across the SAN; the creation of zones, LUNs, and other services can be completed before the servers arrive on site.
  • Using the Brocade ClearLink diagnostic port (D_Port) available on the Brocade Gen 5 switches, administrators can quickly run automated diagnostic tests to assess the health of links and fabric components.
  • Read diagnostic parameters (RDP) feature provides detailed port, media, and optics diagnostics to easily discover and diagnose link-related errors and degrading conditions on any N_Port-to-F_Port link.
  • Single-pane-of-glass management across generations of QLogic FC adapters with QLogic QConvergeConsole (QCC).
  • Deployment flexibility and integration with third-party management tools, including the VMware vCenter and Brocade Network Advisor.
  • Support for 16 Gb, 8 Gb, and 4 Gb FC devices.
  • Comprehensive virtualization capabilities with support for N_Port ID Virtualization (NPIV).
  • A common driver model allows a single driver to support all QLogic HBAs on a given OS.
  • Exceptional performance per watt and price/performance ratios.
  • Backward compatibility with existing 4Gb and 8Gb FC infrastructure, leveraging existing SAN investments.
  • Allow application of SAN best practices, tools, and processes with virtual server deployments.
  • Ensure data availability and data integrity.
  • Boot from SAN capability reduces the system management costs and increases uptime.
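
T10-PI, called out in the feature list above, appends an 8-byte protection information field to each logical block: a 2-byte guard CRC, a 2-byte application tag, and a 4-byte reference tag. The Python sketch below builds that field in software purely for illustration; on this adapter the work is a hardware offload, and the helper names here are invented.

    import struct

    # Illustrative software construction of the 8-byte T10 Protection
    # Information field (guard CRC, application tag, reference tag).
    # For explanation only; the adapter performs this in hardware.

    def crc16_t10dif(data):
        """Bitwise CRC-16 with the T10-DIF polynomial 0x8BB7 (init 0, no reflection)."""
        crc = 0
        for byte in data:
            crc ^= byte << 8
            for _ in range(8):
                crc = ((crc << 1) ^ 0x8BB7) if crc & 0x8000 else (crc << 1)
                crc &= 0xFFFF
        return crc

    def protection_info(block, lba, app_tag=0):
        """Guard tag, app tag, and Type 1-style reference tag (low 32 bits of the LBA)."""
        return struct.pack(">HHI", crc16_t10dif(block), app_tag, lba & 0xFFFFFFFF)

    sector = bytes(512)  # one 512-byte logical block of zeros
    print(protection_info(sector, lba=1234).hex())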