Mellanox NVIDIA CONNECTX-6 200G InfiniBand/Ethernet VPI Adapter Cards
The NVIDIA Quantum InfiniBand platform relies on ConnectX-6 InfiniBand smart adapter cards. ConnectX-6 provides up to 200Gb/s Ethernet and InfiniBand connectivity with extremely low latency, high message rates, smart offloads, and acceleration through NVIDIA In-Network Computing.
KEY FEATURES
- Up to 200Gb/s bandwidth per port
- Up to 215 million messages/sec with extremely low latency
- XTS-AES mode hardware encryption at the block level
- Compliant with FIPS (Federal Information Processing Standards)
- Support for both PAM4 and NRZ SerDes
- Excellent packet pacing with sub-nanosecond accuracy
- Support for PCIe Gen 3.0 and Gen 4.0
- Engines for accelerating in-network computation
- Open Data Center Committee (ODCC) compatible and RoHS-compliant
PRODUCT SPECIFICATION
Brand | Nvidia Mellanox
Type | 200G Adapters
Model | ConnectX-6
Network Protocol | TCP/IP, UDP/IP, iSCSI
Max Speed | HDR InfiniBand & 200 GbE
Connector Type | QSFP56
Ethernet | 200 / 100 / 50 / 40 / 25 / 10 / 1 GbE
Ports | Dual
Host Interface | PCI Express 4.0 x16
PCI Specification | PCIe 1.1, PCIe 2.0, PCIe 3.0
Storage Protocol | SRP, iSER, NFS RDMA, SMB Direct, and NVMe-oF
Data Link Protocol | GigE, 10 GigE, 25 GigE, 40 GigE, 50 GigE, 100 GigE, 200 GigE, 100 Gigabit InfiniBand, 200 Gigabit InfiniBand
Network Standard | IEEE 1588, IEEE 802.1Q, IEEE 802.1Qau, IEEE 802.1Qaz, IEEE 802.1Qbb, IEEE 802.1Qbg, IEEE 802.1p, IEEE 802.3ad, IEEE 802.3ae, IEEE 802.3ap, IEEE 802.3az, IEEE 802.3ba
System Requirement | FreeBSD, Microsoft Windows, Red Hat Enterprise Linux, CentOS
Compliance | RoHS
Form Factor | Plug-in card
Height x Depth | 6.6 x 2.7 in
ConnectX-6 supports NVIDIA In-Network Computing and In-Network Memory, offloading computation to the network to save CPU cycles and increase efficiency. The InfiniBand Trade Association (IBTA) specification defines the remote direct memory access (RDMA) technology that delivers low latency and high performance, and ConnectX-6 provides end-to-end packet-level flow control to enhance RDMA network capabilities.
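As a rough illustration of what this RDMA stack looks like from software, the following minimal C sketch uses the standard libibverbs API from rdma-core (a generic verbs interface, not something specific to this card) to list the RDMA devices on a host and print each port's link state, width, and speed. The file name and build command are only examples.

/*
 * Minimal sketch (C, libibverbs): enumerate the RDMA devices an adapter
 * exposes and print each port's link state, width, and speed.
 * Assumes the rdma-core user-space stack is installed.
 * Build with: cc rdma_probe.c -libverbs
 */
#include <stdio.h>
#include <stdint.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices = 0;
    struct ibv_device **devices = ibv_get_device_list(&num_devices);
    if (!devices || num_devices == 0) {
        fprintf(stderr, "no RDMA devices found\n");
        return 1;
    }

    for (int i = 0; i < num_devices; i++) {
        struct ibv_context *ctx = ibv_open_device(devices[i]);
        if (!ctx)
            continue;

        struct ibv_device_attr dev_attr;
        if (ibv_query_device(ctx, &dev_attr) == 0) {
            /* Ports are numbered starting from 1 in the verbs API. */
            for (uint8_t port = 1; port <= dev_attr.phys_port_cnt; port++) {
                struct ibv_port_attr port_attr;
                if (ibv_query_port(ctx, port, &port_attr) == 0)
                    printf("%s port %u: state=%s width=%u speed=%u\n",
                           ibv_get_device_name(devices[i]), port,
                           ibv_port_state_str(port_attr.state),
                           port_attr.active_width, port_attr.active_speed);
            }
        }
        ibv_close_device(ctx);
    }

    ibv_free_device_list(devices);
    return 0;
}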
Big Data and Machine Learning
Clouds, hyperscale platforms, and enterprise data centers increasingly use data analytics. To train deep neural networks (DNNs) and improve recognition and classification accuracy, machine learning requires high throughput and low latency. ConnectX-6 provides ML applications with the performance and scalability they need with its 200Gb/s throughput.
Encryption and Security
ConnectX-6 block-level encryption improves network security. AES-XTS encryption/decryption is offloaded from the CPU to ConnectX-6 hardware, saving CPU cycles and reducing utilization. Dedicated encryption keys also protect users who share the same resources. Block storage encryption in the adapter replaces self-encrypted disks, so you can choose any storage device, including byte-addressable devices and NVDIMMs that traditionally are not encrypted. Moreover, ConnectX-6 can comply with Federal Information Processing Standards (FIPS).
NVMe-oF for storage
Access times to storage media are very fast with NVMe storage devices. To provide low-latency, end-to-end access to NVMe storage devices remotely, the NVMe over Fabrics (NVMe-oF) protocol leverages RDMA connectivity. ConnectX-6's NVMe-oF target and initiator offloads improve CPU utilization and scalability.
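For a sense of how an initiator establishes an NVMe-oF connection over RDMA on Linux, here is a minimal C sketch of the kernel fabrics interface that tools such as nvme-cli use under the hood: a comma-separated connect string is written to /dev/nvme-fabrics. The transport address, port, and subsystem NQN below are placeholders, and the sketch assumes the nvme-rdma module is loaded and the program runs as root.

/*
 * Minimal sketch: initiate an NVMe-oF/RDMA connection by writing connect
 * options to /dev/nvme-fabrics (the same mechanism `nvme connect` uses).
 * The target address and NQN are placeholders; requires root and the
 * nvme-rdma kernel module.
 */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    const char *opts =
        "transport=rdma,traddr=192.0.2.10,trsvcid=4420,"
        "nqn=nqn.2016-06.io.example:subsystem1";   /* placeholder target */

    int fd = open("/dev/nvme-fabrics", O_RDWR);
    if (fd < 0) {
        perror("open /dev/nvme-fabrics");
        return 1;
    }

    if (write(fd, opts, strlen(opts)) < 0) {
        perror("connect request");
        close(fd);
        return 1;
    }

    /* On success the kernel answers with e.g. "instance=0,cntlid=1",
     * i.e. the new controller appears as /dev/nvme0. */
    char reply[128] = {0};
    if (read(fd, reply, sizeof(reply) - 1) > 0)
        printf("connected: %s\n", reply);

    close(fd);
    return 0;
}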
OTHER INTERFACES
- InfiniBand
- Enhanced Network
- Hardware-based I/O virtualization
- Control & Management
- Remote Boot
- PCI Express Interface
Adapters with smart features
The ConnectX-6 series is available in two form factors: a low-profile stand-up PCIe card and a QSFP card that follows Open Compute Project (OCP) specification 3.0. HDR stand-up PCIe adapters are available based on either ConnectX-6 or ConnectX-6 DE (ConnectX-6 Dx enhanced for HPC applications). Additionally, PCIe stand-up cards with cold plates are available for the liquid-cooled Intel Server System D50TNP platform.
NVIDIA Socket Direct
An NVIDIA Socket Direct configuration improves the performance of multi-socket servers by allowing each CPU to access the network through its own PCIe interface. By bypassing the inter-processor bus (QPI/UPI) and the other CPU, it improves latency, performance, and CPU utilization. Socket Direct also enables NVIDIA GPUDirect RDMA by linking GPUs to the CPU closest to the adapter card. Because the sockets and adapter cards are connected directly, Intel DDIO can be optimized on both sockets. Socket Direct technology brings in the remaining PCIe lanes through an auxiliary PCIe card; the two cards are installed in two PCIe x16 slots and connected with a harness. The two PCIe x16 slots can also be connected to the same CPU, in which case the main advantage is delivering 200Gb/s to servers that support only PCIe Gen3.
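A quick way to see which CPU socket an adapter port is local to, which is exactly the locality Socket Direct is designed to preserve, is to read the NUMA node that Linux reports for the interface's PCIe function in sysfs. The C sketch below assumes a Linux host; the interface name ens1f0 is a placeholder.

/*
 * Minimal sketch: report which NUMA node (CPU socket) a network interface's
 * PCIe function is attached to, so workloads can be pinned to the local
 * socket. The interface name is a placeholder; a value of -1 means the
 * platform did not report NUMA locality.
 */
#include <stdio.h>

int main(void)
{
    const char *ifname = "ens1f0";  /* placeholder interface name */
    char path[256];

    snprintf(path, sizeof(path), "/sys/class/net/%s/device/numa_node", ifname);

    FILE *f = fopen(path, "r");
    if (!f) {
        perror(path);
        return 1;
    }

    int node = -1;
    if (fscanf(f, "%d", &node) == 1)
        printf("%s is local to NUMA node %d\n", ifname, node);
    fclose(f);

    return 0;
}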
Host Management
Host management support includes NC-SI over MCTP over SMBus, MCTP over PCIe to the baseboard management controller (BMC), PLDM for Monitoring and Control (DSP0248), and PLDM for Firmware Update (DSP0267).
ORDER INFORMATION
Warranties are provided by the manufacturer and depend on the purchase date and the warranty's validity period. At Grabnpay, we make sure the products we sell are delivered on time and in their original condition. Whether it's physical damage or an operational issue, our team will help you find a solution. We help you install, configure, and manage your devices, and we'll guide you through every step of filing a warranty claim.
Model | InfiniBand & Ethernet support | Network Cages | Host Interface
MCX653105A-HDAT | 200Gb/s & lower | 1x QSFP56 | PCIe Gen 3.0/4.0 x16
MCX653106A-HDAT | 200Gb/s & lower | 2x QSFP56 | PCIe Gen 3.0/4.0 x16
REVIEWS
- The nvidia mcx75310aas-neat cards offer incredible bandwidth. Our network has never been this responsive.
- I upgraded our data center with the nvidia mcx75310aas-neat card, and the performance boost was remarkable. The setup was straightforward, and it has improved our network speed significantly.