Mellanox NVIDIA InfiniBand 400G Quantum-2 QM9700 Series Smart Switch
As high-performance computing (HPC) and artificial intelligence (AI) applications grow more complex, high-speed networking becomes increasingly vital. NVIDIA Quantum-2 stands out as the leading switch platform in both power and density, delivering NDR 400 gigabits per second (Gb/s) InfiniBand throughput to empower AI developers and scientific researchers as they tackle the world's most formidable challenges.
KEY FEATURES
- 64 ports of 400Gb/s bandwidth each
- Full transport offload
- RDMA, GPUDirect RDMA, GPUDirect Storage
- Programmable In-Network Computing engines
- MPI All-to-All and MPI tag-matching hardware acceleration
- NVIDIA SHARPv3 support
- Advanced adaptive routing, congestion control, QoS, and self-healing networking
TECHNICAL SPECIFICATION
| Specification | Detail |
| --- | --- |
| Brand | NVIDIA |
| Type | InfiniBand smart switches |
| Model | Quantum-2 QM9700 Series |
| Speed | 400Gb/s per port |
| CPU | x86 Coffee Lake i3 |
| System Memory | Single 8GB DDR4 SO-DIMM, 2,666 MT/s |
| Storage | M.2 SATA SSD, 16GB, 2242 form factor |
| Switch Radix | 51.2Tb/s aggregate data throughput across 64 non-blocking 400Gb/s ports |
| Management Ports | 1x USB 3.0, 1x RJ45, 1x USB for I2C channel, 1x RJ45 (UART) |
| Cooling System | Front-to-back or back-to-front airflow; hot-swappable fans (6+1) |
| Software | MLNX-OS |
| Rack Mount | 1U rack mount |
NVIDIA QUANTUM-2 INFINIBAND
With NVIDIA Quantum-2, the seventh generation of the NVIDIA InfiniBand architecture, AI developers and scientists can tackle the most challenging problems in network communication. With advanced acceleration engines, remote direct memory access (RDMA), and 400Gb/s speeds that double the data throughput of the previous generation, NVIDIA Quantum-2 powers the world's leading HPC, AI, and supercomputing data centers.
NVIDIA Quantum-2 InfiniBand breaks records with extraordinary generational advances. It doubles bandwidth per port compared with its predecessor, setting a new standard in data transmission, and triples the switch radix, a significant stride forward in network capacity. With 4X MPI performance, it pushes high-performance computing to unprecedented levels of efficiency, and its AI acceleration power surges to a 32X improvement over the previous generation. In a four-switch-tier DragonFly+ network configuration, it accommodates over one million 400Gb/s nodes, a 6.5X gain in scalability. The technology also cuts data center power and space requirements by 7%, reflecting NVIDIA's commitment to sustainability and efficiency in high-performance networking.
SWITCH INTERFACES
NVIDIA Quantum-2-based QM9700 and QM9790 switch systems deliver an unprecedented 64 ports of 400Gb/s InfiniBand in a 1U standard chassis. With a capacity of 66.5 billion packets per second (BPPS), a single switch aggregates 51.2 terabits per second (Tb/s). Supporting the latest 400Gb/s interconnect technology, NVIDIA Quantum-2 is fast, very low-latency, and scalable, and it features NVIDIA Scalable Hierarchical Aggregation and Reduction Protocol (SHARP), RDMA, and adaptive routing for state-of-the-art performance.
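As a quick sanity check, the 51.2Tb/s figure follows directly from the port count and per-port speed, counting both directions of each full-duplex link; here's a minimal Python sketch of that arithmetic:

```python
# Sanity check of the aggregate figures quoted for the QM9700/QM9790.
ports = 64              # NDR ports per 1U chassis
port_speed_gbps = 400   # Gb/s per port, per direction

one_way_tbps = ports * port_speed_gbps / 1000   # 25.6 Tb/s per direction
aggregate_tbps = 2 * one_way_tbps               # 51.2 Tb/s, both directions

print(f"{one_way_tbps} Tb/s per direction, {aggregate_tbps} Tb/s aggregate")
```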
NVIDIA InfiniBand offers unique features such as self-healing networking, QoS, enhanced VL mapping, and congestion control for top application throughput. The QM9700 and QM9790 400Gb/s InfiniBand switches are versatile, supporting various topologies including Fat Tree, SlimFly, DragonFly+, and more. They are also backward compatible and supported by a broad software ecosystem.
QM9700 400Gb/s InfiniBand switches extend NVIDIA In-Network Computing technologies and introduce SHARPv3, the third generation of NVIDIA SHARP. Compared with the previous generation, SHARPv3 delivers 32X more AI acceleration power for small and large data aggregations through the network. It accelerates complex application computations as data moves through the network, reducing data traversal and improving application runtime.
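SHARPv3 offloads collective operations such as allreduce into the switch fabric; the collective looks the same to application code. Below is a minimal mpi4py sketch of the allreduce pattern SHARP accelerates. Enabling SHARP offload is a runtime/fabric configuration matter, and the environment variable named in the comment is an assumption based on NVIDIA HPC-X conventions, not something stated on this page.

```python
# Minimal MPI allreduce with mpi4py -- the collective pattern that
# NVIDIA SHARP offloads into the switch ASICs. Run with e.g.:
#   mpirun -np 4 python allreduce_demo.py
# (With NVIDIA HPC-X, SHARP offload is typically toggled through the
#  runtime, e.g. an HCOLL_ENABLE_SHARP variable -- an assumption, not
#  taken from this page.)
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Each rank contributes a vector; the reduction sums them element-wise
# (in the fabric with SHARP, on the hosts without) and returns the
# result to every rank.
send = np.full(1024, rank, dtype=np.float64)
recv = np.empty_like(send)
comm.Allreduce(send, recv, op=MPI.SUM)

if rank == 0:
    expected = comm.Get_size() * (comm.Get_size() - 1) // 2  # 0+1+...+(n-1)
    assert np.all(recv == expected)
    print(f"allreduce OK across {comm.Get_size()} ranks")
```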
NDR 400Gb/s InfiniBand connectivity allows the use of connectorized transceivers, active copper cables (ACCs), and direct-attach copper cables (DACs) to build a topology of choice according to your needs. Using MPO fiber cables, complex infrastructures can be constructed while simplifying maintenance, installation, and upgrades.
NVIDIA Quantum-2 InfiniBand offers a range of connector plugs and cages to support high-speed data transfer. Here's a breakdown of the available options and their interconnect choices:
Connector Plugs/Cages Options:
1. OSFP and QSFP112: Quantum-2 InfiniBand supports two connector plug types: the 8-channel octal small form-factor pluggable (OSFP) and the 4-channel quad small form-factor pluggable 112G (QSFP112).
2. Twin-Port OSFP Transceivers: These transceivers house two separate 400Gb/s NDR transceiver engines behind two 4-channel MPO/APC optical connectors. This configuration enables various connectivity options (summarized in the sketch after the interconnect choices below):
   - The two NDR connections can be paired for an 800Gb/s aggregate link.
   - The two NDR connections can function as separate 400Gb/s links to two switches or adapter/DPU endpoints.
   - Each 4-channel NDR port can be split into two 2-channel 200Gb/s NDR200 ports using fiber splitter cables, linking one switch cage to four adapter/DPU endpoints.
3. Copper DACs and ACCs: Twin-port copper DACs and ACCs are available as straight cables or as 1:2 and 1:4 splitter cables, compatible with single-port OSFP or QSFP112 endpoints on adapters and DPUs.
4. Compatibility with Switches and Devices: Quantum-2 switches are designed to accept twin-port, finned-top OSFP devices. DGX H100 InfiniBand supports twin-port, flat-top 800Gb/s transceivers and ACCs, while HCAs and DPUs take flat-top OSFP or QSFP112 devices.
Interconnect Choices:
1. Quantum-2 switches to HCAs and DPUs: The following interconnect options are available:
   - Twin-port 800Gb/s DACs, ACCs, and transceivers on the switch side can connect to:
     - Two 400Gb/s NDR links.
     - Four 200Gb/s NDR200 links.
     - Single-port OSFP or QSFP112 connections.
2. Connectivity to previous InfiniBand generations: Quantum-2 switches support twin-port OSFP connectors with 2x HDR DAC and AOC splitter cables terminated in QSFP56 connectors, enabling:
   - Two 200Gb/s HDR links.
   - Two 100Gb/s EDR links.
   - Four 100Gb/s HDR100 links to Quantum switches.
   - Compatibility with ConnectX-6 HCAs and BlueField-2 DPUs.
3. DGX H100 Hopper GPU systems: Flat-top, twin-port 800Gb/s transceivers and ACC cables can connect to Quantum-2 switches, creating 400Gb/s NDR GPU compute fabrics with minimal cabling.
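To summarize the cage and breakout options above, here's a minimal Python sketch; the labels are illustrative shorthand for the configurations described in this section, and the only check performed is that each split conserves the 800Gb/s of a twin-port OSFP cage:

```python
# Breakout options for a twin-port OSFP switch cage, as described above.
# Each entry maps a split mode to (links, speed per link in Gb/s); total
# cage bandwidth stays at 800 Gb/s however the cage is split.
OSFP_CAGE_GBPS = 800  # two 400Gb/s NDR engines per twin-port OSFP cage

breakouts = {
    "1 x 800G aggregate link":      (1, 800),
    "2 x 400G NDR links":           (2, 400),
    "4 x 200G NDR200 (splitters)":  (4, 200),
}

for name, (links, gbps) in breakouts.items():
    assert links * gbps == OSFP_CAGE_GBPS, name
    print(f"{name}: {links} link(s) x {gbps} Gb/s")
```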
NETWORK TOPOLOGY
With NVIDIA port-split technology, each QM9700 and QM9790 port can operate as two 200Gb/s ports, reducing network design and topology costs. This makes them the highest-density top-of-rack switches on the market, supporting 128 ports of 200Gb/s, and enables a two-level Fat Tree topology that reduces power, latency, and space requirements for small to medium-sized deployments.
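For a sense of scale, the host capacity of a non-blocking two-level Fat Tree follows from the switch radix: each leaf dedicates half its ports to hosts and half to spine uplinks, so radix-p switches support p²/2 host ports. A minimal sketch using the 128-port 200Gb/s split mode (standard Fat Tree arithmetic, not a figure from this page):

```python
# Host capacity of a non-blocking two-level Fat Tree of radix-p switches:
# each leaf uses p/2 ports for hosts and p/2 uplinks; each spine's p
# ports serve p leaves, so capacity = p leaves * (p/2) hosts = p^2 / 2.
def two_level_fat_tree_hosts(radix: int) -> int:
    max_leaves = radix           # one spine port per leaf
    hosts_per_leaf = radix // 2  # remaining leaf ports face hosts
    return max_leaves * hosts_per_leaf

# QM9700/QM9790 in port-split mode: 128 ports of 200Gb/s per switch.
print(two_level_fat_tree_hosts(128), "hosts at 200Gb/s")  # -> 8192
```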
POWER ISOLATION
The NVIDIA Quantum-2 InfiniBand platform offers proactive monitoring and congestion management that virtually eliminates performance jitter and assures predictable performance, as if the application were running on a dedicated system.
CLOUD NATIVE SUPERCOMPUTE
Cloud-native supercomputing leverages the NVIDIA BlueField architecture together with NVIDIA Quantum-2 InfiniBand networking to provide high-speed, low-latency supercomputing. On-demand high-performance computing (HPC) and AI services, along with user management and isolation, are easy and secure with this solution.
UFM CYBER AI
The NVIDIA Unified Fabric Manager (UFM) Cyber-AI platform provides enhanced, real-time network telemetry. Leveraging artificial intelligence and advanced analytics, it empowers IT managers to identify operational irregularities and forecast potential network failures, leading to stronger security, increased data center uptime, and lower overall operational costs.
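As a rough illustration of how such telemetry might be consumed programmatically, the sketch below polls a REST-style endpoint and flags ports with rising error counters. The URL and JSON field names are hypothetical placeholders, not the documented UFM API; consult the UFM documentation for the real interface.

```python
# Hypothetical sketch: poll a UFM-style telemetry endpoint and flag
# ports whose error counters are climbing. The endpoint path and JSON
# field names below are placeholders, not the real UFM API.
import time
import requests

UFM_URL = "https://ufm.example.com/telemetry/ports"  # placeholder URL

def poll_port_errors(session: requests.Session) -> dict[str, int]:
    resp = session.get(UFM_URL, timeout=10)
    resp.raise_for_status()
    # Assumed response shape: [{"port": "...", "symbol_errors": N}, ...]
    return {p["port"]: p["symbol_errors"] for p in resp.json()}

def watch(interval_s: int = 60) -> None:
    session = requests.Session()
    previous: dict[str, int] = {}
    while True:
        current = poll_port_errors(session)
        for port, errors in current.items():
            if errors > previous.get(port, 0):
                print(f"WARNING: {port} symbol errors rising: {errors}")
        previous = current
        time.sleep(interval_s)
```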
ROUTER CAPABILITIES
NVIDIA InfiniBand switches with optional router capabilities support scaling InfiniBand clusters out to extremely large numbers of nodes, meeting performance and reliability requirements for research, simulation, AI, and cloud data processing far beyond what the previous generation could achieve.
ENHANCED MANAGEMENT
QM9700 switches include an on-board subnet manager that enables simple, out-of-the-box setup for up to 2,000 nodes. It runs NVIDIA MLNX-OS and offers full chassis management via the command-line interface (CLI), web interface (WebUI), Simple Network Management Protocol (SNMP), or JavaScript Object Notation (JSON).
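Because the managed QM9700 speaks standard SNMP, basic identity data can be pulled with any SNMP client. A minimal sketch with pysnmp reading the standard MIB-II sysDescr object (the hostname and community string are placeholders):

```python
# Query the standard MIB-II sysDescr (OID 1.3.6.1.2.1.1.1.0) from a
# managed switch over SNMP. Hostname and community are placeholders.
from pysnmp.hlapi import (
    getCmd, SnmpEngine, CommunityData, UdpTransportTarget,
    ContextData, ObjectType, ObjectIdentity,
)

error_indication, error_status, error_index, var_binds = next(
    getCmd(
        SnmpEngine(),
        CommunityData("public", mpModel=1),             # SNMPv2c
        UdpTransportTarget(("qm9700.example.com", 161)),
        ContextData(),
        ObjectType(ObjectIdentity("1.3.6.1.2.1.1.1.0")),  # sysDescr.0
    )
)

if error_indication:
    print(error_indication)
elif error_status:
    print(f"{error_status.prettyPrint()} at index {error_index}")
else:
    for oid, value in var_binds:
        print(f"{oid.prettyPrint()} = {value.prettyPrint()}")
```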
By using the advanced NVIDIA Unified Fabric Manager (UFM) feature sets on the externally managed QM9790 switch, data center operators can provision, manage, troubleshoot, and maintain modern data center fabrics more efficiently, reducing overall opex and maximizing utilization.
POWER & PHYSICAL SPECIFICATIONS

| Specification | Detail |
| --- | --- |
| Power Supply | 1+1 redundant, hot-swappable power supplies |
| Max. Power | 1,084W with passive cables |
| Working Conditions | Operating temperature: 0ºC to 40ºC; non-operating temperature: -40ºC to 70ºC |
| EMC Emission | CE, VCCI, FCC, RCM, and ICES |
| Safety Certifications | RoHS, CE, CB, CU, and cTUVus |
| Warranty | 1 year |
| Dimensions (H x W x D) | 1.7 x 17 x 26 inches |
| Weight | 14.5 kg |
ORDER INFORMATION
Warranties are provided by the manufacturer and are based on purchase date and validity. At Grabnpay, we make sure the products we sell are delivered on time and in their original condition. Whether the issue is physical damage or an operational fault, our team will help you find a solution: we assist with installing, configuring, and managing your devices, and we will help you file a warranty claim and guide you every step of the way.
| Order Part | Description |
| --- | --- |
| MQM9700-NS2F (Forward) | 64 ports of 400Gb/s, 32 OSFP connectors, managed, power-to-connector (P2C) airflow |
| MQM9700-NS2R (Reverse) | 64 ports of 400Gb/s, 32 OSFP connectors, managed, connector-to-power (C2P) airflow |
| MQM9790-NS2F | 64 ports of 400Gb/s, 32 OSFP connectors, unmanaged, P2C airflow |
| MQM9790-NS2R | 64 ports of 400Gb/s, 32 OSFP connectors, unmanaged, C2P airflow |