Specialists in: Networking, High Performance Computing (HPC), InfiniBand, Low Latency, Cloud/Virtualization, Storage Acceleration, Ethernet, and Web 2.0
www.maps.com.mx
Mellanox Technologies (NASDAQ: MLNX) is a leading supplier of end-to-end InfiniBand and Ethernet interconnect solutions and services for servers and storage. Mellanox interconnect solutions increase data center efficiency by providing the highest throughput and lowest latency, delivering data faster to applications and unlocking system performance capability.
Cristofer Aguilar | Product Manager | Tel. (55) 53 87 35 00 Ext. 461 | Mobile: 044 (55) 19 30 35 19 | Email: cristofer.aguilar@maps.com.mx
Mellanox's networking solutions based on InfiniBand, Ethernet, or RoCE (RDMA over Converged Ethernet) provide the best price, performance, and power value proposition for network and storage I/O processing. Advanced data centers can use 25/40/50/100Gb/s InfiniBand, Ethernet, or RoCE to consolidate I/O onto a single wire, enabling IT managers to deliver significantly higher application service levels while reducing the CapEx and OpEx tied to I/O infrastructure. Mellanox also provides a broad set of deployment, manageability, and performance tools for its networking products across a wide range of software environments, so solutions can be fine-tuned to customer requirements.
Increased density of virtual machines on a single system within a data center is driving more I/O connectivity per physical server. Multiple 1 or 10 Gigabit Ethernet NICs along with Fibre Channel HBAs are used in a single enterprise system for data exchange. Such hardware proliferation increases I/O cost, complicates cable management, and consumes I/O slots. Networking solutions at 25GbE and above that can run multiple protocols simultaneously (RoCE, iSCSI, etc.) deliver better performance with unmatched scalability and efficiency. This helps reduce costs while supporting an increasingly virtualized and agile data center.
The efficiency of today's data centers depends heavily on fast and efficient networking and storage capabilities. Microsoft has determined that offloading network stack processing from the CPU to the network adapter is the optimal solution for storage-hungry workloads such as Microsoft SQL Server and machine learning. Offloading frees the CPU for other application processing, which improves performance and reduces the number of servers required to support a given workload, resulting in both CapEx and OpEx savings.
Whether for an on-premises data center or public cloud offerings, Microsoft's solutions combined with Mellanox networking and offload accelerators provide a solid foundation for outstanding performance and the increased efficiency needed to accommodate evolving business needs.
Mellanox high-performance, low-latency Ethernet- and InfiniBand-based server adapters and switches provide fault-tolerant and unified connectivity between clustered database servers and native storage, allowing for very high efficiency of CPU and storage capacity usage. The result is 50% less hardware cost to achieve the same level of performance.
Intelligent ConnectX-6 adapter cards, the newest additions to the Mellanox Smart Interconnect suite and supporting Co-Design and In-Network Compute, bring new acceleration engines for maximizing High Performance, Machine Learning, Web 2.0, Cloud, Data Analytics and Telecommunications platforms.
ConnectX-6 with Virtual Protocol Interconnect® supports two ports of 200Gb/s InfiniBand and Ethernet connectivity, sub-600 nanosecond latency, and 200 million messages per second, enabling the highest performance and most flexible solution for the most demanding data center applications.
Intelligent ConnectX-5 adapter cards, also part of the Mellanox Smart Interconnect suite and supporting Co-Design and In-Network Compute, introduce acceleration engines for maximizing High Performance, Web 2.0, Cloud, Data Analytics and Storage platforms.
ConnectX-5 with Virtual Protocol Interconnect® supports two ports of 100Gb/s InfiniBand and Ethernet connectivity, sub-600 nanosecond latency, and very high message rate, plus PCIe switch and NVMe over Fabric offloads, providing the highest performance and most flexible solution for the most demanding applications and markets.
ConnectX-4 adapters with Virtual Protocol Interconnect (VPI), support EDR 100Gb/s InfiniBand and 100Gb/s Ethernet connectivity, and provide the highest performance and most flexible solution for high-performance, Web 2.0, Cloud, data analytics, database, and storage platforms.
ConnectX-4 adapters provide an unmatched combination of 100Gb/s bandwidth in a single port, the lowest available latency, 150 million messages per second and application hardware offloads, addressing both today's and the next generation's compute and storage data center demands.
ConnectX-3 Pro adapter cards with Virtual Protocol Interconnect (VPI), supporting InfiniBand and Ethernet connectivity with hardware offload engines to Overlay Networks ("Tunneling"), provide the highest performing and most flexible interconnect solution for PCI Express Gen3 servers used in public and private clouds, enterprise data centers, and high performance computing.
Mellanox's industry-leading ConnectX-3 InfiniBand adapters provide the highest-performing and most flexible interconnect solution. ConnectX-3 delivers up to 56Gb/s throughput across the PCI Express 3.0 host bus, enables transaction latency of less than 1 microsecond, and can deliver more than 90 million MPI messages per second, making it the most scalable and suitable solution for current and future transaction-demanding applications. ConnectX-3 maximizes network efficiency, making it ideal for HPC or converged data centers operating a wide range of applications.
Connect-IB delivers leading performance with maximum bandwidth, low latency, and computing efficiency for performance-driven server and storage applications. Maximum bandwidth is delivered across PCI Express 3.0 x16 and two ports of FDR InfiniBand, supplying more than 100Gb/s of throughput together with consistent low latency across all CPU cores. Connect-IB also enables PCI Express 2.0 x16 systems to take full advantage of FDR, delivering at least twice the bandwidth of existing PCIe 2.0 solutions.
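(As a rough check of those figures: each FDR 4x port runs at about 56Gb/s, so two ports supply roughly 112Gb/s; a PCI Express 3.0 x16 interface provides approximately 126Gb/s of usable bandwidth, while a PCIe 2.0 x16 slot tops out near 64Gb/s, which is why full dual-port FDR rates require PCIe 3.0 x16.)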
Mellanox InfiniBand Host Channel Adapters (HCAs) provide the highest-performing interconnect solution for Enterprise Data Centers, Web 2.0, Cloud Computing, High-Performance Computing, and embedded environments. Clustered databases, parallelized applications, transactional services, and high-performance embedded I/O applications will achieve significant performance improvements, resulting in reduced completion time and lower cost per operation.
Mellanox InfiniBand adapters deliver industry-leading bandwidth with ultra-low latency and efficient computing for performance-driven server and storage clustering applications. Network protocol processing and data movement, including RDMA and Send/Receive semantics, are handled in the adapter without CPU intervention. Application acceleration with CORE-Direct™ and GPU communication acceleration brings further levels of performance improvement. Mellanox InfiniBand adapters' advanced acceleration technology enables higher cluster efficiency and large scalability to tens of thousands of nodes.
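As a concrete illustration of work being handed off to the adapter, the sketch below uses the OpenFabrics verbs API (libibverbs) to post a single RDMA WRITE work request: the host only describes the transfer, and the HCA then moves the data directly between registered buffers with no CPU copies on either side. The helper name post_rdma_write is hypothetical, and the sketch assumes the queue pair, memory registration, and exchange of the peer's buffer address and rkey have already been done out of band.

```c
#include <infiniband/verbs.h>  /* OpenFabrics verbs API (rdma-core) */
#include <stdint.h>
#include <string.h>

/* Post one RDMA WRITE: the HCA copies 'len' bytes from the local registered
 * buffer straight into the peer's buffer; neither CPU touches the payload.
 * Assumes 'qp' is already connected and 'mr' was returned by ibv_reg_mr()
 * for a region covering 'local_buf'. */
static int post_rdma_write(struct ibv_qp *qp, struct ibv_mr *mr,
                           void *local_buf, uint32_t len,
                           uint64_t remote_addr, uint32_t remote_rkey)
{
    struct ibv_sge sge = {
        .addr   = (uintptr_t)local_buf,   /* source buffer (registered) */
        .length = len,
        .lkey   = mr->lkey,               /* local key from registration */
    };

    struct ibv_send_wr wr;
    memset(&wr, 0, sizeof(wr));
    wr.wr_id               = 1;                 /* echoed back in the completion */
    wr.sg_list             = &sge;
    wr.num_sge             = 1;
    wr.opcode              = IBV_WR_RDMA_WRITE; /* one-sided write */
    wr.send_flags          = IBV_SEND_SIGNALED; /* ask for a completion entry */
    wr.wr.rdma.remote_addr = remote_addr;       /* peer buffer address */
    wr.wr.rdma.rkey        = remote_rkey;       /* peer's remote key */

    struct ibv_send_wr *bad_wr = NULL;
    return ibv_post_send(qp, &wr, &bad_wr);     /* hand the request to the HCA */
}
```

Once posted, the transfer itself is executed entirely by the adapter; the application only reaps a completion from the send completion queue (for example with ibv_poll_cq) when the write has finished.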
Mellanox adapters are a major component of the Mellanox CloudX architecture. Mellanox adapters utilizing Virtual Intelligent Queuing (Virtual-IQ) technology with SR-IOV provide dedicated adapter resources and guaranteed isolation and protection for virtual machines (VMs) within the server. I/O virtualization on Ethernet and InfiniBand gives data center managers better server utilization and LAN and SAN unification while reducing cost, power, and cable complexity.
Overlay network offloads, including encapsulation/decapsulation for VXLAN, NVGRE, and GENEVE, enable the highest bandwidth while freeing the CPU for application tasks. Mellanox adapters enable high bandwidth and a higher virtual-machine-per-server ratio. For more information on Mellanox adapter savings and the VM calculator, please refer to Mellanox CloudX.
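As an illustration of what the encapsulation offload takes off the host, the hypothetical sketch below builds the 8-byte VXLAN header (RFC 7348) that wraps each tenant frame inside an outer UDP datagram on destination port 4789; with the offload enabled, this wrapping and unwrapping is performed by the NIC rather than by software on the CPU. The struct and function names are illustrative only.

```c
#include <stdint.h>
#include <stdio.h>
#include <arpa/inet.h>   /* htonl()/ntohl() */

/* VXLAN header (RFC 7348): 8 bytes prepended to the inner Ethernet frame,
 * carried inside an outer UDP datagram (IANA destination port 4789). */
struct vxlan_hdr {
    uint32_t flags_reserved; /* "I" flag (bit 27) set means the VNI is valid */
    uint32_t vni_reserved;   /* 24-bit VXLAN Network Identifier + 8 reserved bits */
};

/* Fill a VXLAN header for a given virtual network identifier (VNI). */
static void vxlan_build_header(struct vxlan_hdr *h, uint32_t vni)
{
    h->flags_reserved = htonl(1u << 27);            /* set the I flag, zero the rest */
    h->vni_reserved   = htonl((vni & 0xFFFFFFu) << 8);
}

int main(void)
{
    struct vxlan_hdr h;
    vxlan_build_header(&h, 5001);                   /* example tenant network 5001 */
    printf("VXLAN header words: 0x%08x 0x%08x\n",
           ntohl(h.flags_reserved), ntohl(h.vni_reserved));
    return 0;
}
```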
A consolidated compute and storage network achieves significant cost-performance advantages over multi-fabric networks. Standard block and file access protocols leveraging InfiniBand RDMA result in high-performance storage access. Mellanox adapters support SCSI, iSCSI, NFS and FCoIB protocols. Mellanox adapters also provide advanced storage offloads such as T10/DIF and RAID Offload.
All Mellanox adapters are supported by a full suite of drivers for Microsoft Windows, Linux distributions, VMware, and Citrix XenServer. The adapters support OpenFabrics-based RDMA protocols and software, and the stateless offloads are fully interoperable with standard TCP/UDP/IP stacks. The adapters are compatible with configuration and management tools from OEMs and operating system vendors.
VPI flexibility enables any standard networking, clustering, storage, and management protocol to seamlessly operate over any converged network leveraging a consolidated software stack. Each port can operate on InfiniBand or Ethernet, as well as RDMA over Converged Ethernet (RoCE v1 and RoCE v2). VPI simplifies I/O system design and makes it easier for IT managers to deploy infrastructure that meets the challenges of a dynamic data center.
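A quick way to see VPI from software on a host with the OpenFabrics stack (rdma-core/libibverbs) installed is to query each adapter port and report whether its link layer is currently InfiniBand or Ethernet (RoCE). The following is a minimal sketch, not a Mellanox-supplied tool; it uses only standard verbs calls.

```c
#include <infiniband/verbs.h>
#include <stdint.h>
#include <stdio.h>

/* List RDMA devices and report each port's link layer.
 * Build with: cc vpi_ports.c -o vpi_ports -libverbs */
int main(void)
{
    int num_devices = 0;
    struct ibv_device **devs = ibv_get_device_list(&num_devices);
    if (!devs) {
        perror("ibv_get_device_list");
        return 1;
    }

    for (int i = 0; i < num_devices; i++) {
        struct ibv_context *ctx = ibv_open_device(devs[i]);
        if (!ctx)
            continue;

        struct ibv_device_attr dev_attr;
        if (ibv_query_device(ctx, &dev_attr) == 0) {
            for (uint8_t port = 1; port <= dev_attr.phys_port_cnt; port++) {
                struct ibv_port_attr port_attr;
                if (ibv_query_port(ctx, port, &port_attr))
                    continue;
                const char *ll = "unspecified";
                if (port_attr.link_layer == IBV_LINK_LAYER_INFINIBAND)
                    ll = "InfiniBand";
                else if (port_attr.link_layer == IBV_LINK_LAYER_ETHERNET)
                    ll = "Ethernet (RoCE)";
                printf("%s port %u: link layer %s\n",
                       ibv_get_device_name(devs[i]), port, ll);
            }
        }
        ibv_close_device(ctx);
    }

    ibv_free_device_list(devs);
    return 0;
}
```

On a VPI adapter the same program reports InfiniBand or Ethernet per port, depending on how each port has been configured.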
ConnectX-6 adapters are part of Mellanox's full HDR 200Gb/s InfiniBand end-to-end portfolio for data centers and high-performance computing systems, which includes switches and cables. Mellanox's Quantum family of HDR InfiniBand switches and Unified Fabric Management software incorporate advanced tools that simplify network management and installation, and provide the capabilities needed for the highest scalability and future growth. Mellanox's line of HDR copper and active optical cables ensures the highest interconnect performance. With a Mellanox end-to-end solution, IT managers can be assured of the highest-performance, most efficient network fabric.
Products are available in both Ethernet and InfiniBand protocols and SFP & QSFP form factors.
LinkX has solutions for any data center speed and reach, from 0.5m to 10km, providing lower latency, lower power, and better ROI than many competitors.