AAEON’s MAXER-2100 Inference Server Integrates Both Intel CPU and NVIDIA GPU Technologies

AAEON, a leading provider of advanced AI solutions, has released the inaugural offering of its AI Inference Server product line, the MAXER-2100. The MAXER-2100 is a 2U rackmount AI inference server powered by the Intel® Core™ i9-13900 Processor, designed to meet high-performance computing needs.

Image Credit: AAEON

The MAXER-2100 also supports both 12th and 13th Generation Intel® Core™ LGA 1700 socket-type CPUs, up to 125W, and features an integrated NVIDIA® GeForce RTX 4080 SUPER GPU. While the product ships with the NVIDIA® GeForce RTX 4080 SUPER by default, it is also compatible with, and an NVIDIA-Certified Edge System for, both the NVIDIA L4 Tensor Core and NVIDIA RTX 6000 Ada GPUs.

Because the MAXER-2100 pairs a high-performance CPU with an industry-leading GPU, a key feature highlighted by AAEON at the product's launch is its capacity to execute complex AI algorithms on large datasets, process multiple high-definition video streams simultaneously, and use machine learning to refine large language models (LLMs) and inference models.

To support low-latency operation in these areas, the MAXER-2100 offers up to 128GB of DDR5 system memory through dual-channel SODIMM slots. For storage, it includes an M.2 2280 M-Key for NVMe and two hot-swappable 2.5” SATA SSD bays with RAID support. The system also provides extensive functional expansion options, including one PCIe [x16] slot, an M.2 2230 E-Key for Wi-Fi, and an M.2 3042/3052 B-Key with a micro SIM slot.

For peripheral connectivity, the server offers a total of four RJ-45 ports, two running at 2.5GbE and two at 1GbE, along with four USB 3.2 Gen 2 ports running at 10Gbps. For industrial communication, the MAXER-2100 provides RS-232/422/485 via a DB-9 port. Multiple display interfaces are available through HDMI 2.0, DP 1.4, and VGA ports, which leverage the graphics capability of the server's NVIDIA® GeForce RTX 4080 SUPER GPU.

Despite the combined thermal output of its 1000W power supply, 125W CPU, integrated NVIDIA® GeForce RTX 4080 SUPER GPU, and potential additional add-on cards, the MAXER-2100 is remarkably compact at just 17" x 3.46" x 17.6" (431.8 mm x 88 mm x 448 mm). This is made possible by a novel cooling architecture utilizing three fans, prioritizing airflow around the CPU and key chassis components. Fan placement within the MAXER-2100 chassis also serves to reduce system noise.

AAEON has indicated that the system caters to three primary user bases – edge computing clients, central management clients, and enterprise AI clients.

The first of these refers to organizations and businesses that require scalable, server-grade edge inferencing for applications such as automated optical inspection (AOI) and smart city solutions.

“The MAXER-2100 can be used to run multiple AI models across multiple high-definition video streams simultaneously, via either its onboard peripheral interfaces or scaled up via network port integration,” said Alex Hsueh, Associate Vice President of AAEON’s Smart Platform Division. “Its high-performance CPU, powerful GPU, large memory capacity, and high-speed network interfaces make it well-equipped to handle the acquisition and processing of 50-100+ high-definition video streams, making it an ideal solution for applications requiring real-time video analysis.”
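
To illustrate the multi-stream workload Hsueh describes, the following is a minimal Python sketch of concurrent video-stream inference using OpenCV. The stream URLs and the analyze_frame stub are hypothetical placeholders for illustration only, not part of AAEON's software stack.

```python
import threading
import cv2  # pip install opencv-python

# Hypothetical RTSP sources; on a MAXER-2100 these could arrive via
# the 2.5GbE/1GbE network ports or onboard peripheral interfaces.
STREAM_URLS = [
    "rtsp://camera-01.example.local/stream",
    "rtsp://camera-02.example.local/stream",
]

def analyze_frame(frame):
    """Stub for a GPU inference call (e.g. a TensorRT or PyTorch model)."""
    h, w = frame.shape[:2]
    return {"resolution": (w, h)}  # placeholder result

def consume_stream(url: str) -> None:
    """Read frames from one stream and run inference on each."""
    cap = cv2.VideoCapture(url)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        result = analyze_frame(frame)
        print(f"{url}: {result}")
    cap.release()

# One worker thread per stream; a real deployment handling 50-100+
# streams would batch frames on the GPU rather than infer one at a time.
threads = [threading.Thread(target=consume_stream, args=(u,), daemon=True)
           for u in STREAM_URLS]
for t in threads:
    t.start()
for t in threads:
    t.join()
```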

AAEON’s second target market is clients seeking remote multi-device management functions, such as running diagnostics, deploying or refining AI models, or storing local data on edge devices. On the product’s suitability for such clients, Hsueh remarked, “With the MAXER-2100, our customers can utilize K8S, over-the-air, and out-of-band management to update and scale edge device operations across smart city, transportation, and enterprise AI applications, addressing key challenges faced by our customers when managing multiple AI workloads at the edge.”
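
As a sketch of the kind of K8S-driven fleet update Hsueh mentions, the snippet below uses the official Kubernetes Python client to roll a new model-server image out across edge nodes. The deployment name, namespace, and image tag are hypothetical; AAEON's actual management tooling is not specified in the announcement.

```python
from kubernetes import client, config  # pip install kubernetes

# Load credentials from the local kubeconfig (e.g. a central
# management workstation overseeing a fleet of edge servers).
config.load_kube_config()
apps = client.AppsV1Api()

# Hypothetical names: an inference Deployment called "edge-inference"
# in the "ai-edge" namespace, running on MAXER-2100 nodes.
patch = {
    "spec": {
        "template": {
            "spec": {
                "containers": [
                    {"name": "inference",
                     "image": "registry.example.com/llm-server:v2"}
                ]
            }
        }
    }
}

# A strategic-merge patch triggers a rolling update of the AI model
# server on every edge device the Deployment targets.
apps.patch_namespaced_deployment(
    name="edge-inference", namespace="ai-edge", body=patch
)
print("Rollout of registry.example.com/llm-server:v2 initiated")
```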

For enterprise AI clients, AAEON indicates that by leveraging the MAXER-2100, companies can effectively harness their data to build and deploy advanced AI solutions powered by LLMs. This includes applications in natural language processing, content generation, and customer interaction automation. The key benefits that the MAXER-2100 brings to such setups are the security provided by data being stored and processed at the edge and the system’s ability to train and refine inference models during operations.
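
To make the on-premises LLM scenario concrete, here is a minimal local-inference sketch using the Hugging Face transformers pipeline. The model choice and prompt are illustrative assumptions, not AAEON-provided software; the point is that prompts and outputs never leave the edge device.

```python
from transformers import pipeline  # pip install transformers torch

# Running generation locally keeps data on the edge device, which is
# the security benefit AAEON highlights. The model below is an
# illustrative choice, not one specified by AAEON.
generator = pipeline(
    "text-generation",
    model="gpt2",   # swap in a larger LLM sized for the RTX 4080 SUPER
    device=0,       # 0 = first CUDA GPU; use -1 for CPU-only
)

reply = generator(
    "Summarize today's inspection-line defect reports:",
    max_new_tokens=64,
    do_sample=False,
)
print(reply[0]["generated_text"])
```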

AAEON - Automated Optical Inspection Made Easy. Video Credit: AAEON

