Dell Technologies unveiled significant advancements to its AI Factory offerings at the Supercomputing 2024 (SC24) event in Atlanta, showcasing updates to its Data Lakehouse platform and introducing high-performance AI servers.

These developments aim to simplify and accelerate AI adoption for organizations, addressing challenges in managing complex workloads and vast datasets.
Enhancements to the Data Lakehouse
Dell's Data Lakehouse has been upgraded to include Apache Spark, a powerful distributed data processing engine.
This integration enhances the platform's ability to handle large-scale batch processing, real-time streaming, machine learning, and advanced analytics.
By combining Apache Spark with the Lakehouse’s existing architecture, Dell offers a unified framework for data analytics, management, and processing, enabling faster insights and improved efficiency for AI tasks.
Originally launched in March 2024, the Data Lakehouse was built on modern infrastructure, featuring:
- Starburst Trino query engine for interactive, distributed queries.
- Kubernetes-orchestrated system software for flexibility and scalability.
- S3-compatible object storage, such as ECS, ObjectScale, and PowerScale, for robust data management.
These enhancements ensure businesses can efficiently manage their data while leveraging cutting-edge analytics tools for AI-driven operations.
Next-Generation AI Servers
Dell has also introduced two new servers tailored for demanding AI and HPC workloads, designed to fit seamlessly into its IR5000 rack:
- PowerEdge XE9685L: A liquid-cooled powerhouse, this server pairs dual 5th Gen AMD EPYC CPUs with Nvidia HGX H200 or B200 platforms. It supports up to 96 Nvidia GPUs per rack and includes 12 PCIe Gen 5.0 slots.
Built for AI, machine learning, and other compute-intensive applications, its customizable configurations provide unmatched performance for data-intensive workloads.
- PowerEdge XE7740: Designed for GenAI model fine-tuning and large-scale dataset analysis, this air-cooled server features dual Intel Xeon CPUs. It accommodates up to 8 double-wide accelerators, such as the Nvidia H200 NVL or Intel Gaudi 3 AI accelerators, or up to 16 single-wide accelerators, including Nvidia L4 Tensor Core GPUs.
This server is ideal for organizations seeking scalable solutions for AI inferencing and advanced analytics.
Key Features and Scalability
These servers reflect Dell's commitment to addressing the growing demands of AI-driven enterprises.
With support for cutting-edge hardware, including the upcoming Nvidia GB200 Grace Blackwell Superchip, Dell ensures organizations are equipped for the future.
The integration of up to 144 GPUs per IR7000 rack exemplifies its focus on delivering scalable, high-performance solutions for modern data centers.
Both servers are slated for global availability:
- PowerEdge XE9685L: Q1 2025.
- PowerEdge XE7740: Q2 2025.
Dell continues to innovate by combining state-of-the-art hardware and software, empowering organizations to harness the full potential of AI with efficiency and ease.