The Dell PowerEdge XE9680 GPU-enabled AI server is the “fastest ramping” new solution in the company’s history, COO Jeff Clarke said
After laying out a robust artificial intelligence (AI) strategy covering infrastructure and services across its businesses during Dell Technologies World in April, company COO Jeff Clarke this week said AI “is a strong tailwind for all things data and compute” on a second-quarter fiscal year 2024 earnings call. “AI is expanding the TAM for total technology spending and is projected to grow at a 19% CAGR for the next couple of years to roughly $90 billion, including hardware and services.”
For its second quarter, Dell reported revenue of $22.9 billion, up 10% sequentially and down 13% year-over-year. Operating income was $1.2 billion, down 8% sequentially and up 1% year-over-year.
To the AI point, Clarke said that in Q2 Dell saw strong demand for its PowerEdge XE9680 GPU-enabled server, which he called a “key component” of the company’s generative AI solutions. He said AI servers increased to 20% of server order revenue in the first half of the fiscal year. He also called out $2 billion in backlogged orders for the XE9680, “and our sales pipeline is substantially larger.”
“Gen AI represents an inflection point driving fundamental change in the pace of innovation while improving the customer experience and enabling significant productivity gains and new ways to work,” Clarke said. “As the No. 1 infrastructure provider, we’re clearly positioned to serve the market in a unique and differentiated way. And we have the world’s broadest Gen AI infrastructure portfolio that spans from the cloud to the client….I like our hand.”
Highlighting the AI use cases customers appear focused on, Clarke mentioned customer operations, content creation and management, software development and sales.
Clarke reiterated Dell’s strong outlook for AI given that related workloads need to run across PCs, data centers and edge clouds, meaning Dell can support the proliferation of AI across its portfolio. He also hit on another key point, which is that generative AI depends on large language models that are trained using proprietary, domain-specific data and optimized in service of very specific business processes.
More on the AI opportunity: “We think it’s one size doesn’t fit all. We think there’s a whole slew of AI solutions, again, from PCs to workstations to what happens in the data center, and the data center could be a single server running inference at the edge…We believe it’s incremental.”
In response to a question that was partly about the types of companies buying Dell’s AI solutions, Clarke tied it to the prevalence of multi-cloud architectures and said, essentially, that AI workloads will follow the data. “It’s highly unlikely you’re going to have a smart factory or a smart hospital or a set of robots that are going to continuously look to be trained or run inference a long way away. Latency will matter. We think security will matter. We think performance will matter. And we ultimately think cost will matter. When you put that equation together, we think it’s going to be a hybrid world…We think it’s going to be very, very heterogeneous in the way that this will be done, with traditional compute as well as accelerated compute. In a nutshell, that’s what we believe the opportunity is.”