Building more powerful data centers for artificial intelligence means packing in more and more GPU chips, and that is making the data centers themselves enormous, according to Ciena’s chief executive.
“Some of these large data centers are surprisingly large. They’re huge,” said Gary Smith, CEO of Hanover, Maryland-based Ciena.
Also: OpenAI’s o3 is not yet an AGI, but it just did something that no other AI has done
“We have data centers that are over two kilometers long,” Smith said. That is more than 1.24 miles. He noted that some new data centers are also multi-story, adding a vertical dimension of distance on top of the horizontal sprawl.
Smith spoke in an interview for last week’s edition of the financial newsletter “The Technology Letter.”
And it is not only individual cloud data centers that are growing: the campuses that contain them are straining to support GPU clusters as they scale, Smith said.
“These campuses are getting bigger and longer,” he said. Campuses are made up of many buildings, and “the boundaries between what was once a wide area network and what was inside the data center are becoming blurred.”
Also: AWS Announces More Efficiency Improvements in AI Data Centers – Here’s How
“We’re starting to see that these campuses are reaching significant distances, and that’s putting a huge strain on direct-connect technology.”
Within a few years, Smith expects Ciena to begin selling fiber-optic equipment similar to that found in long-distance communications networks, but tailored to connecting GPUs inside data centers.
Direct-connect devices are networking components designed specifically to let GPUs communicate with one another, such as Nvidia’s “NVLink” networking product.
Smith’s comments echo those of others in the AI industry, including Thomas Graham, co-founder of chip startup Lightmatter. Speaking at a Bloomberg Intelligence conference last month, Graham said at least 12 new AI data centers are planned or currently under construction, facilities that require gigawatts of power to run.
“For reference, New York City consumes an average of five gigawatts of electricity per day, so that’s multiple New York Cities’ worth of power,” Graham said. He said the world’s AI processing is expected to require 40 gigawatts of power, “particularly in AI data centers, or eight New York Cities.”
Also: Global AI computing will use electricity equivalent to “multiple New York City” by 2026, says founder
Smith said the strain on direct-connect technology such as Nvidia’s means that traditional fiber-optic links, previously reserved for long-distance communications networks, will begin to be deployed inside cloud data centers in the coming years.
“Given the speed of the GPUs and the distances that these data centers are now running, we think there is a crossover point [to fiber optics], and that’s what we’re focused on,” Smith told the newsletter.