It’s that time of year again: Hot Chips will soon be upon us. Taking place as a virtual event on August 21–23, the conference will once again present the very latest in microprocessor architectures and system innovations.
As EE Times’ AI reporter, I will of course be on the lookout for new and interesting AI chips. As in recent years, this year’s program has a clear focus on AI and accelerated computing, but there are also sessions on networking chips, integration technologies, and more. Chips presented will run the gamut from wafer-scale devices to multi-die high-performance computing GPUs to mobile phone processors.
The first session on day 1 will host the biggest chip companies in the world as they present the biggest GPU chips in the world. Nvidia is up first to present its flagship Hopper GPU, AMD will present the MI200, and Intel will present Ponte Vecchio. Presenting these one after another contrasts their form factors: Hopper is a monolithic die (plus HBM), the MI200 has two large compute chiplets, and Ponte Vecchio has dozens.
Alongside the big three, there is a surprise entry in the at-scale GPU class: Biren. The Chinese general-purpose graphics processing unit (GPGPU) maker, founded in 2019, recently lit up its first-gen 7-nm GPGPU, the BR100. All we know so far is that the company uses chiplets to build the GPGPU with “the biggest computing power in China,” according to its website. Biren’s chip has been hailed as a breakthrough for the domestic IC industry, as it “directly benchmarks against the latest flagships recently launched by international manufacturers.” Hopefully, the company’s Hot Chips presentation will reveal whether this really is the case.
The main machine learning processor session is on day 2. We will hear from Groq’s chief architect on the startup’s inference accelerator for the cloud. Cerebras will also present a deep dive on the hardware–software co-design for its second-gen wafer-scale engine.
There will also be two presentations from Tesla in this category, both on its forthcoming AI supercomputer, Dojo. Dojo has been presented as “the first exascale AI supercomputer” (1.1 EFLOPS at BF16/CFP8 precision) and uses the company’s specially designed Tesla D1 ASIC in modules the company calls Training Tiles.
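We won’t know what Tesla plans to reveal until the talks themselves, but the exascale figure is consistent with the building-block numbers the company shared at its 2021 AI Day: roughly 362 TFLOPS of BF16/CFP8 per D1 die, 25 dies per Training Tile, and 120 tiles in the full system. A minimal back-of-the-envelope sketch, assuming those public figures hold:

```python
# Back-of-the-envelope check of Tesla's "exascale" claim, assuming the
# building-block figures from Tesla's 2021 AI Day presentation still apply.

D1_BF16_TFLOPS = 362     # peak BF16/CFP8 throughput per D1 die (TFLOPS)
DIES_PER_TILE = 25       # D1 dies per Training Tile
TILES_PER_SYSTEM = 120   # Training Tiles in the full system configuration

tile_pflops = D1_BF16_TFLOPS * DIES_PER_TILE / 1_000       # ~9 PFLOPS per tile
system_eflops = tile_pflops * TILES_PER_SYSTEM / 1_000     # ~1.09 EFLOPS total

print(f"Per tile:    {tile_pflops:.1f} PFLOPS BF16/CFP8")
print(f"Full system: {system_eflops:.2f} EFLOPS BF16/CFP8")
```

That works out to roughly 1.1 EFLOPS, matching the figure quoted above.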
Data center AI chip company Untether will present its brand-new second-gen inference architecture, called Boqueria. We don’t know the details yet, but we do know the chip has at least 1,000 RISC-V cores (will it take Esperanto’s crown as the largest commercial RISC-V design?) and that it relies on an at-memory compute architecture similar to the first generation’s.
AI folks may also want to look out for the tutorial session on Aug. 21 on the topic of compiling for heterogeneous systems with MLIR.
The other tutorial session is on the CPU/accelerator/memory interconnect standard Compute Express Link (CXL). The CXL consortium just announced the third version of its technology, which looks set to become the industry standard, since previously competing standards recently threw their weight behind CXL.
Elsewhere on the program, we’ll hear from Lightmatter about its Passage device, a wafer-scale programmable photonic communication substrate. Ranovus will present on its monolithic integration technology for photonic and electronic dies.
I’ll also be looking out for Nvidia’s presentation on its Grace CPU, a presentation on a processing fabric for brain–computer interfaces from Yale University, and keynotes from Intel’s Pat Gelsinger and Tesla Motors’ Ganesh Venkataramanan.
The advance program for Hot Chips 34 can be found here.