SPACX: A Hardware and Algorithm Co-Optimized Photonic Deep Neural Network Computing Architecture

Case ID: 023-035-Louri

SPACX: A Silicon Photonics-Based Accelerator for Chiplet-Based DNN Architectures

The continuous growth in the size and complexity of deep neural network (DNN) models has created a demand for computing capacity that has outpaced the scaling capability of conventional monolithic chips. Chiplet-based DNN accelerators have emerged as a promising solution for continued scaling. However, the metallic interconnects in these accelerators create a bottleneck due to high latency and excessive power consumption during long-distance communication. Researchers at the George Washington University have developed SPACX, a chiplet-based DNN accelerator design that exploits disruptive silicon photonics technology to overcome this communication bottleneck and enable seamless, low-overhead communication.

Applications:

  • DNNs and other artificial intelligence applications dominated by convolution or matrix multiplication operations

Advantages:

  • Low-latency and low-power cross-chiplet and intra-chiplet communication
  • Maximal computing parallelism
  • Scalable design for efficient integration of numerous chiplets
  • Simultaneous two-dimensional multicast communication support

Patent Information:

Title: Silicon Photonics-Based Chiplet Accelerator for DNN Inference
App Type: US Utility
Country: United States of America
File Date: 4/1/2024
Patent Status: Filed

For Information, Contact:

Michael Harpen
Licensing Manager
George Washington University
mharpen@gwu.edu

Inventors:

Yuan Li
Ahmed Louri