Hardware Accelerators

Design and Implementation of Hardware Accelerators for Neural Computing

The design of hardware accelerators for neural networks has become a pivotal area of innovation in machine learning (ML). As the demand for more powerful and efficient ML applications grows, so does the importance of optimizing the hardware that underpins these networks. Hardware accelerators, specialized computing devices tailored to specific tasks, play a crucial role in improving the speed, energy efficiency, privacy, and scalability of neural network computations. In this project, we design and implement hardware accelerators specialized for traditional neural networks (e.g., convolutional neural networks, feed-forward neural networks, graph neural networks, and transformers), stochastic neural networks, and spiking neural networks. Some of my work on hardware accelerator design is listed below.
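As a point of reference, the core computation targeted by the convolutional accelerators listed below is the multiply-accumulate loop nest of a convolutional layer. The sketch below is illustrative only (plain Python, single channel, "valid" padding, cross-correlation as used in CNNs) and does not reproduce any specific architecture from these papers; all names are my own.

```python
# Illustrative sketch: the multiply-accumulate (MAC) loop nest at the heart
# of a convolutional layer. Accelerators speed up this computation by
# parallelizing and pipelining the MAC operations; this reference version
# runs them sequentially.
def conv2d(image, kernel):
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    oh, ow = ih - kh + 1, iw - kw + 1  # "valid" output size
    out = [[0.0] * ow for _ in range(oh)]
    for r in range(oh):          # slide the kernel over every
        for c in range(ow):      # output position...
            acc = 0.0
            for i in range(kh):  # ...and accumulate the windowed
                for j in range(kw):  # element-wise products
                    acc += image[r + i][c + j] * kernel[i][j]
            out[r][c] = acc
    return out
```

For example, convolving a 3x3 image of ones with a 2x2 kernel of ones yields a 2x2 output in which every entry is 4.0, since each output position sums four unit products.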

An Architecture to Accelerate Convolution in Deep Neural Networks (Arash Ardakani, Carlo Condo, Mehdi Ahmadi, Warren J Gross, TCAS-I 2017)

Fast and Efficient Convolutional Accelerator for Edge Computing (Arash Ardakani, Carlo Condo, Warren J Gross, TCOMP 2019)

A Convolutional Accelerator for Neural Networks With Binary Weights (Arash Ardakani, Carlo Condo, Warren J Gross, ISCAS 2018)

Learning to Skip Ineffectual Recurrent Computations in LSTMs (Arash Ardakani, Zhengyun Ji, Warren J Gross, DATE 2019)