KEYNOTES

Computation-in-Memory for Edge AI: Opportunities and Challenges

Prof. Dr. Said Hamdioui

Bio
Said Hamdioui (http://www.ce.ewi.tudelft.nl/hamdioui) is currently head of the Computer Engineering Laboratory at Delft University of Technology, the Netherlands. He is also co-founder and CEO of Cognitive-IC, a start-up focusing on hardware dependability solutions. Hamdioui received the MSEE and PhD degrees (both with honors) from TUDelft. Prior to joining TUDelft as a professor, he worked for Intel Corporation (California, USA), Philips Semiconductors R&D (Crolles, France), and Philips/NXP Semiconductors (Nijmegen, the Netherlands). Hamdioui owns two patents, has published one book and contributed to two others, and has co-authored over 250 conference and journal papers. He has consulted for many semiconductor companies worldwide, including Intel, Atmel, and Renesas. He has delivered dozens of keynote speeches, distinguished lectures, and invited presentations and tutorials at major international forums, conferences, and schools, and at leading semiconductor companies. Hamdioui has served as Associate Editor of many journals, including IEEE Transactions on VLSI Systems, Microelectronics Reliability, ACM Journal on Emerging Technologies in Computing Systems, and IEEE Design & Test. He is the recipient of many international and national awards, including the European Design Automation Association (EDAA) Outstanding Dissertation Award in 2003, the European Commission Components and Systems Innovation Award for the most innovative H2020 project at the European Forum for Electronic Components and Systems in 2020, and the HiPEAC Technology Transfer Award in 2015 and 2022. In addition, he has received many Best Paper Awards and nominations at leading international conferences, such as the IEEE International Conference on Computer Design (ICCD), Design Automation and Test in Europe (DATE), the International Test Conference (ITC), the IEEE Computer Society Annual Symposium on VLSI (ISVLSI), and the IEEE European Test Symposium (ETS).
Moreover, he was appointed an IEEE Circuits and Systems Society (CASS) Distinguished Lecturer for 2021-2022. He is a member of the Scientific Committee Council of AENEAS (Association for European NanoElectronics Activities).
Abstract
No one can deny that we rely heavily on electronic systems in our daily life. It is almost impossible to imagine a day without our smartphones, computers, TVs, or even our coffee machines. Without electronics and the internet, business and work could no longer continue, the quality of education would degenerate, and the quality of life would probably revert to that of the 18th century. However, all this comes at a non-negligible price, and that price increases further with emerging applications such as Artificial Intelligence (AI). For instance, energy forecasts suggest that the electricity demand of ICT will exceed 20% of global electricity demand by 2030. Energy spent on data centers, smartphones, computers, and networks is at the heart of this consumption. Hence, designing, and even using, these systems in an energy-efficient manner is of critical importance. This talk discusses today’s chip technology and computer hardware/architectures (which enable the design of ICT systems) and highlights the limitations that make them unsuitable for the energy-efficient solutions needed, not only to minimize ICT’s electricity consumption and ensure sustainability, but also to enable many emerging energy-constrained applications such as edge AI. The talk covers both the device and the architecture aspects. Thereafter, it presents some future directions for energy-efficient computing, focusing on brain-inspired Computation-In-Memory (CIM) architectures using both memristor devices and SRAMs. The huge potential of CIM (realizing over 100X improvement in energy efficiency) will be illustrated through real case studies, supported by measurement data from chip prototypes. Aspects related to the design, test, and reliability of such brain-inspired CIM architectures will be discussed, and future challenges in chip technology and computer hardware/architectures will be highlighted.

Exascale Reconfigurable and Accelerated Computing in Space

Prof. Dr. Luca Sterpone

Bio
Luca Sterpone has been a Full Professor in the CAD and Reliability group since 2021, and is Head of the Control and Computer Engineering Department. His research focuses on computer engineering and covers several multidisciplinary topics, including reconfigurable computing in space, computer-aided design algorithms, fault tolerance, and reliability.
Abstract
Reconfigurable devices have gained a lot of attention thanks to their excellent trade-off between cost and performance. While of very limited use a few years ago due to a lack of performance, these devices are now capable of implementing a wide range of applications requiring high computational capabilities. However, to further enhance computing capabilities and enable the effective implementation of Vision-Based Navigation (VBN) algorithms, an ad-hoc hardware accelerator able to process multi-dimensional arrays (tensors) is needed. Tensors are the fundamental units for storing data such as the weights of a node in a neural network. They support massive numbers of multiplications and additions at high speed with limited design area and power consumption. Several design strategies have investigated the efficient implementation of tensors on FPGA architectures, either by improving the pipeline strategy and resource sharing across the tensor processing elements (PEs) or by unifying the tensor computational kernels. However, there are currently no design solutions for tensors on FPGAs that are both high-performance and space-oriented or radiation-tolerant. This talk will present a perspective on the existing solutions and adopted architectures, and outline the expected evolution of new reconfigurable technologies over the upcoming five years.

Overcoming the Communication Bottleneck in Neuromorphic Computing Systems

Prof. Dr. Davide Bertozzi

Bio
Davide Bertozzi is a Reader in Advanced Processing Technologies at the University of Manchester (United Kingdom). He received his PhD from the University of Bologna (Italy) in 2003 and led the multiprocessor system-on-chip group at the University of Ferrara until 2023. The mission of his research is to stay at the forefront of system innovation by leveraging the enabling properties of interconnection architectures and emerging technologies. He has been a visiting researcher at Stanford University and at several semiconductor companies (STMicroelectronics, NEC America Labs, NXP, Samsung). In 2018 he received the Wolfgang Mehr Award from IHP Microelectronics, a Leibniz Institute for innovative microelectronics in Germany, for his interdisciplinary research on photonically integrated systems. His research interests span widely across the field of on-chip communication, with a strong drive to overcome the communication bottleneck in emerging system architectures, ranging from neuromorphic computing to multi-tenant fog nodes. He currently contributes to the activities for neuroscience simulation acceleration within the flagship EBRAINS 2.0 project, and takes part in a collaborative effort to bring the celebrated SpiNNaker large-scale neuromorphic computing platform to the next generations of technology, implementation, and architecture. Finally, he is deeply involved in recent EU-funded initiatives (TAICHIP, TWIN-RELEC, AIDA4Edge) to enhance networking between research institutions of the widening countries and top-class leading counterparts, encompassing joint research, knowledge transfer, and the exchange of best practices in the fields of AI chips, open EDA tools, and hybrid SNN-ANN models, respectively. His scientific activity has led to roughly 200 publications, with five best paper awards, three best paper award nominations, and one high-impact paper award for one of the five most cited papers in the first thirty years of the International Conference on Computer Design.
Abstract
The human brain has something very special about it in terms of energy efficiency. This is the key rationale behind the current surge of interest in neuromorphic computing, which takes biological brains as a guide toward more versatile and efficient forms of computing. The most promising approach consists of spiking neural networks, which leverage sparse, event-driven activations to reduce computational overhead, running on neuromorphic hardware optimized for asynchronous event-driven computation. Despite the different circuit design styles and scales of current neuromorphic platforms, an unmistakable lesson from the leading-edge projects of the recent past is that the performance and/or power scalability of these systems is fundamentally limited by their communication requirements. While it is obvious that current microelectronics cannot wire like the brain, the problem has typically been overlooked by force-fitting traditional interconnection networks into the new domain, with the result that, after years of steady progress, communication requirements have become the elephant in the room. This keynote will review the state of the art in both large- and small-scale neuromorphic computing systems, highlighting the pivotal role that interconnect technologies play in bringing scalability and power efficiency to the next level. Along this direction, it will present a promising asynchronous interconnect technology that combines synchronous-equivalent design flexibility with ultra-low energy-per-bit and area footprint, laying the groundwork for the next generation of asynchronous communications.