This tutorial provides a thorough investigation of low-dropout (LDO) voltage regulator designs, covering principles, analysis, advanced LDO architectures, and cutting-edge design examples. We begin with a review of traditional LDO regulator topologies and how they perform in terms of key performance metrics such as regulation accuracy, load dynamic range, stability, power supply rejection (PSR), transient response, dropout voltage, and power efficiency. Next, we discuss LDO design strategies for trading off some of these performance metrics and present advanced LDO architectural solutions. The tutorial also covers the advances and challenges of low-VDD LDOs used as integrated voltage regulators (IVRs), addressing the emerging power-management needs of modern many-core system-on-chip (SoC) processors. Finally, the tutorial examines several state-of-the-art LDO design examples, including digital-assisted LDOs, switched-capacitor-hybrid LDOs, and triode LDOs that efficiently supply heavy loads with ultra-low dropout voltage.
Hyun-Sik Kim received the B.S. degree (Hons.) in electronic engineering from Hanyang University, Seoul, Korea, in 2009, and the M.S. and Ph.D. degrees in electrical engineering from the Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Korea, in 2011 and 2014, respectively. In 2014, he joined Samsung Display Co. Ltd., Yongin, Korea. From 2015 to 2019, he was an Assistant Professor at Dankook University, Cheonan, Korea. Since 2019, he has been with the School of Electrical Engineering, KAIST, Daejeon, Korea, where he is currently an Associate Professor. His research interests are in power management and display driver circuits. He has (co-)authored 65+ peer-reviewed journal and conference papers. Dr. Kim was a recipient of the IEEE SSCS Predoctoral Achievement Award in 2014. He is currently serving on the Technical Program Committees of the IEEE A-SSCC and the IEEE CICC.
In-band simultaneous full-duplex (FD) radio communication, in which the transmitter and the receiver operate on the same frequency band at the same time, has been explored over the last decade as a means to increase throughput. The hurdle in achieving FD is the self-interference (SI) from the transmitter, which is several orders of magnitude stronger than the desired signal at the receiver. For mobile applications, SI cancellation circuits should be low-power, linear, tunable, low-noise, and fully monolithic with a compact form factor, and should be implemented at the radio-frequency front-end. This tutorial describes the challenges in designing an FD single-chip radio for WiFi/cellular applications and gives an overview of the state-of-the-art. It then describes the outstanding challenges and presents other wireless applications where FD promises a significant performance improvement.
Sudip Shekhar received his B.Tech. degree from the Indian Institute of Technology, Kharagpur, and the Ph.D. degree from the University of Washington, Seattle.
From 2008 to 2013, he worked on high-speed I/O architectures at Intel. He is now an Associate Professor at the University of British Columbia, Vancouver. His current research interests include circuits for electrical and optical interfaces, frequency synthesizers, and wireless transceivers.
Dr. Shekhar is a recipient of the Schmidt Science Polymath Award (2022-27), the UBC Killam Teaching Prize (2022), the Young Alumni Achiever Award from IIT Kharagpur (2019), and the IEEE TCAS Darlington Best Paper Award (2010), and a co-recipient of the IEEE RFIC Symposium Student Paper Award (2015). He serves as a Distinguished Lecturer for the IEEE Solid-State Circuits Society (2021-22).
Deep neural networks (DNNs) are used across a wide range of applications, from embedded edge devices to data centers. Custom DNN accelerators are increasingly explored to gain performance and power advantages over general-purpose processors. This tutorial presents hardware-software co-design techniques to achieve high energy efficiency and performance scalability while striking the right balance with flexibility across different neural networks and adaptability to new models. It presents a survey of architecture choices for designing compute units, memory hierarchies, and interconnect topologies. It describes compiler approaches that effectively tile computations to achieve parallelism and data reuse. It also explores hardware-aware neural network optimizations to study efficiency vs. accuracy tradeoffs.
Rangharajan Venkatesan received the B.Tech. degree from IIT Roorkee in 2009 and the Ph.D. degree from Purdue University in 2014. He joined NVIDIA in 2014, where he is currently a Senior Research Scientist. His research interests are in the areas of low-power VLSI design and computer architecture, with a particular focus on deep learning accelerators and high-level synthesis. He has received Best Paper Awards at the IEEE/ACM International Symposium on Microarchitecture, the IEEE Journal of Solid-State Circuits, and the International Symposium on Low Power Electronics and Design. In 2021, his work on approximate computing received the Ten Year Retrospective Most Influential Paper Award at ICCAD. He has served as a member of the technical program committees of several leading IEEE/ACM conferences, including ISSCC, DAC, MICRO, and ISLPED.
Neural networks based on the traditional von Neumann computing architecture suffer from high energy consumption and large latency because of heavily repeated data transfers between memory and arithmetic-logic units (ALUs). Computing-in-memory (CIM) has been considered a promising solution for tackling these issues. Minimizing data transfer between memory and processing elements through CIM can improve the overall energy efficiency dramatically (e.g., >100×). However, CIM faces various design issues such as PVT variations, linearity, and precision. In this tutorial, the speaker will present the opportunities of CIM for AI and ML, discuss various design challenges, and introduce recent state-of-the-art CIM works such as SRAM CIM macros, digital CIM macros, and resistive RAM (RRAM) CIM macros.
Tony Tae-Hyoung Kim received the B.S. and M.S. degrees from Korea University, Seoul, Korea, in 1999 and 2001, respectively, and the Ph.D. degree from the University of Minnesota, Minneapolis, MN, USA, in 2009. He was with Samsung Electronics, Hwasung, Korea, from 2001 to 2005. In 2007 and 2008, he was a summer intern at the IBM T. J. Watson Research Center, Yorktown Heights, NY. In 2009, Dr. Kim joined Nanyang Technological University, where he is currently an Associate Professor. His current research interests include in-memory computing for edge computing, emerging memory circuit design, energy-efficient circuits and systems for IoT and wearable applications, and variation-tolerant circuits and systems. Dr. Kim is an IEEE Senior Member and a Distinguished Lecturer of the IEEE SSCS (2022-2023). He was the Chair of the IEEE SSCS Singapore Chapter from 2015 to 2016.