Author of the publication

Please choose the person to relate this publication to.

To distinguish between persons with the same name, the academic degree and the title of an important publication are displayed. You can also use the button next to a name to display some publications already assigned to that person.


Other publications of authors with the same name

Toward Software-Equivalent Accuracy on Transformer-Based Deep Neural Networks With Analog Memory Devices., , , , , , , , , and 1 other author(s). Frontiers Comput. Neurosci., (2021)

AnalogNAS: A Neural Network Design Framework for Accurate Inference with Analog In-Memory Computing., , , , , , , , , and 2 other author(s). EDGE, page 233-244. IEEE, (2023)

Phase Change Memory-based Hardware Accelerators for Deep Neural Networks (invited)., , , , , , , , , and 15 other author(s). VLSI Technology and Circuits, page 1-2. IEEE, (2023)

AI hardware acceleration with analog memory: Microarchitectures for low energy at high speed., , , , , , , , , and . IBM J. Res. Dev., 63 (6): 8:1-8:14 (2019)

Circuit Techniques for Efficient Acceleration of Deep Neural Network Inference with Analog-AI (Invited)., , , , , , , , , and 2 other author(s). ISCAS, page 1-5. IEEE, (2021)

Analog-memory-based 14nm Hardware Accelerator for Dense Deep Neural Networks including Transformers., , , , , , , , , and 6 other author(s). ISCAS, page 3319-3323. IEEE, (2022)

Impact of Phase-Change Memory Drift on Energy Efficiency and Accuracy of Analog Compute-in-Memory Deep Learning Inference (Invited)., , , , , , , , , and 11 other author(s). IRPS, page 1-10. IEEE, (2023)

Improved Deep Neural Network Hardware-Accelerators Based on Non-Volatile-Memory: The Local Gains Technique., , , , , , , , , and 1 other author(s). ICRC, page 1-8. IEEE, (2017)

Analog-to-Digital Conversion With Reconfigurable Function Mapping for Neural Networks Activation Function Acceleration., , , , , , and . IEEE J. Emerg. Sel. Topics Circuits Syst., 9 (2): 367-376 (2019)

Hardware-aware training for large-scale and diverse deep learning inference workloads using in-memory computing-based accelerators., , , , , , , , , and 3 other author(s). CoRR, (2023)