
A 28nm 77.35TOPS/W Similar Vectors Traceable Transformer Processor with Principal-Component-Prior Speculating and Dynamic Bit-wise Stationary Computing. VLSI Technology and Circuits, pages 1-2. IEEE, 2023.


Other publications of authors with the same name

A 28nm 49.7TOPS/W Sparse Transformer Processor with Random-Projection-Based Speculation, Multi-Stationary Dataflow, and Redundant Partial Product Elimination. A-SSCC, pages 1-3. IEEE, 2023.
A 28nm 77.35TOPS/W Similar Vectors Traceable Transformer Processor with Principal-Component-Prior Speculating and Dynamic Bit-wise Stationary Computing. VLSI Technology and Circuits, pages 1-2. IEEE, 2023.
Trainer: An Energy-Efficient Edge-Device Training Processor Supporting Dynamic Weight Pruning. IEEE J. Solid State Circuits, 57(10): 3164-3178, 2022.
LPE: Logarithm Posit Processing Element for Energy-Efficient Edge-Device Training. AICAS, pages 1-4. IEEE, 2021.
A 28nm 276.55TFLOPS/W Sparse Deep-Neural-Network Training Processor with Implicit Redundancy Speculation and Batch Normalization Reformulation. VLSI Circuits, pages 1-2. IEEE, 2021.
A 28nm 27.5TOPS/W Approximate-Computing-Based Transformer Processor with Asymptotic Sparsity Speculating and Out-of-Order Computing. ISSCC, pages 1-3. IEEE, 2022.
FACT: FFN-Attention Co-optimized Transformer Architecture with Eager Correlation Prediction. ISCA, pages 22:1-22:14. ACM, 2023.