
Other publications by persons with the same name

Efficient DVFS to Prevent Hard Faults for Many-Core Architectures., , and . ICT-EurAsia, volume 8407 of Lecture Notes in Computer Science, pp. 674-679. Springer, (2014)
Rethinking the Distributed DNN Training Cluster Design from the Cost-effectiveness View., , , , and . HPCC/DSS/SmartCity/DependSys, pp. 730-731. IEEE, (2023)
Communication Analysis for Multidimensional Parallel Training of Large-scale DNN Models., , , and . HPCC/DSS/SmartCity/DependSys, pp. 728-729. IEEE, (2023)
Accurate and Efficient Fine-Tuning of Quantized Large Language Models Through Optimal Balance., , , , and . CoRR, (2024)
A Multidimensional Communication Scheduling Method for Hybrid Parallel DNN Training., , , , , and . IEEE Trans. Parallel Distributed Syst., 35 (8): 1415-1428 (August 2024)
Mining of Attack Models in IDS Alerts from Network Backbone by a Two-stage Clustering Method., , , and . IPDPS Workshops, pp. 1263-1269. IEEE Computer Society, (2012)
CD-Sched: An Automated Scheduling Framework for Accelerating Neural Network Training on Shared Memory CPU-DSP Platforms., , and . PCCNT, pp. 41:1-41:6. ACM, (2023)
SCGraph: Accelerating Sample-based GNN Training by Staged Caching of Features on GPUs., , , , and . ISPA/BDCloud/SocialCom/SustainCom, pp. 106-113. IEEE, (2022)
Auto-Divide GNN: Accelerating GNN Training with Subgraph Division., , , , , and . Euro-Par, volume 14100 of Lecture Notes in Computer Science, pp. 367-382. Springer, (2023)
2PGraph: Accelerating GNN Training over Large Graphs on GPU Clusters., , , , , and . CLUSTER, pp. 103-113. IEEE, (2021)