Zhongzhan Huang (黄中展)
Verified email at mails.tsinghua.edu.cn - Homepage
Title · Cited by · Year
Instance enhancement batch normalization: An adaptive regulator of batch noise
S Liang, Z Huang, M Liang, H Yang
Proceedings of the AAAI conference on artificial intelligence 34 (04), 4819-4827, 2020
56 · 2020
Dianet: Dense-and-implicit attention network
Z Huang, S Liang, M Liang, H Yang
Proceedings of the AAAI Conference on Artificial Intelligence 34 (04), 4206-4214, 2020
42 · 2020
Rethinking the pruning criteria for convolutional neural network
Z Huang, W Shao, X Wang, L Lin, P Luo
Advances in Neural Information Processing Systems 34, 16305-16318, 2021
40 · 2021
Sur-adapter: Enhancing text-to-image pre-trained diffusion models with large language models
S Zhong, Z Huang, W Wen, J Qin, L Lin
Proceedings of the 31st ACM International Conference on Multimedia, 567-578, 2023
20 · 2023
Convolution-weight-distribution assumption: Rethinking the criteria of channel pruning
Z Huang, W Shao, X Wang, L Lin, P Luo
Advances in Neural Information Processing Systems 34, 2020
19 · 2020
Stiffness-aware neural network for learning Hamiltonian systems
S Liang, Z Huang, H Zhang
International Conference on Learning Representations, 2021
15 · 2021
Efficient attention network: Accelerate attention by searching where to plug
Z Huang, S Liang, M Liang, W He, H Yang
arXiv preprint arXiv:2011.14058, 2020
13 · 2020
Continuous transition: Improving sample efficiency for continuous control problems via mixup
J Lin, Z Huang, K Wang, X Liang, W Chen, L Lin
2021 IEEE International Conference on Robotics and Automation (ICRA), 9490-9497, 2021
12 · 2021
Blending pruning criteria for convolutional neural networks
W He, Z Huang, M Liang, S Liang, H Yang
Artificial Neural Networks and Machine Learning–ICANN 2021: 30th …, 2021
11 · 2021
The lottery ticket hypothesis for self-attention in convolutional neural network
Z Huang, S Liang, M Liang, W He, H Yang, L Lin
arXiv preprint arXiv:2207.07858, 2022
10 · 2022
Scalelong: Towards more stable training of diffusion model via scaling network long skip connection
Z Huang, P Zhou, S Yan, L Lin
Advances in Neural Information Processing Systems 36, 70376-70401, 2023
8 · 2023
Layer-wise shared attention network on dynamical system perspective
Z Huang, S Liang, M Liang, W He, L Lin
arXiv preprint arXiv:2210.16101, 2022
5 · 2022
Accelerating numerical solvers for large-scale simulation of dynamical system via neurvec
Z Huang, S Liang, H Zhang, H Yang, L Lin
arXiv preprint arXiv:2208.03680, 2022
5 · 2022
AlterSGD: Finding Flat Minima for Continual Learning by Alternative Training
Z Huang, M Liang, S Liang, W He
arXiv preprint arXiv:2107.05804, 2021
5 · 2021
Let's Think Outside the Box: Exploring Leap-of-Thought in Large Language Models with Creative Humor Generation
S Zhong, Z Huang, S Gao, W Wen, L Lin, M Zitnik, P Zhou
arXiv preprint arXiv:2312.02439, 2023
4 · 2023
Understanding Self-attention Mechanism via Dynamical System Perspective
Z Huang, M Liang, J Qin, S Zhong, L Lin
Proceedings of the IEEE/CVF International Conference on Computer Vision …, 2023
4 · 2023
ASR: Attention-alike Structural Re-parameterization
S Zhong, Z Huang, W Wen, J Qin, L Lin
arXiv preprint arXiv:2304.06345, 2023
3 · 2023
On robust numerical solver for ode via self-attention mechanism
Z Huang, M Liang, L Lin
arXiv preprint arXiv:2302.10184, 2023
3 · 2023
On fast simulation of dynamical system with neural vector enhanced numerical solver
Z Huang, S Liang, H Zhang, H Yang, L Lin
Scientific Reports 13 (1), 15254, 2023
2 · 2023
CEM: Machine-Human Chatting Handoff via Causal-Enhance Module
S Zhong, J Qin, Z Huang, D Li
Proceedings of the 2022 Conference on Empirical Methods in Natural Language …, 2022
2 · 2022
Articles 1–20