Yangyi Chen
Title · Cited by · Year
Onion: A simple and effective defense against textual backdoor attacks
F Qi*, Y Chen*, M Li, Z Liu, M Sun
EMNLP, 2021
Cited by: 232
Hidden Killer: Invisible Textual Backdoor Attacks with Syntactic Trigger
F Qi*, M Li*, Y Chen*, Z Zhang, Z Liu, Y Wang, M Sun
ACL, 2021
Cited by: 200
Mind the Style of Text! Adversarial and Backdoor Attacks Based on Text Style Transfer
F Qi*, Y Chen*, X Zhang, M Li, Z Liu, M Sun
EMNLP, 2021
Cited by: 156
MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback
X Wang*, Z Wang*, J Liu, Y Chen, L Yuan, H Peng, H Ji
ICLR, 2024
Cited by: 86
Exploring the Universal Vulnerability of Prompt-based Learning Paradigm
L Xu, Y Chen, G Cui, H Gao, Z Liu
Findings of NAACL, 2022
Cited by: 80
A Unified Evaluation of Textual Backdoor Learning: Frameworks and Benchmarks
G Cui*, L Yuan*, B He, Y Chen, Z Liu, M Sun
NeurIPS (Dataset and Benchmark Track), 2022
Cited by: 73
R-Tuning: Instructing Large Language Models to Say ‘I Don’t Know’
H Zhang*, S Diao*, Y Lin*, Y Fung, Q Lian, X Wang, Y Chen, H Ji, T Zhang
NAACL, 2024
Cited by: 61*
Executable Code Actions Elicit Better LLM Agents
X Wang, Y Chen, L Yuan, Y Zhang, Y Li, H Peng, H Ji
ICML, 2024
Cited by: 60
Revisiting Out-of-distribution Robustness in NLP: Benchmark, Analysis, and LLMs Evaluations
L Yuan, Y Chen, G Cui, H Gao, F Zou, X Cheng, H Ji, Z Liu, M Sun
NeurIPS (Dataset and Benchmark Track), 2023
Cited by: 55
DRESS: Instructing Large Vision-Language Models to Align and Interact with Humans via Natural Language Feedback
Y Chen, K Sikka, M Cogswell, H Ji, A Divakaran
CVPR, 2024
Cited by: 45
A Close Look into the Calibration of Pre-trained Language Models
Y Chen*, L Yuan*, G Cui, Z Liu, H Ji
ACL, 2023
Cited by: 43
Why Should Adversarial Perturbations be Imperceptible? Rethink the Research Paradigm in Adversarial NLP
Y Chen*, H Gao*, G Cui, F Qi, L Huang, Z Liu, M Sun
EMNLP, 2022
Cited by: 36
CRAFT: Customizing LLMs by Creating and Retrieving from Specialized Toolsets
L Yuan*, Y Chen*, X Wang, YR Fung, H Peng, H Ji
ICLR, 2024
Cited by: 35
Bridge the Gap Between CV and NLP! A Gradient-based Textual Adversarial Attack Framework
L Yuan*, Y Zhang*, Y Chen, W Wei
Findings of ACL, 2023
Cited by: 35
Multi-granularity Textual Adversarial Attack with Behavior Cloning
Y Chen*, J Su*, W Wei
EMNLP, 2021
Cited by: 35
Moderate-fitting as a Natural Backdoor Defender for Pre-trained Language Models
B Zhu*, Y Qin*, G Cui, Y Chen, W Zhao, C Fu, Y Deng, Z Liu, J Wang, ...
NeurIPS, 2022
Cited by: 25
Measuring and Improving Chain-of-Thought Reasoning in Vision-Language Models
Y Chen, K Sikka, M Cogswell, H Ji, A Divakaran
NAACL, 2024
Cited by: 21
Textual Backdoor Attacks Can Be More Harmful via Two Simple Tricks
Y Chen*, F Qi*, H Gao, Z Liu, M Sun
EMNLP, 2022
Cited by: 20
SaySelf: Teaching LLMs to Express Confidence with Self-Reflective Rationales
T Xu*, S Wu*, S Diao, X Liu, X Wang, Y Chen, J Gao
EMNLP, 2024
Cited by: 11
Automatic Construction of Sememe Knowledge Bases via Dictionaries
F Qi, Y Chen, F Wang, Z Liu, X Chen, M Sun
Findings of ACL, 2021
Cited by: 8