Peter Hase
Title · Cited by · Year
Open problems and fundamental limitations of reinforcement learning from human feedback
S Casper, X Davies, C Shi, TK Gilbert, J Scheurer, J Rando, R Freedman, ...
arXiv preprint arXiv:2307.15217, 2023
Cited by 311 · 2023
Evaluating explainable AI: Which algorithmic explanations help users predict model behavior?
P Hase, M Bansal
arXiv preprint arXiv:2005.01831, 2020
Cited by 306 · 2020
GrIPS: Gradient-free, edit-based instruction search for prompting large language models
A Prasad, P Hase, X Zhou, M Bansal
arXiv preprint arXiv:2203.07281, 2022
Cited by 141 · 2022
Interpretable image recognition with hierarchical prototypes
P Hase, C Chen, O Li, C Rudin
Proceedings of the AAAI Conference on Human Computation and Crowdsourcing 7 …, 2019
Cited by 118 · 2019
Do language models have beliefs? Methods for detecting, updating, and visualizing model beliefs
P Hase, M Diab, A Celikyilmaz, X Li, Z Kozareva, V Stoyanov, M Bansal, ...
arXiv preprint arXiv:2111.13654, 2021
Cited by 112* · 2021
FastIF: Scalable influence functions for efficient model interpretation and debugging
H Guo, NF Rajani, P Hase, M Bansal, C Xiong
arXiv preprint arXiv:2012.15781, 2020
Cited by 100 · 2020
Does localization inform editing? Surprising differences in causality-based localization vs. knowledge editing in language models
P Hase, M Bansal, B Kim, A Ghandeharioun
Advances in Neural Information Processing Systems 36, 2024
Cited by 90 · 2024
Leakage-adjusted simulatability: Can models generate non-trivial explanations of their behavior in natural language?
P Hase, S Zhang, H Xie, M Bansal
arXiv preprint arXiv:2010.04119, 2020
Cited by 90 · 2020
The Out-of-Distribution Problem in Explainability and Search Methods for Feature Importance Explanations
P Hase, H Xie, M Bansal
Advances in Neural Information Processing Systems 34, 2021
Cited by 81 · 2021
When can models learn from explanations? A formal framework for understanding the roles of explanation data
P Hase, M Bansal
arXiv preprint arXiv:2102.02201, 2021
Cited by 72 · 2021
Rethinking machine unlearning for large language models
S Liu, Y Yao, J Jia, S Casper, N Baracaldo, P Hase, X Xu, Y Yao, H Li, ...
arXiv preprint arXiv:2402.08787, 2024
Cited by 54 · 2024
Foundational challenges in assuring alignment and safety of large language models
U Anwar, A Saparov, J Rando, D Paleka, M Turpin, P Hase, ES Lubana, ...
arXiv preprint arXiv:2404.09932, 2024
Cited by 53 · 2024
Can sensitive information be deleted from LLMs? Objectives for defending against extraction attacks
V Patil, P Hase, M Bansal
arXiv preprint arXiv:2309.17410, 2023
Cited by 38 · 2023
Can language models teach? Teacher explanations improve student performance via personalization
S Saha, P Hase, M Bansal
Advances in Neural Information Processing Systems 36, 2024
Cited by 30* · 2024
Low-cost algorithmic recourse for users with uncertain cost functions
P Yadav, P Hase, M Bansal
arXiv preprint arXiv:2111.01235, 2021
Cited by 19 · 2021
Summarization programs: Interpretable abstractive summarization with neural modular trees
S Saha, S Zhang, P Hase, M Bansal
arXiv preprint arXiv:2209.10492, 2022
Cited by 18 · 2022
VisFIS: Visual feature importance supervision with right-for-the-right-reason objectives
Z Ying, P Hase, M Bansal
Advances in Neural Information Processing Systems 35, 17057-17072, 2022
Cited by 13 · 2022
The unreasonable effectiveness of easy training data for hard tasks
P Hase, M Bansal, P Clark, S Wiegreffe
arXiv preprint arXiv:2401.06751, 2024
Cited by 12 · 2024
Are hard examples also harder to explain? A study with human and model-generated explanations
S Saha, P Hase, N Rajani, M Bansal
arXiv preprint arXiv:2211.07517, 2022
Cited by 11 · 2022
Shall I compare thee to a machine-written sonnet? An approach to algorithmic sonnet generation
J Benhardt, P Hase, L Zhu, C Rudin
arXiv preprint arXiv:1811.05067, 2018
Cited by 5 · 2018