Jing Xu
Meta AI Research (FAIR)
Verified email at meta.com
Title | Cited by | Year
Recipes for building an open-domain chatbot
S Roller
arXiv preprint arXiv:2004.13637, 2020
Cited by 1090 · 2020
BlenderBot 3: a deployed conversational agent that continually learns to responsibly engage
K Shuster, J Xu, M Komeili, D Ju, EM Smith, S Roller, M Ung, M Chen, ...
arXiv preprint arXiv:2208.03188, 2022
Cited by 265 · 2022
Beyond goldfish memory: Long-term open-domain conversation
J Xu
arXiv preprint arXiv:2107.07567, 2021
Cited by 256 · 2021
Chain-of-verification reduces hallucination in large language models
S Dhuliawala, M Komeili, J Xu, R Raileanu, X Li, A Celikyilmaz, J Weston
arXiv preprint arXiv:2309.11495, 2023
Cited by 248 · 2023
Self-rewarding language models
W Yuan, RY Pang, K Cho, S Sukhbaatar, J Xu, J Weston
arXiv preprint arXiv:2401.10020, 2024
Cited by 236 · 2024
Recipes for safety in open-domain chatbots
J Xu, D Ju, M Li, YL Boureau, J Weston, E Dinan
arXiv preprint arXiv:2010.07079, 2020
Cited by 191 · 2020
Bot-adversarial dialogue for safe conversational agents
J Xu, D Ju, M Li, YL Boureau, J Weston, E Dinan
Proceedings of the 2021 Conference of the North American Chapter of the …, 2021
Cited by 133 · 2021
Some things are more cringe than others: Preference optimization with the pairwise cringe loss
J Xu, A Lee, S Sukhbaatar, J Weston
arXiv preprint arXiv:2312.16682, 2023
Cited by 51 · 2023
SaferDialogues: Taking feedback gracefully after conversational safety failures
M Ung, J Xu, YL Boureau
arXiv preprint arXiv:2110.07518, 2021
Cited by 38 · 2021
Learning new skills after deployment: Improving open-domain internet-driven dialogue with human feedback
J Xu, M Ung, M Komeili, K Arora, YL Boureau, J Weston
arXiv preprint arXiv:2208.03270, 2022
Cited by 36 · 2022
The CRINGE loss: Learning what language not to model
L Adolphs, T Gao, J Xu, K Shuster, S Sukhbaatar, J Weston
arXiv preprint arXiv:2211.05826, 2022
Cited by 33 · 2022
Meta-rewarding language models: Self-improving alignment with LLM-as-a-Meta-Judge
T Wu, W Yuan, O Golovneva, J Xu, Y Tian, J Jiao, J Weston, S Sukhbaatar
arXiv preprint arXiv:2407.19594, 2024
Cited by 24 · 2024
On anytime learning at macroscale
L Caccia, J Xu, M Ott, M Ranzato, L Denoyer
Conference on Lifelong Learning Agents, 165-182, 2022
Cited by 23 · 2022
When life gives you lemons, make cherryade: Converting feedback from bad responses into good labels
W Shi, E Dinan, K Shuster, J Weston, J Xu
arXiv preprint arXiv:2210.15893, 2022
Cited by 15 · 2022
Learning from data in the mixed adversarial non-adversarial case: Finding the helpers and ignoring the trolls
D Ju, J Xu, YL Boureau, J Weston
arXiv preprint arXiv:2208.03295, 2022
Cited by 15 · 2022
Distilling System 2 into System 1
P Yu, J Xu, J Weston, I Kulikov
arXiv preprint arXiv:2407.06023, 2024
Cited by 14 · 2024
Training models to generate, recognize, and reframe unhelpful thoughts
M Maddela, M Ung, J Xu, A Madotto, H Foran, YL Boureau
arXiv preprint arXiv:2307.02768, 2023
Cited by 10 · 2023
Housing choices, sorting, and the distribution of educational benefits under deferred acceptance
J Xu
Journal of Public Economic Theory 21 (3), 558-595, 2019
Cited by 10 · 2019
Improving open language models by learning from organic interactions
J Xu, D Ju, J Lane, M Komeili, EM Smith, M Ung, M Behrooz, W Ngan, ...
arXiv preprint arXiv:2306.04707, 2023
Cited by 9 · 2023
Articles 1–20