Ramé Alexandre
Google DeepMind
Verified email at google.com - Homepage
Title · Cited by · Year
DyTox: Transformers for Continual Learning with DYnamic TOken eXpansion
A Douillard, A Ramé, G Couairon, M Cord
CVPR 2022, 2021
Cited by 323 · 2021
Fishr: Invariant Gradient Variances for Out-of-Distribution Generalization
A Ramé, C Dancette, M Cord
ICML 2022, 2021
Cited by 214 · 2021
Gemma 2: Improving Open Language Models at a Practical Size
G Team, M Riviere, S Pathak, PG Sessa, C Hardin, S Bhupatiraju, ...
arXiv preprint arXiv:2408.00118, 2024
Cited by 178 · 2024
Leveraging weakly annotated data for fashion image retrieval and label prediction
C Corbiere, H Ben-Younes, A Ramé, C Ollion
ICCV 2017 Workshop, 2017
Cited by 118 · 2017
Diverse Weight Averaging for Out-of-Distribution Generalization
A Ramé, M Kirchmeyer, T Rahier, A Rakotomamonjy, P Gallinari, M Cord
NeurIPS 2022, 2022
Cited by 112 · 2022
Rewarded soups: towards Pareto-optimal alignment by interpolating weights fine-tuned on diverse rewards
A Ramé, G Couairon, M Shukor, C Dancette, JB Gaya, L Soulier, M Cord
NeurIPS 2023, 2023
Cited by 87 · 2023
Direct Language Model Alignment from Online AI Feedback
S Guo, B Zhang, T Liu, T Liu, M Khalman, F Llinares, A Ramé, T Mesnard, ...
arXiv preprint arXiv:2402.04792, 2024
Cited by 76 · 2024
Model Ratatouille: Recycling Diverse Models for Out-of-Distribution Generalization
A Ramé, K Ahuja, J Zhang, M Cord, L Bottou, D Lopez-Paz
ICML 2023, 2023
Cited by 75* · 2023
MixMo: Mixing Multiple Inputs for Multiple Outputs via Deep Subnetworks
A Ramé, R Sun, M Cord
ICCV 2021, 2021
Cited by 75 · 2021
DICE: Diversity in Deep Ensembles via Conditional Redundancy Adversarial Estimation
A Ramé, M Cord
ICLR 2021, 2021
Cited by 69 · 2021
WARM: On the Benefits of Weight Averaged Reward Models
A Ramé, N Vieillard, L Hussenot, R Dadashi, G Cideron, O Bachem, ...
ICML 2024, 2024
Cited by 44 · 2024
Unified Model for Image, Video, Audio and Language Tasks
M Shukor, C Dancette, A Ramé, M Cord
TMLR 2023, 2023
Cited by 27* · 2023
OMNIA Faster R-CNN: Detection in the wild through dataset merging and soft distillation
A Ramé, E Garreau, H Ben-Younes, C Ollion
arXiv preprint arXiv:1812.02611, 2018
Cited by 16 · 2018
BOND: Aligning LLMs with Best-of-N Distillation
PG Sessa, R Dadashi, L Hussenot, J Ferret, N Vieillard, A Ramé, ...
arXiv preprint arXiv:2407.14622, 2024
Cited by 12 · 2024
Beyond Task Performance: Evaluating and Reducing the Flaws of Large Multimodal Models with In-Context Learning
M Shukor, A Ramé, C Dancette, M Cord
ICLR 2024, 2023
Cited by 11 · 2023
Towards efficient feature sharing in MIMO architectures
R Sun, A Ramé, C Masson, N Thome, M Cord
CVPR 2022 ECV Workshop, 2022
Cited by 9 · 2022
Conditioned Language Policy: A General Framework for Steerable Multi-Objective Finetuning
K Wang, R Kidambi, R Sullivan, A Agarwal, C Dann, A Michi, M Gelmi, ...
arXiv preprint arXiv:2407.15762, 2024
Cited by 7 · 2024
CORE: Color Regression for Multiple Colors Fashion Garments
A Ramé, A Douillard, C Ollion
CVPR 2022 Workshop on Computer Vision for Fashion, Art, and Design, 2020
Cited by 5* · 2020
WARP: On the Benefits of Weight Averaged Rewarded Policies
A Ramé, J Ferret, N Vieillard, R Dadashi, L Hussenot, PL Cedoz, ...
arXiv preprint arXiv:2406.16768, 2024
Cited by 3 · 2024
Pre-train, fine-tune, interpolate: a three-stage strategy for domain generalization
A Ramé, J Zhang, L Bottou, D Lopez-Paz
NeurIPS 2022 Interpolation Workshop
Cited by 3*
Articles 1–20