Ghada Sokar
Google DeepMind
Verified email at google.com
Title
Cited by
Year
The Dormant Neuron Phenomenon in Deep Reinforcement Learning
G Sokar, R Agarwal, PS Castro, U Evci
ICML 2023, International Conference on Machine Learning, Oral, 2023
103 · 2023
SpaceNet: Make Free Space For Continual Learning
G Sokar, DC Mocanu, M Pechenizkiy
Elsevier Neurocomputing Journal, 2020
84 · 2020
Deep Ensembling with No Overhead for either Training or Testing: The All-Round Blessings of Dynamic Sparsity
S Liu, T Chen, Z Atashgahi, X Chen, G Sokar, E Mocanu, M Pechenizkiy, ...
ICLR 2022. The Tenth International Conference on Learning Representations, 2021
61 · 2021
Dynamic Sparse Training for Deep Reinforcement Learning
G Sokar, E Mocanu, DC Mocanu, M Pechenizkiy, P Stone
IJCAI-ECAI 2022. The 31st International Joint Conference on Artificial …, 2021
45 · 2021
Topological Insights in Sparse Neural Networks
S Liu, T Van der Lee, A Yaman, Z Atashgahi, D Ferraro, G Sokar, ...
ECML PKDD 2020, 2020
36* · 2020
Quick and robust feature selection: the strength of energy-efficient sparse training for autoencoders
Z Atashgahi, G Sokar, T Van Der Lee, E Mocanu, DC Mocanu, R Veldhuis, ...
Machine Learning, 1-38, 2022
27 · 2022
Mixtures of experts unlock parameter scaling for deep RL
J Obando-Ceron, G Sokar, T Willi, C Lyle, J Farebrother, J Foerster, ...
ICML 2024, International Conference on Machine Learning, Spotlight., 2024
23 · 2024
A generic OCR using deep siamese convolution neural networks
G Sokar, EE Hemayed, M Rehan
2018 IEEE 9th Annual Information Technology, Electronics and Mobile …, 2018
21 · 2018
Where to Pay Attention in Sparse Training for Feature Selection?
G Sokar, Z Atashgahi, M Pechenizkiy, DC Mocanu
NeurIPS 2022, 36th Annual Conference on Neural Information Processing Systems, 2022
20 · 2022
Automatic Noise Filtering with Dynamic Sparse Training in Deep Reinforcement Learning
B Grooten, G Sokar, S Dohare, E Mocanu, ME Taylor, M Pechenizkiy, ...
AAMAS 2023. 22nd International Conference on Autonomous Agents and …, 2023
16 · 2023
Learning Invariant Representation for Continual Learning
G Sokar, DC Mocanu, M Pechenizkiy
AAAI Workshop on Meta-Learning for Computer Vision (AAAI-2021), 2020
16 · 2020
Quick and robust feature selection: the strength of energy-efficient sparse training for autoencoders
Z Atashgahi, G Sokar, T van der Lee, E Mocanu, DC Mocanu, R Veldhuis, ...
Machine Learning Journal, 2020
16 · 2020
Self-Attention Meta-Learner for Continual Learning
G Sokar, DC Mocanu, M Pechenizkiy
AAMAS 2021. 20th International Conference on Autonomous Agents and …, 2021
15 · 2021
Avoiding Forgetting and Allowing Forward Transfer in Continual Learning via Sparse Networks
G Sokar, DC Mocanu, M Pechenizkiy
ECML PKDD 2022, 2021
14* · 2021
Continual learning with dynamic sparse training: Exploring algorithms for effective model updates
MO Yildirim, EC Gok, G Sokar, DC Mocanu, J Vanschoren
Conference on Parsimony and Learning, 94-107, 2024
8 · 2024
Dynamic sparse training for deep reinforcement learning (poster)
GAZN Sokar, E Mocanu, DC Mocanu, M Pechenizkiy, P Stone
Sparsity in Neural Networks: Advancing Understanding and Practice 2021, 2021
3 · 2021
Don't flatten, tokenize! Unlocking the key to SoftMoE's efficacy in deep RL
G Sokar, J Obando-Ceron, A Courville, H Larochelle, PS Castro
ICLR 2025, Spotlight, 2024
1 · 2024
Supervised Feature Selection via Ensemble Gradient Information from Sparse Neural Networks
K Liu, Z Atashgahi, G Sokar, M Pechenizkiy, DC Mocanu
International Conference on Artificial Intelligence and Statistics, 3952-3960, 2024
1 · 2024
Continual Lifelong Learning for Intelligent Agents
G Sokar
IJCAI 2021. International Joint Conferences on Artificial Intelligence (IJCAI), 2021
1 · 2021
Training neural networks by resetting dormant neurons
U Evci, PSC Rivadeneira, GAERZN Sokar, R Agarwal
US Patent App. 18/424,633, 2024
2024
Articles 1–20