Arsalan Mousavian
NVIDIA Research
Verified email at nvidia.com · Homepage
Title
Cited by
Year
3D bounding box estimation using deep learning and geometry
A Mousavian, D Anguelov, J Flynn, J Kosecka
Proceedings of the IEEE Conference on Computer Vision and Pattern …, 2017
1287 · 2017
ProgPrompt: Generating situated robot task plans using large language models
I Singh, V Blukis, A Mousavian, A Goyal, D Xu, J Tremblay, D Fox, ...
2023 IEEE International Conference on Robotics and Automation (ICRA), 11523 …, 2023
645 · 2023
6-DOF GraspNet: Variational grasp generation for object manipulation
A Mousavian, C Eppner, D Fox
Proceedings of the IEEE/CVF International Conference on Computer Vision …, 2019
645 · 2019
Contact-GraspNet: Efficient 6-DoF grasp generation in cluttered scenes
M Sundermeyer, A Mousavian, R Triebel, D Fox
2021 IEEE International Conference on Robotics and Automation (ICRA), 13438 …, 2021
356 · 2021
Synthesizing training data for object detection in indoor scenes
G Georgakis, A Mousavian, AC Berg, J Kosecka
arXiv preprint arXiv:1702.07836, 2017
267 · 2017
6-DOF grasping for target-driven object manipulation in clutter
A Murali, A Mousavian, C Eppner, C Paxton, D Fox
2020 IEEE International Conference on Robotics and Automation (ICRA), 6232-6238, 2020
242 · 2020
Visual representations for semantic target driven navigation
A Mousavian, A Toshev, M Fišer, J Košecká, A Wahid, J Davidson
2019 International Conference on Robotics and Automation (ICRA), 8846-8852, 2019
241 · 2019
PoseRBPF: A Rao–Blackwellized particle filter for 6-D object pose tracking
X Deng, A Mousavian, Y Xiang, F Xia, T Bretl, D Fox
IEEE Transactions on Robotics 37 (5), 1328-1342, 2021
233 · 2021
Self-supervised 6D object pose estimation for robot manipulation
X Deng, Y Xiang, A Mousavian, C Eppner, T Bretl, D Fox
2020 IEEE International Conference on Robotics and Automation (ICRA), 3665-3671, 2020
204 · 2020
ACRONYM: A large-scale grasp dataset based on simulation
C Eppner, A Mousavian, D Fox
2021 IEEE International Conference on Robotics and Automation (ICRA), 6222-6227, 2021
195 · 2021
Joint semantic segmentation and depth estimation with deep convolutional networks
A Mousavian, H Pirsiavash, J Košecká
2016 Fourth International Conference on 3D Vision (3DV), 611-619, 2016
171 · 2016
Unseen object instance segmentation for robotic environments
C Xie, Y Xiang, A Mousavian, D Fox
IEEE Transactions on Robotics 37 (5), 1343-1359, 2021
150 · 2021
LatentFusion: End-to-end differentiable reconstruction and rendering for unseen object pose estimation
K Park, A Mousavian, Y Xiang, D Fox
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2020
146 · 2020
Deep learning approaches to grasp synthesis: A review
R Newbury, M Gu, L Chumbley, A Mousavian, C Eppner, J Leitner, J Bohg, ...
IEEE Transactions on Robotics 39 (5), 3994-4015, 2023
137 · 2023
Learning RGB-D feature embeddings for unseen object instance segmentation
Y Xiang, C Xie, A Mousavian, D Fox
Conference on Robot Learning, 461-470, 2021
132 · 2021
STORM: An integrated framework for fast joint-space model-predictive control for reactive manipulation
M Bhardwaj, B Sundaralingam, A Mousavian, ND Ratliff, D Fox, F Ramos, ...
Conference on Robot Learning, 750-759, 2022
115 · 2022
MegaPose: 6D pose estimation of novel objects via render & compare
Y Labbé, L Manuelli, A Mousavian, S Tyree, S Birchfield, J Tremblay, ...
arXiv preprint arXiv:2212.06870, 2022
109 · 2022
The best of both modes: Separately leveraging RGB and depth for unseen object instance segmentation
C Xie, Y Xiang, A Mousavian, D Fox
Conference on Robot Learning, 1369-1378, 2020
108 · 2020
Multiview RGB-D dataset for object instance detection
G Georgakis, MA Reza, A Mousavian, PH Le, J Košecká
2016 Fourth International Conference on 3D Vision (3DV), 426-434, 2016
100 · 2016
Object rearrangement using learned implicit collision functions
M Danielczuk, A Mousavian, C Eppner, D Fox
2021 IEEE International Conference on Robotics and Automation (ICRA), 6010-6017, 2021
93 · 2021
Articles 1–20