Why Is MPI So Slow? Analyzing the Fundamental Limits in Implementing MPI-3.1. K Raffenetti, A Amer, L Oden, C Archer, W Bland, H Fujita, Y Guo, et al. Proceedings of the International Conference for High Performance Computing …, 2017. Cited by 39.
Lessons Learned from Moving Earth System Grid Data Sets over a 20 Gbps Wide-Area Network. R Kettimuthu, A Sim, D Gunter, B Allcock, PT Bremer, J Bresnahan, et al. Proceedings of the 19th ACM International Symposium on High Performance …, 2010. Cited by 30.
Distributed Monitoring and Management of Exascale Systems in the Argo Project. S Perarnau, R Thakur, K Iskra, K Raffenetti, F Cappello, R Gupta, et al. Distributed Applications and Interoperable Systems: 15th IFIP WG 6.1 …, 2015. Cited by 29.
MPICH User's Guide. P Balaji, W Bland, W Gropp, R Latham, H Lu, AJ Pena, K Raffenetti, S Seo, et al. Argonne National Laboratory, 2014. Cited by 25.
Argo: An Exascale Operating System and Runtime. S Perarnau, R Gupta, P Beckman, P Balaji, C Bordage, G Bosilca, et al. The International Conference for High Performance Computing, Networking …, 2015. Cited by 19.
MPIX Stream: An Explicit Solution to Hybrid MPI+X Programming. H Zhou, K Raffenetti, Y Guo, R Thakur. Proceedings of the 29th European MPI Users' Group Meeting, 1-10, 2022. Cited by 14.
Memory Compression Techniques for Network Address Management in MPI. Y Guo, CJ Archer, M Blocksome, S Parker, W Bland, K Raffenetti, P Balaji. 2017 IEEE International Parallel and Distributed Processing Symposium (IPDPS), 2017. Cited by 10.
C-Coll: Introducing Error-Bounded Lossy Compression into MPI Collectives. J Huang, S Di, X Yu, Y Zhai, J Liu, K Raffenetti, H Zhou, K Zhao, Z Chen, et al. arXiv preprint arXiv:2304.03890, 2023. Cited by 9.
MPICH User's Guide. A Amer, P Balaji, W Bland, W Gropp, Y Guo, R Latham, H Lu, L Oden, et al. Mathematics and Computer Science Division, Argonne National Laboratory, 2015. Cited by 9.
MPICH User's Guide, Version 3.1.1. P Balaji, W Bland, W Gropp, R Latham, H Lu, A Pena, K Raffenetti, et al. Mathematics and Computer Science Division, Argonne National Laboratory …, 2014. Cited by 8.
gZCCL: Compression-Accelerated Collective Communication Framework for GPU Clusters. J Huang, S Di, X Yu, Y Zhai, J Liu, Y Huang, K Raffenetti, H Zhou, K Zhao, et al. Proceedings of the 38th ACM International Conference on Supercomputing, 437-448, 2024. Cited by 7.
Toward Implementing Robust Support for Portals 4 Networks in MPICH. K Raffenetti, AJ Pena, P Balaji. 2015 15th IEEE/ACM International Symposium on Cluster, Cloud and Grid …, 2015. Cited by 7.
An Optimized Error-Controlled MPI Collective Framework Integrated with Lossy Compression. J Huang, S Di, X Yu, Y Zhai, Z Zhang, J Liu, X Lu, K Raffenetti, H Zhou, et al. 2024 IEEE International Parallel and Distributed Processing Symposium (IPDPS), 2024. Cited by 6.
Quantifying the Performance Benefits of Partitioned Communication in MPI. T Gillis, K Raffenetti, H Zhou, Y Guo, R Thakur. Proceedings of the 52nd International Conference on Parallel Processing, 285-294, 2023. Cited by 6.
Implementing Flexible Threading Support in Open MPI. N Evans, J Ciesko, SL Olivier, H Pritchard, S Iwasaki, K Raffenetti, P Balaji. 2020 Workshop on Exascale MPI (ExaMPI), 21-30, 2020. Cited by 6.
MPICH User's Guide, Version 3.2. A Amer, P Balaji, W Bland, W Gropp, R Latham, H Lu, L Oden, A Pena, et al. Argonne National Laboratory, 2015. Cited by 5.
MPICH Installer's Guide. P Balaji, W Bland, W Gropp, R Latham, H Lu, AJ Pena, K Raffenetti, S Seo, et al. 2014. Cited by 4.
POSTER: Optimizing Collective Communications with Error-Bounded Lossy Compression for GPU Clusters. J Huang, S Di, X Yu, Y Zhai, J Liu, Y Huang, K Raffenetti, H Zhou, K Zhao, et al. Proceedings of the 29th ACM SIGPLAN Annual Symposium on Principles and …, 2024. Cited by 3.
Locality-Aware PMI Usage for Efficient MPI Startup. K Raffenetti, N Bayyapu, D Durnov, M Takagi, P Balaji. 2018 IEEE 4th International Conference on Computer and Communications (ICCC), 2018. Cited by 2.
Generating Bindings in MPICH. H Zhou, K Raffenetti, W Bland, Y Guo. arXiv preprint arXiv:2401.16547, 2024. Cited by 1.