Saeed Rashidi
Hewlett Packard Labs (HPE)
Verified email at hpe.com
Title · Cited by · Year
ASTRA-sim: Enabling SW/HW co-design exploration for distributed DL training platforms
S Rashidi, S Sridharan, S Srinivasan, T Krishna
2020 IEEE International Symposium on Performance Analysis of Systems and …, 2020
34 · 2020
Enabling compute-communication overlap in distributed deep learning training platforms
S Rashidi, M Denton, S Sridharan, S Srinivasan, A Suresh, J Nie, ...
2021 ACM/IEEE 48th Annual International Symposium on Computer Architecture …, 2021
27 · 2021
Improving MLC PCM performance through relaxed write and read for intermediate resistance levels
S Rashidi, M Jalili, H Sarbazi-Azad
ACM Transactions on Architecture and Code Optimization (TACO) 15 (1), 1-31, 2018
24 · 2018
A survey on PCM lifetime enhancement schemes
S Rashidi, M Jalili, H Sarbazi-Azad
ACM Computing Surveys (CSUR) 52 (4), 1-38, 2019
20 · 2019
Themis: A network bandwidth-aware collective scheduling policy for distributed training of DL models
S Rashidi, W Won, S Srinivasan, S Sridharan, T Krishna
Proceedings of the 49th Annual International Symposium on Computer …, 2022
13 · 2022
Scalable distributed training of recommendation models: An ASTRA-sim + ns3 case-study with TCP/IP transport
S Rashidi, P Shurpali, S Sridharan, N Hassani, D Mudigere, K Nair, ...
2020 IEEE Symposium on High-Performance Interconnects (HOTI), 33-42, 2020
8 · 2020
ASTRA-sim 2.0: Modeling Hierarchical Networks and Disaggregated Systems for Large-model Training at Scale
W Won, T Heo, S Rashidi, S Sridharan, S Srinivasan, T Krishna
2023 IEEE International Symposium on Performance Analysis of Systems and …, 2023
7 · 2023
Impact of RoCE congestion control policies on distributed training of DNNs
T Khan, S Rashidi, S Sridharan, P Shurpali, A Akella, T Krishna
2022 IEEE Symposium on High-Performance Interconnects (HOTI), 39-48, 2022
7 · 2022
Efficient distributed inference of deep neural networks via restructuring and pruning
A Abdi, S Rashidi, F Fekri, T Krishna
Proceedings of the AAAI Conference on Artificial Intelligence 37 (6), 6640-6648, 2023
5* · 2023
COMET: A comprehensive cluster design methodology for distributed deep learning training
DK Kadiyala, S Rashidi, T Heo, AR Bambhaniya, T Krishna, A Daglis
arXiv preprint arXiv:2211.16648, 2022
2 · 2022
Exploring multi-dimensional hierarchical network topologies for efficient distributed training of trillion parameter DL models
W Won, S Rashidi, S Srinivasan, T Krishna
arXiv preprint arXiv:2109.11762, 2021
2 · 2021
Chakra: Advancing Performance Benchmarking and Co-design using Standardized Execution Traces
S Sridharan, T Heo, L Feng, Z Wang, M Bergeron, W Fu, S Zheng, ...
arXiv preprint arXiv:2305.14516, 2023
2023
HW-SW Methods for Modeling and Optimizing Communication for Scalable Training of Deep Learning Models
S Rashidi
Georgia Institute of Technology, 2023
2023
Exploring Memory Expansion Designs for Training Mixture-of-Experts Models
T Heo, S Rashidi, C Man, DK Kadiyala, W Won, S Srinivasan, ...