Collective communication algorithms are an important component of distributed computation. Indeed, in the case of deep learning, collective communication is the Amdahl's-law bottleneck of data-parallel training.
This paper introduces SCCL (for Synthesized Collective Communication Library), a systematic approach to synthesizing collective communication algorithms that are explicitly tailored to a particular hardware topology. SCCL synthesizes algorithms along the Pareto frontier spanning from latency-optimal to bandwidth-optimal implementations of a collective. The paper demonstrates how to encode the synthesis problem as a quantifier-free SMT formula which can be discharged to a theorem prover. We show how our carefully built encoding enables SCCL to scale.
We synthesize novel latency- and bandwidth-optimal algorithms not seen in the literature on two popular hardware topologies. We also show how SCCL efficiently lowers algorithms to implementations on two hardware architectures (NVIDIA and AMD) and demonstrate competitive performance with hand-optimized collective communication libraries.
Conference Day: Mon 1 Mar (displayed time zone: Eastern Time, US & Canada)
11:10 - 12:10
Synthesizing Optimal Collective Algorithms
Zixian Cai (Australian National University), Zhengyang Liu (University of Utah), Saeed Maleki (Microsoft Research), Madan Musuvathi (Microsoft Research), Todd Mytkowicz (Microsoft Research), Jacob Nelson (Microsoft Research), Olli Saarikivi (Microsoft Research, Redmond)
Parallel Binary Code Analysis
Xiaozhu Meng (Rice University), Jonathon Anderson (Rice University), John Mellor-Crummey (Rice University), Mark W. Krentel (Rice University), Barton P. Miller (University of Wisconsin - Madison), Srđan Milaković (Rice University)
Compiler Support for Near Data Computing
Mahmut Taylan Kandemir (Penn State University, USA), Jihyun Ryoo (Penn State University, USA), Xulong Tang (University of Pittsburgh, USA), Mustafa Karakoy (TUBITAK-BILGEM, Turkey)
Scaling Implicit Parallelism via Dynamic Control Replication
Michael Bauer (NVIDIA), Wonchan Lee (NVIDIA), Elliott Slaughter (SLAC National Accelerator Laboratory), Zhihao Jia (Carnegie Mellon University), Mario Di Renzo (Sapienza University of Rome), Manolis Papadakis (NVIDIA), Galen Shipman (Los Alamos National Laboratory), Patrick McCormick (Los Alamos National Laboratory), Michael Garland (NVIDIA), Alex Aiken (Stanford University)