Video: Innovation – Communication-Efficient Optimization & Learning
From Jill Peckham
Large-scale distributed optimization has become increasingly important with the emergence of edge computation architectures such as the federated learning setup, where large amounts of data are massively distributed across tactical devices and systems. A key bottleneck for many such large-scale problems is the communication overhead of exchanging information between devices over bandwidth-limited networks. Existing approaches mitigate these bottlenecks either by using different forms of compression or by iteratively mixing local models. We first propose a novel class of highly communication-efficient operators that employ quantization with explicit sparsification. Furthermore, we incorporate local iterations into our algorithm, which allows communication to be infrequent and possibly asynchronous, thereby significantly reducing the amount of communication required.
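The abstract does not include implementation details; as a minimal sketch of the general idea of combining quantization with explicit (top-k) sparsification of a gradient before transmission, consider the following Python example. The function names and parameters (sparsify_and_quantize, k, num_levels) are illustrative assumptions, not the specific operators proposed in the talk.

    import numpy as np

    def sparsify_and_quantize(grad, k, num_levels=4):
        """Compress a gradient by keeping only the k largest-magnitude
        entries (explicit sparsification) and quantizing the retained
        values to a small number of uniform levels.
        Illustrative sketch only, not the operators from the talk."""
        flat = grad.ravel()
        # Explicit sparsification: indices of the k largest-magnitude entries.
        idx = np.argpartition(np.abs(flat), -k)[-k:]
        vals = flat[idx]
        # Uniform quantization of the retained values.
        scale = np.max(np.abs(vals))
        if scale == 0:
            levels = np.zeros_like(vals, dtype=np.int8)
        else:
            levels = np.round(vals / scale * num_levels).astype(np.int8)
        # Only (idx, levels, scale) and the shape need to be transmitted.
        return idx, levels, scale, grad.shape

    def decompress(idx, levels, scale, shape, num_levels=4):
        """Reconstruct a sparse, quantized estimate of the gradient."""
        flat = np.zeros(int(np.prod(shape)))
        flat[idx] = levels.astype(np.float64) / num_levels * scale
        return flat.reshape(shape)

    # Example: compress a random gradient, keeping 5% of its entries.
    g = np.random.randn(1000)
    idx, levels, scale, shape = sparsify_and_quantize(g, k=50)
    g_hat = decompress(idx, levels, scale, shape)

In a federated setting, each device would send only the index set, the low-precision levels, and a single scale factor instead of the full-precision gradient, which is where the bandwidth savings described in the abstract come from.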
This presentation also highlights how the IOBT CRA connects researchers from industry (e.g., Amazon and Adobe) who are not part of the Alliance to collaborate on critical Army problems.