Finally, how many efficiencies do supercomputers have? And what do they measure?

János Végh¹
© The Author(s) 2020
Abstract
Using an extremely large number of processing elements in computing systems leads to unexpected phenomena, such as different efficiencies of the same system for different tasks, that cannot be explained within the frame of the classical computing paradigm. The simple, non-technical model introduced here makes it possible to set up the framework and formalism needed to explain these unexpected experiences around supercomputing. The paper shows that the degradation of the efficiency of parallelized sequential systems is a natural consequence of the computing paradigm, rather than an engineering imperfection. The workload is largely responsible for wasting energy, as well as for limiting the size and the type of tasks that supercomputers can run. Case studies provide insight into how the different contributions compete to dominate the resulting payload performance of the computing system, and how advances in technology made computing plus communication the dominant contribution in defining the efficiency of supercomputers. The model also enables predictions to be derived about supercomputer performance limitations in the near future and provides hints for enhancing supercomputer components. The phenomena show interesting parallels with those experienced in science more than a century ago, the study of which led to the development of modern science.

Keywords Supercomputer performance · Parallelized sequential processing · Efficiency of supercomputers · Limitations of parallel processing · Behavior of extreme-scale systems
* János Végh, [email protected]
¹ Kalimános BT, Debrecen, Hungary
1 Introduction

After the dynamic growth of single-processor performance stalled about two decades ago [1], the only way left to achieve the required high computing performance has been to parallelize the work of a very large number of sequentially working single processors. However, as was predicted very early [2] and experimentally confirmed decades later [3], the scaling of parallelized computing is not linear. Indeed, "there comes a point when using more processors ... actually increases the execution time rather than reducing it" [3]. Parallelized sequential processing has different rules of the game [3, 4]: the performance gain (the "speedup") has its inherent bounds [5]. Just as the laws of science limit the performance of single-thread processors [6], the commonly used computing paradigm (and its technical implementation) limits the performance of supercomputers [4].

On the one hand, experts expected the performance to reach the magic 1 Eflop/s around the year 2020; see Fig. 1 in [7]. The authors noticed that "the performance increase in the No. 1 systems slowed down around 2013, and it was the same for the sum performance," but they extrapolated linearly, expecting that the development continues and that "zettascale computing" (i.e., 10^4 times more than the present performance) …
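To make the inherent bound on the speedup concrete, the sketch below uses a simple Amdahl-style model extended with a per-processor communication/synchronization overhead. The parameter names (alpha, kappa) and their numeric values are illustrative assumptions, not figures taken from the paper, but they reproduce the qualitative behavior quoted above: beyond a certain number of processors, the execution time increases again.

    # Minimal sketch of why parallelized sequential processing has bounded speedup.
    # Assumptions (illustrative, not from the paper): an Amdahl-style model with a
    # serial fraction `alpha` plus a per-processor overhead `kappa * n_proc`, so the
    # execution time eventually grows again as the number of processors increases.

    def execution_time(n_proc: int, alpha: float = 1e-6, kappa: float = 1e-9) -> float:
        """Relative execution time on n_proc processors (single-processor time = 1).

        alpha: non-parallelizable (sequential) fraction of the work
        kappa: per-processor overhead (communication, synchronization, OS noise)
        """
        return alpha + (1.0 - alpha) / n_proc + kappa * n_proc

    def speedup(n_proc: int, **kwargs) -> float:
        return 1.0 / execution_time(n_proc, **kwargs)

    def efficiency(n_proc: int, **kwargs) -> float:
        return speedup(n_proc, **kwargs) / n_proc

    if __name__ == "__main__":
        for n in (10**3, 10**4, 10**5, 10**6, 10**7):
            print(f"N={n:>10,}  speedup={speedup(n):12.1f}  efficiency={efficiency(n):.4f}")
        # With these toy parameters the speedup peaks near N = sqrt((1 - alpha) / kappa);
        # beyond that point, adding processors *increases* the execution time,
        # matching the observation quoted from [3].

With these toy parameters the optimum falls near a few times 10^4 processors; the exact location depends entirely on the assumed alpha and kappa, which illustrates the point argued here: the workload and the computing paradigm, not engineering imperfection, set the bound.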