The “Cloudspecs: Cloud Hardware Evolution Through the Looking Glass” CIDR paper by Till Steinert, Maximilian Kuschewski, and Viktor Leis was the first paper Murat and I read this year. It was a short but interesting read. Below is our reading video and my one-paragraph summary.
The paper discusses the evolution of AWS cloud (virtual) hardware capabilities over the past 10 years from a cost-efficiency perspective. The paper also provides an interactive tool/dataset for cloud hardware evolution. The authors find that cloud CPUs have not become much more cost-efficient: a 2x improvement in cost efficiency against a 10x improvement in core count for non-Graviton offerings. At the same time, network bandwidth cost-efficiency improves substantially more, suggesting a possible shift in future bottlenecks from network to compute and corroborating the move toward disaggregated architectures. Some aspects of the study, however, lack deeper insight. For example, when looking at memory bandwidth improvements, it might have been a good idea to normalize by core count, to see whether each of the faster cores actually gets more memory bandwidth now than in the past. I’d also love to see a better exploration of non-standard features, like NIC offload capabilities, CPU encryption offloads, and more sophisticated instruction sets (AVX-512?), as all of these provide value that may fall outside the typical “basic” compute benchmark cost analysis presented in the paper.
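The per-core and per-dollar normalizations suggested above are easy to sketch. The instance names, bandwidths, and prices below are hypothetical placeholders for illustration, not measurements from the paper or real AWS pricing:

```python
# Hypothetical illustration of the bandwidth-per-core normalization.
# All figures are made-up placeholders, not data from the paper or AWS.
instances = {
    # name: (vCPUs, memory_bandwidth_GBps, hourly_price_usd)
    "gen-old": (8, 40.0, 0.40),
    "gen-new": (64, 200.0, 2.00),
}

for name, (vcpus, mem_bw, price) in instances.items():
    bw_per_core = mem_bw / vcpus    # GB/s available to each core
    bw_per_dollar = mem_bw / price  # GB/s per hourly dollar
    print(f"{name}: {bw_per_core:.2f} GB/s per core, "
          f"{bw_per_dollar:.1f} GB/s per $/h")
```

With numbers like these, aggregate bandwidth grows 5x while per-core bandwidth actually drops, which is exactly the kind of effect a raw aggregate comparison would hide.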