MLPerf Results Show Advances in Machine Learning Inference
Today, the open engineering consortium MLCommons® announced new results from MLPerf™ Inference v2.1, which measures the performance of inference - the application of a trained machine learning model to new data. Inference enables the intelligent enhancement of a vast array of applications and systems. This round set new records with nearly 5,300 performance results and 2,400 power measurements, 1.37X and 1.09X more than the previous round, respectively, reflecting the community's vigor.
MLPerf benchmarks are comprehensive system tests that stress machine learning models, software, and hardware, and optionally monitor energy consumption. The open-source and peer-reviewed benchmark suites level the playing ground for competitiveness, which fosters innovation, performance, and energy efficiency for the whole sector.
"We are very excited with the growth in the ML community and welcome new submitters across the globe such as Biren, Moffett AI, Neural Magic, and SAPEON," said MLCommons Executive Director David Kanter. "The exciting new architectures all demonstrate the creativity and innovation in the industry designed to create greater AI functionality that will bring new and exciting capability to business and consumers alike."
The MLPerf Inference benchmarks focus on datacenter and edge systems; Alibaba, ASUSTeK, Azure, Biren, Dell, Fujitsu, GIGABYTE, H3C, HPE, Inspur, Intel, Krai, Lenovo, Moffett, Nettrix, Neural Magic, NVIDIA, OctoML, Qualcomm Technologies, Inc., SAPEON, and Supermicro are among the contributors to this submission round.
To view the results and find additional information about the benchmarks, please visit https://mlcommons.org/en/inference-datacenter-21/ and https://mlcommons.org/en/inference-edge-21/. These results reflect broad industry participation and a focus on energy efficiency, paving the path toward more capable intelligent systems that will benefit society as a whole.
MLCommons is an open engineering consortium with a mission to benefit society by accelerating innovation in machine learning. The foundation for MLCommons began with the MLPerf benchmark in 2018, which rapidly scaled as a set of industry metrics to measure machine learning performance and promote transparency of machine learning techniques. In collaboration with its 50+ founding partners - global technology providers, academics, and researchers - MLCommons is focused on collaborative engineering work that builds tools for the entire machine learning industry through benchmarks and metrics, public datasets, and best practices.