
13 Most Powerful Supercomputers in the World

November 22, 2024

Supercomputers are the pinnacle of computational technology, built to tackle problems too complex for conventional machines. They process enormous datasets, driving advances in sophisticated scientific research, artificial intelligence, nuclear simulation, and climate modeling, and they push the limits of what is feasible, enabling simulations and analyses once thought unattainable. Their speeds are measured in petaFLOPS, or quadrillions of floating-point calculations per second. This article surveys the 13 most powerful supercomputers in the world, highlighting their capabilities and notable contributions.
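To put “quadrillions of calculations per second” in perspective, here is a tiny back-of-the-envelope sketch in Python. The 442 petaFLOPS figure comes from Fugaku’s entry below; the world-population figure is a round assumption.

```python
# Scale of "quadrillions of calculations per second" (illustrative numbers).
PETA = 10**15                 # 1 petaFLOPS = 10^15 floating-point operations per second
fugaku_flops = 442 * PETA     # Fugaku's sustained speed, from the list below

# If all 8 billion people on Earth each did one calculation per second,
# how long would they need to match one second of Fugaku's work?
people = 8 * 10**9
seconds_needed = fugaku_flops / people
print(f"{seconds_needed / (3600 * 24 * 365):.1f} years")   # ~1.8 years
```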

Fugaku

Specifications:

• Speed: 442 petaFLOPS
• Cores: 7,630,848
• Peak Performance: 537 petaFLOPS
• Vendor: Fujitsu
• Location: RIKEN Center for Computational Science, Kobe, Japan
• Primary Use: COVID-19 research, AI training, climate modeling

Fugaku, developed jointly by Fujitsu and RIKEN, was the world’s fastest supercomputer from 2020 to 2022. With its ARM-based A64FX CPUs and more than 7.6 million cores, it marked a major advance in computational research. With a peak performance of 537 petaFLOPS, Fugaku surpasses the combined output of the next four supercomputers on the HPCG benchmark.

Fugaku takes its name from an alternate name for Japan’s famous Mount Fuji. It played an instrumental role during the COVID-19 pandemic, demonstrating the effectiveness of masks made of non-woven fabric, and it continues to drive AI and climate-science research, including the training of large language models in Japanese. A roughly $1 billion undertaking spanning ten years, Fugaku exemplifies Japan’s commitment to technical leadership and scientific innovation.

Summit

Specifications:

• Speed: 148.6 petaFLOPS
• Cores: 2,414,592
• Peak Performance: 200 petaFLOPS
• Vendor: IBM
• Location: Oak Ridge National Laboratory, Tennessee, USA
• Primary Use: Scientific research, AI applications

From 2018 until 2020, IBM’s Summit, built for Oak Ridge National Laboratory, was the most powerful supercomputer in the world. Its 4,600 servers, spread across a floor the size of two basketball courts, integrate more than 9,200 IBM Power9 CPUs and 27,600 NVIDIA Tesla V100 GPUs, linked by 185 kilometers of fiber-optic cable, and reach an astounding peak of 200 petaFLOPS.

While analyzing genomic data, this computational behemoth reached 1.88 exaops, the first time any machine broke the exascale barrier (in mixed-precision arithmetic). Summit has contributed to research ranging from materials discovery and turbulence modeling to COVID-19 drug screening, and it continues to power work in artificial intelligence and machine learning. Its energy efficiency of 14.66 gigaFLOPS per watt reflects a sustainable design, even though it draws as much electricity as roughly 8,100 homes.
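Those two figures, sustained speed and efficiency, are enough to estimate Summit’s power budget. A minimal back-of-the-envelope sketch, assuming the spec-list numbers above:

```python
# Estimate Summit's power draw from its sustained speed and efficiency rating.
sustained_flops = 148.6e15   # 148.6 petaFLOPS (sustained, from the spec list)
flops_per_watt = 14.66e9     # 14.66 gigaFLOPS per watt

power_megawatts = sustained_flops / flops_per_watt / 1e6
print(f"~{power_megawatts:.1f} MW")   # ~10.1 MW at full load
```

Spread over 8,100 homes, that is about 1.25 kW each, a plausible average household draw, which is where the homes comparison comes from.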

Sierra

Specifications:

• Speed: 94.6 petaFLOPS
• Cores: 1,572,480
• Vendor: IBM
• Location: Lawrence Livermore National Laboratory, USA
• Primary Use: Nuclear weapons research

IBM built Sierra specifically for the US Department of Energy’s stockpile stewardship program. By pairing NVIDIA’s Volta GPUs with IBM’s Power9 processors, Sierra delivers seven times the workload efficiency and six times the sustained performance of its predecessor, Sequoia. At 94.6 petaFLOPS it ranks among the fastest supercomputers in the world, and it excels at the predictive simulations that guarantee the safety and reliability of nuclear weapons without live testing.

Sierra’s state-of-the-art, GPU-accelerated architecture brings enormous computational efficiency to extremely complicated models. A key collaboration between IBM and NVIDIA, Sierra advances computational techniques in nuclear science while showcasing the potential of hybrid CPU-GPU designs for national security.

Sunway TaihuLight

Specifications:

• Speed: 93 petaFLOPS
• Cores: 10,649,600
• Peak Performance (Per CPU): 3+ teraFLOPS
• Vendor: NRCPC
• Location: National Supercomputing Center, Wuxi, China
• Primary Use: Climate research, life sciences

Sunway TaihuLight, built entirely on domestic SW26010 CPUs, is a testament to China’s technological independence. Each of these many-core processors integrates 260 processing elements and delivers more than three teraFLOPS. By placing scratchpad memory directly in its compute elements, TaihuLight eases memory bottlenecks and boosts efficiency for complex applications.

The machine’s power supports research in the pharmaceutical and life sciences and drives groundbreaking simulations, such as modeling the universe with 10 trillion digital particles. Sunway TaihuLight reflects China’s ambition to lead global AI and supercomputing by 2030; as a flagship system, it showcases the country’s progress toward independence and innovation in high-performance computing.

Tianhe-2A

Specifications:

• Speed: 61.4 petaFLOPS
• Cores: 4,981,760
• Memory: 1,375TB
• Cost: $390 million
• Vendor: NUDT
• Location: National Supercomputing Center, Guangzhou, China
• Primary Use: Government security & research

One of China’s flagship supercomputers, Tianhe-2A has more than 4.9 million cores and reaches 61.4 petaFLOPS. With almost 16,000 compute nodes, each carrying 88GB of memory, it is the largest deployment of Intel Ivy Bridge and Xeon Phi processors in the world. Its enormous aggregate memory of 1,375TB lets the system manage very large datasets effectively.

Built with a substantial investment of $390 million, Tianhe-2A is used largely for high-level government research and security applications, such as simulations and analyses that serve national interests. The machine exemplifies China’s growing processing power and is central to the country’s advances in science and security.

Frontera

Specifications:

• Speed: 23.5 petaFLOPS
• Cores: 448,448
• Special Features: Dual computing subsystems (double & single precision)
• Vendor: Dell EMC
• Location: Texas Advanced Computing Center, University of Texas, USA
• Primary Use: Academic & scientific research

With 448,448 cores producing an astounding 23.5 petaFLOPS, Frontera is the most powerful academic supercomputer in the world. Housed at the Texas Advanced Computing Center (TACC), it offers substantial computational resources to researchers across a broad range of scientific and academic pursuits. The system includes two specialized subsystems: one designed for single-precision, stream-memory computing and the other optimized for double-precision calculations.

This design suits Frontera to a wide range of complex simulations and calculations in domains such as biology, engineering, and climate science. Frontera also supports virtual servers and cloud interfaces, which increases its adaptability and accessibility for academic research, and it plays an essential role in enabling discoveries across many fields.
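The rationale for a separate single-precision, stream-memory subsystem is easy to demonstrate on any machine: a bandwidth-bound kernel touches half as many bytes per element in float32 as in float64, so the same streaming operation tends to run roughly twice as fast. A small NumPy sketch, illustrative only and in no way Frontera-specific:

```python
# A bandwidth-bound kernel moves half as many bytes per element in float32
# as in float64, so a streaming operation runs roughly twice as fast.
import time
import numpy as np

n = 20_000_000
for dtype in (np.float64, np.float32):
    a = np.ones(n, dtype=dtype)
    b = np.ones(n, dtype=dtype)
    t0 = time.perf_counter()
    c = a + b                            # streaming add, limited by memory bandwidth
    elapsed = time.perf_counter() - t0
    gb_moved = 3 * n * a.itemsize / 1e9  # read a, read b, write c
    print(f"{np.dtype(dtype).name}: {gb_moved / elapsed:.1f} GB/s effective")
```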

Piz Daint

Specifications:

• Speed: 21.2 petaFLOPS
• Cores: 387,872
• Primary Features: Burst buffer mode, DataWarp
• Vendor: Cray Inc.
• Location: Swiss National Supercomputing Centre, Switzerland
• Primary Use: Scientific research & Large Hadron Collider data analysis

Piz Daint, housed at the Swiss National Supercomputing Centre in the Swiss Alps, uses 387,872 cores to deliver an astounding 21.2 petaFLOPS of processing power. Built for high-performance scientific computing, it pairs NVIDIA Tesla P100 GPUs with Intel Xeon E5-26xx processors.

One of Piz Daint’s defining features is its DataWarp-powered burst buffer mode, which dramatically increases input/output bandwidth and makes it possible to handle big, unstructured datasets quickly. That capability is essential for analyzing the enormous volumes of data produced by Large Hadron Collider (LHC) experiments. By managing data-intensive computations efficiently, Piz Daint advances research in physics, climate science, and other domains. The staging pattern behind a burst buffer is sketched below.
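In outline, a burst buffer stages data from a slow shared filesystem onto a fast tier close to the compute nodes, so repeated reads run at local speed. The toy Python sketch below illustrates only that stage-in pattern, with stand-in paths in a temp directory; it is not DataWarp’s actual API.

```python
# Toy sketch of the burst-buffer stage-in pattern: copy data once from slow
# shared storage to a fast local tier, then do all repeated reads locally.
import shutil
import tempfile
from pathlib import Path

tmp = Path(tempfile.mkdtemp())
shared = tmp / "shared" / "events.dat"        # stand-in for the slow parallel filesystem
local = tmp / "burst_buffer" / "events.dat"   # stand-in for the fast local tier
shared.parent.mkdir(parents=True)
shared.write_bytes(b"\x00" * (64 << 20))      # 64 MiB of dummy "detector data"

def stage_in(src: Path, dst: Path) -> Path:
    """Copy the input to the fast tier once; later passes read locally."""
    dst.parent.mkdir(parents=True, exist_ok=True)
    shutil.copyfile(src, dst)
    return dst

def analyze(path: Path) -> int:
    """Placeholder for a read-heavy analysis pass over the staged data."""
    total = 0
    with path.open("rb") as f:
        while chunk := f.read(1 << 20):       # stream in 1 MiB chunks
            total += len(chunk)
    return total

staged = stage_in(shared, local)
for _ in range(3):                            # repeated passes all hit the fast tier
    bytes_read = analyze(staged)
print(f"processed {bytes_read / 1e6:.0f} MB per pass from the fast tier")
```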

Trinity

Specifications:

• Speed: 21.2 petaFLOPS
• Cores: 979,072
• Peak Performance: 41 petaFLOPS
• Vendor: Cray Inc.
• Location: Los Alamos National Laboratory, USA
• Primary Use: Nuclear security & weapons simulation
• Key Features: Dual-phase design with Intel processors

Trinity, a potent supercomputer at Los Alamos National Laboratory, is essential to the National Nuclear Security Administration’s (NNSA) Nuclear Security Enterprise. With a sustained speed of 21.2 petaFLOPS and a peak performance of 41 petaFLOPS, it is designed to increase the geometric and physics fidelity of nuclear weapons simulations.

Originally built with Intel Xeon Haswell processors, the system was upgraded in a second phase with Intel Xeon Phi Knights Landing processors for greater processing power. Trinity is essential to the high-performance simulations and computations that guarantee the safety, security, and efficacy of the U.S. nuclear stockpile.

AI Bridging Cloud Infrastructure

Specifications:

• Speed: 19.8 petaFLOPS
• Cores: 391,680
• Peak Performance: 32.577 petaFLOPS
• Vendor: Fujitsu
• Location: National Institute of Advanced Industrial Science and Technology (AIST), Japan
• Primary Use: AI research & development
• Key Features: Large-scale open AI infrastructure, advanced cooling system

The AI Bridging Cloud Infrastructure (ABCI), built by Fujitsu, is the world’s first large-scale open AI computing infrastructure, created to promote and accelerate AI research and development. Housed at Japan’s National Institute of Advanced Industrial Science and Technology (AIST), ABCI comprises 1,088 nodes and achieves a peak performance of 32.577 petaFLOPS. Each node pairs four NVIDIA Tesla V100 GPUs with two Intel Xeon Gold Scalable CPUs and high-speed network components, providing remarkable processing capacity for AI workloads.

One of ABCI’s distinctive features is its cooling system, which combines hot-water and air cooling to achieve 20 times the thermal density of conventional data centers. By letting the supercomputer run with a cooling capacity of 70 kW per rack, this approach greatly improves energy efficiency and sustainability for large-scale AI computation. ABCI powers a wide variety of AI-driven applications and remains essential to the advancement of AI research.
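Taken at face value, the two cooling figures imply what a conventional rack handles. A one-line check, simple division over the quoted numbers:

```python
# Implied conventional-rack cooling capacity from ABCI's quoted figures.
abci_kw_per_rack = 70     # ABCI's cooling capacity per rack, in kW
density_multiplier = 20   # "20 times the thermal density of conventional data centers"

conventional_kw = abci_kw_per_rack / density_multiplier
print(f"~{conventional_kw:.1f} kW per conventional rack")   # ~3.5 kW
```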

SuperMUC-NG

Specifications:

• Speed: 19.4 petaFLOPS
• Cores: 305,856
• Storage: 70 petabytes
• Vendor: Lenovo
• Location: Leibniz Supercomputing Centre, Germany
• Primary Use: European research initiatives
• Key Features: Advanced water cooling system, 5-sided CAVE VR environment

Lenovo built SuperMUC-NG, housed at the Leibniz Supercomputing Centre in Germany, to support European research initiatives. With 305,856 cores, 70 petabytes of storage, and an operating speed of 19.4 petaFLOPS, it enables extensive simulations and data analysis across many scientific domains. Its water-cooling system delivers strong performance while reducing the machine’s environmental impact.

Its visualization facilities, including a 5-sided CAVE virtual reality (VR) environment and a 4K stereoscopic power wall, help researchers make sense of intricate simulations. By supporting research in fields such as environmental science, medicine, and quantum chromodynamics, SuperMUC-NG plays a key role in driving scientific breakthroughs and innovation throughout Europe.

Lassen

Specifications:

• Speed: 18.2 petaFLOPS
• Peak Performance: 23 petaFLOPS
• Cores: 288,288
• Main Memory: 253 terabytes
• Architecture: IBM Power9 processors
• System Size: 40 racks (1/6 the size of Sierra)
• Vendor: IBM
• Location: Lawrence Livermore National Laboratory, United States
• Primary Use: Unclassified simulation and research

Lassen, built by IBM and housed at Lawrence Livermore National Laboratory in the United States, is a high-performance supercomputer dedicated to unclassified research. With 288,288 cores, 253 terabytes of main memory, and a speed of 18.2 petaFLOPS, it provides remarkable computational capacity for simulation and analysis.

Occupying 40 racks to Sierra’s 240, Lassen is a smaller sibling at one-sixth the size. Its IBM Power9 processors reach a peak performance of 23 petaFLOPS, making it a valuable resource for unclassified scientific work. Efficient and adaptable, Lassen handles a wide variety of computational tasks, advancing numerous scientific and technological domains.

Pangea 3

Specifications:

• Speed: 17.8 petaFLOPS
• Cores: 291,024
• Vendor: IBM & NVIDIA
• Location: CSTJF Technical and Scientific Research Center, Pau, France
• Architecture: IBM POWER9 CPUs and NVIDIA Tesla V100 Tensor Core GPUs
• Memory Bandwidth: 5x faster than traditional systems (via CPU-to-GPU NVLink connection)
• Energy Efficiency: Consumes less than 10% of the energy per petaFLOP compared to predecessors (Pangea I & II)

Pangea 3, built by IBM in partnership with NVIDIA, is a powerful supercomputer focused on seismic imaging, production modeling, and asset evaluation. Housed at Total’s CSTJF research center in Pau, France, it runs 291,024 cores at 17.8 petaFLOPS. Its CPU-to-GPU NVLink connection provides memory bandwidth five times faster than that of traditional systems.

Consuming less than 10% of the energy per petaFLOP of its predecessors, Pangea I and II, this architecture greatly improves energy efficiency while increasing computing speed. Combining NVIDIA Tesla V100 Tensor Core GPUs with IBM POWER9 CPUs, Pangea 3 is an essential tool for Total’s operations, enabling critical applications in oil and gas exploration and resource optimization.

IBM Sequoia

Specifications:

• Speed: 17.1 petaFLOPS (Theoretical peak: 20 petaFLOPS)
• Cores: 1,572,864
• Vendor: IBM
• Location: Lawrence Livermore National Laboratory, United States
• Key Uses: Nuclear simulations, climate research, genome analysis, and medical simulations

Built on IBM’s BlueGene/Q architecture, the IBM Sequoia supercomputer is located at Lawrence Livermore National Laboratory. As part of the U.S. National Nuclear Security Administration’s Stockpile Stewardship Program, it is designed for extended nuclear weapons simulations. With 1,572,864 cores and a theoretical peak of 20 petaFLOPS, it is a potent instrument for guaranteeing the security and efficacy of the nuclear arsenal without live testing.

Sequoia also advances scientific research in fields including human genome analysis, climate-change modeling, and medical simulation, including the first 3D electrophysiological studies of the human heart. Notably, it has 123% more cores than the K Computer, the system it displaced as the world’s fastest, while using 37% less energy, demonstrating its scalability and efficiency across a variety of computational tasks.
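The core-count comparison is easy to verify. A quick arithmetic check, assuming K Computer’s published count of 705,024 cores:

```python
# Sanity-check the "123% more cores" comparison with the K Computer.
sequoia_cores = 1_572_864   # from the spec list above
k_computer_cores = 705_024  # K Computer's published core count (assumed here)

increase = (sequoia_cores / k_computer_cores - 1) * 100
print(f"{increase:.0f}% more cores")   # ~123%
```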

In conclusion, the world’s top 13 supercomputers represent the pinnacle of computing power and are essential to progress across science, technology, and industry. Beyond pushing the boundaries of speed and efficiency, these machines are indispensable for addressing global challenges such as healthcare and climate change. As AI, machine learning, and data-driven innovation become more deeply integrated into research, supercomputers will surely play an even more central role.

