THE HIGHEST PERFORMING COMPUTING AND WHY ENGINEERS NEED IT

    Predicting wind noise around the Alfa Romeo Giulietta with Fluent. (Image courtesy of FCA Italy.)

    Compute power is available like never before. It may be local, under your desk or in your next workstation. An almost infinite amount of compute power is out of sight, in computers the size of pizza boxes; stacked in floor-to-ceiling racks; crammed with GPUs, CPUs and massive amounts of storage—all networked, all online, in row after row of racks in data centers spread as much as a quarter mile in each direction. These data centers are all over the world—over 1,800 are located in the U.S. alone. They are popping up in cornfields and deserts, wherever there is space—and power.
This vast scale and number of data centers make up cloud computing, and the cloud is where your split-second search and video streaming—as well as your project collaboration, file storage and, for the purposes of this article, your engineering simulation—already live, or soon will.
In the engineering and design world, compute power is never enough. The idea that there is unlimited computing power available through high performance computing (HPC) on cloud networks is tantalizing.

Data centers are popping up everywhere. Compute power may never be enough for engineers, but with over 1,800 data centers located in the continental U.S. alone, each crammed with compute power and storage, we are living in an age where compute power is cheaper and more plentiful than ever. The San Francisco Bay Area alone has 61. (Image courtesy of Datacentermap.com.)
    ISN’T A WORKSTATION ENOUGH?
For engineers in startups, consulting practices and small engineering firms, simulation is done on local computing resources, in most cases a personal workstation. These workstations can be limiting, but for many who are comfortable with them, anything else—like HPC—raises questions and concerns. How can we leave our workstation behind? What would we look for in an HPC configuration? How should HPC be deployed? How much will HPC cost, and can we justify the expense?
Understandably, engineers will need some reassurance before they let go of their trusted workstations. The engineer’s fascination with top-of-the-line gear may do the trick. A supercomputer is like a supercar: a Ferrari compared to the practical Toyota. An engineer may not be able to get a supercomputer—but with HPC they can get supercomputer-type performance whenever they want.
With this report, we will explore the hardware choices available for simulation and try to demystify HPC. You will learn how HPC can be useful, accessible and affordable for everyone—not just big companies. Read on and you will discover:
    • What is HPC?
    • Why engineers need extra-strength computing.
    • Myths about HPC.
    • Saving time with HPC.
    • How HPC can pay off.
    • What to look for in a workstation.
    • HPC hardware and configurations.
    • Buying or “renting” HPC.

Your CAD/CAE software is not doing you any favors by producing a million-element model that will take your workstation forever to solve. Engineers spend time “defeaturing” a finite element model to reduce solution time. But HPC can solve large problems fast, which means engineers don’t have to defeature. (Image courtesy of Ansys.)
    INFINITE ELEMENT ANALYSIS
The most common engineering simulations, finite element analysis (FEA) and computational fluid dynamics (CFD), solve systems of equations based on the nodes of a mesh. The mesh is an approximation of the original NURBS-based (non-uniform rational basis spline) CAD geometry. The finer the mesh, the more faithfully it follows the geometry and the more accurate the results. But as element size shrinks toward the infinitesimal, the number of elements and degrees of freedom approaches infinity.
    Even a simple part can be meshed so fine that it will take hours on a supercomputer. So, if your simulation is not limited by hardware, you are not trying hard enough.
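A rough back-of-the-envelope sketch in Python shows how quickly a model grows as the element size shrinks. The part dimensions and the three degrees of freedom per node are illustrative assumptions, not figures from any particular solver:

# Rough estimate of how a 3D structural mesh grows as it is refined.
# The block dimensions and per-node cost below are illustrative only.
part_volume_mm3 = 100 * 50 * 20   # a simple 100 x 50 x 20 mm block
dofs_per_node = 3                 # displacements ux, uy, uz

for element_size_mm in (5.0, 2.0, 1.0, 0.5, 0.25):
    elements = part_volume_mm3 / element_size_mm ** 3  # roughly cubic growth in 3D
    dofs = elements * dofs_per_node                     # nodes ~ elements, to an order of magnitude
    print(f"{element_size_mm:>5} mm elements -> ~{elements:,.0f} elements, ~{dofs:,.0f} DOF")

Halving the element size multiplies the element count by roughly eight, which is why even a simple part can outgrow a workstation.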
Early on, the engineer doing simulation learns to compromise. Let’s not model this detail, or that one. But with every detail left out, the model deviates further from the designed part, its fidelity erodes and the approximation grows. What can you do? You need to take the load off the hardware.
    “Engineers are constrained by turnaround time limitations,” said Wim Slagter, director of HPC and cloud alliances at Ansys. “So, they are spending time and effort on changing their model in order to be able to run it on their existing hardware or to get acceptable runtimes. They must make their models smaller. They are trading off accuracy of results by reducing the number of elements, the number of features—or using a less advanced turbulence model in their CFD application just to get acceptable runtimes.”
    Also, consider the opportunities lost because of the limits of local computing. The transient and turbulent problems that won’t get done, the fatigue that is not explored, nonlinearities not looked at and thousands of generative designs never generated – all because the computing resources you have on hand can’t handle it. We can’t all have supercomputers.
    MOORE’S LAW IS NOT ENOUGH
    But are supercomputers, at the cost of tens of millions of dollars, the way to go for today’s engineering simulations? An alternative to supercomputers is tying together ordinary computer hardware in massive networks to create what is known as high performance computing (HPC). The economies of scale of HPC have made solution times faster and cheaper than they have ever been—and circumvented Moore’s law.
Gordon Moore, once CEO of Intel, famously predicted in 1965 that the number of components per integrated circuit would double every year. This has come to be known as Moore’s law.
Moore’s law has given us smartphones more powerful than the computer that put Apollo astronauts on the moon, and mobile workstations that run circles around your dad’s minicomputer. But the wealth of computing on the cloud is faster by a long shot.
    Gordon Moore may not have seen HPC networks coming, which take advantage of parallel processing, with each processor handling a calculation. If the software is able to feed each processor a calculation, all of the calculations can happen at once, in parallel. Even with ordinary commodity processors, running in parallel can be faster than sequential processing, where one calculation occurs after the other—even if they are done on a supercomputer.
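Amdahl’s law gives a feel for why adding processors only helps as far as the software can keep them busy. Here is a minimal Python sketch, assuming for illustration that 95 percent of a solve can run in parallel:

# Amdahl's law: speedup = 1 / ((1 - p) + p / n), where p is the fraction of
# the work that can run in parallel and n is the number of processors.
# p = 0.95 is an assumed figure, not a measurement of any real solver.
def speedup(parallel_fraction: float, processors: int) -> float:
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / processors)

for n in (1, 8, 64, 512, 4096):
    print(f"{n:>5} cores -> {speedup(0.95, n):5.1f}x faster")

Even with thousands of cores, the serial 5 percent caps the speedup near 20x, which is why HPC pays off most with solvers written to run almost everything in parallel.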
    WHAT IS HPC?
    If you are queuing up a simulation today on your workstation, it will be ready tomorrow; the same solution on HPC on the cloud will be done by lunch.
    The cost for simulation is fractions of a cent per second. You might wonder why we should buy any local workstations at all. Why not just plug into the massive grid of unlimited speed and power offered by this new world of computing and data centers?
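A back-of-the-envelope comparison makes the point; the per-core-hour rate, core counts and runtimes below are hypothetical figures, not quotes from any provider:

# Hypothetical cost of spreading one overnight workstation job across rented
# HPC cores. All rates and runtimes are illustrative assumptions.
core_hour_rate = 0.05                            # assumed $ per core-hour
workstation_hours = 16                           # overnight on 8 local cores
hpc_cores = 256
hpc_hours = workstation_hours * 8 / hpc_cores    # assuming near-linear scaling

hpc_cost = core_hour_rate * hpc_cores * hpc_hours
print(f"HPC run: ~{hpc_hours:.1f} h on {hpc_cores} cores, about ${hpc_cost:.2f}")

The core-hours are the same either way; what you are really buying is calendar time.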

    This is what computer hardware looks like in the big leagues of simulation. Pictured: The High Performance Computing Lab at George Washington University. (Image courtesy of Rackspace.)
HPC systems came into play in the early days of Bitcoin mining. Today, HPCs are used in brick-and-mortar establishments for some serious problem-solving.
    In engineering, HPCs are used for complex simulations, including transient problems, acoustics, CFD and combustion in engines.
    In multi-physics simulations, the simultaneous effect of multiple states of matter is studied, including gas, liquid and solid, as well as phase changes.
An example is analyzing how thermal expansion affects a moving structure, along with the turbulence caused by the moving structure, over several time steps. Here, coupled partial differential equations and other complex mathematical models must be solved.
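As a simple illustration of that coupling (a textbook linear relation, not the full transient problem described above), the temperature change computed on the fluid side feeds straight into the structural stress:

\[
\varepsilon_{th} = \alpha\,\Delta T, \qquad \sigma = E\,(\varepsilon - \varepsilon_{th})
\]

where \(\alpha\) is the coefficient of thermal expansion, \(\Delta T\) the temperature change, \(E\) the elastic modulus and \(\varepsilon\) the total strain; at each time step the flow solution supplies \(\Delta T\), and the resulting deformation changes the flow domain for the next step.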
    HPCs can speed up this process of simulation.
    In design and manufacturing, HPC has been used to solve structural and thermal design problems, and to optimize production lifecycles.
    Special networking software appropriates the resources it needs from among the networked computers, allowing faster solutions of large and complex problems that a personal workstation would not be able to solve in a reasonable time, or at all.
    HPC can be inside your facility—“on premise”—or outside your company and, as is very common these days, in the cloud.
    In a recent survey sponsored by Ansys, out of over 600 engineers and their managers, about a hundred of them (17 percent) were using the cloud for engineering simulation. But an additional 20 percent were planning to do so over the next 12 months.
Is an HPC configuration a supercomputer? In strength and output, perhaps; but architecturally, there is a fundamental difference. An HPC configuration is an aggregation of many compute cores spread across separate computers, tied together by networking and cluster software, whereas a supercomputer is a monolithic, standalone computer, all in one spot.
The early Cray computers decades ago were standalone supercomputers, singularly equipped with the customized hardware needed to handle complex simulations and calculations. Today, Cray has embraced the HPC concept and offers clusters of supercomputers, the Cray CS (for cluster supercomputer) line, raising the high end of HPC configurations.
Other important distinctions to consider when comparing supercomputers and HPC are cost and customization. Most supercomputers run only certain customized software applications, and not every engineering simulation package familiar to engineers is compiled for a supercomputer.
Then there is the cost, which can be considerable—up to tens of millions of dollars. The cost comes not only from acquiring a supercomputer, but from running and maintaining it. Cost alone puts supercomputers out of the reach of most engineers. HPCs, on the other hand, are interconnected, affordable computer systems that can run legacy software, and are scalable by adding even thousands of low-cost computer “nodes.” It is this sheer number of nodes that gives HPC the computing power of a supercomputer, without the cost.
    HOW HPC WORKS
    Every computer in an HPC system is known as a node. Each node is generally equipped with multiple processors called compute cores that handle the computation aspect of problem-solving. The processors, graphical processing units and memory of each node are then interconnected by a network to make a high-performance computing system.
    You can compare HPC systems to the rendering farms used for movie special effects and realistic architectural fly-throughs. Multiple compute nodes work together to deliver solutions for large, complex problems. When multiple processes line up for an HPC system, a scheduler comes into play. The scheduler allocates compute and storage resources to each process according to its requirements.
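A toy first-fit scheduler in Python illustrates the allocation idea; real cluster schedulers handle priorities, reservations and backfilling, and the node names, job names and core counts below are made up:

# Toy scheduler: hand each queued job to the first node with enough free cores.
from collections import deque

nodes = {"node01": 64, "node02": 64, "node03": 64}               # free cores per node
queue = deque([("cfd_wing", 48), ("fea_bracket", 16), ("combustion", 64)])

while queue:
    job, cores_needed = queue.popleft()
    for node, free_cores in nodes.items():
        if free_cores >= cores_needed:
            nodes[node] = free_cores - cores_needed               # reserve the cores
            print(f"{job}: {cores_needed} cores allocated on {node}")
            break
    else:
        print(f"{job}: waiting for {cores_needed} free cores")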
Approximately 64 percent of HPC platforms now integrate cloud computing, and the market is expected to grow by about 12 percent annually over the next five years.
    CORES USED AS NEEDED
    HPC is good for simulation because a large number of cores can be used for calculation. A core, or CPU core, is like the brain of a CPU. CPUs can have multiple cores. The Lenovo ThinkPad P1 mobile workstation CPU has 8 cores. An HPC configuration can have hundreds of cores.
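You can see the pattern in miniature on a single machine with Python’s standard multiprocessing module; solve_case below is a hypothetical stand-in for a real per-design or per-load-case calculation:

# Spread independent calculations across every local core.
import multiprocessing as mp

def solve_case(case_id: int) -> float:
    # Placeholder workload standing in for a real analysis case.
    return sum(i * i for i in range(200_000)) + case_id

if __name__ == "__main__":
    print(f"Cores available: {mp.cpu_count()}")
    with mp.Pool() as pool:              # one worker process per core by default
        results = pool.map(solve_case, range(32))
    print(f"Solved {len(results)} cases in parallel")

An HPC configuration extends the same idea across hundreds of nodes, with the cluster’s scheduler, rather than a local process pool, deciding where each case runs.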