The ABCs of High-Performance Computing for Financial Institutions
Activities such as high-frequency trading, complex simulations and real-time analytics are essential to the success of capital markets brokerages and other financial firms. These institutions rely on high-performance computing (HPC) systems to gather, parse, analyze and act on vast amounts of data. In a keenly competitive environment, the imperative to bring top-end computing power to bear is greater than ever before. Among the challenges:
Instant execution: Success or failure in the financial sector can be measured in milliseconds. High-frequency, algorithmic trading relies on high-speed interconnections, ultra-low-latency switches and powerful servers to ensure immediate action.
Escalating risk: The global financial crisis of 2008 exposed deep structural flaws and risks in the sector. HPC deployments and advanced analytics allow firms to detect, assess and manage risk factors before they can damage business interests.
Heightened competition: Financial firms are in a constant race to identify new market opportunities and gain critical, first-mover advantages. Data analytics and predictive modeling reveal opportunities that might otherwise be missed.
Evolving security: Every financial organization has a target on its back. HPC and data analytics combine to verify transactions in real time, detect anomalous patterns and improve fraud detection.
These challenges are driving the adoption of HPC solutions in the financial sector. In addition to growing activity around trading and options analysis, the research firm IDC cites advancing security solutions and the emergence of customer-facing channel services as key drivers of HPC adoption.
The Basics of HPC Build-Outs
Naturally, this adoption means growth in HPC-related spending in the financial sector. According to IDC, total global revenue for the HPC market (including servers, storage, software and services) will increase from $21 billion in 2014 to $31.3 billion by 2019.
From a hardware perspective, HPC build-outs break down into four components:
Servers: Multicore processors, low-latency networks and parallel storage infrastructures combine to let clustered HPC servers scale to the most demanding computing tasks. Compact 1U and 2U form factors allow servers such as the HPE Apollo 6000 System to host as many as 144 individual servers in a standard data center rack. Shared power and management infrastructure improve efficiency and reduce operating cost, yielding sharply higher performance within existing data center footprints and thermal envelopes.
Processors: Today’s multicore processors provide parallel execution, streamlined memory access and cutting-edge processing techniques. The Intel Xeon E7-8800 family of processors, for example, is built on an advanced 22-nanometer process and incorporates as many as 18 processor cores, large shared caches and 5.69 billion transistors for powerful performance. Often sitting next to these system CPUs are powerful coprocessors, such as the Intel Xeon Phi and the Nvidia Tesla graphics processing unit (GPU). These accelerate specialized floating-point, graphics and other computations, enabling robust parallel processing for complex tasks.
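To illustrate the kind of parallel execution described above, here is a minimal sketch using only Python's standard library (not vendor- or coprocessor-specific code): a Monte Carlo estimate of a European call option price, with the simulation paths split across worker processes, one chunk per core. All function names and parameters are illustrative.

```python
import math
import random
from concurrent.futures import ProcessPoolExecutor
from functools import partial

def simulate_chunk(seed, n, spot, strike, rate, vol, t):
    """Average discounted payoff of a European call over n GBM paths."""
    rng = random.Random(seed)
    drift = (rate - 0.5 * vol * vol) * t
    total = 0.0
    for _ in range(n):
        terminal = spot * math.exp(drift + vol * math.sqrt(t) * rng.gauss(0.0, 1.0))
        total += max(terminal - strike, 0.0)
    return math.exp(-rate * t) * total / n

def price_call(paths=200_000, workers=4, **params):
    """Split the simulation across processes, one chunk per worker."""
    fn = partial(simulate_chunk, n=paths // workers, **params)
    with ProcessPoolExecutor(max_workers=workers) as pool:
        estimates = list(pool.map(fn, range(workers)))  # distinct seed per worker
    return sum(estimates) / len(estimates)

if __name__ == "__main__":
    est = price_call(spot=100.0, strike=105.0, rate=0.01, vol=0.2, t=1.0)
    print(round(est, 2))  # near the Black-Scholes value of roughly 6.3
```

Because each chunk of paths is independent, the work scales almost linearly with core count; production risk engines apply the same pattern across thousands of cluster nodes rather than one workstation.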
Low-latency networks: High-performance computing is the ultimate team game, employing hundreds or even thousands of rack-hosted servers linked via high-performance InfiniBand and 10 Gigabit Ethernet networks. To squeeze latency out of the environment, specialized hardware such as ultra-low-latency switches, message acceleration appliances and high-performance network monitoring and management tools combine to accelerate, streamline and manage the intense network traffic typical of HPC deployments. Network interface cards (NICs) from Exablaze, Myricom and Solarflare can squeeze round-trip latencies down to near a microsecond.
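As a rough sketch of how round-trip latency is actually measured, the snippet below times echo messages over a loopback TCP socket with Nagle's algorithm disabled via TCP_NODELAY (the same batching behavior low-latency trading stacks turn off). A loopback test on commodity hardware will report tens of microseconds, not the near-microsecond figures specialized NICs achieve; it only illustrates the measurement technique.

```python
import socket
import threading
import time

def echo_server(listener):
    """Echo each 8-byte message straight back to the sender."""
    conn, _ = listener.accept()
    with conn:
        while (msg := conn.recv(8)):
            conn.sendall(msg)

# Loopback echo server on an ephemeral port.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
threading.Thread(target=echo_server, args=(server,), daemon=True).start()

client = socket.socket()
client.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # no send batching
client.connect(server.getsockname())

samples = []
for _ in range(1000):
    start = time.perf_counter_ns()
    client.sendall(b"12345678")
    client.recv(8)
    samples.append(time.perf_counter_ns() - start)

samples.sort()
print(f"median round trip: {samples[len(samples) // 2] / 1000:.1f} µs")
client.close()
```

Trading firms run measurements like this continuously, since latency regressions of even a few microseconds can erase an execution edge.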
Parallel storage: Data analytics requires robust storage. A survey by Research and Markets found that 31 percent of all HPC storage systems contain more than a petabyte of capacity. Parallel file systems running across multiple nodes can eliminate bottlenecks for large file transfers and ensure performance on even the busiest storage infrastructures and networks. Solid-state storage sharply improves response time, throughput and reliability, while hybrid storage arrays position data across tiers of solid-state and spinning media to balance cost and performance. Leading providers include NetApp, EMC, HPE, Hitachi Data Systems and IBM.
Learn more about the benefits of HPC by downloading the white paper, "High Finance: Harnessing the Power of HPC."