ENHANCING APPLICATION PERFORMANCE USING MINI-APPS: COMPARISON OF HYBRID PARALLEL PROGRAMMING PARADIGMS

ABSTRACT

In many fields, real-world applications for High Performance Computing have already been developed. For these applications to stay up-to-date, new parallel strategies must be explored to yield the best performance; however, restructuring or modifying a real-world application may be daunting depending on the size of the code. In this case, a mini-app may be employed to quickly explore such options without modifying the entire code. In this work, several mini-apps have been created to enhance a real-world application performance, namely the VULCAN code for complex flow analysis developed at the NASA Langley Research Center. These mini-apps explore hybrid parallel programming paradigms with Message Passing Interface (MPI) for distributed memory access and either Shared MPI (SMPI) or OpenMP for shared memory accesses. Performance testing shows that MPI+SMPI yields the best execution performance, while requiring the largest number of code changes.  A maximum speedup of 23 was measured for MPI+SMPI, but only 10 was measured for MPI+OpenMP.

Keywords: Mini-apps, Performance, VULCAN, Shared Memory, MPI, OpenMP

1 INTRODUCTION

In many fields, real-world applications have already been developed. For established applications to stay up-to-date, new parallel strategies must be explored to determine which may yield the best performance, especially with advances in computing hardware. However, restructuring or modifying a real-world application incurs increased cost depending on the size of the code and the changes to be made. A mini-app may be created to quickly explore such options without modifying the entire code. Mini-apps reduce the overhead of applying new strategies, so various strategies may be implemented and compared. This work presents the authors' experiences in following this strategy for a real-world application developed by NASA.

VULCAN (Viscous Upwind Algorithm for Complex Flow Analysis) is a turbulent, nonequilibrium, finite-rate chemical kinetics, Navier-Stokes flow solver for structured, cell-centered, multiblock grids that is maintained and distributed by the Hypersonic Air Breathing Propulsion Branch of the NASA Langley Research Center (NASA 2016). The mini-app developed in this work uses the Householder Reflector kernel for solving systems of linear equations. This kernel is used often by different workloads and is a good candidate for deciding what type of strategy to apply to VULCAN. VULCAN is built on a single layer of MPI, and the code has been optimized to obtain perfect vectorization; therefore, two levels of parallelism are currently used. This work investigates two flavors of shared-memory parallelism, OpenMP and Shared MPI, which provide a third level of parallelism for the application. A third level of parallelism increases performance, which decreases the time-to-solution.

MPI has extended its standard to version 3.0, which includes the Shared Memory (SHM) model (Mikhail B. (Intel) 2015, Message Passing Interface Forum 2012), referred to in this work as Shared MPI (SMPI). This extension allows MPI to create memory windows that are shared between MPI tasks on the same physical node. In this way, MPI tasks are equivalent to threads, except that Shared MPI is more difficult for a programmer to implement. OpenMP is the most commonly used shared-memory library to date because of its ease of use (OpenMP 2016). In most cases, only a few OpenMP pragmas are required to parallelize a loop; however, OpenMP is subject to increased overhead, which may decrease performance if not properly tuned.
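
The window-creation sequence the SHM model relies on can be shown with a short, self-contained C sketch (illustrative only; the array size and variable names are not from VULCAN): a node-local communicator is split off, one rank allocates a shared window, and every rank on the node queries a direct pointer into it.

```c
/* Minimal MPI-3 shared-memory window sketch (illustrative, not VULCAN code). */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    /* Split off a communicator containing only the ranks on this node. */
    MPI_Comm node_comm;
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                        MPI_INFO_NULL, &node_comm);

    int node_rank, node_size;
    MPI_Comm_rank(node_comm, &node_rank);
    MPI_Comm_size(node_comm, &node_size);

    /* Rank 0 on the node allocates the whole shared array; others allocate 0 bytes. */
    const int n = 1000;                                   /* illustrative size */
    MPI_Aint bytes = (node_rank == 0) ? (MPI_Aint)(n * sizeof(double)) : 0;
    double *base = NULL;
    MPI_Win win;
    MPI_Win_allocate_shared(bytes, sizeof(double), MPI_INFO_NULL,
                            node_comm, &base, &win);

    /* Every rank obtains a pointer to rank 0's segment and can read/write it directly. */
    MPI_Aint qsize; int disp;
    double *shared;
    MPI_Win_shared_query(win, 0, &qsize, &disp, (void *)&shared);

    if (node_rank == 0) shared[0] = 42.0;                 /* write by one rank ...   */
    MPI_Barrier(node_comm);
    printf("rank %d sees shared[0] = %g\n", node_rank, shared[0]);  /* ... seen by all */

    MPI_Win_free(&win);
    MPI_Comm_free(&node_comm);
    MPI_Finalize();
    return 0;
}
```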

As early as the year 2000, the authors in (Cappello and Etiemble 2000) found that latency-sensitive codes tend to benefit from pure MPI implementations, whereas bandwidth-sensitive codes benefit from hybrid MPI+OpenMP. They also found that faster processors benefit hybrid MPI+OpenMP codes if data movement is not an overwhelming bottleneck (Cappello and Etiemble 2000). Since then, hybrid MPI+OpenMP implementations have improved, but not without difficulties. In (Drosinos and Koziris 2004, Chorley and Walker 2010), it was found that OpenMP incurs many performance reductions, including overhead (fork/join, atomics, etc.), false sharing, imbalanced message passing, and a sensitivity to processor mapping; however, OpenMP overhead may be hidden when using more threads. In (Rabenseifner, Hager, and Jost 2009), the authors found that simply using OpenMP could incur performance penalties because the compiler avoids optimizing OpenMP loops (verified up to compiler version 10.1). Although compilers have advanced considerably since then, application users who still compile with older versions may be at risk when using OpenMP. In (Drosinos and Koziris 2004, Chorley and Walker 2010), the authors found that the hybrid MPI+OpenMP approach outperforms the pure MPI approach because the hybrid strategy diversifies the path to parallel execution.

More recently, MPI extended its standard to include the SHM model (Mikhail B. (Intel) 2015). The authors in (Hoefler, Dinan, Thakur, Barrett, Balaji, Gropp, and Underwood 2015) present MPI RMA theory and examples, which are the basis of the SHM model. In (Gerstenberger, Besta, and Hoefler 2013), the authors conduct a thorough performance evaluation of MPI RMA, including an investigation of different synchronization techniques for memory windows. In (Hoefler, Dinan, Buntinas, Balaji, Barrett, Brightwell, Gropp, Kale, and Thakur 2013), the authors investigate the viability of MPI+SMPI execution and compare it to MPI+OpenMP execution. They found that an underlying limitation of OpenMP is its shared-by-default memory model, which does not couple well with MPI, whose memory model is private-by-default. For this reason, MPI+SMPI codes are expected to perform better, since shared memory is explicit and the memory model for the entire code is private-by-default. Most recently, a new MPI communication model was introduced in (Gropp, Olson, and Samfass 2016), which better captures multinode communication performance and offers an open-source benchmarking tool to capture the model parameters for a given system. Independent of the shared-memory layer, MPI is the de facto standard for data movement between nodes, and such a model can help any MPI program.

The remainder of this paper is organized as follows: Section 2 introduces the Householder mini-apps, Section 3 presents the performance testing results for the mini-apps considered, and Section 4 concludes this paper.

2 HOUSEHOLDER MINI-APP

The mini-apps use the Householder computation kernel from VULCAN, which is used in solving systems of linear equations. The Householder routine transforms a square matrix into triangular form without significantly increasing the magnitude of any element (Hansen 1992). The routine is numerically stable, in that it does not lose a significant amount of accuracy due to very small or very large intermediate values used in the computation.
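
For readers unfamiliar with the kernel, a minimal sketch of a single Householder step is shown below (our own illustration of the standard algorithm, not the VULCAN implementation); repeating it for k = 0, ..., n-2 reduces A to upper-triangular form.

```c
/* One Householder step: zero column k of A below the diagonal by applying the
   reflector H = I - 2 v v^T / (v^T v) to the trailing columns. Minimal sketch
   for a dense, row-major n x n matrix; this is not the VULCAN routine. */
#include <math.h>

static void householder_step(double *A, int n, int k) {
    double norm = 0.0;
    for (int i = k; i < n; i++) norm += A[i*n + k] * A[i*n + k];
    norm = sqrt(norm);
    if (norm == 0.0) return;

    /* Choose the sign of alpha opposite to A[k][k] to avoid cancellation
       (this is the source of the routine's numerical stability). */
    double alpha = (A[k*n + k] > 0.0) ? -norm : norm;

    double v[n];                 /* C99 VLA: the reflector vector */
    double vnorm2 = 0.0;
    for (int i = k; i < n; i++) {
        v[i] = A[i*n + k] - ((i == k) ? alpha : 0.0);
        vnorm2 += v[i] * v[i];
    }
    if (vnorm2 == 0.0) return;

    /* Reflect the trailing columns; in a linear solver the same reflection
       is also applied to the right-hand side b. */
    for (int j = k; j < n; j++) {
        double dot = 0.0;
        for (int i = k; i < n; i++) dot += v[i] * A[i*n + j];
        double scale = 2.0 * dot / vnorm2;
        for (int i = k; i < n; i++) A[i*n + j] -= scale * v[i];
    }
}
```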

Mini-apps are designed to perform specific functions. In this work, the important features are as follows:

- Accept generic input.
- Validate the numerical result of the optimized routine.
- Measure the performance of the original and optimized routines.
- Tune optimizations.

The generic input is read in from a file, where the file must contain at least one matrix A and the resulting vector b. Should only one matrix and vector be supplied, the input is duplicated for all m instances (m being the number of independent systems to be solved). Validation of the optimized routine is performed by taking the difference of the output from the original and optimized routines: the mini-app first computes the solution of the input using the original routine and then the optimized routine, so the outputs may be compared directly and relative performance may be measured using execution time. Should the optimized routine feature one or more parameters that may be varied, they are to be investigated so that the optimization may be tuned to the hardware; in this work, there is always at least one tunable parameter.

One feature that should have been factored into the mini-app design was modularizing the different versions of the Householder routine. In this work, two mini-apps were designed because each implements a different version of the parallel Householder routine; however, it would have been better to design a single mini-app that uses modules to include other versions of the parallel Householder kernel. With this functionality, it would be less cumbersome to work on each version of the kernel.
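
A rough sketch of this validate-and-compare flow is given below; the helper routines (householder_original, householder_optimized) are hypothetical names standing in for the mini-app's actual interfaces, which the paper does not list, and MPI is assumed to be initialized elsewhere.

```c
/* Sketch of the mini-app's validate-and-time loop (hypothetical helpers). */
#include <math.h>
#include <stdio.h>
#include <mpi.h>

extern void householder_original(const double *A, const double *b, double *x, int n);
extern void householder_optimized(const double *A, const double *b, double *x, int n,
                                  int block_size);   /* tunable parameter */

void compare_versions(const double *A, const double *b, int n, int block_size) {
    double x_ref[n], x_opt[n];                       /* C99 VLAs */

    double t0 = MPI_Wtime();
    householder_original(A, b, x_ref, n);            /* reference solution */
    double t1 = MPI_Wtime();
    householder_optimized(A, b, x_opt, n, block_size);
    double t2 = MPI_Wtime();

    /* Validation: element-wise difference between the two solutions. */
    double max_err = 0.0;
    for (int i = 0; i < n; i++) {
        double d = fabs(x_ref[i] - x_opt[i]);
        if (d > max_err) max_err = d;
    }
    printf("block=%d  original %.3fs  optimized %.3fs  speedup %.2f  max|diff| %.2e\n",
           block_size, t1 - t0, t2 - t1, (t1 - t0) / (t2 - t1), max_err);
}
```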

3 PERFORMANCE EVALUATION

This section presents the procedure and results of performance testing for the MPI+OpenMP and MPI+Shared MPI Householder Reflector kernel optimizations. For performance testing, it was of interest to vary the number of nodes used for the calculation, because many nodes are often used when executing VULCAN on real-world simulations. Up to four nodes have been investigated in this work on a multinode HPC cluster. The number of MPI tasks and OpenMP threads is varied, as is the block size for loop blocking in the parallel section.

3.1 Parallel Householder

To parallelize the Householder routine, m is decomposed into separate but equal chunks that are then solved by each thread (for brevity, Shared MPI tasks are treated as threads in this work). However, the original routine varies over m inside the inner-most computational loop (an optimization that benefits vectorization and caching), while the parallel loop must be the outer-most loop for best performance. Therefore, loop blocking has been applied to the parallel sections of the code. Loop blocking is a technique commonly used to reduce the memory footprint of a computation so that it fits inside the cache of a given hardware. As a result, the parallel Householder routine has at least one tunable parameter, the block size.
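
The resulting loop structure might look roughly like the sketch below (illustrative, not the VULCAN source; the inner update is a stand-in for the real Householder arithmetic, and BLOCK is the tunable block size).

```c
/* Loop-blocking sketch: the parallel loop runs over blocks of the m independent
   systems, while the inner-most loop still runs over the systems inside a block
   so it remains vectorizable and cache-resident. */
#include <stddef.h>

#define BLOCK 64   /* tunable: chosen so a block of systems fits in cache */

void blocked_update(double *A, int n, int m) {
    #pragma omp parallel for schedule(static)
    for (int mb = 0; mb < m; mb += BLOCK) {              /* outer, parallel loop */
        int mend = (mb + BLOCK < m) ? mb + BLOCK : m;
        for (int k = 0; k < n; k++) {                    /* algorithm stages */
            for (int i = k; i < n; i++) {
                #pragma omp simd
                for (int s = mb; s < mend; s++) {        /* vectorized loop over systems */
                    /* Stand-in for the Householder column update of system s. */
                    A[(size_t)s*n*n + (size_t)i*n + k] *= 0.5;
                }
            }
        }
    }
}
```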

In this work, two flavors of the shared memory model are investigated: OpenMP and SMPI. The difference between OpenMP and SMPI lies in how memory is managed. OpenMP uses a public-memory model where all data is available to all threads by default. Public-memory makes it easy to add parallel statements, since the threads will all share this data, but threads are then susceptible to false-sharing, where variables that should otherwise be private are inadvertently shared. Shared MPI uses a private-memory model where data must be explicitly shared between threads, and all data is private by default. Private-memory makes any parallel implementation more complicated, because threads must be instructed to access specific memory for computation. Further, OpenMP creates and destroys threads over the course of execution which is handled internally and is costly to performance. SMPI threads are created upon execution start and persist throughout. This makes managing SMPI threads more difficult, since each parallel phase must be explicitly managed by the programmer. However, the extra work by the programmer may pay off in terms of performance, since less overhead is incurred by SMPI.
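
The contrast in thread lifetime can be sketched as follows (illustrative C; phase_a and phase_b are hypothetical work routines, not functions from the mini-apps): the OpenMP variant forks and joins a thread team around each parallel phase, while the SMPI variant keeps one persistent rank per core and separates phases with node-local barriers.

```c
/* Thread-lifetime contrast between the two shared-memory flavors (illustrative). */
#include <mpi.h>

extern void phase_a(int system_id);
extern void phase_b(int system_id);

/* OpenMP: a thread team is forked and joined around every parallel phase. */
void run_openmp(int m) {
    #pragma omp parallel for
    for (int s = 0; s < m; s++) phase_a(s);    /* fork ... join */

    #pragma omp parallel for
    for (int s = 0; s < m; s++) phase_b(s);    /* fork ... join again */
}

/* Shared MPI: one rank per core persists for the whole run; the programmer
   assigns each rank its chunk and separates phases with node-local barriers. */
void run_smpi(int m, int node_rank, int node_size, MPI_Comm node_comm) {
    int chunk = (m + node_size - 1) / node_size;
    int lo = node_rank * chunk;
    int hi = (lo + chunk < m) ? lo + chunk : m;

    for (int s = lo; s < hi; s++) phase_a(s);
    MPI_Barrier(node_comm);                    /* explicit phase boundary */

    for (int s = lo; s < hi; s++) phase_b(s);
    MPI_Barrier(node_comm);
}
```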

4 CONCLUSION

In this work, mini-apps were developed to optimize the Householder Reflector kernel within VULCAN, a NASA real-world application. Two programming paradigms for shared-memory parallelism were investigated, OpenMP and Shared MPI, and performance testing was conducted on the multinode system Turing for up to four nodes. Speedup, the measure of performance, was found to be higher for the Shared MPI version of the Householder mini-app than for the OpenMP version. Specifically, the speedup for SMPI was up to 1.9 times that of OpenMP. With the maximum number of threads, SMPI obtains perfect speedup for sufficiently large workloads (m=50m). OpenMP was only able to achieve a speedup of 10, which is half of the expected speedup based on the number of threads used.
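
For reference, speedup and parallel efficiency are used above in their usual sense (a definition added here for clarity; T_1 is the single-threaded time and T_p the time on p threads):

```latex
S(p) = \frac{T_1}{T_p}, \qquad E(p) = \frac{S(p)}{p}
```

so "perfect speedup" corresponds to S(p) roughly equal to p (E close to 1), while the reported OpenMP speedup of 10 on the same thread count corresponds to roughly half that efficiency.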
