With the development of scientific research and engineering computation, large-scale computing and simulation have become unavoidable. Such computations typically involve the processing of massive amounts of data, so parallel computing is employed both to accelerate large-scale computation and to handle massive data. MPI-based parallel computing readily enables distributed computation: massive data are scattered across a cluster supercomputer so that each individual processor (CPU) handles a small portion of the data, achieving fast, large-scale computation. In this paper, large-scale matrix multiplication was implemented with MPI parallel programming, and the parallel performance of standard-mode point-to-point communication was tested under different communication mechanisms (blocking communication, non-blocking communication, and their mixture). For large matrix multiplication, a complete, fast standard-communication scheme was established that prevents the occurrence of deadlock. The results provide a useful reference and lay a foundation for further practical applications.