MPI_Allreduce(3)                  Open MPI 1.10.2                  Jan 21, 2016

NAME
       MPI_Allreduce, MPI_Iallreduce − Combines values from all processes and
       distributes the result back to all processes.

SYNTAX

C Syntax
       #include <mpi.h>

       int MPI_Allreduce(const void *sendbuf, void *recvbuf, int count,
           MPI_Datatype datatype, MPI_Op op, MPI_Comm comm)

       int MPI_Iallreduce(const void *sendbuf, void *recvbuf, int count,
           MPI_Datatype datatype, MPI_Op op, MPI_Comm comm,
           MPI_Request *request)

Fortran Syntax
       INCLUDE 'mpif.h'

       MPI_ALLREDUCE(SENDBUF, RECVBUF, COUNT, DATATYPE, OP, COMM, IERROR)
           <type>  SENDBUF(*), RECVBUF(*)
           INTEGER COUNT, DATATYPE, OP, COMM, IERROR

       MPI_IALLREDUCE(SENDBUF, RECVBUF, COUNT, DATATYPE, OP, COMM,
               REQUEST, IERROR)
           <type>  SENDBUF(*), RECVBUF(*)
           INTEGER COUNT, DATATYPE, OP, COMM, REQUEST, IERROR

C++ Syntax
       #include <mpi.h>

       void MPI::Comm::Allreduce(const void* sendbuf, void* recvbuf,
           int count, const MPI::Datatype& datatype, const MPI::Op& op)
           const = 0

INPUT PARAMETERS
       sendbuf    Starting address of send buffer (choice).
       count      Number of elements in send buffer (integer).
       datatype   Datatype of elements of send buffer (handle).
       op         Operation (handle).
       comm       Communicator (handle).

OUTPUT PARAMETERS
       recvbuf    Starting address of receive buffer (choice).
       request    Request (handle, non-blocking only).
       IERROR     Fortran only: Error status (integer).

DESCRIPTION
       Same as MPI_Reduce except that the result appears in the receive
       buffer of all the group members.

       Example 1: A routine that computes the product of a vector and an
       array that are distributed across a group of processes and returns
       the answer at all nodes (compare with Example 2, with MPI_Reduce,
       below).

           SUBROUTINE PAR_BLAS2(m, n, a, b, c, comm)
           REAL a(m), b(m,n)    ! local slice of array
           REAL c(n)            ! result
           REAL sum(n)
           INTEGER m, n, comm, i, j, ierr

           ! local sum
           DO j = 1, n
             sum(j) = 0.0
             DO i = 1, m
               sum(j) = sum(j) + a(i)*b(i,j)
             END DO
           END DO

           ! global sum
           CALL MPI_ALLREDUCE(sum, c, n, MPI_REAL, MPI_SUM, comm, ierr)

           ! return result at all nodes
           RETURN
           END

       Example 2: A routine that computes the product of a vector and an
       array that are distributed across a group of processes and returns
       the answer at node zero.

           SUBROUTINE PAR_BLAS2(m, n, a, b, c, comm)
           REAL a(m), b(m,n)    ! local slice of array
           REAL c(n)            ! result
           REAL sum(n)
           INTEGER m, n, comm, i, j, ierr

           ! local sum
           DO j = 1, n
             sum(j) = 0.0
             DO i = 1, m
               sum(j) = sum(j) + a(i)*b(i,j)
             END DO
           END DO

           ! global sum
           CALL MPI_REDUCE(sum, c, n, MPI_REAL, MPI_SUM, 0, comm, ierr)

           ! return result at node zero (and garbage at the other nodes)
           RETURN
           END
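       For comparison, here is a minimal, self-contained C sketch of the
       blocking call (this example is illustrative and not part of the
       original man page):

           #include <mpi.h>
           #include <stdio.h>

           int main(int argc, char *argv[])
           {
               int rank, value, sum;

               MPI_Init(&argc, &argv);
               MPI_Comm_rank(MPI_COMM_WORLD, &rank);

               /* Each process contributes its rank; afterwards every
                  process holds the sum of all ranks in sum. */
               value = rank;
               MPI_Allreduce(&value, &sum, 1, MPI_INT, MPI_SUM,
                             MPI_COMM_WORLD);

               printf("rank %d: global sum = %d\n", rank, sum);

               MPI_Finalize();
               return 0;
           }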

USE OF IN-PLACE OPTION
       When the communicator is an intracommunicator, you can perform an
       all-reduce operation in-place (the output buffer is used as the
       input buffer). Use the variable MPI_IN_PLACE as the value of sendbuf
       at all processes.

       Note that MPI_IN_PLACE is a special kind of value; it has the same
       restrictions on its use as MPI_BOTTOM.

       Because the in-place option converts the receive buffer into a
       send-and-receive buffer, a Fortran binding that includes INTENT must
       mark these as INOUT, not OUT.
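       A minimal hedged C sketch of the in-place form (the routine and
       buffer names are illustrative, not part of this page):

           #include <mpi.h>

           /* buf holds this process's local values on entry and the
              element-wise global maximum on return.  Passing MPI_IN_PLACE
              as sendbuf tells MPI to take the input from recvbuf, so no
              separate send buffer is needed. */
           void global_max(double *buf, int n, MPI_Comm comm)
           {
               MPI_Allreduce(MPI_IN_PLACE, buf, n, MPI_DOUBLE, MPI_MAX,
                             comm);
           }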

WHEN COMMUNICATOR IS AN INTER-COMMUNICATOR
       When the communicator is an inter-communicator, the reduce operation
       occurs in two phases. The data is reduced from all the members of
       the first group and received by all the members of the second group.
       Then the data is reduced from all the members of the second group
       and received by all the members of the first. The operation exhibits
       a symmetric, full-duplex behavior.

       When the communicator is an intra-communicator, these groups are the
       same, and the operation occurs in a single phase.
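       As an illustration of the two-phase behavior, here is a hedged
       sketch (not from this page) that splits MPI_COMM_WORLD into two
       groups by rank parity, builds an inter-communicator, and performs
       the all-reduce across it; it assumes at least two processes:

           #include <mpi.h>
           #include <stdio.h>

           int main(int argc, char *argv[])
           {
               int world_rank, color, local_value, remote_sum;
               MPI_Comm local_comm, intercomm;

               MPI_Init(&argc, &argv);
               MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

               /* Split the world into two groups: even and odd ranks. */
               color = world_rank % 2;
               MPI_Comm_split(MPI_COMM_WORLD, color, world_rank,
                              &local_comm);

               /* Connect the groups: the remote leader is world rank 1
                  as seen from the even group, world rank 0 from the odd
                  group. */
               MPI_Intercomm_create(local_comm, 0, MPI_COMM_WORLD,
                                    color == 0 ? 1 : 0, 99, &intercomm);

               /* Each process receives the reduction of the *other*
                  group's contributions, per the two phases above. */
               local_value = world_rank;
               MPI_Allreduce(&local_value, &remote_sum, 1, MPI_INT,
                             MPI_SUM, intercomm);

               printf("world rank %d: sum of other group's ranks = %d\n",
                      world_rank, remote_sum);

               MPI_Comm_free(&intercomm);
               MPI_Comm_free(&local_comm);
               MPI_Finalize();
               return 0;
           }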

NOTES ON COLLECTIVE OPERATIONS
       The reduction functions (MPI_Op) do not return an error value. As a
       result, if the functions detect an error, all they can do is either
       call MPI_Abort or silently skip the problem. Thus, if you change the
       error handler from MPI_ERRORS_ARE_FATAL to something else, for
       example, MPI_ERRORS_RETURN, then no error may be indicated.
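       To make the "no error return" constraint concrete, here is a hedged
       sketch of a user-defined reduction function (my_prod and its use
       are illustrative, not part of this page); the signature fixed by
       MPI gives such a function no way to report failure to the caller:

           #include <mpi.h>

           /* Element-wise product of doubles.  The signature returns
              void: on error the function can only call MPI_Abort or
              silently skip the problem, as noted above. */
           void my_prod(void *invec, void *inoutvec, int *len,
                        MPI_Datatype *dtype)
           {
               double *in    = (double *) invec;
               double *inout = (double *) inoutvec;
               int i;
               for (i = 0; i < *len; ++i)
                   inout[i] *= in[i];
           }

           /* Registration and use: */
           void use_my_prod(const double *send, double *recv, int n,
                            MPI_Comm comm)
           {
               MPI_Op op;
               MPI_Op_create(my_prod, 1, &op);   /* 1 = commutative */
               MPI_Allreduce(send, recv, n, MPI_DOUBLE, op, comm);
               MPI_Op_free(&op);
           }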

ERRORS
       Almost all MPI routines return an error value; C routines as the
       value of the function and Fortran routines in the last argument. C++
       functions do not return errors. If the default error handler is set
       to MPI::ERRORS_THROW_EXCEPTIONS, then on error the C++ exception
       mechanism will be used to throw an MPI::Exception object.

       Before the error value is returned, the current MPI error handler is
       called. By default, this error handler aborts the MPI job, except
       for I/O function errors. The error handler may be changed with
       MPI_Comm_set_errhandler; the predefined error handler
       MPI_ERRORS_RETURN may be used to cause error values to be returned.
       Note that MPI does not guarantee that an MPI program can continue
       past an error.
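       A short sketch of switching to MPI_ERRORS_RETURN and checking the
       return code (the wrapper and buffer names are placeholders):

           #include <mpi.h>
           #include <stdio.h>

           int checked_allreduce(double *send, double *recv, int n,
                                 MPI_Comm comm)
           {
               int rc;

               /* Return error codes instead of aborting the job. */
               MPI_Comm_set_errhandler(comm, MPI_ERRORS_RETURN);

               rc = MPI_Allreduce(send, recv, n, MPI_DOUBLE, MPI_SUM,
                                  comm);
               if (rc != MPI_SUCCESS) {
                   char msg[MPI_MAX_ERROR_STRING];
                   int  len;
                   MPI_Error_string(rc, msg, &len);
                   fprintf(stderr, "MPI_Allreduce failed: %s\n", msg);
               }
               return rc;
           }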