Do MPI collective operations involve multiple hops?

In summary, a reduce operation can either have every node send its data directly to the root, or use a structure in which a node receives data from several other nodes, combines it, and passes the intermediate result on. The latter, hierarchical approach can be set up explicitly in two steps by grouping the processes appropriately and using MPI_ALLREDUCE instead of MPI_REDUCE within each group.
  • #1
ektrules
Consider the reduce operation for example. Do all nodes send data directly to the root? Or is there some structure where a node will receive data from a few other nodes, perform a reduction, then pass the intermediate results to other nodes?
 
  • #2
ektrules said:
Or is there some structure where a node will receive data from a few other nodes, perform a reduction, then pass the intermediate results to other nodes?

Something like that can be achieved in two steps by grouping the processes appropriately and using MPI_ALLREDUCE (instead of MPI_REDUCE) in the first step.

Please see:
http://www.mpi-forum.org/docs/mpi22-report/node109.htm#Node109
If comm is an intracommunicator, MPI_ALLREDUCE behaves the same as MPI_REDUCE except that the result appears in the receive buffer of all the group members.

and
http://www.mpi-forum.org/docs/mpi22-report/node87.htm#Node87
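
To make the two-step idea concrete, here is a minimal sketch (not from the thread; the block size of 4 and the sum operation are arbitrary choices for illustration): the world communicator is split into blocks, each block computes a partial sum with MPI_Allreduce, and the first rank of each block then forwards its partial sum to the global root over a second communicator.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double local = (double)rank;   /* each rank contributes its rank number */

    /* Step 1: group ranks into blocks of 4 and reduce inside each block.
     * With MPI_Allreduce every member of the block gets the partial sum;
     * with MPI_Reduce only the block's local rank 0 would get it. */
    MPI_Comm block;
    MPI_Comm_split(MPI_COMM_WORLD, rank / 4, rank, &block);

    double partial = 0.0;
    MPI_Allreduce(&local, &partial, 1, MPI_DOUBLE, MPI_SUM, block);

    /* Step 2: the first rank of each block forwards its partial sum
     * to the global root via a "leaders" communicator. */
    int block_rank;
    MPI_Comm_rank(block, &block_rank);

    MPI_Comm leaders;
    MPI_Comm_split(MPI_COMM_WORLD, block_rank == 0 ? 0 : MPI_UNDEFINED,
                   rank, &leaders);

    if (leaders != MPI_COMM_NULL) {
        double total = 0.0;
        MPI_Reduce(&partial, &total, 1, MPI_DOUBLE, MPI_SUM, 0, leaders);

        int leader_rank;
        MPI_Comm_rank(leaders, &leader_rank);
        if (leader_rank == 0)
            printf("global sum = %f\n", total);
        MPI_Comm_free(&leaders);
    }

    MPI_Comm_free(&block);
    MPI_Finalize();
    return 0;
}
```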
 

Related to Do MPI collective operations involve multiple hops?

1. How do MPI collective operations work?

In an MPI collective operation, every process in a communicator calls the same routine, and the processes cooperate to perform a joint action such as broadcasting, gathering, scattering, or reducing data across the group.
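
As a minimal sketch of what "every process calls the same routine" looks like (not from the thread; the value 42 is arbitrary), here is a broadcast in which rank 0's value is distributed to all ranks:

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int value = (rank == 0) ? 42 : 0;   /* only the root holds the data initially */
    MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);   /* every rank calls this */

    printf("rank %d now has value %d\n", rank, value);

    MPI_Finalize();
    return 0;
}
```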

2. What is the purpose of MPI collective operations?

The purpose of MPI collective operations is to improve the performance of parallel programs by allowing multiple processes to work together and share data efficiently.

3. Do MPI collective operations involve multiple hops?

Yes, MPI collective operations typically involve multiple hops: most implementations use tree-based (for example binomial-tree) algorithms, so data passes through intermediate processes rather than travelling directly from every sender to its final destination.
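
To illustrate the multiple hops, here is a hand-rolled binomial-tree sum built from point-to-point calls (a sketch, not any particular library's actual implementation): each contribution reaches rank 0 through at most log2(P) intermediate ranks, which is roughly how many MPI libraries implement MPI_Reduce internally.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double sum = (double)rank;          /* each rank's contribution */

    for (int step = 1; step < size; step *= 2) {
        if (rank % (2 * step) == 0) {
            /* receive a partial sum from the partner "step" ranks away */
            if (rank + step < size) {
                double incoming;
                MPI_Recv(&incoming, 1, MPI_DOUBLE, rank + step, 0,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                sum += incoming;
            }
        } else {
            /* forward the partial sum one hop toward the root, then stop */
            MPI_Send(&sum, 1, MPI_DOUBLE, rank - step, 0, MPI_COMM_WORLD);
            break;
        }
    }

    if (rank == 0)
        printf("tree-reduced sum = %f\n", sum);

    MPI_Finalize();
    return 0;
}
```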

4. How do multiple hops affect the performance of MPI collective operations?

Each hop adds latency, since data has to be forwarded through intermediate processes before reaching its destination. In return, tree-shaped communication patterns spread the work across many processes instead of funneling everything through the root, so for larger process counts they usually finish faster overall. MPI libraries choose the algorithm and communication pattern (and often switch between them based on message size and process count) to balance these effects.
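
If you want to see the cost on your own machine, a simple timing sketch with MPI_Wtime works (the message size and repetition count below are arbitrary assumptions); running it with different process counts shows how the collective's time scales:

```c
#include <mpi.h>
#include <stdio.h>

#define N    1000000   /* doubles per reduction */
#define REPS 20        /* repetitions to average over */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    static double in[N], out[N];
    for (int i = 0; i < N; ++i)
        in[i] = 1.0;

    MPI_Barrier(MPI_COMM_WORLD);        /* line the ranks up before timing */
    double t0 = MPI_Wtime();
    for (int r = 0; r < REPS; ++r)
        MPI_Reduce(in, out, N, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    double t1 = MPI_Wtime();

    if (rank == 0)
        printf("average MPI_Reduce time: %g s\n", (t1 - t0) / REPS);

    MPI_Finalize();
    return 0;
}
```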

5. Are there any alternatives to using MPI collective operations with multiple hops?

Yes. The same data movement can be written with point-to-point communication, for example by having every process send its data directly to the root. For more than a handful of processes this is usually less efficient than calling the library's collective, whose multi-hop algorithms are already tuned for the underlying network.
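
Here is a sketch of that "no hops" alternative (illustrative only): every rank sends its value straight to rank 0, which accumulates the sum itself. This puts O(P) messages on the root, versus O(log P) rounds for a tree-based MPI_Reduce, which is why it tends to scale worse.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double local = (double)rank;

    if (rank == 0) {
        /* root receives from every other rank, one message at a time */
        double sum = local;
        for (int src = 1; src < size; ++src) {
            double incoming;
            MPI_Recv(&incoming, 1, MPI_DOUBLE, src, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            sum += incoming;
        }
        printf("flat sum = %f\n", sum);
    } else {
        MPI_Send(&local, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}
```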
