Commit dcfe9e4a authored by l_opal

cleanup unused methods from FieldLayout classes

- broken tests have been disabled
parent 589cbc98
/*
A fairly simple load balancer inspired by Dan Quinlan's MLB.
It does recursive binary subdivision of a FieldLayout domain,
restricting the cuts to coordinate directions, so as to balance the
workload. The "workload" is given by a Field of weights passed in.
It decides on the cut axis by cutting the longest axis of a brick,
and the location of that cut by balancing the weights on each side
of the cut. The resulting distribution has one vnode per processor.
This is restricted to a number of processors that is a power of two.
It performs log(P) parallel reductions.
It does nothing fancy when deciding on the splits to try to make the
new partitioning close to the previous one. The same set of weights will
always give the same repartitioning, but similar sets of weights
could result in quite different partitionings.
There are two functions defined here:
NDIndex<Dim>
CalcBinaryRepartition(FieldLayout<Dim>&, BareField<double,Dim>&);
Given a FieldLayout and a Field of weights, find the domain for this
processor. This does not repartition the FieldLayout, it just
calculates the domain. If you want to further subdivide these
domains, just cut up what this function returns.
void
BinaryRepartition(FieldLayout<Dim>&, BareField<double,Dim>&);
Just call the above function and then repartition the FieldLayout
(and all the Fields defined on it).
*/
//
// A fairly simple load balancer inspired by Dan Quinlan's MLB.
//
// It does recursive binary subdivision of a FieldLayout domain,
// restricting the cuts to coordinate directions, so as to balance the
// workload. The "workload" is given by a Field of weights passed in.
// It decides on the cut axis by cutting the longest axis of a brick,
// and the location of that cut by balancing the weights on each side
// of the cut. The resulting distribution has one vnode per processor.
//
// This is restricted to a number of processors that is a power of two.
//
// It performs log(P) parallel reductions.
//
// It does nothing fancy when deciding on the splits to try to make the
// new partitioning close to the previous one. The same set of weights will
// always give the same repartitioning, but similar sets of weights
// could result in quite different partitionings.
//
// There are two functions defined here:
//
// NDIndex<Dim>
// CalcBinaryRepartition(FieldLayout<Dim>&, BareField<double,Dim>&);
//
// Given a FieldLayout and a Field of weights, find the domain for this
// processor. This does not repartition the FieldLayout, it just
// calculates the domain. If you want to further subdivide these
// domains, just cut up what this function returns.
//
// void
// BinaryRepartition(FieldLayout<Dim>&, BareField<double,Dim>&);
//
// Just call the above function and then repartition the FieldLayout
// (and all the Fields defined on it).
//
// Copyright (c) 2003 - 2020
// Paul Scherrer Institut, Villigen PSI, Switzerland
@@ -54,9 +51,6 @@
#ifndef BINARY_BALANCER_H
#define BINARY_BALANCER_H
//////////////////////////////////////////////////////////////////////
//////////////////////////////////////////////////////////////////////
// forward declarations
template<unsigned Dim> class FieldLayout;
template<class T, unsigned Dim> class BareField;
@@ -76,8 +70,6 @@ BinaryRepartition(FieldLayout<Dim>& layout, BareField<double,Dim>& weights)
layout.Repartition( CalcBinaryRepartition(layout,weights) );
}
//////////////////////////////////////////////////////////////////////
#include "FieldLayout/BinaryBalancer.hpp"
#endif // BINARY_BALANCER_H
#endif
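//
// Usage sketch (illustrative only, not part of this commit; the names
// `layout` and `weights` are assumptions of this example):
//
//   FieldLayout<3>       layout(/* existing decomposition */);
//   BareField<double, 3> weights(layout);   // per-grid-point work estimate
//   weights = 1.0;                          // e.g. uniform work (assumes IPPL scalar assignment)
//   BinaryRepartition(layout, weights);     // repartitions layout and all Fields defined on it
//
// To only compute this processor's new domain without repartitioning:
//
//   NDIndex<3> myDomain = CalcBinaryRepartition(layout, weights);
//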
/*
Implementation of BinaryBalancer.
The general strategy is that you do log(P) splits on the domain. It
starts with the whole domain, does a reduction to find where to
split it, then does reductions on each of the resulting domains to
find where to split those, then reductions on those to split them,
and so on until it is done.
Suppose you're on the n'th split, so there are 2**n domains figured
out so far, and after the split there will be 2**(n+1) domains. In
each of those 2**n domains you need to find
a) The axis to split on. This is done by just finding the longest
axis in that domain.
b) The location within that domain to make the split. This is done
by reducing the weights on all dimensions except the axis to be
split and finding the location within that array that puts half
the weight on each side.
The reduction for b) is done in a scalable way. It is a parallel
reduction, and if there are 2**n domains being split, the reductions
are accumulated onto processors 0..2**n-1. Those processors
calculate the split locations and broadcast them to all the
processors.
At every stage of the process all the processors know all the
domains. This is necessary because the weight array could be
distributed arbitrarily, so the reductions could involve any
processors.
Nevertheless, the reductions are performed efficiently. Using
DomainMaps, only the processors that need to participate in a
reduction do participate.
*/
//
// Implementation of BinaryBalancer.
//
// The general strategy is that you do log(P) splits on the domain. It
// starts with the whole domain, does a reduction to find where to
// split it, then does reductions on each of the resulting domains to
// find where to split those, then reductions on those to split them,
// and so on until it is done.
//
// Suppose you're on the n'th split, so there are 2**n domains figured
// out so far, and after the split there will be 2**(n+1) domains. In
// each of those 2**n domains you need to find
//
// a) The axis to split on. This is done by just finding the longest
// axis in that domain.
//
// b) The location within that domain to make the split. This is done
// by reducing the weights on all dimensions except the axis to be
// split and finding the location within that array that puts half
// the weight on each side.
//
// The reduction for b) is done in a scalable way. It is a parallel
// reduction, and if there are 2**n domains being split, the reductions
// are accumulated onto processors 0..2**n-1. Those processors
// calculate the split locations and broadcast them to all the
// processors.
//
// At every stage of the process all the processors know all the
// domains. This is necessary because the weight array could be
// distributed arbitrarily, so the reductions could involve any
// processors.
//
// Nevertheless, the reductions are performed efficiently. Using
// DomainMaps, only the processors that need to participate in a
// reduction do participate.
//
// Copyright (c) 2003 - 2020
// Paul Scherrer Institut, Villigen PSI, Switzerland
......
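// The split-location search described above can be pictured with a small,
// self-contained sketch (plain std::vector, not the IPPL implementation):
// reduce the weights onto the cut axis, then place the cut where the
// accumulated profile first reaches half of the total weight.
#include <cstddef>
#include <vector>

// `reduced` holds the weights already summed over every axis except the cut
// axis; the return value is the offset at which to split that axis.
inline std::size_t findMedianCut(const std::vector<double>& reduced) {
    if (reduced.size() < 2)
        return 0;                  // nothing to split
    double total = 0.0;
    for (double w : reduced)
        total += w;
    double acc = 0.0;
    for (std::size_t i = 0; i + 1 < reduced.size(); ++i) {
        acc += reduced[i];
        if (acc >= 0.5 * total)
            return i + 1;          // cut between plane i and plane i + 1
    }
    return reduced.size() - 1;     // degenerate: all the weight is at the far end
}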
@@ -21,53 +21,33 @@
#include "FieldLayout/FieldLayout.h"
template<unsigned Dim, class Mesh, class Centering>
class CenteredFieldLayout : public FieldLayout<Dim>
{
template <unsigned Dim, class Mesh, class Centering>
class CenteredFieldLayout : public FieldLayout<Dim> {
public:
//---------------------------------------------------------------------------
// Constructors from a mesh object only and parallel/serial specifiers.
// If not doing this, the user should just use a simple FieldLayout object,
// though no harm would be done in constructing a CenteredFieldLayout with
// Index/NDIndex arguments via the inherited constructors from FieldLayout.
//---------------------------------------------------------------------------
//---------------------------------------------------------------------------
// These specify only a total number of vnodes, allowing the constructor
// complete control on how to do the vnode partitioning of the index space:
// Constructor for arbitrary dimension with parallel/serial specifier array:
// This one also works if nothing except mesh is specified:
CenteredFieldLayout(Mesh& mesh,
e_dim_tag *p=0,
int vnodes=-1);
// Special constructor which uses an existing partition,
// in particular one from expde
//---------------------------------------------------------------------------
// These specify both the total number of vnodes and the numbers of vnodes
// along each dimension for the partitioning of the index space. Obviously
// this restricts the number of vnodes to be a product of the numbers along
// each dimension (the constructor implementation checks this):
// Constructor for arbitrary dimension with parallel/serial specifier array:
CenteredFieldLayout(Mesh& mesh, e_dim_tag *p,
unsigned* vnodesAlongDirection,
bool recurse=false,
int vnodes=-1);
//---------------------------------------------------------------------------
// A constructor for a completely user-specified partitioning of the
// mesh space.
CenteredFieldLayout(Mesh& mesh,
const NDIndex<Dim> *dombegin,
const NDIndex<Dim> *domend,
const int *nbegin,
const int *nend);
//---------------------------------------------------------------------------
// Constructors from a mesh object only and parallel/serial specifiers.
// If not doing this, the user should just use a simple FieldLayout object,
// though no harm would be done in constructing a CenteredFieldLayout with
// Index/NDIndex arguments via the inherited constructors from FieldLayout.
//---------------------------------------------------------------------------
//---------------------------------------------------------------------------
// These specify only a total number of vnodes, allowing the constructor
// complete control on how to do the vnode partitioning of the index space:
// Constructor for arbitrary dimension with parallel/serial specifier array:
// This one also works if nothing except mesh is specified:
CenteredFieldLayout(Mesh &mesh, e_dim_tag *p = 0, int vnodes = -1);
//---------------------------------------------------------------------------
// A constructor for a completely user-specified partitioning of the
// mesh space.
CenteredFieldLayout(
Mesh &mesh, const NDIndex<Dim> *dombegin, const NDIndex<Dim> *domend, const int *nbegin,
const int *nend);
};
#include "FieldLayout/CenteredFieldLayout.hpp"
#endif // CENTERED_FIELD_LAYOUT_H
#endif
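//
// Construction sketch (illustrative; it mirrors the use in the FieldDebug2
// test further below, and the mesh construction line is an assumption):
//
//   const unsigned D = 3;
//   typedef UniformCartesian<D> M;
//   M mesh(Index(17), Index(17), Index(17));            // 17 vertices per direction
//   e_dim_tag sp[D] = {PARALLEL, PARALLEL, PARALLEL};
//   int nvnodes = 8;
//   CenteredFieldLayout<D, M, Cell> cfl(mesh, sp, nvnodes);
//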
@@ -17,42 +17,34 @@
//
#include "FieldLayout/CenteredFieldLayout.h"
#include "Meshes/Centering.h"
#include "Meshes/CartesianCentering.h"
#include "Meshes/Centering.h"
#include "Utility/PAssert.h"
//=============================================================================
// Helper global functions:
// The constructors call these specialized global functions as a workaround for
// lack of partial specialization:
//=============================================================================
//===========================Arbitrary mesh type=============================
//-----------------------------------------------------------------------------
// These specify only a total number of vnodes, allowing the constructor
// complete control on how to do the vnode partitioning of the index space:
// Constructor for arbitrary dimension with parallel/serial specifier array:
//------------------Cell centering---------------------------------------------
template<unsigned Dim, class Mesh>
inline void
centeredInitialize(CenteredFieldLayout<Dim,Mesh,Cell> & cfl,
const Mesh& mesh,
e_dim_tag* edt,
int vnodes)
{
NDIndex<Dim> ndi;
for (unsigned int d=0; d<Dim; d++)
ndi[d] = Index(mesh.gridSizes[d] - 1);
cfl.initialize(ndi, edt, vnodes);
template <unsigned Dim, class Mesh>
inline void centeredInitialize(
CenteredFieldLayout<Dim, Mesh, Cell>& cfl, const Mesh& mesh, e_dim_tag* edt, int vnodes) {
NDIndex<Dim> ndi;
for (unsigned int d = 0; d < Dim; d++)
ndi[d] = Index(mesh.gridSizes[d] - 1);
cfl.initialize(ndi, edt, vnodes);
}
//=============================================================================
// General ctor calls specializations of a global function (workaround for lack
// of partial specialization):
//=============================================================================
//------------------Vert centering---------------------------------------------
template <unsigned Dim, class Mesh>
inline void centeredInitialize(
CenteredFieldLayout<Dim, Mesh, Vert>& cfl, const Mesh& mesh, e_dim_tag* edt, int vnodes) {
NDIndex<Dim> ndi;
for (unsigned int d = 0; d < Dim; d++)
ndi[d] = Index(mesh.gridSizes[d]);
cfl.initialize(ndi, edt, vnodes);
}
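// For example (numbers illustrative): a mesh with gridSizes = {5, 9} has
// 5 x 9 vertices, so the Vert-centered layout above spans Index(5) x Index(9),
// while the Cell-centered layout spans Index(4) x Index(8) -- one fewer cell
// than vertices along each direction.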
//-----------------------------------------------------------------------------
// These specify only a total number of vnodes, allowing the constructor
@@ -61,51 +53,11 @@ centeredInitialize(CenteredFieldLayout<Dim,Mesh,Cell> & cfl,
// Constructor for arbitrary dimension with parallel/serial specifier array:
// This one also works if nothing except mesh is specified:
template<unsigned Dim, class Mesh, class Centering>
CenteredFieldLayout<Dim,Mesh,Centering>::
CenteredFieldLayout(Mesh& mesh,
e_dim_tag *p,
int vnodes)
{
PInsist(Dim<=Mesh::Dimension,
"CenteredFieldLayout dimension cannot be greater than Mesh dimension!!");
centeredInitialize(*this, mesh, p, vnodes);
}
//-----------------------------------------------------------------------------
// These specify both the total number of vnodes and the numbers of vnodes
// along each dimension for the partitioning of the index space. Obviously this
// restricts the number of vnodes to be a product of the numbers along each
// dimension (the constructor implementation checks this):
// Constructor for arbitrary dimension with parallel/serial specifier array:
template<unsigned Dim, class Mesh, class Centering>
CenteredFieldLayout<Dim,Mesh,Centering>::
CenteredFieldLayout(Mesh& mesh,
e_dim_tag *p,
unsigned* vnodesAlongDirection,
bool recurse,
int vnodes)
{
PInsist(Dim<=Mesh::Dimension,
"CenteredFieldLayout dimension cannot be greater than Mesh dimension!!");
centeredInitialize(*this, mesh, p, vnodesAlongDirection, recurse, vnodes);
}
//-----------------------------------------------------------------------------
// A constructor for a completely user-specified partitioning of the
// mesh space.
template<unsigned Dim, class Mesh, class Centering>
CenteredFieldLayout<Dim,Mesh,Centering>::
CenteredFieldLayout(Mesh& mesh,
const NDIndex<Dim> *dombegin,
const NDIndex<Dim> *domend,
const int *nbegin,
const int *nend)
{
centeredInitialize(*this, mesh, dombegin, domend, nbegin, nend);
template <unsigned Dim, class Mesh, class Centering>
CenteredFieldLayout<Dim, Mesh, Centering>::CenteredFieldLayout(
Mesh& mesh, e_dim_tag* p, int vnodes) {
PInsist(
Dim <= Mesh::Dimension,
"CenteredFieldLayout dimension cannot be greater than Mesh dimension!!");
centeredInitialize(*this, mesh, p, vnodes);
}
//
//
// FieldLayoutUser is a base class for all classes which need to use
// a FieldLayout - it is derived from User, which provides a virtual
// function 'notifyUserOfDelete' which is called when the FieldLayout
@@ -23,27 +23,24 @@
#ifndef FIELD_LAYOUT_USER_H
#define FIELD_LAYOUT_USER_H
#include "Utility/User.h"
#include "Utility/UserList.h"
// class definition
class FieldLayoutUser : public User {
public:
// constructor - the base class selects a unique ID value
FieldLayoutUser() {};
// constructor - the base class selects a unique ID value
FieldLayoutUser(){};
// destructor, nothing to do here
virtual ~FieldLayoutUser() {};
// destructor, nothing to do here
virtual ~FieldLayoutUser(){};
//
// virtual functions for FieldLayoutUser's
//
//
// virtual functions for FieldLayoutUser's
//
// Repartition onto a new layout
virtual void Repartition(UserList *) = 0;
// Repartition onto a new layout
virtual void Repartition(UserList *) = 0;
};
#endif // FIELD_LAYOUT_USER_H
#endif // FIELD_LAYOUT_USER_H
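// A minimal sketch (hypothetical class and assumed include path, not part of
// this commit) of a FieldLayoutUser: any object that depends on a FieldLayout
// implements Repartition() so the layout can notify it after the
// decomposition changes.
#include "FieldLayout/FieldLayoutUser.h"

class ExampleLayoutUser : public FieldLayoutUser {
public:
    ExampleLayoutUser() {}
    virtual ~ExampleLayoutUser() {}

    // Called once the FieldLayout has been repartitioned; a real user
    // (e.g. a Field) would redistribute its data across the new vnodes here.
    virtual void Repartition(UserList*) {
        // rebuild any cached per-vnode information
    }
};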
// -*- C++ -*-
/***************************************************************************
*
* The IPPL Framework
*
***************************************************************************/
//
// Vnodes really have very little information.
// It knows its domain and what processor it resides on.
//
// Also, it has a global integer index for the vnode (useful with more recent
// FieldLayouts which store a logical "array" of vnodes; user specifies numbers
// of vnodes along each direction). Classes or user codes that use Vnode are
// responsible for setting and managing the values of this index; if unset, it
// has the value -1.
//
// Copyright (c) 2003 - 2020
// Paul Scherrer Institut, Villigen PSI, Switzerland
// All rights reserved.
//
// This file is part of OPAL.
//
// OPAL is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// You should have received a copy of the GNU General Public License
// along with OPAL. If not, see <https://www.gnu.org/licenses/>.
//
#ifndef VNODE_H
#define VNODE_H
// include files
#include "Utility/RefCounted.h"
#include "Index/NDIndex.h"
#include "Utility/RefCounted.h"
#include <iostream>
// forward declarations
template <unsigned Dim> class Vnode;
template <unsigned Dim>
class Vnode;
template <unsigned Dim>
std::ostream& operator<<(std::ostream&, const Vnode<Dim>&);
//----------------------------------------------------------------------
//
// Vnodes really have very little information.
// It knows its domain and what processor it resides on.
//
// Also, it has a global integer index for the vnode (useful with more recent
// FieldLayouts which store a logical "array" of vnodes; user specifies numbers
// of vnodes along each direction). Classes or user codes that use Vnode are
// responsible for setting and managing the values of this index; if unset, it
// has the value -1.
//
//----------------------------------------------------------------------
template<unsigned Dim>
class Vnode : public RefCounted
{
template <unsigned Dim>
class Vnode : public RefCounted {
private:
NDIndex<Dim> Domain;
int Node;
int vnode_m; // Global vnode ID number (between 0 and nvnodes - 1)
public:
// Null ctor does nothing.
Vnode() {}
// Normal ctor:
Vnode(const NDIndex<Dim>& domain, int node, int vnode=-1) :
Domain(domain), Node(node), vnode_m(vnode) {}
// Copy ctor:
Vnode(const Vnode<Dim>& vn) :
Domain(vn.Domain), Node(vn.Node), vnode_m(vn.vnode_m) {}
// operator= to copy one vnode into another
Vnode<Dim> &operator=(const Vnode<Dim> &vn) {
Domain = vn.Domain;
Node = vn.Node;
vnode_m = vn.vnode_m;
return *this;
}
int getNode() const { return Node; }
int getVnode() const { return vnode_m; }
const NDIndex<Dim>& getDomain() const { return Domain; }
// put data into a message to send to another node
Message& putMessage(Message& m) const {
Domain.putMessage(m);
m.put(Node);
m.put(vnode_m);
return m;
}
// get data out from a message
Message& getMessage(Message& m) {
Domain.getMessage(m);
m.get(Node);
m.get(vnode_m);
return m;
}
NDIndex<Dim> Domain;
int Node;
int vnode_m; // Global vnode ID number (between 0 and nvnodes - 1)
public:
// Null ctor does nothing.
Vnode() {
}
// Normal ctor:
Vnode(const NDIndex<Dim>& domain, int node, int vnode = -1)
: Domain(domain), Node(node), vnode_m(vnode) {
}
// Copy ctor:
Vnode(const Vnode<Dim>& vn) : Domain(vn.Domain), Node(vn.Node), vnode_m(vn.vnode_m) {
}
// operator= to copy one vnode into another
Vnode<Dim>& operator=(const Vnode<Dim>& vn) {
Domain = vn.Domain;
Node = vn.Node;
vnode_m = vn.vnode_m;
return *this;
}
int getNode() const {
return Node;
}
int getVnode() const {
return vnode_m;
}
const NDIndex<Dim>& getDomain() const {
return Domain;
}
// put data into a message to send to another node
Message& putMessage(Message& m) const {
Domain.putMessage(m);
m.put(Node);
m.put(vnode_m);
return m;
}
// get data out from a message
Message& getMessage(Message& m) {
Domain.getMessage(m);
m.get(Node);
m.get(vnode_m);
return m;
}
};
//////////////////////////////////////////////////////////////////////
template <unsigned Dim>
inline std::ostream&
operator<<(std::ostream& out, const Vnode<Dim>& v) {
out << "Node = " << v.getNode() << " ; vnode_m = " << v.getVnode()
<< " ; Domain = " << v.getDomain();
return out;
inline std::ostream& operator<<(std::ostream& out, const Vnode<Dim>& v) {
out << "Node = " << v.getNode() << " ; vnode_m = " << v.getVnode()
<< " ; Domain = " << v.getDomain();
return out;
}
#endif // VNODE_H
#endif
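//
// Usage sketch (illustrative extents, not part of this commit):
//
//   NDIndex<2> dom;
//   dom[0] = Index(16);                          // 16 points along x
//   dom[1] = Index(8);                           //  8 points along y
//   Vnode<2> vn(dom, /*node*/ 0, /*vnode*/ 3);   // owned by processor 0, global id 3
//   std::cout << vn << std::endl;                // Node = 0 ; vnode_m = 3 ; Domain = ...
//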
@@ -67,6 +67,7 @@ TEST(Field, FieldDebug2)
agfp3(A);
EXPECT_NEAR(sum(A),288,roundOffError);
#if 0
// Scalar Field Face(0)-Centered---------------------------------------------
typedef CommonCartesianCenterings<D,1U,0U>::allFace FC;
CenteredFieldLayout<D,M,FC> fl(mesh,sp,nvnodes);
@@ -97,7 +98,7 @@ TEST(Field, FieldDebug2)
fdi << endl << "--------ggfp3(B)-----------------------------" << endl;
ggfp3(B);
EXPECT_NEAR(sum(B),288,roundOffError);
// Vektor Field vectorFace-Centered------------------------------------------
typedef CommonCartesianCenterings<D,D>::vectorFace VFC;
CenteredFieldLayout<D,M,VFC> vfl(mesh,sp,nvnodes);
@@ -175,4 +176,5 @@ TEST(Field, FieldDebug2)
EXPECT_NEAR(sum(C)[0],1016,roundOffError);
EXPECT_NEAR(sum(C)[1], 904,roundOffError);
EXPECT_NEAR(sum(C)[2], 456,roundOffError);
#endif
}
\ No newline at end of file
set (_SRCS
# Average.cpp
Cartesian.cpp
CartesianCentering.cpp
# CartesianCentering.cpp
)
include_directories (
......