
Development of a Platform for Parallel Reservoir Simulators

Hui Liu Advisor: Dr. Zhangxing (John) Chen

Outline
- Introduction
- Grid Management
- Linear Solver
- Pre-processing
- Visualization
- Numerical Experiments

Introduction: Motivation
- Desktop/workstation
  - Small cases: efficient
  - Large cases: tens of millions of grid cells
  - Large number of wells
  - Multi-core, OpenMP

Our Methods
- Cluster
- OpenMP: multi-core, easy to use, limited scalability
- MPI: communication, MPI-IO (a minimal skeleton is sketched below)
- C language
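Not the platform's actual code, but a minimal sketch of the MPI-in-C pattern every module below builds on: initialize the runtime, query rank and size, do the distributed work, finalize.

```c
/* Minimal MPI-in-C skeleton (illustrative only, not the platform's code).
 * Compile with mpicc and launch with mpirun. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);                 /* start the MPI runtime     */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* id of this process        */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */

    printf("process %d of %d is up\n", rank, size);

    /* grid distribution, assembly, solve and IO would happen here */

    MPI_Finalize();                         /* shut down cleanly         */
    return 0;
}
```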

Typical Simulator
[Diagram: building blocks of a typical simulator (SIM): Grid, Physics, Data, Key words, IO, Communication, Memory, Solver and Visualization]

Development Goals
- Common platform for various simulators
  - Grid, data, linear solver and IO
  - Pre- and post-processing
- Hundreds of millions of grid cells (or more)
- Hundreds of processors (or more)

Grid Management

The most critical module:
- Type: choice of numerical methods
- Partition: workload, communication
- Data structure: info and data distribution
- Numbering: bandwidth, input
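As a rough illustration (field names are hypothetical, not the platform's real types), the per-process grid bookkeeping can be pictured like this:

```c
/* Hypothetical sketch of per-process grid bookkeeping; the fields are
 * illustrative, not the platform's actual data structures. */
typedef struct {
    int     num_local;       /* cells owned by this process              */
    int     num_ghost;       /* halo cells copied from neighbours        */
    long   *global_id;       /* global index of each local/ghost cell    */
    int    *owner;           /* owning process of each ghost cell        */
    int    *cell2cell_ptr;   /* CSR-style cell connectivity ...          */
    int    *cell2cell_idx;   /* ... used for stencils and communication  */
    double *centroid;        /* x, y, z of each cell centre (3 per cell) */
    double *volume;          /* cell volumes                             */
} Grid;
```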

Grid Type
- Hexahedron
- Structured (FD and FV)
- Unstructured (to be implemented)
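For the structured hexahedral case, a natural-ordering index map is enough to locate cells and their stencil neighbours; a minimal sketch:

```c
/* Natural ordering of a structured nx x ny x nz hexahedral grid:
 * the usual (i,j,k) -> global index mapping, 0-based. */
static inline long cell_index(int i, int j, int k, int nx, int ny)
{
    /* i varies fastest, then j, then k */
    return (long)k * nx * ny + (long)j * nx + i;
}

/* Neighbours in a 7-point FD / two-point flux FV stencil are then
 * index +/- 1, +/- nx and +/- nx*ny, provided the cell is not on the
 * corresponding boundary. */
```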

Grid Partition
- Topological
  - Connection, dual graph
  - METIS, ParMETIS (see the sketch below)
- Geometric
  - Location info, coordinates (centroids)
  - Zoltan
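A hedged sketch of topological partitioning with the serial METIS 5 call METIS_PartGraphKway on a tiny illustrative dual graph (four cells in a row); ParMETIS provides the analogous distributed routine for large grids.

```c
/* Topological partitioning with serial METIS 5 on a toy dual graph:
 * 4 cells in a row, split into 2 subdomains. Illustrative only. */
#include <stdio.h>
#include <metis.h>

int main(void)
{
    idx_t nvtxs  = 4;                       /* number of cells (graph vertices) */
    idx_t ncon   = 1;                       /* one balance constraint           */
    idx_t xadj[]   = {0, 1, 3, 5, 6};       /* CSR adjacency of the dual graph  */
    idx_t adjncy[] = {1, 0, 2, 1, 3, 2};
    idx_t nparts = 2;                       /* number of subdomains             */
    idx_t objval;                           /* edge-cut returned by METIS       */
    idx_t part[4];                          /* output: subdomain of each cell   */

    METIS_PartGraphKway(&nvtxs, &ncon, xadj, adjncy,
                        NULL, NULL, NULL,   /* no vertex/edge weights */
                        &nparts, NULL, NULL, NULL,
                        &objval, part);

    for (int i = 0; i < 4; i++)
        printf("cell %d -> subdomain %d\n", i, (int)part[i]);
    return 0;
}
```

The returned part[] array assigns each cell to a subdomain, which in turn fixes the workload balance and the communication pattern between processes.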

Grid Partition: ParMETIS

Linear Solver

- PDE → nonlinear system F(x) = 0 → linear system Ax = b
- Newton-Raphson iteration (sketched below)
- Linear solvers
  - Krylov solvers: GMRES, BICGSTAB
  - Algebraic multigrid (AMG) solvers
  - Preconditioners
- The most time-consuming module, about 60% of run time
- Efficient linear solvers are important
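A structural sketch of the Newton-Raphson loop driving each time step; assemble_residual, assemble_jacobian, linear_solve and norm2 are placeholders, not the platform's actual API.

```c
#include <stdlib.h>

/* Placeholders: in a real simulator these assemble the discretized
 * system and call the parallel linear solver. */
extern void   assemble_residual(const double *x, double *F, int n);
extern void   assemble_jacobian(const double *x, int n);
extern void   linear_solve(double *dx, const double *F, int n);
extern double norm2(const double *v, int n);

/* One Newton-Raphson solve: repeat "linearize, solve, update" until
 * the residual norm drops below tol. */
void newton_solve(double *x, int n, double tol, int max_it)
{
    double *F  = malloc(n * sizeof(double));   /* residual F(x)  */
    double *dx = malloc(n * sizeof(double));   /* Newton update  */

    for (int it = 0; it < max_it; it++) {
        assemble_residual(x, F, n);            /* F = F(x)               */
        if (norm2(F, n) < tol)                 /* converged?             */
            break;

        assemble_jacobian(x, n);               /* A = dF/dx              */
        linear_solve(dx, F, n);                /* solve A dx = -F with a */
        for (int i = 0; i < n; i++)            /* preconditioned Krylov  */
            x[i] += dx[i];                     /* method, e.g. GMRES     */
    }
    free(F);
    free(dx);
}
```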

Parallel BLAS
- Distributed matrix
- Distributed vector
- Matrix and vector management
- Parallel matrix and vector operations (BLAS 1/2)
- Global communication modules
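As one example of a parallel BLAS-1 operation, a distributed dot product computes a local partial sum and combines the pieces with a single MPI_Allreduce; a minimal sketch:

```c
/* Distributed dot product: each process holds a slice of the vectors,
 * computes its partial sum, and one global reduction combines them. */
#include <mpi.h>

double pvec_dot(const double *x, const double *y, int n_local, MPI_Comm comm)
{
    double local = 0.0, global = 0.0;

    for (int i = 0; i < n_local; i++)      /* local part, no communication */
        local += x[i] * y[i];

    /* single collective reduction across all processes */
    MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, comm);
    return global;
}
```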

Linear Solvers
- Solvers
  - GMRES(m), ORTHOMIN(m)
  - BICGSTAB, CG, CGS
  - AMG (from Hypre)
- Preconditioners
  - Domain decomposition preconditioner
  - ILU(k), ILUT and direct method (ILU application sketched below)
  - AMG
  - Constrained pressure residual (CPR)
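Applying an ILU-type preconditioner amounts to one forward and one backward triangular solve with the incomplete factors. The sketch below assumes a particular, hypothetical storage layout: the strict lower part L in CSR with an implied unit diagonal, the strict upper part U in CSR, and the diagonal of U in a separate array.

```c
/* Apply z = (LU)^{-1} r for an ILU-type preconditioner whose factors
 * are already computed. Layout (an assumption for this sketch):
 * L = strict lower part (unit diagonal implied), U = strict upper part,
 * both in CSR; Ud = diagonal of U. */
void ilu_apply(int n,
               const int *Lp, const int *Li, const double *Lx,
               const int *Up, const int *Ui, const double *Ux,
               const double *Ud, const double *r, double *z)
{
    /* forward solve  L y = r   (y stored in z) */
    for (int i = 0; i < n; i++) {
        double s = r[i];
        for (int j = Lp[i]; j < Lp[i + 1]; j++)
            s -= Lx[j] * z[Li[j]];
        z[i] = s;
    }

    /* backward solve  U z = y */
    for (int i = n - 1; i >= 0; i--) {
        double s = z[i];
        for (int j = Up[i]; j < Up[i + 1]; j++)
            s -= Ux[j] * z[Ui[j]];
        z[i] = s / Ud[i];
    }
}
```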

Pre-processing
- Define key words and well file
- Choose model
- Set up reservoir and well data
- Set up initial and boundary conditions
- Set up parameters

Pre-processing
- Set initial conditions, read from file
  - Saturation
  - Porosity
  - Permeability, etc.
- Parallel read with MPI-IO (see the sketch below)
- Data distributed among processors
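A minimal sketch of such a parallel read, assuming (for illustration only) that a property is stored as a flat binary array of doubles ordered by global cell index; each process reads its own contiguous slice collectively with MPI-IO.

```c
/* Parallel MPI-IO read of one property (e.g. porosity): each process
 * reads the contiguous slice belonging to its own cells. The flat
 * binary file layout is an assumption for this sketch. */
#include <mpi.h>

void read_property(const char *fname, double *buf,
                   long first_cell, int n_local, MPI_Comm comm)
{
    MPI_File   fh;
    MPI_Offset offset = (MPI_Offset)first_cell * sizeof(double);

    MPI_File_open(comm, fname, MPI_MODE_RDONLY, MPI_INFO_NULL, &fh);

    /* collective read: every process supplies its own offset and count */
    MPI_File_read_at_all(fh, offset, buf, n_local,
                         MPI_DOUBLE, MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
}
```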

Visualization
- Display grid, wells and physical properties
- Output formats: structured and unstructured grids
- Parallel read/write operations
- Graphics engine, rendering

Visualization: VTK
- VTK toolkit, graphics engine
- VTK formats (legacy format sketched below)
- Parallel write, MPI-IO
- ParaView displays VTK files
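A minimal serial sketch of the legacy ASCII VTK structured-points format that ParaView opens directly; the platform's own writers use parallel MPI-IO instead, so this only illustrates the file layout.

```c
/* Write one field on a structured nx x ny x nz grid in the legacy
 * ASCII VTK structured-points format (serial, for illustration only). */
#include <stdio.h>

void write_vtk(const char *fname, const double *p, int nx, int ny, int nz)
{
    FILE *f = fopen(fname, "w");
    long  n = (long)nx * ny * nz;

    fprintf(f, "# vtk DataFile Version 3.0\n");
    fprintf(f, "pressure field\n");
    fprintf(f, "ASCII\n");
    fprintf(f, "DATASET STRUCTURED_POINTS\n");
    fprintf(f, "DIMENSIONS %d %d %d\n", nx, ny, nz);
    fprintf(f, "ORIGIN 0 0 0\n");
    fprintf(f, "SPACING 1 1 1\n");
    fprintf(f, "POINT_DATA %ld\n", n);
    fprintf(f, "SCALARS pressure double 1\n");
    fprintf(f, "LOOKUP_TABLE default\n");
    for (long i = 0; i < n; i++)
        fprintf(f, "%g\n", p[i]);
    fclose(f);
}
```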

Visualization: Example

[Figures: numerical (num) vs. analytical (ana) solutions]

Visualization: Example

Numerical Experiments
- Parallel cluster (U of Calgary), WestGrid
- 528 standard nodes
  - Two 6-core Intel Xeon E5649 processors
  - 24 GB memory
- InfiniBand 4X QDR, 40 Gbit/s

Exp 1: 15M Case

- GMRES
- Grid: 250x250x250
- Unknowns: 15 million

# processors   Grid time (s)   Overall time (s)
2              44.93           122.68
4              21.74           59.82
8              10.63           27.48
16             5.36            13.54
32             2.81            6.90
64             1.59            3.49

Exp 2: 125M Case


- GMRES
- Grid: 500x500x500
- Unknowns: 125 million

# processors   Grid time (s)   Overall time (s)
16             49.19           1258.55
32             24.18           662.10
64             11.84           338.68
128            5.45            166.54

Exp 3: 200M Case


- GMRES
- Grid: 585x585x585
- Unknowns: 200 million

# processors   Grid time (s)   Overall time (s)
16             80.54           2471.98
32             38.93           1286.08
64             19.82           670.83
128            9.52            346.07

Exp 4: Very Large Cases

- GMRES
- Grid size: billions of cells (B)

# processors   Grid size   Grid time (s)   Overall time (s)
64             1B          115.76          4141.05
128            1B          57.56           2029.93
160            2B          94.74           3229.67
200            3B          123.62          4060.92

Conclusion
A parallel platform for reservoir simulation has been developed. The platform supports the finite difference and finite volume methods and is capable of handling very large-scale problems.

Future Work
- More thorough tests using more processors
- Performance optimization
- Develop reservoir simulators
- Develop physics-based preconditioners

Hui Liu
hui.j.liu@ucalgary.ca
http://schulich.ucalgary.ca/chemical/JohnChen

Sponsors
