Reservoir Characterization Using Production Data and Time-Lapse Seismic Data
One of the most commonly encountered, and probably the most challenging, tasks in reservoir engineering is to describe the reservoir accurately and efficiently. An accurate description of a reservoir is crucial to the management of production and the efficiency of oil recovery. Reservoir modelling is an important step in predicting a reservoir's future performance, which bears directly on reservoir management, risk analysis and key economic decisions. The purpose of reservoir modelling is not only to build a model that is consistent with the currently available data, but to build one that gives a good prediction of future behaviour. Updating a reservoir model to behave as closely as possible to the real reservoir is called history matching, and the estimation of reservoir properties by this method is known as the parameter estimation problem, which is an inversion process. Parameter estimation is a time-consuming, non-unique problem with a large solution space. Saturation and pressure changes, and porosity and permeability distributions, are the most common parameters to estimate in the oil industry. These parameters must be specified in every node of a petroleum reservoir simulator, and they are adjusted until the model predictions match the observation data to a sufficient degree. In this project, the solution space is reduced by adding time-lapse seismic data as a new set of dynamic data alongside the traditional production histories. Time-lapse (or 4D) seismic consists of two or more 3D seismic surveys shot at different calendar times, producing images at different points in a reservoir's history. The seismic response of a reservoir may change due to changes in pressure, fluid saturation and temperature, and these changes in the seismic images can be used as additional observation data.
Time-lapse seismic data are dynamic measurements with high horizontal resolution that give a significant image of fluid and pressure changes across the entire reservoir. However, the results carry errors and uncertainties related to the repeatability of data acquisition, the data-processing sequences, low vertical resolution, gaps in rock-physics understanding, and errors in up-scaling and cross-scaling seismic and simulation data. This project uses the exact amplitudes of the processed seismic images in two forms: zero-offset amplitudes and amplitude-versus-offset (AVO) gradients. The effect of adding AVO gradients to the objective function (the misfit between the responses of the model and the real reservoir) is discussed in Chapter 3. One of the key issues in parameter estimation is to develop an efficient and reliable non-linear regression procedure, which rests on three components: a mathematical model, an objective function and an optimization algorithm. The mathematical (forward) model in this project consists of two parts: a reservoir simulator and a forward seismic model. A three-phase black-oil commercial simulator (ECLIPSE 100) is used to simulate fluid and pressure changes within the reservoir due to depletion and water injection. Forward seismic modelling software, based on rock-physics formulations (the Gassmann equation and the Hertz-Mindlin model) and matrix propagation techniques developed at NTNU, is used to compute 4D seismic amplitudes from saturation and pressure changes. A new objective function, defined as the difference between observation data and simulated data, contains both 4D seismic and production terms. Because these data are of different natures, integrating them still presents a challenge; the key issues are the type of 4D seismic data used and the weighting factor between the two terms.
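The general shape of such a combined objective function can be sketched as a weighted sum of least-squares misfit terms. The sketch below is illustrative only: the function names, the scalar seismic weight w_seis, and the use of inverse data-covariance matrices are assumptions for the example, not the thesis's actual formulation.

```python
import numpy as np

def misfit(d_obs, d_sim, cov_inv):
    """Weighted least-squares misfit: (d_obs - d_sim)^T C^-1 (d_obs - d_sim)."""
    r = d_obs - d_sim
    return float(r @ cov_inv @ r)

def objective(prod_obs, prod_sim, seis_obs, seis_sim,
              cov_prod_inv, cov_seis_inv, w_seis=1.0):
    """Total objective: production misfit plus a weighted 4D-seismic misfit.
    The choice of w_seis balances the two data types against each other."""
    return (misfit(prod_obs, prod_sim, cov_prod_inv)
            + w_seis * misfit(seis_obs, seis_sim, cov_seis_inv))
```

With identity covariances the two terms reduce to plain sums of squared residuals, which makes the role of the weighting factor easy to see.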
Different scenarios and weighting factors are tested and discussed in Chapter 3 and Appendix C2. Several optimization techniques are tested in order to choose the simplest, fastest, most efficient and most robust algorithm under conditions that are case dependent and subject to some technical limitations. Both derivative-based (Gauss-Newton (GN) and the Sparse Nonlinear OPTimizer (SNOPT)) and derivative-free (Hooke-Jeeves direct search (HJDS), general pattern search (GPS), particle swarm optimization (PSO) and the genetic algorithm (GA)) approaches are tested on two different types of inversion problem (Chapter 5). In the first case, the optimization variable is a facies indicator (i.e., sand or shale) at every grid block, and the observation data consist of production and seismic data. We study two different types of seismic observables (diffraction and travel-time tomography), both computed along the two perpendicular cross-well sections from the injectors, and only at the end of production. In the second case, the optimization variables are porosity and permeability at every grid block, and the observation data are production and 4D seismic data, the latter in the form of zero-offset amplitudes and amplitude-versus-offset (AVO) gradients. The main objective of this work is to estimate the distributions of porosity and permeability using time-lapse seismic and production data. The first attempt (Chapter 2) was to propose a least-squares inversion technique to invert for key parameters used in fluid-flow simulation, such as saturation and pressure. The Norne Field (offshore Norway) is chosen as the data set for this PhD work; most of the work is done on a 2D semi-synthetic model, a 2D section of this reservoir. The second attempt was to estimate the distributions of two important reservoir parameters, porosity and permeability, from 4D seismic data (Appendices C4 and C5); production data are then added to the observation data (Chapter 3).
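The two seismic observables named above, zero-offset amplitude and AVO gradient, are conventionally the intercept and slope of the two-term Shuey approximation R(θ) ≈ R0 + G·sin²θ. A minimal sketch of extracting them from amplitude-versus-angle data follows; it assumes the standard Shuey parameterization, not necessarily the exact processing used in the thesis, and the function name is illustrative.

```python
import numpy as np

def avo_intercept_gradient(angles_deg, reflectivity):
    """Fit the two-term Shuey approximation R(theta) ~ R0 + G*sin^2(theta)
    by linear least squares. Returns the zero-offset amplitude R0
    (intercept) and the AVO gradient G."""
    s2 = np.sin(np.radians(angles_deg)) ** 2
    A = np.column_stack([np.ones_like(s2), s2])  # design matrix [1, sin^2]
    (r0, g), *_ = np.linalg.lstsq(A, reflectivity, rcond=None)
    return r0, g
```

Because the model is linear in (R0, G), the fit needs only a handful of offsets per reflection point, which is what makes these two attributes convenient 4D observables.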
Some simple improvements are made in this part based on geological knowledge, reservoir properties and so on, applied as bound constraints in the parameter estimation routine. The inversion process is time consuming, and without speed-up it is difficult to carry out in practice. We tried to speed up the optimization in two ways. First, we introduced a distributed environment framework using 20 processors (Chapters 3 and 5). Second, we reduced the number of variables while retaining the geological properties as much as possible. We tested a traditional method, Gradzone Analysis, and then introduced principal component analysis (PCA) to reduce the number of parameters. The results show that PCA is not only efficient at reducing the number of parameters to an acceptable level without losing too much reservoir information, but can also incorporate additional complex spatial constraints that force the parameter estimation to honour the geology. With this combination, the model inversion approach becomes more efficient (Chapters 4 and 5). One aim of this work is to estimate the distribution of porosity and permeability in a part of the Norne Field along well E-3CH by matching the dynamic behaviour, which forced us to work under limited conditions. These limitations are technical, such as the number of licences for the commercial simulation software, the number of available processors, and so on. In model inversion problems we usually have one or more optimization variables per grid block; in large reservoirs there are at least a few thousand grid blocks, which makes gradient-based optimization techniques fairly inefficient. An alternative is either to improve these methods or to use derivative-free approaches.
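The PCA-based reduction described above can be illustrated with a small sketch: a truncated basis is built from an ensemble of property-field realizations, and the optimizer then searches over a few basis coefficients instead of one value per grid block. The ensemble, shapes, and function names here are hypothetical, assumed only for the example.

```python
import numpy as np

def pca_basis(ensemble, k):
    """Build a rank-k PCA basis from an ensemble of property fields.
    ensemble: array of shape (n_realizations, n_gridblocks).
    Returns the mean field and the first k principal directions; a field
    is then parameterized by k coefficients instead of n_gridblocks values."""
    mean = ensemble.mean(axis=0)
    # right singular vectors of the centred ensemble are the principal directions
    _, _, vt = np.linalg.svd(ensemble - mean, full_matrices=False)
    return mean, vt[:k]

def reconstruct(mean, basis, coeffs):
    """Map k PCA coefficients back to a full property field."""
    return mean + basis.T @ coeffs
```

Fields generated this way stay inside the span of the training ensemble, which is how the reduction retains geological character while shrinking the search space.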
Many different optimization algorithms are tested on the synthetic model, and the results show that the Hooke-Jeeves direct search method is faster, more efficient, and easier to implement than the derivative-based (Gauss-Newton, Sparse Nonlinear OPTimizer) and derivative-free (general pattern search, particle swarm optimization and genetic algorithm) alternatives in situations where a parallel distributed framework cannot be used; otherwise, the Sparse Nonlinear OPTimizer and general pattern search are preferable (Chapters 5 and 6). In this work, because of the limitations on the number of licences and processors, we had to choose only one of two approaches to significantly increase the efficiency of the optimization methodologies: a distributed environment framework, or a message passing interface (MPI) to parallelize the reservoir simulation model. The main advantage of Hooke-Jeeves is that it runs fast serially and does not need a distributed environment. The combination of all of this is applied to estimating the distribution of porosity and permeability in a part of the Norne Field along well E-3CH using time-lapse seismic and production data (Chapter 6). The results show that, in this case, the Hooke-Jeeves direct search combined with PCA and parallel simulation runs can be robust, fairly efficient and very simple to implement.
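The Hooke-Jeeves method favoured above alternates exploratory moves along each coordinate with pattern moves in the improving direction, needing only objective evaluations. A minimal serial sketch of the classic algorithm is given below; parameter names and defaults are illustrative, and a real history-matching run would replace f with the full simulator-plus-misfit evaluation.

```python
import numpy as np

def explore(f, x, fx, step):
    """Exploratory move: try +/- step along each coordinate, keeping improvements."""
    x = x.copy()
    for i in range(len(x)):
        for d in (step, -step):
            xt = x.copy()
            xt[i] += d
            ft = f(xt)
            if ft < fx:
                x, fx = xt, ft
                break
    return x, fx

def hooke_jeeves(f, x0, step=1.0, shrink=0.5, tol=1e-6, max_iter=1000):
    """Hooke-Jeeves direct search: exploratory moves, then pattern moves
    that jump further along the improving direction; the step is halved
    whenever no improvement is found."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(max_iter):
        if step <= tol:
            break
        xe, fe = explore(f, x, fx, step)
        if fe < fx:
            while True:
                xp = xe + (xe - x)        # pattern move past the new best point
                x, fx = xe, fe
                xe2, fe2 = explore(f, xp, f(xp), step)
                if fe2 < fx:
                    xe, fe = xe2, fe2
                else:
                    break
        else:
            step *= shrink                # refine the mesh
    return x, fx
```

Each iteration is a short sequence of dependent evaluations, which is why the method runs well serially and, as noted above, does not require a distributed environment.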