Extending a derivative-free model-based trust-region optimization algorithm to account for constraints and partial gradient information - Application to oil field development
This master's thesis is motivated by the oil field development challenge. This is a large and complicated task, and only one part of it is in focus here, namely the well placement challenge. When a new oil field is planned (e.g., in the North Sea), or when more wells are added to an existing field, the placement of the wells is crucial. If the wells are placed wisely, the total amount of recovered oil may improve considerably. In addition, it is preferable not to produce (i.e., extract from the reservoir) water. Produced water must be cleaned before it is released back into the ocean, and there is a limit on how much water can be processed at once. To aid the decision on well placement, an oil reservoir simulator can be used. Data collected from the real field are fed into the simulator, which can then support the decision-making process. This task can be viewed as mathematical programming: there is an input (the well locations), a function (the simulator) and an output (e.g., a number representing the value of the accumulated production of oil, gas and water). This thesis addresses the challenge of finding a minimum of an unknown function. The function may be either an ordinary mathematical function or a function whose value depends on the output of a simulation. The only information available when searching for the minimum is the function evaluations. The first step towards finding the minimum is to sample the unknown function and create a model of the relationship between the inputs and the outputs. The model is valid within a region known as the trust region. The second step is to minimize the model, in the hope that this minimum leads towards a minimum point of the unknown function. New sample points around the newly found point are then needed to create a new model that is valid within a new trust region, centered at the newly found point.
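The sample-model-minimize loop described above can be sketched in a few lines. The following is a minimal one-dimensional illustration, not the thesis's implementation: the quadratic interpolation model, the radius-update factors, and the grid of three sample points are all simplifying assumptions made for this sketch.

```python
import numpy as np

def trust_region_dfo(f, x0, delta=1.0, max_iter=50, tol=1e-8):
    """Minimal 1-D derivative-free model-based trust-region loop (sketch).

    Each iteration samples the unknown function f at three points, builds a
    quadratic interpolation model, and minimizes that model within the
    trust region before re-centering.
    """
    x = float(x0)
    for _ in range(max_iter):
        # Sample the unknown function at the center and the trust-region edges.
        pts = np.array([x - delta, x, x + delta])
        vals = np.array([f(p) for p in pts])
        # Interpolate a quadratic model m(s) = a*s^2 + b*s + c in the step s.
        a, b, c = np.polyfit(pts - x, vals, 2)
        # Minimize the model over the trust region [-delta, +delta]:
        # the minimum is at an edge or at the interior stationary point.
        cands = [-delta, delta]
        if a > 0:
            cands.append(np.clip(-b / (2 * a), -delta, delta))
        s = min(cands, key=lambda s: a * s * s + b * s + c)
        # Accept the step only if the true function decreased; adapt the radius.
        if f(x + s) < vals[1]:
            x, delta = x + s, delta * 1.5
        else:
            delta *= 0.5
        if delta < tol:  # termination criterion
            break
    return x

x_star = trust_region_dfo(lambda x: (x - 2.0) ** 2 + 1.0, x0=0.0)
```

Note that the unknown function is never differentiated: only its values at the sample points enter the model.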
These steps are repeated until a termination criterion is satisfied. This type of method is called derivative-free model-based trust-region optimization, because the optimization is performed on a model that is only trusted within a specific region, and the unknown function is never differentiated. In addition to presenting the theory behind the method, two extensions are made. For most real-life applications, the user needs to be able to impose constraints on the decision variables. Several different approaches exist, each with its own pros and cons. The selected method ensures that every point to be evaluated always obeys the constraints. This is achieved by adding all the user-defined constraints to the minimization of the model within the trust region, which means that the region searched for a minimum becomes smaller. If there are constraints on some of the outputs of the unknown function, these must be handled differently: they are modelled in the same way as the unknown function itself, and these constraint models can be included in the minimization of the model of the unknown function. Adding these constraints makes it harder to find the minimum of the model, so a Sequential Quadratic Programming solver is used for this task. The constraint handling technique for input constraints has been implemented, and the preliminary results are satisfactory. The second extension concerns the possibility of using fewer sample points to create the model. Two different approaches have been explored. The first approach is to use the old model as an approximation to the new model and to minimize the difference between the two once the interpolation conditions have been satisfied. The interpolation conditions ensure that the model provides the same output as the unknown function at the sample points, whereas the minimization ensures that the model is uniquely defined.
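The input-constraint handling can be illustrated by intersecting the trust region with the feasible box before the model is minimized, so that every candidate point is feasible by construction. The sketch below is hypothetical: it assumes bound constraints and a separable quadratic model, and it uses a per-coordinate grid search where the thesis uses an SQP solver.

```python
import numpy as np

def solve_subproblem(grad, hess_diag, x, delta, lb, ub, n_grid=201):
    """Minimize a separable quadratic model of the step over the intersection
    of the trust region [x - delta, x + delta] and the bounds [lb, ub].

    Hypothetical sketch: a real implementation would hand the (generally
    non-separable, generally constrained) subproblem to an SQP/QP solver.
    """
    # Intersect the trust region with the user-defined bounds, so every
    # evaluated point obeys the constraints; the search region shrinks.
    lo = np.maximum(x - delta, lb)
    hi = np.minimum(x + delta, ub)
    x_new = np.empty_like(x)
    for i in range(len(x)):
        s = np.linspace(lo[i], hi[i], n_grid)
        # Separable quadratic model of the step in coordinate i.
        m = grad[i] * (s - x[i]) + 0.5 * hess_diag[i] * (s - x[i]) ** 2
        x_new[i] = s[np.argmin(m)]
    return x_new

# Model of f(x) = x_1^2 + x_2^2 around x = (1, 1), radius 0.5,
# with an illustrative lower bound 0.7 on the second coordinate.
x_new = solve_subproblem(np.array([2.0, 2.0]), np.array([2.0, 2.0]),
                         np.array([1.0, 1.0]), 0.5,
                         np.array([0.2, 0.7]), np.array([np.inf, np.inf]))
```

In the example, the first coordinate is limited by the trust region (step to 0.5) while the second is limited by the bound constraint (step to 0.7), showing how the constraints shrink the region in which the model minimum is sought.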
The second approach concerns a slightly different scenario. Until now, only function evaluations have been available. If derivatives of the unknown function with respect to some of the variables of interest were available, this information could be used to speed up the model-making process, and simulators often provide this additional type of information. The derivatives are included by solving a minimization problem very similar to that of the first approach, and the two approaches can be combined. Including gradient information in the model-making process makes the algorithm converge faster, i.e., fewer function evaluations are needed. However, the local optimum found is often worse than the one found without using gradient information.
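The effect of adding derivative conditions can be illustrated with a one-dimensional least-squares fit: each known derivative contributes an extra equation, so fewer function samples are needed to determine the model. This sketch is an assumption-laden illustration (a 1-D quadratic fitted by `numpy.linalg.lstsq`), not the minimization problem solved in the thesis.

```python
import numpy as np

def fit_model_with_gradients(pts, vals, grad_pts, grad_vals):
    """Fit m(s) = c + g*s + 0.5*h*s^2 to function values at `pts` and to
    known derivatives m'(s) = g + h*s at `grad_pts`, by least squares.

    A quadratic in 1-D needs three function values; with one derivative
    condition, two function values suffice to determine (c, g, h).
    """
    rows, rhs = [], []
    for s, v in zip(pts, vals):
        rows.append([1.0, s, 0.5 * s * s])   # interpolation condition m(s) = v
        rhs.append(v)
    for s, d in zip(grad_pts, grad_vals):
        rows.append([0.0, 1.0, s])           # derivative condition m'(s) = d
        rhs.append(d)
    coef, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return coef  # (c, g, h)

# f(s) = 3 + 2s + 2s^2: two samples plus one derivative recover the model.
coef = fit_model_with_gradients([0.0, 1.0], [3.0, 7.0], [0.0], [2.0])
```

Here the model coefficients (c, g, h) = (3, 2, 4) are recovered from two function evaluations and one derivative, where a pure interpolation model would need three function evaluations.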