
dc.contributor.author: Arnø, Mikkel Leite
dc.contributor.author: Godhavn, John-Morten
dc.contributor.author: Aamo, Ole Morten
dc.date.accessioned: 2021-09-07T05:25:39Z
dc.date.available: 2021-09-07T05:25:39Z
dc.date.created: 2021-01-17T12:57:53Z
dc.date.issued: 2020
dc.identifier.isbn: 978-1-61399-718-5
dc.identifier.uri: https://hdl.handle.net/11250/2773873
dc.description.abstract: During drilling operations, maintaining a desired downhole pressure between pressure margins is crucial to avoid damage to the formation and the well. The process is highly nonlinear, changes with depth, and every section in every well is different. Standard solutions with PID controllers are widely accepted for this purpose, although methods such as deep reinforcement learning (DRL) could be investigated as an alternative approach. A smooth-update deep Q-learning algorithm is used to train an agent embedded in a managed pressure drilling system. The aim is to control bottomhole pressure (BHP) during pipe connections by use of a topside choke valve with nonlinear characteristics. The agent is trained on previously gathered data from situations featuring step changes in pressure setpoint and changing mud flows, all at various well depths. After training, the agent is tasked with controlling BHP during connection, demonstrated here on a numerically simulated low-order hydraulics model. Through episodic training, it becomes clear that the agent, purely through interaction with the environment and without any prerequisite knowledge of the system dynamics beyond the reward design, converges to an optimal control policy. The trained agent is then tested on pipe connections at well depths at the lower and upper bounds of the training data. The pipe connection scenario presents changing operating conditions: mud flows vary, and frictional pressure losses increase with depth. Still, the results show the agent's ability to track a pressure setpoint at various depths under the changing conditions present during connection, while seamlessly incorporating controller constraints. This approach has several advantages, among them eliminating the need to develop a complex dynamic model of the process. It is also applicable to linear and nonlinear systems, deterministic and stochastic systems, and both lower- and higher-level decision-making. These methods could possibly also be applied to other key challenges in drilling, such as rate of penetration (ROP) optimization or autonomous directional drilling. [en_US]
dc.language.iso: eng [en_US]
dc.publisher: Society of Petroleum Engineers [en_US]
dc.relation.ispartof: Proceedings of SPE Norway Subsurface Conference
dc.title: Deep Reinforcement Learning Applied to Managed Pressure Drilling [en_US]
dc.type: Chapter [en_US]
dc.description.version: publishedVersion [en_US]
dc.identifier.doi: 10.2118/200757-MS
dc.identifier.cristin: 1872641
dc.description.localcode: This article will not be available due to copyright restrictions (c) 2020 by Society of Petroleum Engineers [en_US]
cristin.ispublished: true
cristin.fulltext: original


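The abstract above describes a smooth-update deep Q-learning agent acting on a topside choke. Below is a minimal sketch of one common reading of "smooth update", namely a DQN whose target network tracks the online network by Polyak averaging rather than periodic hard copies; the paper may define the term differently. Everything here, including the state layout, the discretized choke actions, and the hyperparameters GAMMA and TAU, is a hypothetical illustration, not the authors' code.

# Minimal sketch (not the paper's implementation) of deep Q-learning with
# "smooth" target updates, read here as Polyak averaging of the target network.
# State layout, action discretization, network size, and hyperparameters are
# all assumptions for illustration.
import random
from collections import deque

import torch
import torch.nn as nn

STATE_DIM = 4    # assumed state: BHP error, mud flow rate, depth, choke opening
N_ACTIONS = 9    # assumed discretization of choke-valve moves
GAMMA = 0.99     # discount factor (assumed)
TAU = 0.005      # smooth-update rate: target net slowly tracks the online net

def make_q_net():
    return nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                         nn.Linear(64, 64), nn.ReLU(),
                         nn.Linear(64, N_ACTIONS))

q_net = make_q_net()
target_net = make_q_net()
target_net.load_state_dict(q_net.state_dict())
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=100_000)  # (state, action, reward, next_state, done)

def select_action(state, epsilon):
    # Epsilon-greedy choice over the discretized choke actions.
    if random.random() < epsilon:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return int(q_net(torch.as_tensor(state, dtype=torch.float32)).argmax())

def train_step(batch_size=64):
    if len(replay) < batch_size:
        return
    batch = random.sample(replay, batch_size)
    s, a, r, s2, done = map(torch.as_tensor, zip(*batch))
    s, s2, r = s.float(), s2.float(), r.float()
    # One-step TD target bootstrapped from the slowly moving target network.
    q = q_net(s).gather(1, a.long().unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = r + GAMMA * (1 - done.float()) * target_net(s2).max(1).values
    loss = nn.functional.mse_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    # Smooth (Polyak) update instead of periodic hard copies:
    # theta_target <- (1 - TAU) * theta_target + TAU * theta_online
    with torch.no_grad():
        for p_t, p in zip(target_net.parameters(), q_net.parameters()):
            p_t.mul_(1 - TAU).add_(TAU * p)

if __name__ == "__main__":
    # Demo only: fill the buffer with random transitions and run one update.
    for _ in range(256):
        s = torch.rand(STATE_DIM).tolist()
        replay.append((s, random.randrange(N_ACTIONS), random.random(), s, False))
    train_step()
    print(select_action([0.0] * STATE_DIM, epsilon=0.1))

With TAU well below one, the bootstrap target drifts slowly, which stabilizes learning. In this reading, the choke nonlinearity, pressure margins, and controller constraints the abstract mentions would enter only through the environment's state, reward, and action discretization, not through any explicit process model.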