Understanding and Analyzing Emergent Misbehavior on Cyber-Physical Systems.
Doctoral thesis
Permanent link: https://hdl.handle.net/11250/3064543
Publication date: 2023
Abstract
In a constantly evolving world, technologies such as artificial intelligence (AI) drive digital transformation through their decision-making capabilities. Cyber-physical systems (CPSs) are becoming more complex, and advanced adversaries are contriving sophisticated new ways to pursue their missions. As modern systems grow more complex, the attack surface expands and cyber risks become harder to manage. In response, improving the safety and security of these systems is of the utmost importance.
The challenges are twofold: (i) advances in CPSs; and (ii) an evolving cyber threat landscape that entails new challenges for risk identification. On the one hand, CPSs are likely to exhibit emergent behaviors; when the system is considered as a whole, success and failure are increasingly understood as emergent rather than resultant properties. On the other hand, attack strategies affecting CPSs are constantly evolving, especially as AI can be used as a malicious tool by adversaries, which makes the situation even more challenging. Therefore, a sufficient understanding and analysis of potential risks should be at the forefront of relevant operations to ensure a safe transition to a connected world.
This thesis aims to provide knowledge on the investigation of emergent misbehavior in the context of CPSs, with a focus on the reciprocal influence of AI on cyber threat behaviors. It develops new knowledge, methods, and guidance comprising a set of processes and practices to identify potential risks, as well as suggestions to address them. The thesis provides systematic studies on four sets of aspects: (1) an investigation of the emergent risk concept; (2) new knowledge and a framework for mapping AI offensive capabilities; (3) new knowledge and a taxonomy of machine learning (ML)-based sensor data deception approaches, a particular type of ML-based attack strategy that targets the sensor data of CPSs; and (4) a methodology for safety and security co-analysis of CPSs that addresses the risk of ML-based sensor data deception. The resulting contributions provide an improved understanding of the changing threats and risks, and propose ways to prevent undesirable emergent misbehaviors in CPSs. Finally, this thesis discusses research and practical implications and sheds light on avenues for future research.