Show simple item record

dc.contributor.advisor: Yang, Bian
dc.contributor.advisor: Fierrez, Julian
dc.contributor.advisor: Busch, Christoph
dc.contributor.author: Hassanpour, Ahmad
dc.date.accessioned: 2024-09-10T11:14:36Z
dc.date.available: 2024-09-10T11:14:36Z
dc.date.issued: 2024
dc.identifier.isbn: 978-82-326-8285-0
dc.identifier.issn: 2703-8084
dc.identifier.uri: https://hdl.handle.net/11250/3151115
dc.description.abstract: In the digital age, online social networks (OSNs) have become essential platforms for communication, information sharing, and social interaction. However, this widespread use has introduced significant privacy concerns. Users often share personal information without fully understanding the potential risks, which are compounded by the complex and dynamic nature of digital interactions. These interactions span multiple platforms, making privacy management increasingly challenging. The ease with which personal data can be accessed and potentially misused highlights the urgent need for more effective privacy protection mechanisms. One critical aspect of this problem is the static nature of current privacy settings on most OSNs, which require manual adjustment. Users' privacy needs and preferences evolve over time and with varying types of content, but current systems do not adequately accommodate these changes, often resulting in a mismatch between the desired level of privacy and the settings actually applied. There is a clear need for adaptive privacy settings that respond to users' changing requirements more intuitively and efficiently.

The primary goal of this thesis is to develop more accurate methods for measuring privacy leakage and to explore how machine learning (ML) models can mitigate, intensify, or alert users about potential privacy breaches. It aims to develop a conceptual framework for measuring privacy leakage on OSNs, helping users understand their privacy settings and the associated risks. This framework also provides a basis for users to make more informed decisions about their data-sharing practices. This research also examines the role of data linkability in privacy breaches within and across different OSNs. By analyzing how the linkability of data, where separate pieces of information can be connected to form a comprehensive user profile, affects privacy leakage scores, this work provides valuable insights into managing privacy risks.

The study further investigates methods for continuously adjusting user privacy settings in real time without compromising privacy, evaluating the effectiveness of deep learning models in automating privacy settings. The introduction of advanced technologies such as ML into privacy management offers both opportunities and challenges. Generative AI (GAI) has recently gained popularity and is capable of uncovering, and potentially exposing, sensitive information shared on OSNs. GAI models can extrapolate and recreate sensitive data, raising security and privacy concerns. As these models become more advanced, they enhance user experience by providing personalized content, but they also pose risks of unintentional data leakage. This duality necessitates a thorough exploration of the implications of GAI for data privacy and security. This thesis explores the potential for private information disclosure through public data and GAI technologies, with a focus on facial features. It addresses the "eyes-to-face" problem, where only the eyes are visible, and assesses the potential for GAI technologies to reconstruct the rest of the face, thereby compromising individual privacy. This analysis highlights the privacy vulnerabilities inherent in sharing partial biometric data and proposes methodologies to mitigate such risks. Moreover, while ML models can provide more sophisticated and adaptable privacy controls, they also introduce issues related to transparency, fairness, and ethics. The algorithms behind these models can be opaque, making it difficult for users to understand how their privacy is managed. Additionally, there is a risk that these models perpetuate biases, leading to unfair or discriminatory outcomes. We investigated the intricate interactions between privacy, accuracy, and fairness in image classification tasks. Our study highlights a consequential trade-off between privacy or utility and fairness when applying generalization techniques.

In conclusion, the increasing use of OSNs has significantly amplified privacy concerns due to the dynamic and multi-platform nature of digital interactions. This thesis underscores the inadequacy of static privacy settings and the necessity for adaptive mechanisms that cater to the evolving needs of users. By developing a conceptual framework for measuring privacy leakage and examining the role of data linkability, this research provides critical insights into managing privacy risks. Furthermore, the exploration of advanced technologies like machine learning and generative AI reveals both the potential and the challenges of these tools in enhancing privacy controls. This work emphasizes the importance of balancing privacy, accuracy, and fairness, proposing innovative methods to mitigate privacy vulnerabilities, especially in the context of biometric data and image classification. The findings advocate for more transparent, fair, and ethical approaches to privacy management, paving the way for safer digital environments. [en_US]
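The abstract's notion of a privacy leakage score that rises with data linkability can be illustrated with a toy calculation. The attribute names, sensitivity and visibility weights, and the linkability multiplier below are all hypothetical, chosen only to show the shape of such a metric; they are not the PriMe framework's actual model.

```python
from dataclasses import dataclass

@dataclass
class Attribute:
    """One piece of shared profile information."""
    name: str
    sensitivity: float  # 0..1, how private the attribute inherently is
    visibility: float   # 0..1, audience reach under current privacy settings

def leakage_score(attrs, linkable_pairs=0):
    """Toy leakage score: mean of sensitivity * visibility over all shared
    attributes, inflated by a hypothetical multiplier for each pair of
    attributes that can be linked across platforms, capped at 1.0."""
    if not attrs:
        return 0.0
    base = sum(a.sensitivity * a.visibility for a in attrs) / len(attrs)
    linkability_boost = 1.0 + 0.1 * linkable_pairs  # assumed, not from PriMe
    return min(1.0, base * linkability_boost)

profile = [
    Attribute("birthdate", sensitivity=0.8, visibility=1.0),
    Attribute("workplace", sensitivity=0.5, visibility=0.6),
    Attribute("home city", sensitivity=0.7, visibility=0.3),
]

# Score without, then with, cross-platform linkage: the linkable profile
# leaks strictly more even though the individual settings are unchanged.
print(leakage_score(profile))
print(leakage_score(profile, linkable_pairs=3))
```

The point of the sketch is the second call: linking otherwise separate attributes into one profile raises the score without any single privacy setting changing, which is exactly the linkability effect the thesis measures.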
dc.language.iso: eng [en_US]
dc.publisher: NTNU [en_US]
dc.relation.ispartofseries: Doctoral theses at NTNU;2024:348
dc.relation.haspart: Hassanpour, Ahmad; Yang, Bian. PriMe: A Novel Privacy Measuring Framework for Online Social Networks. IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM). https://doi.org/10.1109/ASONAM55673.2022.10068701 [en_US]
dc.relation.haspart: Hassanpour, Ahmad; Utsash, Masrur Masqub; Yang, Bian. The Impact of Linkability On Privacy Leakage. ASONAM '23: Proceedings of the IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining, pp. 364-370. https://doi.org/10.1145/3625007.3627832 This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/) [en_US]
dc.relation.haspart: Hassanpour, Ahmad; Daryani, Amir Etefaghi; Mirmahdi, Mahdieh; Raja, Kiran; Yang, Bian; Busch, Christoph; Fierrez, Julian. E2F-GAN: Eyes-to-Face Inpainting via Edge-Aware Coarse-to-Fine GANs. IEEE Access 2022; Volume 10, pp. 32406-32417. https://doi.org/10.1109/ACCESS.2022.3160174 This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/) [en_US]
dc.relation.haspart: Hassanpour, Ahmad; Jamalbafrani, Fatemeh; Yang, Bian; Raja, Kiran; Veldhuis, Raymond Nicolaas Johan; Fierrez, Julian. E2F-Net: Eyes-to-Face Inpainting via StyleGAN Latent Space. Pattern Recognition 2024; Volume 152. https://doi.org/10.1016/j.patcog.2024.110442 This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/) [en_US]
dc.relation.haspart: Hassanpour, Ahmad; Moradikia, Majid; Yang, Bian; Abdelhadi, Ahmed; Busch, Christoph; Fierrez, Julian. Differential Privacy Preservation in Robust Continual Learning. IEEE Access 2022; Volume 10, pp. 24273-24287. https://doi.org/10.1109/ACCESS.2022.3154826 This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/) [en_US]
dc.relation.haspart: Hassanpour, Ahmad; Zarei, Amir; Mallat, Khawla; Oliveira, AS de; Yang, Bian. The Impact of Generalization Techniques on the Interplay Among Privacy, Utility, and Fairness in Image Classification [en_US]
dc.title: Modelling and Preserving Privacy in Online Social Networks [en_US]
dc.type: Doctoral thesis [en_US]
dc.subject.nsi: VDP::Technology: 500::Information and communication technology: 550 [en_US]


Associated file(s)


This item appears in the following collection(s)
