TrustLab Projects (Security4AI, AI4Security)



Trustworthy and Responsible AI

Trustworthy and Responsible AI is about building AI systems that operate reliably and ethically, in line with societal values and expectations. Ensuring that these systems behave as expected under varied conditions is essential to minimize the risk of unintended outcomes. Equally important is controlling how AI models are used: they must adhere to explicit guidelines and remain confined to their intended purposes, so that they are not misapplied in ways that raise ethical or legal concerns. By integrating these principles, AI systems can be developed and deployed in a manner that is both reliable and responsible.
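
To make the idea of controlled use concrete, the sketch below shows one minimal way such a guardrail could look in practice: every request to a model declares its purpose and is checked against an explicit allow-list before the model is invoked. All names here (ALLOWED_PURPOSES, ModelRequest, serve_request, run_model) are hypothetical illustrations, not a TrustLab API.

from dataclasses import dataclass

# Hypothetical allow-list of approved purposes for this model deployment.
ALLOWED_PURPOSES = {"summarization", "translation", "code_review"}

@dataclass
class ModelRequest:
    purpose: str  # the declared intended use of this call
    prompt: str

class PolicyViolation(Exception):
    """Raised when a request falls outside the model's approved uses."""

def run_model(prompt: str) -> str:
    # Stand-in for the actual model invocation.
    return f"(model output for: {prompt})"

def serve_request(request: ModelRequest) -> str:
    # Reject requests whose declared purpose is not approved, so the model
    # cannot be silently repurposed.
    if request.purpose not in ALLOWED_PURPOSES:
        raise PolicyViolation(f"purpose '{request.purpose}' is not approved")
    return run_model(request.prompt)

print(serve_request(ModelRequest(purpose="summarization", prompt="...")))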


Privacy-preserving Machine Learning

Privacy-preserving machine learning addresses the critical need to protect sensitive data in machine learning applications. As models are increasingly deployed in sensitive domains such as healthcare and finance, threats such as membership inference, in which an attacker determines whether a specific record was used in training, and gradient inversion, which reconstructs input data from model gradients, pose serious risks. Model extraction attacks can further replicate a model's functionality, compromising both data privacy and intellectual property. Privacy-preserving techniques aim to mitigate these risks so that the benefits of machine learning can be realized without sacrificing privacy.
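
As a concrete illustration of the first of these threats, the sketch below implements a simple loss-threshold membership-inference attack in the style of Yeom et al. (2018): examples on which the target model achieves unusually low loss are guessed to be training members. The synthetic confidence vectors and the threshold value are assumptions for illustration only, not an attack on any specific system.

import numpy as np

def per_example_loss(probs: np.ndarray, labels: np.ndarray) -> np.ndarray:
    # Cross-entropy of each example's true class; probs has shape (n, classes).
    eps = 1e-12
    return -np.log(probs[np.arange(len(labels)), labels] + eps)

def membership_guess(probs, labels, threshold):
    # Low loss => the model "remembers" the example => guess member (1).
    return (per_example_loss(probs, labels) < threshold).astype(int)

# Toy demonstration with synthetic confidences: members are typically
# predicted with higher confidence on their true label than non-members.
rng = np.random.default_rng(0)
member_probs = rng.dirichlet([8, 1, 1], size=100)
nonmember_probs = rng.dirichlet([3, 1, 1], size=100)
labels = np.zeros(100, dtype=int)  # true class is index 0 in this toy setup

print("true positive rate:", membership_guess(member_probs, labels, 0.5).mean())
print("false positive rate:", membership_guess(nonmember_probs, labels, 0.5).mean())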


Security and Privacy Compliance

As data protection laws such as the GDPR and CCPA grow more complex and stringent, privacy compliance has become a core challenge across many industries, particularly in sectors that handle large volumes of personal data. Ensuring that applications meet regulatory requirements while safeguarding user privacy is therefore a significant open problem. TrustLab has conducted in-depth studies of both the traditional Android ecosystem and the emerging VPA (virtual personal assistant) market, analyzing the quality of official privacy documents, such as privacy policies and privacy change lists, and extracting the entities they disclose. We have also systematically tested and evaluated applications' functionality and data collection practices during real-world operation. Our findings provide concrete guidance and practical insights for improving privacy compliance.
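
The sketch below illustrates, in miniature, the kind of policy-versus-behavior consistency check this line of work involves: data-type entities are extracted from policy text with a small keyword lexicon and compared against the data types observed at runtime. The lexicon, the sample policy, and the observed-traffic set are all illustrative assumptions rather than TrustLab's actual pipeline.

import re

# Tiny lexicon mapping surface phrases to canonical data-type entities.
DATA_TYPE_LEXICON = {
    r"\be-?mail address(es)?\b": "email_address",
    r"\b(precise |approximate )?location\b": "location",
    r"\bcontacts?\b": "contacts",
    r"\bdevice identifiers?\b|\bIMEI\b": "device_id",
}

def extract_declared_entities(policy_text: str) -> set[str]:
    # Collect every data-type entity the policy text mentions.
    declared = set()
    for pattern, entity in DATA_TYPE_LEXICON.items():
        if re.search(pattern, policy_text, flags=re.IGNORECASE):
            declared.add(entity)
    return declared

policy = "We collect your email address and approximate location to provide services."
observed_at_runtime = {"email_address", "location", "device_id"}  # e.g., from traffic analysis

declared = extract_declared_entities(policy)
print("declared:", sorted(declared))
print("potential compliance gaps:", sorted(observed_at_runtime - declared))  # -> ['device_id']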


AI for Software Engineering

TrustLab has conducted in-depth, systematic research on Web-based collaboration platforms, deep learning libraries, and emerging third-party applications that integrate LLMs. Our defect testing focuses in particular on how these systems behave in complex scenarios involving permission invocation, memory consumption, numerical computation, data transmission, and secure API calls. This research not only uncovers high-risk vulnerabilities hidden within these systems but also yields specific recommendations for improvement, giving developers and engineers concrete guidance for optimizing system design and enhancing software security and robustness.
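
As a small illustration of this style of defect testing, the sketch below differentially tests one numerical operation: the same softmax computation is run through a deep learning library (PyTorch is used here purely as an example) and through a NumPy reference oracle, and any divergence beyond tolerance is reported. The choice of operation, input distribution, and tolerance are assumptions for illustration, not the specific test harness used in our studies.

import numpy as np
import torch

def reference_softmax(x: np.ndarray) -> np.ndarray:
    # Numerically stable softmax serving as the oracle.
    shifted = x - x.max(axis=-1, keepdims=True)
    e = np.exp(shifted)
    return e / e.sum(axis=-1, keepdims=True)

def check_softmax(x: np.ndarray, atol: float = 1e-6) -> bool:
    # Compare the library under test against the reference implementation.
    expected = reference_softmax(x)
    actual = torch.softmax(torch.from_numpy(x), dim=-1).numpy()
    return np.allclose(actual, expected, atol=atol)

rng = np.random.default_rng(42)
divergences = 0
for trial in range(100):
    # Include extreme magnitudes, where numerical defects tend to surface.
    scale = rng.choice([1.0, 1e3, 1e6])
    x = rng.standard_normal((4, 8)) * scale
    if not check_softmax(x):
        divergences += 1
        print(f"divergence on trial {trial} (scale={scale})")
print(f"{divergences} divergences in 100 trials")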