Cognitive Security Technologies

Artificial Intelligence and IT Security

The Cognitive Security Technologies department conducts research at the intersection of artificial intelligence (AI) and IT security.
The focus is on two aspects:

Applying AI methods in IT security

State-of-the-art IT systems are characterized by rapidly growing complexity. Current and future information and communication technology introduces new, previously unexpected challenges: the increasing connectivity of even the smallest communicating units and their merging into the Internet of Things, the global connection of critical infrastructures to unsecured communication networks, and the protection of digital identities. People are faced with the challenge of ensuring the security and stability of all these systems.

To keep pace with this rapid development, IT security must rethink and advance automation. The Cognitive Security Technologies research department develops semi-automated security solutions that use AI to support humans in investigating and securing security-critical systems.

Security of machine learning and AI algorithms

Just like conventional IT systems, AI systems can be attacked. For example, adversarial examples make it possible to manipulate AI-based facial recognition, allowing attackers to gain unauthorized access to sensitive access-control systems. Similar attack scenarios affect the field of autonomous driving, where humans must rely on the robustness and stability of assistance systems.
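
To make the threat concrete, here is a minimal sketch of the fast gradient sign method (FGSM), one of the simplest ways to craft an adversarial example. This is an illustration in PyTorch, not the department's tooling; the model and inputs are placeholders:

```python
# Minimal FGSM sketch (PyTorch): nudge an input in the direction that
# increases the classifier's loss, so a human sees no difference but the
# model's prediction flips.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, label, epsilon=0.03):
    """Return an adversarially perturbed copy of the input batch x."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()   # one step along the loss-gradient sign
    return x_adv.clamp(0, 1).detach()     # keep pixel values in a valid range
```

Even a small perturbation budget (epsilon) is often enough to change the predicted identity while leaving the image visually unchanged.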

One field of research of the Cognitive Security Technologies department at Fraunhofer AISEC is the exploration of such vulnerabilities in AI algorithms and of solutions to fix them. In addition, the department offers tests to harden such AI systems.

GPU clusters with high processing power

Deep Learning and AI require very high processing power. Fraunhofer AISEC therefore maintains several GPU clusters that are specifically optimized for Deep Learning. These resources are continuously upgraded with the latest technology. This provides the ability to train the latest models quickly and efficiently to keep development cycles short.
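
As a simple illustration of how such hardware is put to work (a sketch assuming PyTorch with CUDA devices, not a description of the clusters' actual setup), a model can be spread across all visible GPUs with a single wrapper:

```python
# Minimal data-parallel sketch: replicate the model across all visible GPUs
# so that each training batch is split between them.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 10))
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)  # splits each input batch across GPUs
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)
```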


Offerings

Our goal is to systematically improve the security of systems and products in close cooperation with our partners and customers. In doing so, we utilize the capabilities of state-of-the-art AI algorithms to comprehensively evaluate system security and to sustainably maintain reliability and robustness throughout the entire lifecycle.

Evaluate Security

  • Evaluating AI-based security products, such as facial recognition cameras or audio systems for speech synthesis, speech recognition, and voice-based user recognition
  • Explainability of AI methods (Explainable AI)
  • Hardware reverse engineering and penetration testing using artificial intelligence, e.g., side-channel attacks on embedded devices
  • Assessing the correctness of datasets, both against random errors (such as incorrect annotations) and against attacks (adversarial data poisoning); see the sketch after this list
  • Evaluating machine learning (ML) training pipelines: examining the correctness of the applied preprocessing methods, algorithms, and metrics
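
As a minimal illustration of the dataset-correctness item above (a generic sketch, not the department's actual method), potentially mislabeled instances can be surfaced by comparing each label against out-of-fold predictions of a simple model:

```python
# Sketch: flag potentially mislabeled instances by training a model in
# cross-validation and reporting samples whose out-of-fold prediction
# confidently disagrees with the given label.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict

def suspected_label_errors(X, y, threshold=0.9):
    """X: feature matrix; y: integer class labels. Returns suspect indices."""
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    proba = cross_val_predict(clf, X, y, cv=5, method="predict_proba")
    predicted = proba.argmax(axis=1)
    confidence = proba.max(axis=1)
    return np.where((predicted != y) & (confidence >= threshold))[0]
```

Flagged indices are candidates for manual review; the same idea extends to poisoning detection, where suspicious samples tend to cluster rather than being spread randomly.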


Design Security 

  • Implementation and further development of approaches from the field of privacy-preserving machine learning: training models on third-party datasets while maintaining the confidentiality of the datasets or models (see the sketch after this list)
  • Authentication and Human Machine Interface (HMI) Security
  • Support in the evaluation of security log files using Natural Language Processing
  • Information aggregation for system analysis and monitoring using ML-based analysis of data streams, log files and other data sources
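
To give an idea of the privacy-preserving training mentioned above, here is a simplified federated-averaging sketch. It assumes a plain PyTorch model whose state consists only of floating-point parameters; real deployments add safeguards such as secure aggregation or differential privacy:

```python
# Federated-averaging sketch: each data owner trains the model locally and
# shares only the resulting weights; raw datasets never leave their owners.
import copy
import torch
import torch.nn.functional as F

def federated_round(global_model, client_loaders, lr=0.01):
    client_states = []
    for loader in client_loaders:            # one DataLoader per data owner
        local = copy.deepcopy(global_model)
        opt = torch.optim.SGD(local.parameters(), lr=lr)
        for x, y in loader:                  # local training pass
            opt.zero_grad()
            F.cross_entropy(local(x), y).backward()
            opt.step()
        client_states.append(local.state_dict())
    # Average the parameters across clients to form the new global model.
    avg = {k: torch.stack([s[k] for s in client_states]).mean(dim=0)
           for k in client_states[0]}
    global_model.load_state_dict(avg)
    return global_model
```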


Maintain Security

  • Conception and prototyping of performance-aware, AI-assisted anomaly detection (illustrated by the sketch after this list)
  • Conception and prototyping of AI-assisted fraud detection
  • Situational awareness using imagery, text, and audio (including open source intelligence)
  • Development of algorithms in predictive security
  • Creation of automated solutions for implementing GDPR (DSGVO) requirements
  • Seminars and training courses on AI for IT security
  • Development of detection algorithms for deepfake materials
  • Implementation of AI-based elements for IP protection
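
The anomaly detection item at the top of this list can be illustrated with a classic reconstruction-based detector (a toy sketch in PyTorch, not one of the department's published methods): an autoencoder trained only on normal data reconstructs anomalies poorly, so the reconstruction error serves as an anomaly score.

```python
# Toy reconstruction-based anomaly detector: train the autoencoder on normal
# data only; records with a high reconstruction error are flagged.
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, n_features, bottleneck=8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU(),
                                     nn.Linear(32, bottleneck))
        self.decoder = nn.Sequential(nn.Linear(bottleneck, 32), nn.ReLU(),
                                     nn.Linear(32, n_features))

    def forward(self, x):
        return self.decoder(self.encoder(x))

def anomaly_scores(model, x):
    """Per-sample reconstruction error; higher means more anomalous."""
    with torch.no_grad():
        return ((model(x) - x) ** 2).mean(dim=1)
```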

Expertise

Fraunhofer AISEC is a national leader in the field of hardening and robustness analysis of AI methods. Through high-profile publications at international conferences and close cooperation with our industrial partners, the Cognitive Security Technologies department understands the current challenges and provides corresponding solutions.

For example, one of the main research areas is the development of a testing procedure that evaluates AI models for their vulnerability and derives appropriate key performance indicators (KPIs). This allows model owners to estimate the vulnerability of their own systems, comparable to classical penetration tests. In a second step, the models can then be hardened accordingly.
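
One plausible KPI of this kind (a hedged sketch, not the actual testing procedure) is robust accuracy: the share of inputs still classified correctly under attacks of increasing strength, here reusing the fgsm_attack() function from the earlier sketch:

```python
# Hypothetical robustness KPI: accuracy under FGSM perturbations of
# increasing strength (reuses fgsm_attack() defined in the sketch above).
import torch

def robust_accuracy(model, loader, epsilons=(0.0, 0.01, 0.03, 0.1)):
    """Map each perturbation budget to the model's accuracy under attack."""
    kpi = {}
    for eps in epsilons:
        correct, total = 0, 0
        for x, y in loader:
            x_adv = fgsm_attack(model, x, y, epsilon=eps) if eps > 0 else x
            with torch.no_grad():
                correct += (model(x_adv).argmax(dim=1) == y).sum().item()
            total += y.numel()
        kpi[eps] = correct / total
    return kpi
```

A steep drop at small epsilon indicates a brittle model; after hardening, for example via adversarial training, the curve should flatten.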

The Cognitive Security Technologies department has in-depth expertise in the following areas:

  • Adversarial Machine Learning
  • Anomaly Detection
  • Natural Language Processing
  • AI-based fuzzing
  • User Behaviour Analysis
  • Analysis of Encrypted Network Traffic
  • AI for Embedded Systems
  • General Machine Learning

Deepfakes: AI systems reliably expose manipulated audio and video

Artificial intelligence (AI) brings a wealth of new opportunities, but it also entails new risks, one of which is the creation of “deepfakes”: deceptively real but manipulated video and audio recordings that can only be created using AI. The risks and challenges associated with deepfakes are considerable, not only for the media landscape but also for companies and individuals. Luckily, AI also offers a way to reliably expose deepfakes.

IT experts in the Cognitive Security Technologies (CST) research department are hard at work creating systems for the reliable, automated recognition of deepfakes. They are also investigating methods to improve the robustness of systems that evaluate video and audio material. 
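
To illustrate the general shape of such a system (a deliberately tiny, untrained sketch; real detectors are far more elaborate and must be trained on labeled genuine and synthetic audio):

```python
# Toy audio deepfake detector: mel-spectrogram features followed by a small
# convolutional classifier that outputs P(deepfake). Untrained as written;
# it only shows the pipeline shape, not a working detector.
import torch
import torch.nn as nn
import torchaudio

mel = torchaudio.transforms.MelSpectrogram(sample_rate=16000, n_mels=64)

classifier = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d((8, 8)), nn.Flatten(),
    nn.Linear(16 * 8 * 8, 2),  # logits: [genuine, deepfake]
)

def deepfake_probability(waveform):
    """waveform: tensor of shape (1, samples), sampled at 16 kHz."""
    features = mel(waveform).unsqueeze(0)   # -> (batch=1, 1, n_mels, time)
    with torch.no_grad():
        return classifier(features).softmax(dim=1)[0, 1].item()
```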

Learn more in our Deepfake Spotlight.

Other Projects


ECOSSIAN

Detection and warning systems for critical infrastructures.


CyberFactory#1

Design, development, integration, and demonstration of highly connected and resilient industrial production.


SeCoIIA

AI-based protection of highly connected, fully automated industrial production.


BayQS

In the Bavarian Competence Center for Quantum Security and Data Science, researchers are studying relevant software issues in the context of quantum computing and developing solutions to help the industry identify the advantages that quantum methods offer for practical problems.

Publications

  • Ching-Yu Kao, Iheb Ghanmi, Houcemeddine Ben Ayed, Ayush Kumar, Konstantin Böttinger: »Near Real-Time Detection and Rectification of Adversarial Patches«. In: Future of Information and Communication Conference. Springer, 2024, pp. 174–196.
  • Nicolas M. Müller, Nick Evans, Hemlata Tak, Philip Sperl, Konstantin Böttinger: »Harder or Different? Understanding Generalization of Audio Deepfake Detection«. In: Interspeech 2024 (2024).
  • Nicolas M. Müller, Piotr Kawa, Wei Herng Choong, Edresson Casanova, Eren Gölge, Thorsten Müller, Piotr Syga, Philip Sperl, Konstantin Böttinger: »MLAAD: The Multi-Language Audio Anti-Spoofing Dataset«. In: International Joint Conference on Neural Networks (IJCNN), 2024.
  • Nicolas M. Müller, Piotr Kawa, Shen Hu, Matthias Neu, Jennifer Williams, Philip Sperl, Konstantin Böttinger: »A New Approach to Voice Authenticity«. In: Interspeech 2024 (2024).
  • Nicolas M. Müller, Simon Roschmann, Shahbaz Khan, Philip Sperl, Konstantin Böttinger: »Shortcut Detection with Variational Autoencoders«. In: International Joint Conference on Neural Networks (IJCNN), 2024.

  • Nicolas M. Müller, Maximilian Burgert, Pascal Debus, Jennifer Williams, Philip Sperl, and Konstantin Böttinger. Protecting Publicly Available Data With Machine Learning Shortcuts. In: BMVC 2023.
  • N. Müller, J. Jacobs, J. Williams, K. Böttinger. “Localized Shortcut Removal”. In: 2nd XAI4CV Workshop at CVPR. 2023.

  • Müller, N. M., Czempin, P., Dieckmann, F., Froghyar, A. and Böttinger, K. “Does Audio Deepfake Detection Generalize?” In: Interspeech (2022).
  • Müller, N. M., Dieckmann, F. and Williams J. “Attacker Attribution of Audio Deepfakes”. In: Interspeech (2022).
  • Sava, P.-A., Schulze, J.-Ph., Sperl, P., Böttinger, K. "Assessing the Impact of Transformations on Physical Adversarial Attacks." Proceedings of the 15th ACM Workshop on Artificial Intelligence and Security (AISec 2022).
  • Schulze, J.-Ph., Sperl, P., Radutoiu, A., Sagebiel, C., Böttinger, K. "R2-AD2: Detecting Anomalies by Analysing the Raw Gradient." In European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML-PKDD 2022).
  • Schulze, J.-Ph., Sperl, P., Böttinger, K. "Double-Adversarial Activation Anomaly Detection: Adversarial Autoencoders are Anomaly Generators." In International Joint Conference on Neural Networks (IJCNN 2022).
  • Schulze, J.-Ph., Sperl, P., Böttinger, K. "Anomaly Detection by Recombining Gated Unsupervised Experts." In International Joint Conference on Neural Networks (IJCNN 2022).
  • Kao, C., Chen, J., Pizzi, K., Böttinger, K. "Rectifying Adversarial Inputs Using XAI Techniques." Proceedings of the European Association for Signal Processing 2022 (EURASIP 2022).
  • Kao, C., Wan, H., Pizzi, K., Böttinger, K. "Real or Fake? A Practical Method for Detecting Tempered Images." Proceedings of the International Image Processing Applications and Systems Conference 2022 (IPAS 2022). (Best session paper award)
  • Choosaksakunwiboon, S., Pizzi, K., & Kao, C. Y. Comparing Unsupervised Detection Algorithms for Audio Adversarial Examples. In International Conference on Speech and Computer (pp. 114-127). Springer, Cham. (2022).
  • Müller, N. M., Markert, K., Böttinger, K. "Human Perception of Audio Deepfakes". ACM Multimedia. (2022).

  • Karla Markert, Donika Mirdita, and Konstantin Böttinger. “Language Dependencies in Adversarial Attacks on Speech Recognition Systems”. In: Proc. 2021 ISCA Symposium on Security and Privacy in Speech Communication. 2021, pp. 25–31. DOI: 10.21437/SPSC.2021-6.
  • Karla Markert, Romain Parracone, Mykhailo Kulakov, Philip Sperl, Ching-Yu Kao, and Konstantin Böttinger. “Visualizing Automatic Speech Recognition – Means for a Better Understanding?” In: Proc. 2021 ISCA Symposium on Security and Privacy in Speech Communication. 2021, pp. 14–20. DOI: 10.21437/SPSC.2021-4.
  • N. Müller and K. Böttinger. “Adversarial Vulnerability of Active Transfer Learning”. In: Symposium on Intelligent Data Analysis 2021. 2021.
  • Nicolas Müller, Franziska Dieckmann, Pavel Czempin, Roman Canals, and Konstantin Böttinger. “Speech is Silver, Silence is Golden: What do ASV-spoof-trained Models Really Learn?” In: ASV-Spoof 2021 Workshop. 2021.
  • Jan-Philipp Schulze, Philip Sperl, and Konstantin Böttinger. “DA3G: Detecting Adversarial Attacks by Analysing Gradients”. In: Computer Security – ESORICS 2021. Springer, 2021. DOI: 10.1007/978-3-030-88418-5_27.
  • Philip Sperl, Jan-Philipp Schulze, and Konstantin Böttinger. “Activation Anomaly Analysis”. In: Machine Learning and Knowledge Discovery in Databases. Ed. by Frank Hutter, Kristian Kersting, Jefrey Lijffijt, and Isabel Valera. Cham: Springer International Publishing, 2021, pp. 69–84. ISBN: 9783030676612.

  • Tom Dörr, Karla Markert, Nicolas M. Müller, and Konstantin Böttinger. “Towards Resistant Audio Adversarial Examples”. In: 1st Security and Privacy on Artificial Intelligence Workshop (SPAI’20). ACM AsiaCCS. Taipei, Taiwan, 2020. DOI: 10.1145/3385003.3410921.
  • Karla Markert, Donika Mirdita, and Konstantin Böttinger. “Adversarial Attacks on Speech Recognition Systems: Language Bias in Literature”. In: ACM Computer Science in Cars Symposium (CSCS). Online, 2020.
  • Karla Markert, Romain Parracone, Philip Sperl, and Konstantin Böttinger. “Visualizing Automatic Speech Recognition”. In: Annual Computer Security Applications Conference (ACSAC). Online, 2020.
  • N. Müller, D. Kowatsch, and K. Böttinger. “Data Poisoning Attacks on Regression Learning and Corresponding Defenses”. In: 25th IEEE Pacific Rim International Symposium on Dependable Computing (PRDC). 2020.
  • N. Müller, S. Roschmann, and K. Böttinger. “Defending Against Adversarial Denial-of-Service Data Poisoning Attacks”. In: DYNAMICS Workshop, Annual Computer Security Applications Conference (ACSAC). 2020.
  • P. Sperl and K. Böttinger. “Optimizing Information Loss Towards Robust Neural Networks”. In: DYNAMICS Workshop, Annual Computer Security Applications Conference (ACSAC). 2020.
  • Sperl P., Kao C., Chen P., Lei X., Böttinger K. (2020) DLA: Dense-Layer-Analysis for Adversarial Example Detection. 5th IEEE European Symposium on Security and Privacy (EuroS&P 2020).
  • Müller, N., Debus, P., Kowatsch, D. & Böttinger, K. (2019, July). Distributed Anomaly Detection of Single Mote Attacks in RPL Networks. Accepted for publication at 16th International Conference on Security and Cryptography (SECRYPT). Scitepress.
  • Schulze, J.-Ph., Mrowca, A., Ren, E., Loeliger, H.-A., Böttinger, K. (2019, July). Context by Proxy: Identifying Contextual Anomalies Using an Output Proxy. Accepted for publication at The 25th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD ’19).
  • Fischer, F., Xiao, H., Kao, C., Stachelscheid, Y., Johnson, B., Razar, D., Furley, P., Buckley, N., Böttinger, K., Muntean, P., Grossklags, J. (2019) Stack Overflow Considered Helpful! Deep Learning Security Nudges Towards Stronger Cryptography, Proceedings of the 28th USENIX Security Symposium (USENIX Security).
  • Müller, N., Kowatsch, D., Debus, P., Mirdita, D. & Böttinger, K. (2019, September). On GDPR compliance of companies' privacy policies. Accepted for publication at TSD 2019.
  • Müller, N., & Markert, K. (2019, July). Identifying Mislabeled Instances in Classification Datasets. Accepted for publication at IJCNN 2019.
  • Sperl, P., Böttinger, K. (2019). Side-Channel Aware Fuzzing. In Proceedings of the 24th European Symposium on Research in Computer Security (ESORICS). Springer.
  • Engelmann, S., Chen, M., Fischer, F., Kao, C. Y., & Grossklags, J. (2019, January). Clear Sanctions, Vague Rewards: How China’s Social Credit System Currently Defines “Good” and “Bad” Behavior. In Proceedings of the Conference on Fairness, Accountability, and Transparency (pp. 69-78). ACM.

  • Xiao, H. (2017). Adversarial and Secure Machine Learning (Doctoral dissertation, Universität München).
  • Schneider, P., & Böttinger, K. (2018, October). High-Performance Unsupervised Anomaly Detection for Cyber-Physical System Networks. In Proceedings of the 2018 Workshop on Cyber-Physical Systems Security and PrivaCy (pp. 1-12). ACM.
  • Fischer, F., Böttinger, K., Xiao, H., Stransky, C., Acar, Y., Backes, M., & Fahl, S. (2017, May). Stack Overflow Considered Harmful? The Impact of Copy&Paste on Android Application Security. In Security and Privacy (SP), 2017 IEEE Symposium on (pp. 121-136). IEEE.
  • K. Böttinger, R. Singh, and P. Godefroid. Deep Reinforcement Fuzzing. In IEEE Symposium on Security and Privacy Workshops, 2018.
  • Böttinger, K. (2017, May). Guiding a Colony of Black-Box Fuzzers with Chemotaxis. In Security and Privacy Workshops (SPW), 2017 IEEE (pp. 11-16). IEEE.
  • Böttinger, K. (2016). Fuzzing binaries with Lévy flight swarms. EURASIP Journal on Information Security, 2016(1), 28.
  • Böttinger, K., & Eckert, C. (2016, July). Deepfuzz: triggering vulnerabilities deeply hidden in binaries. In International Conference on Detection of Intrusions and Malware, and Vulnerability Assessment (pp. 25-34). Springer, Cham.
  • Böttinger, K. (2016, May). Hunting bugs with Lévy flight foraging. In Security and Privacy Workshops (SPW), 2016 IEEE (pp. 111-117). IEEE.
  • Settanni, G., Skopik, F., Shovgenya, Y., Fiedler, R., Carolan, M., Conroy, D., ... & Haustein, M. (2017). A collaborative cyber incident management system for European interconnected critical infrastructures. Journal of Information Security and Applications, 34, 166-182.

  • Xiao, H., Biggio, B., Nelson, B., Xiao, H., Eckert, C., & Roli, F. (2015). Support vector machines under adversarial label contamination. Neurocomputing, 160, 53-62.
  • Xiao, H., Biggio, B., Brown, G., Fumera, G., Eckert, C., & Roli, F. (2015, June). Is feature selection secure against training data poisoning?. In International Conference on Machine Learning (pp. 1689-1698).
  • Böttinger, K., Schuster, D., & Eckert, C. (2015, April). Detecting Fingerprinted Data in TLS Traffic. In Proceedings of the 10th ACM Symposium on Information, Computer and Communications Security (pp. 633-638). ACM.
  • Schuster, D., & Hesselbarth, R. (2014, June). Evaluation of bistable ring PUFs using single layer neural networks. In International Conference on Trust and Trustworthy Computing (pp. 101-109). Springer, Cham.