
- Introduction (J. MOSS)
Jeff MOSS (founder of DEF CON and Black Hat) highlighted a few points:
- AI is about using predictions.
- AI brings new intellectual property issues. He cited the example of Zoom™, which simply decided that all user interactions could be used to train its ML models.
- Need for authentic data.
- Keynote: Guardians of the AI Era: Navigating the Cybersecurity Landscape of Tomorrow (M. MARKSTEDTER)
Current ML models are insecure, yet people trust them. Labs had working LLMs for many years but kept them in-house; OpenAI's public release started the race.
She presents trends for enterprise:
- The enterprise answer to ChatGPT is Machine Learning as a Service (MLaaS), but these services are not secure.
- The next generation should be multi-modal models (using audio, image, video, text…), more potent than single-modality models such as LLMs.
- Autonomous agents combine the data collection of an LLM with the ability to take decisions and actions. These models will need secure, authorized access to enterprise data. Unfortunately, their actions are non-deterministic.
- Data security for training is critical. It is even more challenging when using real-time data.
She pointed to an interesting paper about poisoning multi-modal data via image or sound.

- Video-Based Cryptanalysis: Recovering Cryptographic Keys from Non-compromised Devices Using Video Footage of a Device’s Power LED (B. NASSI, E. ILUZ)
Often, the power LED sits more or less at the entry of the power-supply circuit, so its intensity is correlated with the device's power consumption.
They recorded only the LED, filling the frame with it, to exploit the camera's rolling shutter: each scan line samples the LED at a slightly different instant, so the effective sampling rate on the LED is far higher than the video frame rate. A clever, "cheap" trick.
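The gain from the rolling-shutter trick can be sketched with illustrative numbers (the function and figures below are assumptions for the demo, not the speakers' exact setup):

```python
# Rolling-shutter sampling: each scan line of a frame is exposed at a
# slightly different time, so if the LED fills the frame, every row is
# an independent intensity sample instead of one sample per frame.

def effective_sampling_rate(fps: int, rows_covered: int) -> int:
    """Samples per second when the LED spans `rows_covered` scan lines."""
    return fps * rows_covered

# A 60 fps camera normally yields 60 brightness samples per second.
# Zoomed in so the LED covers ~1000 scan lines, each frame contributes
# 1000 row-samples instead of one:
print(effective_sampling_rate(60, 1))     # 60 samples/s (naive)
print(effective_sampling_rate(60, 1000))  # 60000 samples/s (rolling shutter)
```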
To attack ECDSA, they used the Minerva timing attack (2020).
Conclusion: they turned a timing attack into a power attack. The attack needs two conditions:
- The implementation must be prone to some side-channel timing attack.
- The target must have a power LED in a simple setting, such as a smart-card reader or USB speakers.
Despite these limitations, it is clever.
- Risks of AI Risk Policy: Five Lessons (R. S. S. KUMAR, J. PENNEY)
Once more, users trust AI blindly.
The overall environment is complex and extends far beyond the ML code itself.
All traditional security issues are still present, such as dependency injection.
Current systems are not secure against adversarial examples, and may not even exhibit the same robustness across all data points.
Explainability is insufficient if it is not trustworthy. Furthermore, the fairness and trustworthiness of the entity using the explanation are essential.
- Small Leaks, Billions Of Dollars: Practical Cryptographic Exploits That Undermine Leading Crypto Wallets (N. MAKRYANNIS, O. YOMTOV)
The Lindell17 Multi-Party Computation (MPC) protocol specifies that all further interactions must be blocked when a finalized signature fails verification; in other words, the wallet should be locked. They found a way to exfiltrate the key share when the wallet is not blocked (which was the case for several wallets).
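The danger of not locking the wallet can be sketched as an abort-oracle attack: each failed signing attempt leaks information about the victim's key share, so an attacker who can keep trying recovers the share bit by bit. The toy model below abstracts the leak into a single oracle bit per attempt; it is not the real Lindell17 protocol.

```python
import secrets

KEY_BITS = 32  # small for the demo; a real ECDSA share has ~256 bits

def make_oracle(key_share: int):
    """Toy signing oracle: the attempt 'fails' iff the probed bit is 1."""
    def sign_attempt(bit_index: int) -> bool:
        # True = signature verified, False = finalized signature failed
        return (key_share >> bit_index) & 1 == 0
    return sign_attempt

def extract_share(oracle, wallet_blocks_on_failure: bool):
    """Recover the share bit by bit; a compliant wallet stops the attack."""
    recovered = 0
    for i in range(KEY_BITS):
        if not oracle(i):
            if wallet_blocks_on_failure:
                return None  # wallet locks at the first failed signature
            recovered |= 1 << i
    return recovered

share = secrets.randbits(KEY_BITS)
# Non-compliant wallet: full key share recovered.
assert extract_share(make_oracle(share), wallet_blocks_on_failure=False) == share
```

A wallet that locks on the first failure limits the leak to a single bit, which is why the specification mandates blocking.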
In the case of GG18 and GG20, they recovered the full key by zeroing out the ZKP using the CRT (Chinese Remainder Theorem) and choosing a modulus with a small prime factor.
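Why leaking a secret modulo attacker-chosen small primes is fatal can be shown with a plain CRT reconstruction (toy numbers; the real attack's interaction with the ZKP is far more involved):

```python
from math import prod

def crt(residues, moduli):
    """Chinese Remainder Theorem for pairwise-coprime moduli."""
    m = prod(moduli)
    total = 0
    for r, p in zip(residues, moduli):
        n = m // p
        total += r * n * pow(n, -1, p)  # n is invertible mod p: gcd(n, p) = 1
    return total % m

secret = 0xDEADBEEF                    # stand-in for a key share
primes = [101, 103, 107, 109, 113]     # enough that prod(primes) > secret
leaks = [secret % p for p in primes]   # one small-prime leak per protocol run
assert prod(primes) > secret
assert crt(leaks, primes) == secret    # full secret reconstructed
```

Each protocol run leaks only a few bits (the residue modulo one small prime), but the residues combine losslessly once the product of the moduli exceeds the secret.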
Conclusion: add ZKPs to protocols to ensure that design hypotheses are actually enforced.
- The Most Dangerous Codec in the World: Finding and Exploiting Vulnerabilities in H.264 Decoders (W. VASQUEZ, S. CHECKOWAY)
They created H26forge to craft malicious H.264 content. The attack targets semantic values outside their specified ranges: the syntax is valid, but decoders may not check every semantic constraint. The tool handles the bookkeeping of forging such H.264 streams.
Conclusion: this may be devastating when combined with fuzzing. Enforce the spec limits in the code.
If the EKU (extended key usage) is not properly verified for its purpose, bingo.
Some tested implementations failed this verification: the speaker tricked the signing tools into accepting domain-validated certificates for code signing.
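The missing check is small: a certificate presented for code signing must carry the code-signing EKU. The OIDs below are the standard values; the list-of-OIDs representation is a simplified stand-in, not a real X.509 parser.

```python
# Standard extended-key-usage OIDs (RFC 5280).
CODE_SIGNING = "1.3.6.1.5.5.7.3.3"   # id-kp-codeSigning
SERVER_AUTH  = "1.3.6.1.5.5.7.3.1"   # id-kp-serverAuth (TLS)

def may_sign_code(eku_oids: list) -> bool:
    """Accept a certificate for code signing only if its EKU allows it."""
    return CODE_SIGNING in eku_oids

tls_only_cert = [SERVER_AUTH]         # e.g. a domain-validated TLS cert
print(may_sign_code(tls_only_cert))   # False: must be rejected for signing
print(may_sign_code([CODE_SIGNING]))  # True
```

The failing implementations effectively skipped this membership test, so any valid certificate (including a free domain-validated one) was accepted for signing.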
- Keynote: Phoenix Soaring: What We Can Learn from Ukraine’s Cyber Defenders about Building a More Resilient Future (J. EASTERLY, V. ZHORA)
Politically correct but not really informative.