Black Hat 2024: Day 1

Jeff MOSS introduction

Jeff MOSS is the founder of Black Hat and Defcon. He always presents his latest thoughts.

New, probably unforeseen, threats have arisen in the geopolitical landscape in the last few years.  For instance, what do you do if some of your development teams are in a war zone?  What if the IP is stored in these zones?  Can you stay neutral?  What are the cyber consequences if you cannot?

Keynote: Democracy’s Biggest Year: The Fight for Secure Elections Around the World

BUECHEL E. (CISA), De VRIES H. (European Union Agency for Cybersecurity), EASTERLY J. (Cybersecurity and Infrastructure Security Agency), OSWALD F. (National Cyber Security Centre)

Nihil novi sub sole (nothing new under the sun): the usual expected content.

Practical LLM Security: Takeaways From a Year in the Trenches

HARANG R. (NVIDIA)

First, he provided a high-level explanation of Large Language Models (LLMs). The interesting point is that although the candidate tokens are ranked by their probability, the sampling is random. Thus, the LLM sometimes makes bad or weird selections (hallucinations, etc.).

Sampled tokens are locked in (there is no going back).  Thus, a bad selection propagates and cannot be reversed, at least not by the LLM itself.  The same is true for prompts (forgetting previous prompts is not the same as going back).
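A minimal sketch of this sampling behavior (toy vocabulary and probabilities, not NVIDIA's implementation): candidates are ranked by probability, the draw remains random, and each emitted token is final.

```python
# Hedged sketch of top-k sampling: the best-ranked token is not guaranteed to win,
# and once a token is emitted there is no going back.
import numpy as np

rng = np.random.default_rng(0)

def sample_top_k(logits, k=3, temperature=1.0):
    """Keep the k most probable tokens, renormalize, then draw one at random."""
    logits = np.asarray(logits, dtype=float) / temperature
    top = np.argsort(logits)[-k:]                  # indices of the k best candidates
    probs = np.exp(logits[top] - logits[top].max())
    probs /= probs.sum()
    return int(rng.choice(top, p=probs))           # random draw: a weaker candidate can win

vocab = ["Paris", "Lyon", "Rome", "banana"]
logits = [4.0, 2.5, 1.5, 0.1]                      # "Paris" is ranked first, but not guaranteed
picked = sample_top_k(logits, k=3)
# The sampled token is locked in; the next step conditions on it and cannot undo it.
print("picked:", vocab[picked])
```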

This is one reason Retrieval-Augmented Generation (RAG) is used: RAG supplies better, more targeted knowledge by retrieving relevant documents at query time.

He then highlighted several RAG-related issues.  RAG increases the attack surface: it is easier to poison a RAG dataset than the LLM training set.  For instance, he described the Phantom attack, in which the attacker steers the answer returned for a poisoned concept.

Therefore, the security and access control of the RAG store are crucial.  Furthermore, RAG is excellent at searching.  Thus, if document classification (and its enforcement) and access control are lax, it is game over: it is relatively easy to leak confidential data inadvertently.
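A hedged sketch of what access-controlled retrieval could look like (all names and the scoring function are illustrative, not a specific product API): documents are filtered by the caller's clearance before the similarity search, so the RAG cannot surface material the user is not allowed to read.

```python
# Hedged sketch: RAG retrieval with a per-user access-control filter.
from dataclasses import dataclass

@dataclass
class Doc:
    text: str
    classification: str            # "public", "internal", or "confidential"

LEVELS = ["public", "internal", "confidential"]

def allowed(user_clearance: str, doc: Doc) -> bool:
    return LEVELS.index(doc.classification) <= LEVELS.index(user_clearance)

def naive_score(query: str, text: str) -> float:
    # Stand-in for an embedding similarity search.
    return len(set(query.lower().split()) & set(text.lower().split()))

def retrieve(query: str, corpus: list[Doc], user_clearance: str, k: int = 3) -> list[Doc]:
    # Filter *before* searching: RAG is excellent at finding documents, so lax
    # classification or access control means confidential data leaks into answers.
    candidates = [d for d in corpus if allowed(user_clearance, d)]
    return sorted(candidates, key=lambda d: -naive_score(query, d.text))[:k]

corpus = [Doc("Quarterly results draft", "confidential"),
          Doc("Cafeteria menu", "public")]
context = retrieve("results", corpus, user_clearance="public")
prompt = "Answer using only this context:\n" + "\n".join(d.text for d in context)
```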

Feeding emails into a RAG pipeline is promising but dangerous. It gives an attacker an easily accessible poisoning point that does not require penetrating the network.

What is logged, and who can view the logs, is also a concern. Logging prompts and their responses is very sensitive: confidential information may leak and, in any case, cross trust boundaries.
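A small hedged sketch of one mitigation: scrub obvious secrets from prompts and responses before they reach the log, and keep the log itself behind its own access control. The patterns below are illustrative, not an exhaustive redaction policy.

```python
# Hedged sketch: redact obvious sensitive data before logging prompts/responses.
import logging
import re

PATTERNS = [
    (re.compile(r"\b\d{13,19}\b"), "<card-number>"),        # long digit runs
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "<email>"),    # e-mail addresses
    (re.compile(r"(?i)bearer\s+[a-z0-9._-]+"), "<token>"),  # bearer tokens
]

def redact(text: str) -> str:
    for pattern, replacement in PATTERNS:
        text = pattern.sub(replacement, text)
    return text

audit = logging.getLogger("llm.audit")
audit.info("prompt=%s", redact("Contact alice@example.com, card 4111111111111111"))
```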

Do not rely on guardrails.  They do not work reliably and will not stop a serious attacker.

Privacy Side Channels in Machine Learning Systems, Debenedetti et al., 2023, is an interesting paper to read.

15 Ways to Break Your Copilot

EFRAT A. (Zenity)

Copilot is a brand name that encompasses all of Microsoft’s AI products. All Copilots share the same low-level layers (i.e., they use the same underlying LLM) and are specialized for a set of tasks.

Copilot Studio allows creating a GenAI-based chatbot with no code.  The speaker presented many default-configuration issues that opened the door to devastating attacks.  Microsoft has since fixed some of them to be less permissive.  Nevertheless, there are still many ways to leak information.  This is especially true as the tool targets non-experts and thus has a rudimentary security stance (if there is even a security stance at all).

Be careful who you authorize to use such tools and review the outcome.

Kicking in the Door to the Cloud: Exploiting Cloud Provider Vulnerabilities for Initial Access

RICHETTE N. (Datadog)

The speaker presented cross-tenant issues in AWS.  Datadog found some vulnerabilities in the policies managing `sts:AssumeRole`.

Lesson: when using `sts:AssumeRole`, add restrictive conditions to the trust policy, based on the source ARN, the source account, an external ID, and so on.
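A hedged sketch of such a restrictive trust policy (the account ID, the service principal, and the ARNs are placeholders): conditions pin exactly who may assume the role instead of trusting an entire account or service.

```python
# Hedged sketch: a trust policy for sts:AssumeRole with restrictive conditions.
import json

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "some-service.amazonaws.com"},   # placeholder principal
        "Action": "sts:AssumeRole",
        "Condition": {
            # Only resources from *your* account may trigger the assumption.
            "StringEquals": {"aws:SourceAccount": "111122223333"},
            "ArnLike": {"aws:SourceArn": "arn:aws:some-service:*:111122223333:*"},
            # For cross-account or third-party access, also require an external ID:
            # "StringEquals": {"sts:ExternalId": "unique-per-customer-value"},
        },
    }],
}

# Could then be applied with, e.g.:
# boto3.client("iam").create_role(RoleName="my-role",
#                                 AssumeRolePolicyDocument=json.dumps(trust_policy))
print(json.dumps(trust_policy, indent=2))
```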

Compromising Confidential Compute, One Bug at a Time

VILLARD Maxime (Microsoft)

To isolate a tenant from the cloud provider, Intel proposes a new technology called TDX (Trust Domain Extensions).  It will be present in the next generation of Intel chips.  The host sends a set of commands to enter TDX mode for a module.  In this mode, the TDX module can launch its own VM that executes independently from the cloud hypervisor.[1]

The team found two vulnerabilities.  One enabled a DoS attack from within a TDX domain that crashed all the other tenants executing on the host processor.


[1] TDX is not an enclave like SGX.


Black Hat 2023 Day 2

  1. Keynote: Acting National Cyber Director discusses the national cybersecurity strategy and workforce efforts (K. WALDEN)

A new team at the White House of about 100 people is dedicated to this task. No comment.


The people deciding which features require security reviews are not security experts. Can AI help?

The first issue is that engineering language differs from everyday language: it is full of jargon and acronyms.  Thus, a standard LLM may fail.

They explored several ML strategies.

They used unsupervised training to learn word embeddings (300 dimensions).  Then, they fed these vectors to a convolutional network to make the decision.
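A hedged sketch of that pipeline (library choice, toy corpus, and layer sizes are my assumptions; only the 300-dimensional vectors and the convolutional classifier come from the talk):

```python
# Hedged sketch: unsupervised 300-dimensional word vectors, then a small
# convolutional classifier deciding whether a feature needs a security review.
import numpy as np
import tensorflow as tf
from gensim.models import Word2Vec

# 1) Unsupervised embeddings over engineering text (jargon and acronyms included).
sentences = [["add", "oauth", "token", "refresh"],
             ["update", "ui", "color", "palette"]]
w2v = Word2Vec(sentences, vector_size=300, window=5, min_count=1, epochs=50)

# 2) Represent each feature description as a sequence of its word vectors,
#    then classify with a small convolutional network.
def to_matrix(tokens, length=32):
    mat = np.zeros((length, 300))
    for i, tok in enumerate(tokens[:length]):
        if tok in w2v.wv:
            mat[i] = w2v.wv[tok]
    return mat

x = np.stack([to_matrix(s) for s in sentences])     # (n_samples, 32, 300)
y = np.array([1, 0])                                 # 1 = needs a security review

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32, 300)),
    tf.keras.layers.Conv1D(64, kernel_size=3, activation="relu"),
    tf.keras.layers.GlobalMaxPooling1D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=3, verbose=0)
```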

The presentation is a good high-level introduction to basic techniques and the journey.

The model missed 2% (false negatives) and produced 5% false positives.


The standard does not forbid using JWE and JWS with asymmetric keys.  By changing the header, the speaker was able to confuse the default behavior of libraries.

The second attack targets applications that use two different libraries, one for the cryptography and one for the claims.  Each library parses JSON differently, so it is possible to create inconsistencies between what is verified and what is used.

The third attack is a DoS obtained by setting the PBKDF2 iteration count in the token header to an extremely high value.

My Conclusion

As a developer, ensure during validation that only a limited set of known algorithms and parameters is accepted.
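A hedged sketch of that advice with PyJWT (key material, audience, and issuer are placeholders): the verifier pins the algorithm instead of trusting the token header, and a single library handles both the cryptography and the claims.

```python
# Hedged sketch: strict JWT validation with a pinned algorithm allow-list.
import jwt  # PyJWT

ALLOWED_ALGS = ["RS256"]          # chosen by the verifier, never by the token

def verify(token: str, public_key: str) -> dict:
    header = jwt.get_unverified_header(token)
    if header.get("alg") not in ALLOWED_ALGS:
        raise ValueError(f"unexpected algorithm: {header.get('alg')}")
    # One library does both signature verification and claims parsing,
    # so two different JSON parsers cannot disagree about the same token.
    return jwt.decode(token, public_key, algorithms=ALLOWED_ALGS,
                      options={"require": ["exp", "iss", "aud"]},
                      audience="my-api", issuer="https://issuer.example")
```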

ChatGPT demonstrates how vulnerable humans are because we are bad at testing.

When a model is demonstrated, are we sure the presenters are not using training data as input to the demonstration?  This trick ensures predictability.

Train yourself in ML as you will need it.

Very manual methodology using traditional reverse engineering techniques

Laion5B is THE dataset: about 5 billion images.  It is distributed as a list of URLs.  But registered domains expire and can be bought; thus, the content behind the URLs may be poisoned.  It is not a targeted attack, as the attacker does not control who uses the dataset.

Poisoning as little as 0.01% of the dataset may be sufficient.

This shows the risk of untrusted Internet data: even curated data may be untrustworthy.
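A hedged mitigation sketch for URL-list datasets: pin a content hash per image when the dataset is first snapshotted and refuse anything that later changes (whether such hashes are distributed with a given dataset is an assumption here).

```python
# Hedged sketch: detect post-publication swaps behind dataset URLs via pinned hashes.
import hashlib
import urllib.request

def fetch_verified(url: str, expected_sha256: str, timeout: int = 10) -> bytes:
    with urllib.request.urlopen(url, timeout=timeout) as response:
        data = response.read()
    digest = hashlib.sha256(data).hexdigest()
    if digest != expected_sha256:
        # The domain may have expired and been re-registered: treat as poisoned.
        raise ValueError(f"hash mismatch for {url}: got {digest}")
    return data
```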

The attack uses Java polymorphism to override the normal deserialization (a gadget chain).  The purpose of the research is to detect such chains.

Their approach uses taint analysis followed by fuzzing.

Black Hat 2023 Day 1

  1. Introduction (J. MOSS)

Jeff MOSS (Founder of DefCon and Black Hat) highlighted some points:

  • AI is about using predictions. 
  • AI brings new issues with intellectual property.  He cited the example of Zoom™, which simply decided that all our interactions could be used for its ML training.
  • Need for authentic data.

The current ML models are insecure, but people trust them.  Labs have had LLMs available for many years but kept them private.  Once OpenAI made its model public, the race started.

She presented trends for the enterprise:

  • Enterprise’s answer to ChatGPT is Machine Learning as a Service (MLaaS).  But these services are not secure.
  • The next generation should be multi-modal models (using audio, image, video, text…), more potent than mono-modal ones such as text-only LLMs.
  • Autonomous agents combine LLM-based data collection with decision-making and actions.  These models will need secure, authorized access to enterprise data.  Unfortunately, their actions are non-deterministic.
  • Data security for training is critical.  It is even more challenging when using real-time data.

She pointed to an interesting paper about poisoning multi-modal data via image or sound.


Often, the power LED is wired more or less directly at the input of the power supply circuit. Thus, its intensity is correlated with the device’s power consumption.

They recorded only the LED region of the image to exploit the rolling-shutter effect: since each row of a frame is exposed at a slightly different instant, they increased the effective sampling rate on the LED while keeping the same video frame rate.  This is a clever, “cheap” trick.
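A hedged sketch of the rolling-shutter trick (array shapes and the region of interest are illustrative): reading the LED's brightness row by row yields many intensity samples per frame instead of one.

```python
# Hedged sketch: turn a rolling-shutter video of a power LED into a high-rate trace.
import numpy as np

def led_trace(frames: np.ndarray, rows: slice, cols: slice) -> np.ndarray:
    """frames: (n_frames, height, width) grayscale video with the LED inside the ROI."""
    roi = frames[:, rows, cols]            # (n_frames, roi_rows, roi_cols)
    per_row = roi.mean(axis=2)             # one brightness value per exposed row
    # Rows are exposed sequentially: effective sampling rate ~= fps * roi_rows.
    return per_row.reshape(-1)

video = np.random.rand(10, 120, 160)       # placeholder for real footage
trace = led_trace(video, slice(50, 60), slice(70, 80))
```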

To attack ECDSA, they used the Minerva attack (2020)

Conclusion: They turned timing attacks into a power attack.  The attacks need two conditions:

  1. The implementation must be prone to some side-channel timing attack.
  2. The target must have a power LED in a simple setting, such as a smart card reader, or USB speakers. 

Despite these limitations, it is clever.


Once more, users trust AI blindly.

The global environment is complex and extends further than ML code.

All traditional security issues are still present, such as dependency injection.

The current systems are not secure against adversarial examples.  They may not even exhibit the same robustness across all data points.

Explainability is insufficient if it is not trustworthy.  Furthermore, the fairness and trustworthiness of the entity using the explanation are essential.


The Multi-Party Computation (MPC) protocol Lindell17 specifies that all further interactions should be blocked when a finalized signature fails; in other words, the wallet should be blocked.  They found a way to exfiltrate the key share if the wallet is not blocked (which was the case for several wallets).

In the case of GG18 and GG20, they recovered the full key by zeroing out the ZKP, using the CRT (Chinese Remainder Theorem) and choosing small prime factors.

Conclusion: add ZKPs to protocols to ensure that design assumptions are actually enforced.


They created H26Forge to generate vulnerable H.264 content.  The attack sets semantic values outside their specified ranges; decoders may not check all of them.  The tool handles the intricacies of creating forged H.264 streams.

Conclusion

This may be devastating if combined with fuzzing.

Enforce the limits in the code.
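A hedged sketch of what “enforce the limits” can look like in a decoder (field names and bounds are illustrative of H.264 syntax elements, not a complete list): every parsed value is checked against its specified range before use.

```python
# Hedged sketch: reject out-of-range syntax elements instead of trusting the bitstream.
BOUNDS = {
    "log2_max_frame_num_minus4": (0, 12),   # illustrative spec range
    "max_num_ref_frames": (0, 16),          # illustrative spec range
}

def check_range(name: str, value: int) -> int:
    low, high = BOUNDS[name]
    if not low <= value <= high:
        raise ValueError(f"{name}={value} outside specified range [{low}, {high}]")
    return value

frame_num_bits = check_range("log2_max_frame_num_minus4", 5) + 4
```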


If the EKU (Extended Key Usage) is not properly verified against its intended purpose, bingo.

Some tested implementations failed the verification.  The speaker coerced signing tools into accepting domain-validated certificates for signing code.
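A hedged sketch of the missing check, using the `cryptography` package: before trusting a certificate for code signing, verify that its Extended Key Usage actually contains that purpose.

```python
# Hedged sketch: refuse code signing unless the certificate's EKU allows it.
from cryptography import x509
from cryptography.x509.oid import ExtendedKeyUsageOID

def can_sign_code(pem_data: bytes) -> bool:
    cert = x509.load_pem_x509_certificate(pem_data)
    try:
        eku = cert.extensions.get_extension_for_class(x509.ExtendedKeyUsage).value
    except x509.ExtensionNotFound:
        return False           # no EKU at all: do not assume code signing is allowed
    return ExtendedKeyUsageOID.CODE_SIGNING in eku
```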


Politically correct but not really informative.

Policing in the metaverse

The metaverse(s), whatever it will be, may be essential to our near digital future.  It is sometimes referred to as the next iteration of the Internet.  As Web 2.0 has many security issues, without a doubt, we can forecast that the Web 3.0/metaverse(s) will have as many, and most probably more, risks.  Thus, it is interesting to analyze some potential threats even if the metaverse(s) is not yet here.

Europol (The European Union Agency for Law Enforcement Cooperation) is the law enforcement agency of the European Union.  Therefore, Europol is knowledgeable about crime.  Their innovation laboratory published an interesting report: “Policing in the metaverse.”

The report does not define precisely what the metaverse is, but it gives a relatively good idea of what it may be.  It does not only tackle the visible part of the metaverse (AR, VR, XR); it also describes the foreseen underlying infrastructure with decentralized networks and blockchains.

The report explores seven topics related to crime in the metaverse:

  1. Identity:  A large focus is put on the collection and reuse of additional biometric information.

With more advanced ways to interact with the system by using different sensors, eye tracking, face tracking and haptics for instance, there will be far more detailed biometric information about individual users.  That information will allow criminals to even more convincingly impersonate and steal someone’s identity.  Moreover, this information may be used to manipulate users in a far more nuanced, but far more effective way than is possible at present on the Internet

It will become difficult to trust the identity behind avatars.  Impersonation of virtual personas will be an interesting threat.

The more detailed that data becomes and the more closely that avatar resembles and represents the actual user, the more this becomes a question of who owns the user’s identity, the biometric and spatial information that the user provides to the system.


  2. Financial crime (money laundering, scams):  the current state of cryptocurrencies and NFTs paints a scary picture of the future.
  3. Harassment
  4. Terrorism:  Europol foresees that terrorist organizations will use it as a recruiting service and a training playground.
  5. Mis- and disinformation
  6. Feasibility of monitoring and logging evidence:  this will be a challenging task.
  7. Impact on the physical world:  this will be an extraordinary playground for attackers.  Device manufacturers will have to build in countermeasures from the start.

An immersive XR experience provides an opportunity to influence a user in the physical world through the manipulation of the virtual environment.  Users can be tricked into hitting objects and walls, or being moved to another physical location, through what is called a ‘Human Joystick Attack’.  A perhaps simpler way is to alter the boundaries of a user’s virtual world through a ‘Chaperone Attack’.  A third attack type is the ‘Overlay Attack’, in which the attacker takes complete control over the user’s virtual environment and provides their own overlay – the input which defines what users see and perceive in a virtual environment.

The report highlighted the need for moderation.  It explained that the challenge will be larger than the current one for Web 2.0:

It will not just be a matter of moderating vastly more content, but also of behaviour, which is both ephemeral in nature and even more context-dependent than the content we are currently used to

This report is a must-read for anyone interested in security for Web 3.0 and the metaverse(s).  It is not technical and provides a long list of worrying issues.  The mere fact that Europol publishes on the topic is already a good indicator that this matter will be critical in the future.

Is RSA2048 broken?

Recently, an academic paper from a large team of Chinese researchers made the headlines of the specialized press [1].  Reporters claimed that “small” quantum computers may break RSA2048: breaking it may need only 372 qubits.  Qubits are the quantum analogue of classical bits.  IBM already offers Osprey, a 433-qubit chip.  So, is RSA2048 dead?

The security of RSA relies on the assumption that factorizing the product of two large prime numbers is extremely difficult.  It is assumed currently that conventional computers cannot solve this problem.  Recent studies showed that, in theory, quantum computers may succeed.

The paper is not for the faint of heart.  Its abstract is as follows:

Shor’s algorithm has seriously challenged information security based on public key cryptosystems.  However, to break the widely used RSA-2048 scheme, one needs millions of physical qubits, which is far beyond current technical capabilities.  Here, we report a universal quantum algorithm for integer factorization by combining the classical lattice reduction with a quantum approximate optimization algorithm (QAOA).  The number of qubits required is O(log N/log log N), which is sublinear in the bit length of the integer N, making it the most qubit-saving factorization algorithm to date.  We demonstrate the algorithm experimentally by factoring integers up to 48 bits with 10 superconducting qubits, the largest integer factored on a quantum device.  We estimate that a quantum circuit with 372 physical qubits and a depth of thousands is necessary to challenge RSA-2048 using our algorithm.  Our study shows great promise in expediting the application of current noisy quantum computers, and paves the way to factor large integers of realistic cryptographic significance.

As mentioned, RSA’s security assumes it is tough to factorize the product of two very large prime numbers.  The researchers use Schnorr’s algorithm [2] rather than Shor’s to factor these large numbers.  On the one hand, Shor’s algorithm requires millions of qubits, but it is a theoretically proven solution; unfortunately, it is out of the current feasibility realm.  On the other hand, Schnorr’s algorithm is not yet proven to work at a large scale.  The expected speed-up from quantum computing is highly controversial and not demonstrated.  The paper stays in the realm of unproven expectations.
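As a rough sanity check of where the 372-qubit figure comes from, assuming the qubit count scales roughly as 2n / log2(n) for an n-bit modulus (consistent with the sublinear scaling quoted in the abstract):

```latex
% Back-of-the-envelope check, assuming qubits ~ 2n / log2(n) for an n-bit modulus.
\[
  \text{qubits} \approx \frac{2n}{\log_2 n}
  = \frac{2 \times 2048}{\log_2 2048}
  = \frac{4096}{11}
  \approx 372
\]
```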

The consensus seems to be that the threat is not yet here.  Following is a list of posts of people who know far better than me:

  • [3] highlights one crucial point (present in all papers): “It should be pointed out that the quantum speed-up of the algorithm is unclear due to the ambiguous convergence of QAOA.”  In other words, the paper does not demonstrate that it is faster than Shor’s algorithm.  Scott Aaronson is a quantum computing expert.
  • [4] highlights that the paper never claims to be faster.  It omits “running time”; what is merely claimed is that the quantum circuit is very small.
  • [5] Bruce Schneier reminds us that Schnorr’s approach works well for small moduli, but does not scale well to larger prime numbers.
  • [6] highlights that this would require a NISQ computer with gate-level fidelities of 99.999%, a level more than two orders of magnitude better than the best machines we have today.

Conclusion

Keep calm.  RSA 2048 is still safe for many years.  Nevertheless, it is key to be aware of the latest progress of post-quantum cryptography.  Do we have to switch to post-quantum cryptography?  Not right now, especially if you do not handle secrets that have to last for many decades.

References

[1] B. Yan et al., “Factoring integers with sublinear resources on a superconducting quantum processor,” arXiv, Dec. 23, 2022. Available: http://arxiv.org/abs/2212.12372

[2] C. P. Schnorr, “Fast Factoring Integers by SVP Algorithms, corrected,” 2021. Available: https://eprint.iacr.org/2021/933

[3] S. Aaronson, “Cargo Cult Quantum Factoring,” Shtetl-Optimized, Jan. 04, 2023. https://scottaaronson.blog/?p=6957

[4] “Paper claims to break RSA-2048 with only 372 physical qubits,” pqc-forum. https://groups.google.com/a/list.nist.gov/g/pqc-forum/c/AkfdRQS4yoY/m/3plDftUEAgAJ

[5] “Breaking RSA with a Quantum Computer – Schneier on Security.” https://www.schneier.com/blog/archives/2023/01/breaking-rsa-with-a-quantum-computer.html

[6] D. Finke, “Quantum Experts Debunk China Quantum Factoring Claims,” Quantum Computing Report, Jan. 06, 2023. https://quantumcomputingreport.com/quantum-experts-debunk-china-quantum-factoring-claims

NIST selected the post-quantum cryptosystems

Post-quantum cryptography encompasses the algorithms that are allegedly immune to quantum computing.  In 2017, NIST initiated the process of selecting and standardizing a set of post-quantum cryptosystems. In 2020, NIST started the third round with 15 remaining candidates.

NIST announced the four winners.  CRYSTALS-KYBER is the new post-quantum key-establishment algorithm.

“Among its advantages are comparatively small encryption keys that two parties can exchange easily, as well as its speed of operation. ”

CRYSTALS-DILITHIUM, Falcon, and SPHINCS+ are the new digital signature systems.

“ Reviewers noted the high efficiency of the first two, and NIST recommends CRYSTALS-Dilithium as the primary algorithm, with FALCON for applications that need smaller signatures than Dilithium can provide. The third, SPHINCS+, is somewhat larger and slower than the other two, but it is valuable as a backup for one chief reason: It is based on a different math approach than all three of NIST’s other selections.”

Interestingly, version 9.0 of OpenSSH already includes a post-quantum key exchange.  It uses NTRU Prime (sntrup761 combined with X25519) rather than CRYSTALS-KYBER.

Intel SGX™ is dead

Intel announced that the newer generations of its CPUs (11th and 12th) no longer support the SGX technology (see the data sheet).  SGX is the secure enclave technology in Intel CPUs.  SGX isolates a program and its data from the insecure Rich Execution Environment (REE).  Thus, SGX-based applications could act as a root of trust.

At least, this was the promise.  Unfortunately, starting with Spectre-like attacks, SGX came under fire from many interesting exploits (for instance, VoltPillager).  Thus, it seems that, in its current form, SGX cannot be a trusted secure enclave.

For most consumers, the main consequence is that future PCs will no longer support UHD Blu-ray.  Indeed, the content protection standard AACS2 mandates a Secure Execution Environment with a Hardware Root of Trust (HRoT).  For Microsoft Windows, the solution was SGX.  Some applications also based their security model on SGX.  They will have to find an alternative that is not necessarily available: TPM offers a valid HRoT but not a Secure Execution Environment, and current tamper-resistant software and obfuscation technologies may not be sufficient.