Policing in the metaverse

The metaverse(s), whatever form it takes, may be essential to our near-term digital future.  It is sometimes referred to as the next iteration of the Internet.  Since Web 2.0 already has many security issues, we can safely forecast that Web 3.0 and the metaverse(s) will have at least as many risks, and most probably more.  Thus, it is interesting to analyze some potential threats even if the metaverse(s) is not here yet.

Europol (the European Union Agency for Law Enforcement Cooperation) is the law enforcement agency of the European Union.  Therefore, Europol is knowledgeable about crime.  Its innovation laboratory published an interesting report: “Policing in the metaverse.”

The report does not define precisely what the metaverse is, but it gives a relatively good idea of what it may be.  It does not only tackle the visible part of the metaverse (AR, VR, XR); it also describes the foreseen underlying infrastructure, with decentralized networks and blockchains.

The report explores seven topics related to crime in the metaverse:

  1. Identity:  A large focus is put on the collection and reuse of additional biometric information.

With more advanced ways to interact with the system by using different sensors, eye tracking, face tracking and haptics for instance, there will be far more detailed biometric information about individual users.  That information will allow criminals to even more convincingly impersonate and steal someone’s identity.  Moreover, this information may be used to manipulate users in a far more nuanced, but far more effective way than is possible at present on the Internet.

It will become difficult to trust the identity behind an avatar.  Impersonation of virtual personas will be an interesting threat.

The more detailed that data becomes and the more closely that avatar resembles and represents the actual user, the more this becomes a question of who owns the user’s identity, the biometric and spatial information that the user provides to the system.

  2. Financial crimes (money laundering, scams):  the current state of cryptocurrencies and NFTs paints a scary picture of the future.
  3. Harassment
  4. Terrorism:  Europol foresees that terrorist organizations will use the metaverse for recruitment and as a training playground.
  5. Mis- and disinformation
  6. Feasibility of monitoring and logging evidence:  this will be a challenging task.
  7. Impact in the physical world:  this will be an extraordinary playground for attackers.  Device manufacturers will have to build in countermeasures from the start.

An immersive XR experience provides an opportunity to influence a user in the physical world through the manipulation of the virtual environment.  Users can be tricked into hitting objects and walls, or being moved to another physical location, through what is called a ‘Human Joystick Attack’.  A perhaps simpler way is to alter the boundaries of a user’s virtual world through a ‘Chaperone Attack’.  A third attack type is the ‘Overlay Attack’, in which the attacker takes complete control over the user’s virtual environment and provides their own overlay – the input which defines what users see and perceive in a virtual environment.

The report highlights the need for moderation.  It explains that the challenge will be even larger than the current one for Web 2.0:

It will not just be a matter of moderating vastly more content, but also of behaviour, which is both ephemeral in nature and even more context-dependent than the content we are currently used to

This report is a must-read for anyone interested in security for Web 3.0 and the metaverse(s).  It is not technical and provides a long list of worrying issues.  The mere fact that Europol publishes on the topic is already a good indicator that this matter will be critical in the future.

Is RSA2048 broken?

Recently, an academic paper from a large team of Chinese researchers made headlines in the specialized press [1].  Reporters claimed that “small” quantum computers may break RSA2048: breaking it would require only 372 qubits.  Qubits are the quantum analogue of classical bits.  IBM already offers Osprey, a 433-qubit chip.  So, is RSA2048 dead?

The security of RSA relies on the assumption that factoring the product of two large prime numbers is extremely difficult.  It is currently assumed that conventional computers cannot solve this problem in any reasonable time.  Recent studies showed that, in theory, quantum computers may succeed.
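
To see concretely why factoring matters, here is a minimal sketch with toy-sized numbers (my own illustration, unrelated to the paper’s algorithm): once the two prime factors of the public modulus are known, the private exponent follows immediately.

```python
from math import gcd

# Toy RSA parameters (real keys use primes of 1024 bits or more).
p, q = 61, 53            # the two secret primes
n = p * q                # public modulus
e = 17                   # public exponent
phi = (p - 1) * (q - 1)
assert gcd(e, phi) == 1
d = pow(e, -1, phi)      # private exponent (modular inverse, Python 3.8+)

# Normal use: encrypt with (n, e), decrypt with d.
m = 42
c = pow(m, e, n)
assert pow(c, d, n) == m

# Anyone who can factor n recomputes d with the exact same formula.
def break_rsa(n: int, e: int, p: int, q: int) -> int:
    return pow(e, -1, (p - 1) * (q - 1))

assert break_rsa(n, e, p, q) == d
```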

The paper is not for the faint of heart.  Its abstract is as follows:

Shor’s algorithm has seriously challenged information security based on public key cryptosystems.  However, to break the widely used RSA-2048 scheme, one needs millions of physical qubits, which is far beyond current technical capabilities.  Here, we report a universal quantum algorithm for integer factorization by combining the classical lattice reduction with a quantum approximate optimization algorithm (QAOA).  The number of qubits required is O(log N / log log N), which is sublinear in the bit length of the integer N, making it the most qubit-saving factorization algorithm to date.  We demonstrate the algorithm experimentally by factoring integers up to 48 bits with 10 superconducting qubits, the largest integer factored on a quantum device.  We estimate that a quantum circuit with 372 physical qubits and a depth of thousands is necessary to challenge RSA-2048 using our algorithm.  Our study shows great promise in expediting the application of current noisy quantum computers, and paves the way to factor large integers of realistic cryptographic significance.

As mentioned, RSA’s security assumes it is tough to factor the product of two very large prime numbers.  The researchers use Schnorr’s algorithm [2] rather than Shor’s to factor these large numbers.  On the one hand, Shor’s algorithm requires millions of qubits, but it is a theoretically proven solution.  Unfortunately, it is out of the current feasibility realm.  On the other hand, Schnorr’s algorithm is not yet a proven solution at large scale.  The expected speed-up from quantum computing is highly controversial and not demonstrated.  The paper stays in the realm of unproven expectations.

The consensus seems to be that the threat is not here yet.  Following is a list of posts by people who know far better than I do:

  • [3] highlights one crucial point (acknowledged in the paper itself): It should be pointed out that the quantum speed-up of the algorithm is unclear due to the ambiguous convergence of QAOA.  In other words, the paper does not demonstrate any speed-up over classical computation.  Scott Aaronson is a quantum-computing expert.
  • [4] highlights that the paper never claims to be faster.  It never discusses running time; what is merely claimed is that the quantum circuit is very small.
  • [5] Bruce Schneier reminds us that Schnorr’s algorithm works well with small moduli but does not scale to larger ones.
  • [6] highlights that the attack would require a NISQ computer with gate-level fidelities of 99.999%, more than two orders of magnitude better than the best machines we have today.

Conclusion

Keep calm.  RSA 2048 is still safe for many years.  Nevertheless, it is key to be aware of the latest progress of post-quantum cryptography.  Do we have to switch to post-quantum cryptography?  Not right now, especially if you do not handle secrets that have to last for many decades.

References

[1]          B. Yan et al., “Factoring integers with sublinear resources on a superconducting quantum processor.” arXiv, Dec. 23, 2022.   Available: http://arxiv.org/abs/2212.12372

[2]          C. P. Schnorr, “Fast Factoring Integers by SVP Algorithms, corrected.” 2021.  Available: https://eprint.iacr.org/2021/933

[3]          S. Aaronson, “Cargo Cult Quantum Factoring,” Shtetl-Optimized, Jan. 04, 2023.  https://scottaaronson.blog/?p=6957

[4]          “Paper claims to break RSA-2048 with only 372 physical qubits.” https://groups.google.com/a/list.nist.gov/g/pqc-forum/c/AkfdRQS4yoY/m/3plDftUEAgAJ

[5]          “Breaking RSA with a Quantum Computer – Schneier on Security.” https://www.schneier.com/blog/archives/2023/01/breaking-rsa-with-a-quantum-computer.html

[6]          dougfinke, “Quantum Experts Debunk China Quantum Factoring Claims,” Quantum Computing Report, Jan. 06, 2023.  https://quantumcomputingreport.com/quantum-experts-debunk-china-quantum-factoring-claims

Breaching the Samsung S9 Keystore

Most Android devices implement an Android hardware-backed keystore.  Applications in the Rich Execution Environment (REE), i.e., the non-secure world, use a hardware root of trust and an application in the Trusted Execution Environment (TEE).  Usually, as all the cryptographic operations occur only in the trusted part, these keys should be safe.

Three researchers from Tel Aviv University demonstrated that this is not necessarily the case.  ARM’s TrustZone is one of the most widely used TEEs.  Each vendor must write its own Trusted Application (TA) that executes in the TrustZone and implements its keystore.  The researchers reverse-engineered the Samsung implementation for the S8, S9, S20, and S21.  They succeeded in extracting keys protected by the keystore.

The breach is not due to a vulnerability in TrustZone.  It is due to design errors in the TA.

When an REE application requests the generation of a new key, the TA returns a wrapped key, i.e., a key encrypted with a key stored in the root of trust.  In a simplified explanation, the wrapped key is the newly generated key, AES-GCM-encrypted with an IV provided by the REE application and a Hardware-Derived Key (HDK) derived from some information supplied by the REE application and the hardware root-of-trust key.

In other words, the REE application provides the IV and some of the data that generate the HDK.  AES-GCM is a stream cipher (it uses AES-CTR), and thus it is sensitive to IV reuse.  With a stream cipher, you must never reuse an IV with the same key.  Otherwise, a single known plaintext/ciphertext pair reveals the keystream, and with it any other message encrypted under that key and IV.  In this case, the attacker controls the IV used to encrypt the wrapped key and can provide the same `seed` for generating the HDK.   Game over!
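
Here is a minimal sketch of why this is fatal (my own toy code using the Python `cryptography` package, not Samsung’s implementation; it models the AES-CTR keystream that underlies AES-GCM): wrapping a key the attacker already knows, under the same IV and HDK, reveals the keystream and therefore the victim’s wrapped key.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def ctr_encrypt(key: bytes, iv: bytes, plaintext: bytes) -> bytes:
    encryptor = Cipher(algorithms.AES(key), modes.CTR(iv)).encryptor()
    return encryptor.update(plaintext) + encryptor.finalize()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

hdk = os.urandom(32)       # stands in for the hardware-derived key
iv = os.urandom(16)        # attacker-chosen IV, deliberately reused

victim_key = os.urandom(32)            # the key wrapped for the victim
known_key = b"A" * 32                  # a key the attacker asks to wrap

c_victim = ctr_encrypt(hdk, iv, victim_key)
c_known = ctr_encrypt(hdk, iv, known_key)   # same key + IV = same keystream

# keystream = c_known XOR known_key, so the victim's key falls out
# without ever learning the HDK.
recovered = xor(xor(c_victim, c_known), known_key)
assert recovered == victim_key
```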

In the S20 and S21, the key-derivation function adds some randomness for each new HDK, so the attacker can no longer generate the same HDK.  Unfortunately, the S20 and S21 TA still contains the old derivation function.  The researchers found a way to downgrade to the S9 derivation.  Once more, game over!

Lessons:

  1. Never reuse an IV with a stream cipher.  Do not trust the user to generate a fresh IV; do it yourself.
  2. A Trusted Execution Environment does not protect from a weak/wicked “trusted” application. 
  3. If not necessary, remove all unused software from the implementation.  You reduce the attack surface.

Reference

A. Shakevsky, E. Ronen, and A. Wool, “Trust Dies in Darkness: Shedding Light on Samsung’s TrustZone Keymaster Design,” Cryptology ePrint Archive, Report 2022/208, 2022.  Available: http://eprint.iacr.org/2022/208

Black Hat 2021: my preferred talks

Last week, I attended Black Hat 2021. It was a hybrid conference, i.e., both on-site and virtual. As a consequence, there were only four concurrent “physical” talks at any moment. The number of attendees was far lower than in 2019. I attended the physical talks exclusively, with a focus on hacking.

The two talks I enjoyed the most were the following.


Breaking the Isolation: Cross-Account AWS Vulnerabilities by Shir Tamari and Ami Luttwak

They explored AWS cross-account services such as CloudTrail or the Serverless Application Repository. Such services store data in, or read data from, a customer-designated location (for instance, an S3 bucket) on behalf of several accounts. They discovered that the default security policies identified the calling service but not the calling account. Thus, it was possible to use CloudTrail to store files in an S3 bucket that you did not control.

AWS has fixed the issue. Unfortunately, it is up to each customer to update the policies accordingly; otherwise, the holes are still present.
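
To make the customer-side fix concrete, here is a hedged sketch (the bucket name, account ID, and trail ARN are hypothetical) of a CloudTrail bucket policy that pins the expected source account and trail with the aws:SourceAccount and aws:SourceArn condition keys, applied with boto3:

```python
import json
import boto3

# Hypothetical names: replace with your own bucket, account, and trail.
BUCKET = "example-cloudtrail-logs"
ACCOUNT_ID = "111122223333"
TRAIL_ARN = f"arn:aws:cloudtrail:us-east-1:{ACCOUNT_ID}:trail/example-trail"

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AWSCloudTrailWriteOnlyForMyAccount",
        "Effect": "Allow",
        "Principal": {"Service": "cloudtrail.amazonaws.com"},
        "Action": "s3:PutObject",
        "Resource": f"arn:aws:s3:::{BUCKET}/AWSLogs/{ACCOUNT_ID}/*",
        # Without these conditions, any AWS account could point its own
        # trail at this bucket and have the service write into it.
        "Condition": {
            "StringEquals": {
                "aws:SourceAccount": ACCOUNT_ID,
                "aws:SourceArn": TRAIL_ARN,
            }
        },
    }],
}

boto3.client("s3").put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```
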
Fixing a Memory Forensics Blind Spot: Linux Kernel Tracing by Andrew Case and Golden Richard

eBPF is a technology that makes it easy to hook into the Linux kernel tracing system. That tracing system is mighty: it allows reading registers, hooking system calls, and much more, from userland! Powerful but nasty.

They presented extensions of their open-source tools to list the hooked calls and other stealthy operations.

I was not aware of eBPF. It opened my eyes and scared me. An earlier talk, With Friends Like eBPF, Who Needs Enemies?, presented a rootkit based on eBPF. Unfortunately, I did not attend it; had I known about eBPF, I would have. It seems that there were three other eBPF-based talks at DEF CON 2021.
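
For readers who, like me, are discovering eBPF, here is a minimal tracing sketch using the bcc Python bindings (my own toy example, not the speakers’ tooling; it assumes bcc is installed and requires root): it hooks the execve syscall from userland and prints a line for every program launched on the machine.

```python
from bcc import BPF

# A tiny eBPF program, compiled and loaded into the kernel by bcc.
prog = r"""
int trace_execve(void *ctx) {
    bpf_trace_printk("execve observed\n");
    return 0;
}
"""

b = BPF(text=prog)
# Resolve the kernel symbol of the execve syscall and attach a kprobe to it.
b.attach_kprobe(event=b.get_syscall_fnname("execve"), fn_name="trace_execve")
print("Tracing execve... Ctrl-C to quit")
b.trace_print()  # streams the kernel trace pipe to stdout
```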


In the coming weeks, I will listen to some virtual talks and report on the ones I enjoyed.

The fall of Titans?

Two French security researchers, Victor Lomne and Thomas Roche, published an impressive 55-page report in January.  The report describes a successful electromagnetic side-channel attack on Google’s Titan security key.  They succeeded in extracting the ECDSA private key.

The Titan security key is Google’s FIDO U2F-compliant hardware key.  It is functionally similar to Yubikeys.  Its purpose is to serve as a physical token for Two-Factor Authentication (2FA).

Mounting side-channel attacks on secure components like smart cards is “common.”  It usually assumes that the attacker has samples to analyze and can store arbitrary known secrets in them.  This knowledge provides reference points during the attack.  Once the attack is fine-tuned on samples with a known secret, it is possible to extract the target’s secret.  Unfortunately, this is not possible in this specific use case.  When registering, the token generates its ECDSA key pair, and the private key never leaves the token.  This is why it is not possible to back up such tokens.  Thus, it is possible to purchase Titan tokens, but not to feed them an arbitrary key pair.  The researchers used an interesting methodology to overcome this issue.

They first identified the secure element used by Titan.  They removed the plastic cover and identified an NXP A7005.  They found out that some JavaCards have characteristics similar to the NXP A7005.  Thus, they used JavaCards based on NXP P5x chips.

Using a 500 µm coil with 10 µm-precision micromanipulators, they measured the EM signature of the ECDSA signing operation on both the Titan and the JavaCard.  The comparison of the two EM signatures confirmed that they use the same implementation.  Thus, they concentrated their effort on the JavaCard to design the exploit.  They reverse-engineered the implementation using the EM traces to guess the calculations.  They discovered a sensitive leakage and could mount a complex side-channel attack.  The document details the complexity of the attack.  With 4,000 sampled signatures, representing 2 TB of data, they succeeded in extracting the key they had fed to the smart card.

Then, they implemented the same attack on the Titan chip.  They increased the number of samples to 6,000, for 3 TB of data, and succeeded in extracting the private key.
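
To see why leakage around the ECDSA nonce is so devastating, here is a much-simplified sketch of the underlying algebra (my own illustration; NinjaLab’s actual attack recovers only partial nonce information from the EM traces and then uses lattice techniques): if the attacker learns the nonce k used for a single signature, the private key follows from one modular equation.

```python
import secrets

# Order of the NIST P-256 group (the curve used for FIDO U2F ECDSA keys).
N = 0xffffffff00000000ffffffffffffffffbce6faada7179e84f3b9cac2fc632551

# An ECDSA signature (r, s) on hash h with nonce k satisfies
#   s = k^-1 * (h + r*d) mod N.
# Only that algebra is exercised here: r is normally the x-coordinate of k*G,
# but any nonzero value illustrates the key recovery just as well.
d = secrets.randbelow(N - 1) + 1   # the private key
k = secrets.randbelow(N - 1) + 1   # the per-signature nonce
h = secrets.randbelow(N)           # the message hash
r = secrets.randbelow(N - 1) + 1   # stand-in for the x-coordinate of k*G
s = (pow(k, -1, N) * (h + r * d)) % N

# The attacker sees (r, s, h) and learns k through the side channel.
recovered_d = (pow(r, -1, N) * (s * k - h)) % N
assert recovered_d == d
```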

How devastating is this attack?

  • The specialized equipment costs about 10K€ (about $12K), and the needed skill set is high.  On the Common Criteria (CC) scale, the attack has a rating of 27, corresponding to attackers with moderate attack potential.  The targeted chips are old and no longer covered by CC certificates.
  • The attack requires the attacker to hold the Titan key for several hours to collect the 6,000 samples.  It is not possible to clone it.
  • The attack requires opening the plastic casing, and the operation seems destructive.  For stealthiness, the attacker must be able to repackage the chip in a legitimate-looking case.
  • The attacker needs to return the “borrowed,” re-cased key to the legitimate owner; otherwise, the owner may detect the loss and block access.
  • This attack impacts not only the Titan token but a long list of components.

Thus, we may forecast that such an attack would be effective only against very high-profile targets.

Conclusions

The attack is an impressive piece of work.  Reading the document gives an overview of the issues a side-channel attacker must solve.  It is extremely interesting.

Diversity of implementation across different products is a costly but secure option.

Continue to use your 2FA tokens.  It is more secure than not using them.  If you lose your 2FA token, switch your accounts to a new one as soon as possible (which you should do anyway, independently of this attack).

Use 2FA tokens as much as possible.

Reference

Lomne, Victor, and Thomas Roche. “A Side Journey to Titan.” NinjaLab, January 7, 2021. https://ninjalab.io/wp-content/uploads/2021/01/a_side_journey_to_titan.pdf.

VoltPillager: a new attack on SGX

In the past, several voltage-based fault-injection attacks on Intel SGX were successful.  These attacks used software commands to control the voltage of the CPU.  Intel’s mitigation was to disable access to these commands at the BIOS level.

A team of researchers from the University of Birmingham succeeded in performing the same type of attack via external hardware, thereby bypassing the mitigation.  They named the attack VoltPillager [1].

On the motherboard, the CPU controls its voltage by sending commands to a programmable voltage controller over an undocumented three-wire serial bus: the Software Voltage Identification (SVID) bus.  The bus is somewhat similar to I2C.  The researchers reverse-engineered its protocol.  SVID has no protection.  Furthermore, the bus uses open-drain outputs; thus, it is “easy” to piggy-back on SVID.  Open-drain outputs require pull-up resistors to bring the signal to ‘1’; therefore, the lines are easily accessible and can be tapped with jumpers or soldering.

They used a $20 Teensy 4.0 microcontroller [2].  To synchronize the Teensy’s commands with the target CPU’s activity, the team used an RS232 UART rather than the USB port: RS232 has minimal jitter compared to USB, enabling more accurate synchronization.  Timing accuracy is critical in fault-injection attacks.  The VoltPillager hardware allows issuing commands that precisely shape the undervolting waveform.  Hardware-driven undervolting is more precise and accurate than the equivalent software-driven fault-injection attacks.

They succeeded in reproducing the exploits disclosed by the Plundervolt attack [3].  Due to its increased accuracy, VoltPillager required 50 times fewer iterations than Plundervolt.  Furthermore, the team discovered a new class of attacks: undervolting appears to briefly delay memory writes to the cache, and this delay opens many opportunities for cache attacks.
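
As a reminder of why a single glitched multiplication matters, here is a hedged sketch of the classic Bellcore fault attack on RSA-CRT signatures, the kind of exploit Plundervolt demonstrated inside SGX (toy key size, my own code, not VoltPillager’s): one faulty signature is enough to factor the private modulus.

```python
from math import gcd

# Toy RSA-CRT key (real keys use primes of 1024 bits or more).
p, q = 1009, 1013
n = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))
dp, dq = d % (p - 1), d % (q - 1)
q_inv = pow(q, -1, p)

def sign_crt(m: int, fault: bool = False) -> int:
    sp = pow(m, dp, p)        # half-size exponentiation mod p
    sq = pow(m, dq, q)        # half-size exponentiation mod q
    if fault:
        sp ^= 1               # a single corrupted bit, e.g. from undervolting
    h = (q_inv * (sp - sq)) % p
    return sq + q * h         # CRT recombination

m = 123456
good = sign_crt(m)
assert pow(good, e, n) == m   # the correct signature verifies

bad = sign_crt(m, fault=True)            # one faulty signature...
factor = gcd((pow(bad, e, n) - m) % n, n)
assert factor == q and n // factor == p  # ...reveals the secret factors
```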

The bad news is that VoltPillager bypasses the mitigation Intel introduced against Plundervolt-like attacks, because it talks to the voltage controller directly.  As explained in the paper, protecting the bus would not solve the issue: a determined attacker could directly drive the pulse-width modulation that manages the actual voltage.  The most efficient mitigations would use techniques inspired by the smart-card world, such as hardware voltage-glitch detectors or redundant execution of critical code.

Of course, VoltPillager requires physical access to the CPU.  Intel’s answer to VoltPillager is:

“… opening the case and tampering of internal hardware to compromise SGX is out of scope for SGX threat model.” 

This assumption of the threat model may be valid in many scenarios.  Unfortunately, tampering with hardware is in the scope of the content protection threat model.

A thrilling paper that reminds us that software runs on hardware and hardware is difficult to secure.

[1]          Z. Chen, G. Vasilakis, K. Murdock, E. Dean, D. Oswald, and F. Garcia, “VoltPillager: Hardware-based fault injection attacks against Intel SGX Enclaves using the SVID voltage scaling interface.” Nov. 2020, [Online]. Available: https://www.usenix.org/system/files/sec21summer_chen-zitai.pdf.

[2]          “Teensy USB Development Board.” https://www.pjrc.com/teensy/.

[3]          “Plundervolt.” https://plundervolt.com/.

My preferred papers at Black Hat 2019

I attended the briefings at Black Hat 2019.  All the presentations I attended were engaging.  Nevertheless, here is my feedback on my preferred ones.  The links give access to the corresponding slide decks.

A Decade After Bleichenbacher ’06, RSA Signature Forgery Still Works (Sze Yiu Chau)

One of the mitigations to Bleichenbacher’s attack is to use a large public exponent e. Unfortunately, in some standards e is still small, typically 3.  But even with larger exponents, forgery is possible due to vulnerable verification software.

The forgery exploits the fact that many verifiers do not properly check the padding, the parameter lengths, or the trailing garbage.

He provided a list of vulnerable libraries (which are now fixed).
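
To make the failure mode concrete, here is a hedged Python sketch of the classic e = 3 variant (my own reconstruction, not the speaker’s code): a verifier that parses the PKCS#1 v1.5 padding from the left and ignores trailing garbage accepts a “signature” built from a simple integer cube root, with no private key involved.

```python
import hashlib
import secrets

# ASN.1 DigestInfo prefix for SHA-256 (19 bytes).
SHA256_PREFIX = bytes.fromhex("3031300d060960864801650304020105000420")

def icbrt(x: int) -> int:
    """Integer cube root, rounded down."""
    lo, hi = 0, 1 << (x.bit_length() // 3 + 2)
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if mid ** 3 <= x:
            lo = mid
        else:
            hi = mid - 1
    return lo

def vulnerable_verify(sig: int, n: int, message: bytes) -> bool:
    """Checks the padding from the left but ignores what follows the digest."""
    k = (n.bit_length() + 7) // 8
    em = pow(sig, 3, n).to_bytes(k, "big")
    if em[:2] != b"\x00\x01":
        return False
    i = 2
    while i < k and em[i] == 0xFF:
        i += 1
    if i == 2 or i >= k or em[i] != 0x00:
        return False
    digest_info = SHA256_PREFIX + hashlib.sha256(message).digest()
    # BUG: does not check that the digest sits at the very end of the block.
    return em[i + 1:i + 1 + len(digest_info)] == digest_info

def forge(message: bytes, n: int) -> int:
    """Forge a signature for e = 3 against the broken verifier above."""
    k = (n.bit_length() + 7) // 8
    prefix = (b"\x00\x01" + b"\xff" * 8 + b"\x00"
              + SHA256_PREFIX + hashlib.sha256(message).digest())
    garbage_bits = 8 * (k - len(prefix))
    # Fill the trailing 'garbage' region with ones, then take the cube root:
    # cubing the result only disturbs the garbage region, never the prefix.
    target = (int.from_bytes(prefix, "big") << garbage_bits) | ((1 << garbage_bits) - 1)
    return icbrt(target)

# Any 2048-bit modulus works for the demo: sig**3 stays below n, so the
# modular reduction never kicks in and no private key is ever involved.
n = secrets.randbits(2048) | (1 << 2047) | 1
msg = b"Pay Mallory $1,000,000"
assert vulnerable_verify(forge(msg, n), n, msg)
```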

Lessons:

  • Check everything. No corner cutting.
  • Parsing in security should be bulletproof.  The complexity of the structures and syntax may become an issue.  Complexity is the enemy of security.

Lessons from 3 years of crypto and blockchain audit (Jean-Philippe Aumasson)

Jean-Philippe is a Kudelski Security expert.

He provided an overview of the most common mistakes seen in the field.  Most are well known.  A few that I liked:

  • Weak key derivation from a password.  Use a real key-derivation function (see the sketch after this list).
  • Avoid using panic when the error is recoverable; otherwise, it may become a potential DoS.
  • There is no way to securely erase sensitive memory in garbage-collected languages (for instance, Go).
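
On the first bullet, here is a minimal sketch of deriving a key from a password with a memory-hard KDF, using the scrypt function from Python’s standard library (the parameters are illustrative, not a recommendation):

```python
import hashlib
import os

def derive_key(password: str, salt: bytes = b"") -> tuple:
    """Derive a 32-byte key from a password with scrypt (memory-hard KDF)."""
    if not salt:
        salt = os.urandom(16)      # fresh random salt for every password
    key = hashlib.scrypt(password.encode(), salt=salt,
                         n=2**14, r=8, p=1, dklen=32)
    return key, salt

# A bare hash such as sha256(password) is cheap to brute-force; scrypt is not.
key, salt = derive_key("correct horse battery staple")
assert derive_key("correct horse battery staple", salt)[0] == key
```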

His preferred language for crypto is Rust.

The slide deck is an excellent refresher of what not to do.  Practitioners should have a look.

Breaking Encrypted Databases: Generic Attacks on Range Queries (Marie-Sarah Lacharite)

She presented how to use access-pattern leakage and volume leakage to infer the content of a database, even when it is encrypted.

Independently of the presented attacks, the researcher reminded us that when using a common encryption key (and the same IV) with server-side encryption, it is still possible to perform a range query because the same cleartext always generates the same ciphertext.  This may be a PII issue.

There are some partial solutions to this problem:

  • Order preserving encryption solves the issue
  • Order revealing encryption is even better

Access-pattern leakage reveals which records are returned by each request. She used PQ-trees to rebuild the order of the records from the observed access patterns. For N values in the database, about N log N queries were needed.

Volume leakage is easier to exploit because the attacker may just monitor the communication and count the returned records. For N values in the database, about N² log N observed queries are needed.

Some possible mitigations:

  • Restricting query types
  • Dummy records
  • Dummy values

The last two solutions may introduce some accuracy issues if the dummies are not filtered out.

Everybody be Cool, This is a Robbery! (Gabriel Campana, Jean-Baptiste Bédrune)

They studied the actual security of Hardware Security Modules (HSMs).  HSMs are rarely studied because they are expensive and, if attacked, they erase their secrets.

They attacked it through the PKCS#11 API.

The targeted HSM ran a ten-year-old version of Linux. Furthermore, every process ran as root, and there was no secure boot.  The researchers used fuzzing to find 14 vulnerabilities.  By exploiting a few of them, they could gain access to the private root key!

Lesson:

Hardware tamper resistance and a controlled API are not enough.  The software should assume that the enclave can be breached and be protected accordingly.

Breaking Samsung’s ARM TrustZone (Maxime Peterlin, Alexandre Adamski, Joffrey Guilbon)

The Samsung TrustZone implementation studied here runs only on Samsung’s Exynos chips, not on Qualcomm’s Snapdragon.

The secure OS is Kinibi, by Trustonic.

Once more, the adversaries used a fuzzer (AFL-based).

Currently, the trustlets have neither ASLR nor PIE (Position Independent Executable). They used a buffer overflow on a trustlet and a vulnerable trusted driver to get inside the TrustZone. From there, they attacked mmap to access Kinibi.

They were able to read and write memory arbitrarily. For instance, they accessed the master key both from EL3 and from EL1.  With the master key, the attacker has access to all the secrets in the TrustZone.

Lesson:

Once more, protect code within the secure enclave.  Defense in depth is critical.