Black Hat 2021: my preferred talks

Last week, I attended Black Hat 2021. It was a hybrid conference, i.e., both on-site and virtual. As a consequence, there were only four concurrent "physical" talks at any moment, and the number of attendees was far lower than in 2019. I attended the physical talks exclusively, with a focus on hacking.

The two following talks are the ones I enjoyed most:


Breaking the Isolation: Cross-Account AWS Vulnerabilities by Shir Tamari and Ami Luttwak
They explored AWS cross-account services such as CloudTrail or the Serverless Application Repository. Such services store data in, or read data from, the same location on behalf of several accounts. They discovered that the security policy configurations did not constrain which accounts could be referenced. Thus, it was possible, for instance, to use CloudTrail to store files in an S3 bucket that you did not control.
AWS has fixed the issue. Unfortunately, it is up to each customer to update their policies accordingly; otherwise, the holes remain.
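As an illustration, a bucket policy along the following lines scopes the service's write access to a single account. This is a hedged sketch based on AWS's generic confused-deputy guidance, not the exact policy from the talk; the bucket name and account ID are placeholders.

```python
# Sketch of a scoped S3 bucket policy, expressed as a Python dict.
# Placeholder bucket name and account ID (111122223333).
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowOnlyMyCloudTrail",
        "Effect": "Allow",
        "Principal": {"Service": "cloudtrail.amazonaws.com"},
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::example-trail-bucket/AWSLogs/111122223333/*",
        # Without this condition, ANY account's CloudTrail may write here.
        "Condition": {"StringEquals": {"aws:SourceAccount": "111122223333"}},
    }],
}
```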
Fixing a Memory Forensics Blind Spot: Linux Kernel Tracing by Andrew Case and Golden Richard
eBPF is a technology that makes it easy to access the Linux kernel tracing system. This tracing system is mighty. It allows reading registers, hooking syscalls, etc. From userland!! Powerful but nasty.
They presented some extensions of their open-source tools to list the hooked calls and other stealthy operations.
I was not aware of eBPF. This talk opened my eyes and scared me. An earlier talk, "With Friends Like eBPF, Who Needs Enemies?", presented an eBPF-based rootkit. Unfortunately, I did not attend it; had I known about eBPF, I would have. It seems that there were three other eBPF-based talks at DEF CON 2021.
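To give an idea of how low the entry barrier is, here is a minimal tracing sketch, assuming the bcc Python bindings (my illustration, not code from the talks). It hooks the openat() syscall from userland and must run as root with kernel headers installed.

```python
from bcc import BPF

# Tiny eBPF program: print one trace line per openat() call.
program = r"""
int trace_openat(struct pt_regs *ctx) {
    bpf_trace_printk("openat() called\n");
    return 0;
}
"""

b = BPF(text=program)
b.attach_kprobe(event=b.get_syscall_fnname("openat"), fn_name="trace_openat")
b.trace_print()   # streams the kernel trace pipe until Ctrl-C
```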


In the coming weeks, I will listen to some virtual talks and report on the ones I enjoyed.

The fall of Titans?

Two French security researchers, Victor Lomne and Thomas Roche, published an impressive 55-page report in January. The report describes a successful electromagnetic side-channel attack on Google's Titan security key. They succeeded in extracting the ECDSA private key.

The Titan Security Key is a FIDO U2F-compliant key sold by Google. It is functionally similar to YubiKeys. Its purpose is to serve as a physical token for Two-Factor Authentication (2FA).

Mounting side-channel attacks on secure components like smart cards is "common." It usually assumes that the attacker has samples to analyze and can store arbitrary known secrets in them. This knowledge provides reference points during the attack. Once the attack is fine-tuned on samples holding a known secret, it is possible to extract the target's secret. Unfortunately, this is not true in this specific use case. At registration, the token generates its ECDSA key pair, and the private key never leaves the token. This is why such tokens cannot be backed up. Thus, it is possible to purchase Titan tokens, but not to feed them an arbitrary key pair. The researchers used an interesting methodology to overcome this issue.

They first identified the secure element used by Titan. After removing the plastic cover, they identified an NXP A7005. They found out that some JavaCards have characteristics similar to the NXP A7005. Thus, they used JavaCards based on NXP P5x chips.

Using a 500 µm coil with 10 µm-precision micromanipulators, they measured the EM signature of the ECDSA signing operation on both the Titan and the JavaCard. The comparison of the two EM signatures confirmed that they used the same implementation. Thus, they concentrated their effort on the JavaCard to design the exploit. They reverse-engineered the implementation using the EM traces to guess the calculations, discovered a sensitive leakage, and could mount a complex side-channel attack. The document details the complexity of the attack. With 4,000 sampled signatures, representing 2 TB of data, they succeeded in extracting the key they had fed to the smart card.
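The sensitive leakage concerns the secret nonce used in each ECDSA signature; the report recovers partial nonce information over many traces and then applies lattice techniques. To see why any nonce leakage is fatal, here is a minimal sketch in pure Python 3.8+ of the underlying algebra: a single fully known nonce reveals the private key. (In real ECDSA, r is the x-coordinate of kG; only its public value matters here, so it is modeled as an arbitrary value.)

```python
import hashlib
import secrets

# Group order of NIST P-256, the curve used by FIDO U2F keys.
n = 0xFFFFFFFF00000000FFFFFFFFFFFFFFFFBCE6FAADA7179E84F3B9CAC2FC632551

d = secrets.randbelow(n - 1) + 1    # private key (never leaves the token)
k = secrets.randbelow(n - 1) + 1    # per-signature secret nonce
h = int.from_bytes(hashlib.sha256(b"challenge").digest(), "big") % n

# Stand-in for r = x(kG): any nonzero public value works for the algebra.
r = pow(k, 3, n)

s = pow(k, -1, n) * (h + r * d) % n  # the signature is the pair (r, s)

# If the nonce k of a single signature leaks, the private key follows:
#   s = k^-1 (h + r d)  =>  d = r^-1 (s k - h)  (mod n)
d_recovered = pow(r, -1, n) * (s * k - h) % n
assert d_recovered == d
```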

Then, they mounted the same attack on the Titan chip. They increased the number of samples to 6,000, representing 3 TB of data. They succeeded in extracting the private key.

How devastating is this attack?

  • The specialized equipment costs about €10K (about $12K), and the required skill set is high. On the Common Criteria (CC) scale, the attack has a rating of 27, corresponding to attackers with moderate attack potential. The affected chips are old and no longer covered by CC certificates.
  • The attack requires the attacker to possess the Titan key for several hours to collect the 6,000 samples; it is not possible to clone it quickly or covertly.
  • The attack requires opening the plastic casing, and the operation seems destructive. For stealthiness, the attacker must be able to repackage the chip in a legitimate-looking case.
  • The attacker needs to return the "borrowed," recased key to the legitimate owner; otherwise, the owner may detect the loss and block access.
  • This attack impacts not only the Titan token but a long list of components.

Thus, we may forecast that such an attack would be effective only against very high-profile targets.

Conclusions

The attack is an impressive piece of work. Reading the document gives an overview of the issues a side-channel attacker must solve. It is extremely interesting.

Diversity of implementation across different products is a costly but secure option.

Continue to use your 2FA tokens; it is more secure than not using them. If you lose your 2FA token, register a new one with your accounts as soon as possible (which you should do anyway, independently of this attack).

Use 2FA tokens as much as possible.

Reference

Lomne, Victor, and Thomas Roche. “A Side Journey to Titan.” NinjaLab, January 7, 2021. https://ninjalab.io/wp-content/uploads/2021/01/a_side_journey_to_titan.pdf.

VoltPillager: a new attack on SGX

In the past, several voltage-based fault-injection attacks on Intel SGX were successful. These attacks used software commands to control the voltage of the CPU. Intel's mitigation was to disable access to these commands at the BIOS level.

A team of researchers from the University of Birmingham succeeded in performing the same type of attack via external hardware, therefore bypassing the mitigation. They coined the attack VoltPillager [1].

On the motherboard, the CPU controls its voltage by sending commands to a programmable Voltage Controller over an undocumented three-wire serial bus: Serial Voltage Identification (SVID). The bus is somewhat similar to I2C. The researchers reverse-engineered its protocol. SVID has no protection. Furthermore, the bus uses open-drain outputs; thus, it is "easy" to piggyback on SVID. Open-drain outputs require pull-up resistors to bring the signal to '1'; therefore, the connection is easily accessible and can use jumpers or soldering.

They used a $20 Teensy 4.0 microcontroller [2]. To synchronize the Teensy's commands with the target CPU's activity, the team used an RS232 UART rather than the USB port: RS232 has minimal jitter compared to USB, enabling more accurate synchronization, and timing accuracy is critical in fault-injection attacks. The VoltPillager hardware can issue commands that precisely program the shape of the undervolting waveform. Hardware-driven undervolting is more precise and accurate than the equivalent software-driven fault-injection attacks.

They succeeded in reproducing the exploits disclosed by the Plundervolt attack [3]. Due to its increased accuracy, VoltPillager required 50 times fewer iterations than Plundervolt. Furthermore, the team discovered a new class of attacks: undervolting seems to briefly delay memory writes to the cache, and this potential delay opens many opportunities for cache attacks.

The bad news is that VoltPillager bypasses the mitigation Intel introduced against Plundervolt-like attacks, because it targets the Voltage Controller directly. As explained in the paper, protecting the bus would not solve the issue: a determined attacker could directly drive the Pulse Width Modulation that manages the actual voltage. The most efficient mitigations would use techniques inspired by the smart card world, such as hardware detectors of voltage glitches or redundant critical code.

Of course, VoltPillager requires physical access to the CPU. Intel's answer to VoltPillager is:

“… opening the case and tampering of internal hardware to compromise SGX is out of scope for SGX threat model.” 

This threat-model assumption may be valid in many scenarios. Unfortunately, tampering with hardware is in scope for the content protection threat model.

A thrilling paper that reminds us that software runs on hardware and hardware is difficult to secure.

[1]          Z. Chen, G. Vasilakis, K. Murdock, E. Dean, D. Oswald, and F. Garcia, “VoltPillager: Hardware-based fault injection attacks against Intel SGX Enclaves using the SVID voltage scaling interface.” Nov. 2020, [Online]. Available: https://www.usenix.org/system/files/sec21summer_chen-zitai.pdf.

[2]          “Teensy USB Development Board.” https://www.pjrc.com/teensy/.

[3]          “Plundervolt.” https://plundervolt.com/.

My preferred papers at Black Hat 2019

I attended the briefings at Black Hat 2019. All the presentations I attended were engaging. Nevertheless, here is my feedback on my preferred ones. Each link gives access to the corresponding slide deck.

A Decade After Bleichenbacher ’06, RSA Signature Forgery Still Works (Sze Yiu Chau)

One of the mitigations against Bleichenbacher's attack is to use a large public exponent e. Unfortunately, in some standards, e is still small, typically 3. But even with larger exponents, forgery is possible due to vulnerable verification software.

Forgery exploits the fact that many verifiers do not check the garbage, parameter lengths, or padding.

He provides a list of vulnerable libraries (that are now fixed).
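To make the attack concrete, here is a minimal, self-contained sketch of the classic e = 3 forgery against a lax verifier that checks the padding prefix and digest but ignores trailing garbage. This is my illustration of the technique, not code from the talk; the message and modulus size are arbitrary.

```python
import hashlib

def icbrt_ceil(n: int) -> int:
    """Smallest integer x such that x**3 >= n."""
    x = 1 << -(-n.bit_length() // 3)          # initial guess >= cbrt(n)
    while True:
        y = (2 * x + n // (x * x)) // 3       # Newton step for cube roots
        if y >= x:
            break
        x = y
    while x * x * x < n:
        x += 1
    while (x - 1) ** 3 >= n:
        x -= 1
    return x

MOD_BITS = 3072                # e = 3; the modulus value is never needed
message = b"Forged message"

# DER prefix of a SHA-256 DigestInfo, followed by the digest itself.
DI_SHA256 = bytes.fromhex("3031300d060960864801650304020105000420")
digest_info = DI_SHA256 + hashlib.sha256(message).digest()

# Malformed encoding: FF run far too short, garbage tolerated at the end.
prefix = b"\x00\x01\xff\x00" + digest_info
target = int.from_bytes(prefix, "big") << (MOD_BITS - 8 * len(prefix))

sig = icbrt_ceil(target)
cube = sig ** 3                # equals sig^3 mod n for any 3072-bit n

# A verifier that stops checking after the digest accepts the forgery:
encoded = cube.to_bytes(MOD_BITS // 8, "big")
assert encoded.startswith(prefix)
```

The rounding error of the cube root lands entirely in the low "garbage" bits, which a sloppy verifier never inspects.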

Lessons:

  • Check everything. No corner cutting.
  • Parsing in security should be bulletproof.  The complexity of the structures and syntax may become an issue.  Complexity is the enemy of security.

Lessons from 3 years of crypto and blockchain audit (Jean-Philippe Aumasson)

Jean-Philippe is a Kudelski Security expert.

He provided an overview of the most commonly encountered mistakes. Most are well known. A few that I liked:

  • Weak key derivation from a password: use a real password-based derivation function (see the sketch after this list).
  • Avoid using panic when the error is recoverable; otherwise, it may become a potential DoS.
  • There is no way to securely erase sensitive memory in garbage-collected languages (for instance, Go).
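On the first bullet, a minimal sketch of the difference, using only Python's standard library (PBKDF2 shown because it ships with Python; scrypt or Argon2 are stronger choices):

```python
import hashlib
import os

password = b"correct horse battery staple"

# Naive and weak: one fast hash, trivially brute-forced on GPUs.
weak_key = hashlib.sha256(password).digest()

# Better: a real password-based KDF with a random salt and a high
# iteration count to make brute force expensive.
salt = os.urandom(16)
key = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000, dklen=32)
```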

His preferred language for crypto is Rust.

The slide deck is an excellent refresher of what not to do.  Practitioners should have a look.

Breaking Encrypted Databases: Generic Attacks on Range Queries (Marie-Sarah Lacharite)

She presented how to use access-pattern leakage and volume leakage to guess the content of a database, even an encrypted one.

Independently of the presented attacks, the researcher reminded the audience that when using a common encryption key (and the same IV) with server-side encryption, it is still possible to perform a range query because the same cleartext always generates the same ciphertext. This may be a PII issue.
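Here is a minimal sketch of that determinism, assuming the third-party cryptography package (my illustration, not the speaker's code):

```python
# Deterministic encryption (fixed key and fixed IV) leaks equality.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key, iv = os.urandom(16), bytes(16)    # the SAME IV reused for every record

def det_encrypt(value: bytes) -> bytes:
    enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    return enc.update(value.ljust(16, b"\x00")) + enc.finalize()  # toy padding

# Equal plaintexts yield equal ciphertexts, so anyone seeing the
# ciphertext column can match and count values without the key.
assert det_encrypt(b"alice") == det_encrypt(b"alice")
assert det_encrypt(b"alice") != det_encrypt(b"bob")
```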

There are some partial solutions to this problem:

  • Order-preserving encryption solves the issue
  • Order-revealing encryption is even better

Access-pattern leakage reveals which records are returned for each request. She used PQ trees to rebuild the order of the values from the observed access patterns. For N values in the database, about N log N queries were needed.

Volume leakage is easier to exploit because the attacker may just monitor the communication. For N values in the database, about N² log N observed queries are needed.

Some possible mitigations:

  • Restricting query types
  • Dummy records
  • Dummy values

The last two mitigations may introduce accuracy issues if the dummies are not filtered out.

Everybody be Cool, This is a Robbery! (Gabriel Campana, Jean-Baptiste Bédrune)

They studied the actual security of Hardware Security Modules (HSMs). HSMs are rarely studied because they are expensive and erase their secrets when attacked.

They attacked through the PKCS#11 API.

The targeted HSM used an old version of Linux (10 years old). Furthermore, every process ran as root, and there was NO secure boot. The researchers used fuzzing to find 14 vulnerabilities. By exploiting a few of them, they could gain access to the private root key!

Lesson:

Hardware tamper resistance and a controlled API are not enough. The software should assume that the enclave can be breached and be protected accordingly.

Breaking Samsung's ARM TrustZone (Maxime Peterlin, Alexandre Adamski, Joffrey Guilbon)

Samsung's TrustZone implementation runs only on Samsung's Exynos chips, not on Qualcomm's Snapdragon.

The secure OS is Kinibi by Trustonic.

Once more, the adversaries used a fuzzer (AFL-based).

Currently, the trustlets have neither ASLR nor PIE (Position Independent Executable) support. They used a buffer overflow on a trustlet and a vulnerable trusted driver to get inside the TrustZone. From there, they attacked mmap to access Kinibi.

They were able to read and write memory arbitrarily. For instance, they accessed the master key both from EL3 and from EL1. With the master key, the attacker has access to all the secrets in the TrustZone.

Lesson:

Once more, protect code within the secure enclave.  Defense in depth is critical.

Biometric Vein Recognition Hacked

Vein recognition is considered, together with iris recognition, to be among the most secure biometric systems. It is used in highly secure areas. Automatic Teller Machines are starting to use this technology, for instance, in Japan. This statement was valid until December 2018. At the famous German Chaos Communication Congress (35C3), Jan Krissler, also known as Starbug, and Julian Albrecht demonstrated a method (German video) to create a lure hand that defeats commercial systems.

Starbug is a well-known hacker in the field of biometrics. For instance, in 2016, he successfully faked the fingerprints of a German minister using high-resolution photos.

For about 20 years, vein recognition has been mainly a Japanese technology; Fujitsu and Hitachi are the two leaders. The network of veins is captured either by reflection from the palm or through transparency with Infrared (IR) light for fingers. The captured network is turned into minutiae, like a typical fingerprint.

The capture phase seems rather simple. The researchers removed the IR filter of a traditional high-end DSLR camera (in this case, a Nikon D600) with good lenses. They were able to get a proper capture up to 6 meters away with a flash. They also built a Raspberry Pi-based system that could be hidden in a device, for instance, a hand dryer. The captured image is processed by a Python script to generate a skeleton of the network of veins.
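As a rough idea of what such a script can look like, here is a hedged sketch using scikit-image; the file name, threshold, and filter sizes are my assumptions, not the authors' actual pipeline.

```python
import numpy as np
from skimage import filters, io, morphology

img = io.imread("ir_palm.png", as_gray=True)                # raw IR capture
veins = img < filters.threshold_local(img, block_size=51)   # veins appear dark
veins = morphology.remove_small_objects(veins, min_size=64) # drop speckle noise
skeleton = morphology.skeletonize(veins)                    # 1-pixel-wide network
io.imsave("skeleton.png", (skeleton * 255).astype(np.uint8))
```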

Once the skeleton is available, they build a fake hand (or finger) out of beeswax, covering the printed picture. They tried many different materials, but wax presented the best performance concerning transparency and diffraction of IR light; in other words, it best emulated skin.

Once the fake hand is available, the attacker has to use it on the detector. They performed a live demonstration, which highlighted that the lighting conditions are critical: the strong lighting of the scene spoiled the demonstration, and they had to shade the detector to succeed. On the other hand, the fake-finger demonstration went smoothly; that detector was a kind of tunnel. At the time of the presentation, Hitachi and Fujitsu had not yet reacted.

The attacked detectors had no liveness detection. As I highlighted in section 7.4.2 of "Ten Laws for Security," detecting the presence of a real living being behind the captured biometrics is necessary for robust systems. Unfortunately, such detection increases the complexity and cost of detectors.

Conclusion: once more, Law 1: Attackers will always find their way, was demonstrated.

Deep Learning: A Critical Appraisal (paper review)

Deep learning is becoming extremely popular. It is one of the most explored and exploited fields of Machine Learning. AlphaGo, Natural Language Processing, image recognition, and many more topics are iconic examples of its success. It is so successful that it seems to be becoming the golden answer to all our problems.

Gary Marcus, a respected ML/AI researcher, published an excellent critical appraisal of this technique. For instance, he lists ten challenges that deep learning faces. He concludes that deep learning is only one of the needed tools, not necessarily a silver bullet for all problems.

From the security point of view, here are the challenges that seem relevant:

“Deep Learning thus far works well as an approximation, but its answers often cannot be fully trusted.”

Indeed, the approach is probabilistic rather than heuristic. Thus, we must be cautious: currently, the systems are too easily fooled. This blog has reported several such attacks. Generative Adversarial Networks are promising attack tools.

“Deep learning presumes a largely stable world, in ways that may be problematic.”

Stability is not necessarily the prime characteristic of our environments.

“Deep learning thus far cannot inherently distinguish causation from correlation.”

This challenge is not related to security. Nevertheless, it is imperative to understand it. Deep learning detects correlation, and too often, people assume causation when they see correlation. This assumption is often false: causation may be real if the parameters are independent, but if they are both driven by an undisclosed parameter, it is this undisclosed parameter that produces the causation.
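A tiny numerical illustration of such a hidden common cause (a generic toy example, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.normal(size=10_000)              # hidden common cause
x = z + 0.3 * rng.normal(size=10_000)    # x is driven by z, not by y
y = z + 0.3 * rng.normal(size=10_000)    # y is driven by z, not by x

print(np.corrcoef(x, y)[0, 1])           # ~0.92, yet x does not cause y
```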

In any case, the paper is fascinating to read and helps keep an open, sane view of this field.

Marcus, Gary. “Deep Learning: A Critical Appraisal.” ArXiv:1801.00631 [Cs, Stat], January 2, 2018. http://arxiv.org/abs/1801.00631.

Watermarking Deep Neural Networks

Recently, an IBM team presented at ASIA CCS '18 a framework for watermarking Deep Neural Network (DNN) models. Similarly to what we do in the multimedia space, if a competitor uses or modifies a watermarked model, it should be possible to extract the watermark from the model to prove ownership.

In a nutshell, the DNN model is trained with the normal set of data, to produce the results that everybody would expect, and with an additional set of data (the watermarks) that produces an "unexpected" result known solely to the owner. To prove ownership, the owner submits the watermarks to the allegedly stolen model and verifies whether the observed result is the expected one.
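A minimal sketch of the verification step; model_predict, trigger_set, and expected_labels are my hypothetical placeholders, as the paper does not prescribe an API:

```python
import numpy as np

def verify_watermark(model_predict, trigger_set, expected_labels,
                     threshold=0.9):
    """Ownership check: does the suspect model answer the secret trigger
    set with the owner-chosen labels far more often than chance would?"""
    predictions = [model_predict(x) for x in trigger_set]
    match_rate = np.mean([p == y for p, y in zip(predictions,
                                                 expected_labels)])
    return match_rate >= threshold
```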

The authors explored three techniques in the field of image recognition:

  • Meaningful content: the watermarks are modified images, for instance, with a consistently visible mark added. The training enforces that the presence of this visible mark results in a given "unrelated" category.
  • Unrelated content: the watermarks are images that are totally unrelated to the task of the model; normally, they should be rejected, but the training enforces a known output for their detection.
  • Noisy content: the watermarks are images that embed a consistently shaped noise and produce a given known answer.

The approach is interesting. Some remarks inherited from the multimedia space:

  • The method of creating the watermarks must remain secret. If the attacker guesses the method, for instance, that the system uses a given logo, then the attacker may be able to wash the watermark: the attacker can retrain the watermarked model with self-generated watermarks that output an answer different from the one expected by the original owner. As the attacker has uncontrolled, unlimited access to the detector, the attacker can fine-tune the model until the detection rate is too low.
  • The framework is most probably too expensive for large-scale traitor tracing. Nevertheless, I am not sure whether traitor tracing at a large scale makes any sense.
  • The method is most probably robust against an oracle attack.
  • Some of the described methods were related to image recognition but could be ported to other tasks.
  • It is possible to embed several successive orthogonal watermarks.

An interesting paper to read, as it is probably the beginning of a new field. ML/AI security will be key in the coming years.

Reference

Zhang, Jialong, Zhongshu Gu, Jiyong Jang, Hui Wu, Marc Ph. Stoecklin, Heqing Huang, and Ian Molloy. “Protecting Intellectual Property of Deep Neural Networks with Watermarking.” In Proceedings of the 2018 on Asia Conference on Computer and Communications Security, 159–172. ASIACCS ’18. New York, NY, USA: ACM, 2018. https://doi.org/10.1145/3196494.3196550.