Apple’s Find My

At WWDC, Apple disclosed an interesting feature: “Find My.”  It will be possible to track the GPS location of your device if it is stolen or lost, and Apple will not know this location.  Here is how it works.

The prerequisite is that you have at least two Apple devices.  All your devices share a private key.  The trick is that instead of having one unique public key, the devices have a multitude of public keys linked to this private key.  This is possible, and numerous known cryptographic constructions can fulfill this requirement.

The device broadcasts its current public key via Bluetooth.  It broadcasts this beacon even while turned off.  Any nearby Apple device may catch the beacon.  The receiving device then encrypts its current GPS location with the broadcast public key.  It sends the encrypted location, as well as the cryptographic hash of the public key, to Apple’s server.  Of course, the public key changes periodically.  The rotation period has not been disclosed.

If you want to locate one of your devices, you trigger the request from another of your devices.  It sends the hash of the public key to the Apple server, which returns the encrypted location.  The requesting device has the private key and thus can decrypt the location.  Et voilà.

Of course, under the assumption that Apple does not hold the private key, only your devices can decrypt the location.  Normally, Apple can neither learn the location nor link the different related public keys together.
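
Since Apple has not published the protocol, here is only a minimal sketch of the rotating-key lookup idea in Python, using PyNaCl’s SealedBox for public-key encryption. Deriving the period keys from a shared master seed is my own assumption, not Apple’s (undisclosed) construction.

```python
# Minimal sketch of the rotating-key lookup idea; NOT Apple's actual,
# undisclosed protocol. Requires PyNaCl: pip install pynacl
import hashlib
from nacl.public import PrivateKey, SealedBox

MASTER_SEED = b"secret shared by all the owner's devices"  # assumption

def keypair_for_period(period: int) -> PrivateKey:
    """Derive the rotating key pair for a time period from the master
    seed; every device of the owner can recompute it."""
    seed = hashlib.sha256(MASTER_SEED + period.to_bytes(8, "big")).digest()
    return PrivateKey(seed)

period = 42
# Lost device: broadcasts only its current public key over Bluetooth.
beacon_pk = keypair_for_period(period).public_key

# Nearby finder: encrypts its GPS fix to the beacon key and uploads
# (hash(public key), ciphertext); the server learns neither location nor owner.
index = hashlib.sha256(bytes(beacon_pk)).hexdigest()
server_store = {index: SealedBox(beacon_pk).encrypt(b"48.8566,2.3522")}

# Owner: recomputes the period key, queries the server by hash, decrypts.
sk = keypair_for_period(period)
query = hashlib.sha256(bytes(sk.public_key)).hexdigest()
print(SealedBox(sk).decrypt(server_store[query]))  # b'48.8566,2.3522'
```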

Many questions were not answered in the presentation.  What is the frequency of key rotation?  Is there a limited number of public keys?  How does the requesting device know which hash to send?  We will have to wait for publications to dig deeper.

The idea is interesting.  It is also complex, thus subject to failures and vulnerabilities.  What would the system do if beacons broadcast the same public key from many different locations?  Would the collection of multiple related public keys not reveal some partial information, for instance, one of the exponents?

Biometric Vein Recognition Hacked

Biometric vein recognition, together with iris recognition, is considered among the most secure biometric systems. Vein recognition is used in highly secure areas. Automatic Teller Machines are starting to use this technology, for instance in Japan. This statement was valid until December 2018. At the famous German Chaos Communication Congress (35C3), Jan Krissler, also known as Starbug, and Julian Albrecht demonstrated a method (German video) to create a fake hand that defeats commercial systems.

Starbug is a well-known hacker in the field of biometrics. For instance, in 2016, he successfully faked the fingerprints of a German minister using high-resolution photos.

For about 20 years, vein recognition has been mainly a Japanese technology; Fujitsu and Hitachi are the two leaders. The network of veins is captured under infrared (IR) light, either by reflection from the palm or by transparency through the fingers. The captured network is turned into minutiae, like a typical fingerprint.

The capture phase seems rather simple. The researchers removed the IR filter of a traditional high-end DSLR camera (in this case, a Nikon D600) with good lenses. They were able to get a proper capture from up to 6 meters away with a flash. They also built a Raspberry Pi-based system that could be hidden inside a device, for instance, a hand dryer. The captured image is processed via a Python script to generate a skeleton of the network of veins (as illustrated by the figure below).
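
The researchers’ script was not published; the following Python sketch, using OpenCV and scikit-image, only illustrates what such a pipeline could look like, and all filtering parameters are assumptions.

```python
# Rough sketch of a vein-skeleton extraction pipeline (assumed, not the
# researchers' actual script). Requires: pip install opencv-python scikit-image
import cv2
import numpy as np
from skimage.morphology import skeletonize

img = cv2.imread("palm_ir.png", cv2.IMREAD_GRAYSCALE)  # hypothetical IR capture

# Boost local contrast so the darker veins stand out from the skin.
enhanced = cv2.createCLAHE(clipLimit=3.0, tileGridSize=(8, 8)).apply(img)

# Adaptive threshold: veins are darker than the surrounding tissue.
binary = cv2.adaptiveThreshold(enhanced, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY_INV, 31, 5)

# Thin the vein blobs down to a one-pixel-wide skeleton.
skeleton = skeletonize(binary > 0)
cv2.imwrite("vein_skeleton.png", (skeleton * 255).astype(np.uint8))
```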

Once the skeleton is available, they build a fake hand (or finger) using beeswax. The fake hand covers the printed picture. They tried many different materials, but the wax offered the best performance concerning transparency and diffraction of IR light; in other words, it best emulated skin.

Once the fake hand is available, the attacker has to use it on the detector. They performed a live demonstration. The demonstration highlighted that the lighting conditions were critical. The strong lighting of the scene spoiled the demonstration, and they had to shade the detector to succeed. On the other hand, the fake finger demonstration went smoothly; that detector was a kind of tunnel. At the time of the presentation, Hitachi and Fujitsu had not yet reacted.

The attacked detectors had no liveness detection. As I highlighted in section 7.4.2 of “Ten Laws for Security,” detecting the presence of a real living being behind the captured biometrics is necessary for robust systems. Unfortunately, such detection increases the complexity and cost of detectors.

Conclusion: Once more, Law 1, “Attackers will always find their way,” was demonstrated.

Deep Learning: A Critical Appraisal (paper review)

Deep learning is becoming extremely popular. It is one of the most explored and exploited fields of machine learning. AlphaGo, natural language processing, image recognition, and many more topics are iconic examples of its success. Deep learning is so successful that it seems to have become the golden answer to all our problems.

Gary Marcus, a respected ML/AI researcher, published an excellent critical appraisal of this technique. Among other things, he lists ten challenges that deep learning faces. He concludes that deep learning is only one of the needed tools and not necessarily a silver bullet for all problems.

From the security point of view, here are the challenges that seem relevant:

“Deep Learning thus far works well as an approximation, but its answers often cannot be fully trusted.”

Indeed, the approach is probabilistic rather than deterministic; thus, we must be cautious. Currently, the systems are too easily fooled. This blog has reported several such attacks. Generative Adversarial Networks are promising attack tools.
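
To give an idea of how simple such fooling can be, here is a minimal sketch of one classic technique, the Fast Gradient Sign Method (FGSM) of Goodfellow et al., written with PyTorch; it is an illustration, not one of the specific attacks reported on this blog.

```python
# Minimal sketch of FGSM: perturb the input one step in the direction
# that increases the classifier's loss; the result often looks identical
# to a human yet is misclassified.
import torch
import torch.nn.functional as F

def fgsm(model: torch.nn.Module, x: torch.Tensor, label: torch.Tensor,
         eps: float = 0.03) -> torch.Tensor:
    """Return an adversarial copy of x (pixels assumed in [0, 1])."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), label)
    loss.backward()
    # One step along the sign of the gradient, clipped to the pixel range.
    return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()
```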

“Deep learning presumes a largely stable world, in ways that may be problematic.”

Stability is not necessarily the prime characteristic of our environments.

“Deep learning thus far cannot inherently distinguish causation from correlation.”

This challenge is not directly related to security. Nevertheless, it is imperative to understand it. Deep learning detects correlation. Too often, people assume causation when they see a correlation. This assumption is often false. Causation may be real if the parameters are independent. If both parameters are driven by a hidden third variable, it is this hidden variable that produces the observed effect.
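
A tiny simulation makes the point: below, a hidden variable z drives both x and y, which end up strongly correlated although neither causes the other.

```python
# Correlation without causation: z is a hidden confounder driving both
# x and y; x and y never influence each other.
import numpy as np

rng = np.random.default_rng(0)
z = rng.normal(size=100_000)           # hidden confounder
x = 2 * z + rng.normal(size=100_000)   # x depends only on z
y = -3 * z + rng.normal(size=100_000)  # y depends only on z

print(np.corrcoef(x, y)[0, 1])  # about -0.85: strong correlation, no causation
```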

In any case, this paper is fascinating to read and helps keep an open, sane view of this field.

Marcus, Gary. “Deep Learning: A Critical Appraisal.” arXiv:1801.00631 [cs, stat], January 2, 2018. http://arxiv.org/abs/1801.00631.

Blockchain: a “supply chain” attack

A hacker going by the handle Right9ctrl injected malicious code that steals Bitcoin and Bitcoin Cash funds stored inside BitPay’s Copay wallet apps. The hacker injected the malicious code into the JavaScript library event-stream. This malicious code lies dormant until it is used inside the source code of Copay, a Bitcoin wallet. There, it steals sensitive information, such as private keys.

Right9ctrl inherited access to the event-stream library because the initial author, who no longer wanted to maintain this open-source library, handed over control.

In other words, this is an example of a software supply chain attack. One element of the supply chain (here, a library) has been compromised. Such an attack is not a surprise. Nevertheless, it raises a question about the security of open-source components.

Many years ago, the motto was “Open source is more secure than proprietary solutions.” The primary rationale was that many eyes reviewed the code, and we all know that code review is key for secure software. In the early days of open source, this motto may have been mostly true, under some specific trust models (see https://eric-diehl.com/is-open-source-more-secure/, or Chapter 12 of Securing Digital Video…). Is it still true nowadays?

Currently, there is a plethora of available open-source libraries. With the advent of many new programming languages, the number of libraries is exploding. Many projects have few contributors, sometimes even only one individual. How many users of these libraries review the code before using it? I would guess very few. Thus, the motto is becoming weaker. It probably holds for large, well-established projects, such as OpenSSL, but what about projects with one contributor? The success of a library is most probably not a valid indicator of trustworthiness.

In this case, a warning signal would have been that the malicious code was heavily obfuscated. There is no legitimate reason why an open-source JavaScript library should be obfuscated. Open source is about transparency. Obfuscation is about obscurity.

Conclusion: be very careful when selecting an open-source library; you cannot necessarily trust it.

Conclusion 2: if you are using a Copay wallet, Sophos provides some advice.


Blockchain: why do permissionless blockchains have to be slow?

This post is the seventh in a series dedicated to demystifying blockchains. The previous post introduced permissioned blockchains based on Byzantine Fault Tolerant (BFT) protocols.  BFT consensus is faster than lottery-based consensus.  This post attempts to explain why lottery-based consensuses are inherently slow.

First, we need to explain the concept of a temporary fork.  As the consensus is a lottery, sometimes two nodes win at the same time.  The consequence is that, at a given time, there are two different instances of the blockchain, so-called forks.  Depending on the spreading speed of the information about the winning block and the number of validation nodes, the two forks may grow independently.  The consensus algorithm decides that the longest fork wins.  The shorter fork is abandoned.  The transactions of the abandoned fork that were not approved in the winning fork are returned to the pool of not-yet-approved transactions.  This strategy is the reason why Bitcoin considers a transaction final only once at least five blocks follow it in the chain.
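
Here is a minimal sketch of this resolution rule; the data structures are deliberately simplified and are not Bitcoin’s actual implementation.

```python
# Longest-fork-wins: the shorter fork is abandoned, and its transactions
# that the winning fork did not approve return to the pending pool.
def resolve_fork(fork_a, fork_b, mempool):
    """Each fork is a list of blocks; each block is a set of transactions."""
    winner, loser = (fork_a, fork_b) if len(fork_a) >= len(fork_b) else (fork_b, fork_a)
    confirmed = set().union(*winner)
    orphaned = set().union(*loser)
    mempool |= orphaned - confirmed   # back to "not yet approved"
    return winner, mempool

fork_a = [{"tx1", "tx2"}, {"tx3"}]    # two blocks: wins
fork_b = [{"tx1", "tx4"}]             # one block: abandoned
winner, mempool = resolve_fork(fork_a, fork_b, set())
print(mempool)  # {'tx4'} awaits re-approval
```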

Temporary forks are inherent to lottery-based consensus protocols.  BFT-based consensus protocols do not have temporary forks because their consensus is deterministic: all the faithful nodes make the same decision.

For the sake of completeness, there are two other types of forks: hard forks and soft forks.  They are due to the evolution of the protocol and are independent of the consensus mechanism.  Thus, they are out of the scope of this post.  We will come back to these forks in a future post.

Let us use Bitcoin to illustrate the issue.  Bitcoin validates a block every ten minutes (average time).  The difficulty of the Proof of Work drives this duration (see Blockchain: Proof of Work).  As a block holds a maximum number of transactions, this average validation time defines the blockchain’s throughput.  Currently, Bitcoin’s capacity is about seven transactions per second.
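
A back-of-the-envelope computation recovers this figure, assuming a 1 MB block limit and an average transaction size of roughly 250 bytes (both numbers are assumptions; real values vary).

```python
# Rough estimate of Bitcoin's throughput from assumed block parameters.
BLOCK_SIZE_BYTES = 1_000_000   # assumed 1 MB block limit
AVG_TX_BYTES = 250             # assumed average transaction size
BLOCK_INTERVAL_S = 600         # one block every ten minutes on average

tx_per_block = BLOCK_SIZE_BYTES // AVG_TX_BYTES       # ~4000 transactions
print(f"{tx_per_block / BLOCK_INTERVAL_S:.1f} tx/s")  # ~6.7 tx/s
```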

How can we increase this throughput?  An “obvious” solution seems to be to decrease the average validation time by reducing the difficulty of the cryptographic puzzle.  For instance, should the average duration be halved, then the throughput would double, wouldn’t it?  Unfortunately, it is not so simple.  If we reduce the difficulty of the challenge at constant global computing power, the duration is indeed reduced.  Another consequence is that the probability of solving the puzzle rises: the easier the challenge, the higher the likelihood of solving it.  Thus, the probability of creating a temporary fork also increases.  The shorter the average validation time, the more temporary forks.  Should the duration decrease drastically, the system would end up with a pandemonium of temporary forks.  Unfortunately, abandoned temporary forks inject many seemingly-approved transactions back into the pool of not-yet-approved transactions.  In other words, the global throughput decreases, and the predictability of when a transaction will finally be approved suffers.  Of course, Satoshi Nakamoto understood this crucial aspect of his protocol and chose ten minutes rather than one minute.
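
A rough model illustrates the effect: if blocks arrive as a Poisson process with mean interval T, and a freshly found block needs about d seconds to propagate, a competing block appears during propagation with probability about 1 − e^(−d/T).  The 10-second propagation delay below is an illustrative assumption.

```python
# Fork probability as the block interval shrinks, under a simple
# Poisson-arrival model with an assumed 10 s propagation delay.
import math

def fork_probability(block_interval_s: float, propagation_delay_s: float = 10.0) -> float:
    return 1 - math.exp(-propagation_delay_s / block_interval_s)

for interval in (600, 300, 60, 10):
    print(f"T = {interval:3d} s -> fork probability ~ {fork_probability(interval):.1%}")
# 600 s -> 1.7%, 300 s -> 3.3%, 60 s -> 15.4%, 10 s -> 63.2%
```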

Conclusion: Lottery-based consensus protocols need to have “long” approval duration to avoid the proliferation of temporary forks.

Blockchain: Consensus based on Byzantine Fault Tolerance

This post is the sixth in a series dedicated to demystifying blockchains. The last post introduced Proof of Stake. The third post introduced the concept of consensus and classified consensus protocols into two categories: lottery-based and Byzantine Fault agreement-based. This post presents the best-known Byzantine Fault agreement-based consensus: Practical Byzantine Fault Tolerance (PBFT).

Byzantine Fault Tolerance has been known for about four decades. A Byzantine fault is any failure of a system: the failing system may stop working, or it may behave maliciously. An f-Byzantine Fault-Tolerant network is a network that operates correctly under the assumption that at most f nodes may concurrently experience Byzantine faults. The nodes are either well-behaving or ill-behaving, i.e., suffering from Byzantine faults. The network comprises n nodes, with a maximum of f ill-behaving nodes.

The best-known implementation of a Byzantine Fault Tolerant system for asynchronous protocols is Practical Byzantine Fault Tolerance (PBFT).  PBFT assumes that the network comprises at least 3f + 1 nodes.

At any given moment, one node is the primary and the other nodes are replicas. The simplified algorithm operates as follows:

  • A client sends a request to the primary node for an operation
  • The primary node broadcasts this request to all replicas
  • Replicas perform the requested operation and return an answer to the client
  • The client waits for at least f + 1 replicas returning the same result. This result is the consensus outcome of the operation.

The figure illustrates the basic behavior for the case of a 1-fault-tolerant system (f = 1). It has at least four nodes. The client sends its request to the primary node. The primary node broadcasts the operation request to the three other nodes: replica 1, replica 2, and replica 3. As soon as the client receives the same affirmative answer from two replicas (in the figure, replica 1 and replica 3), it considers the operation successful. If one replica fails, the client will still receive proper answers from the two other replicas. If the primary node fails, then the protocol switches the role of primary to one of the replicas.
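
Here is a minimal sketch of the client-side decision; it models only the reply-counting step, not the full PBFT message exchange.

```python
# The client accepts a result once f + 1 identical replies arrive:
# at most f replies can come from faulty replicas, so f + 1 matching
# replies guarantee at least one honest vote for that result.
from collections import Counter

def accept_result(replies, f):
    """Return the agreed result, or None while no f+1 quorum exists."""
    result, votes = Counter(replies).most_common(1)[0]
    return result if votes >= f + 1 else None

# 1-fault-tolerant system (f = 1, n = 4): one replica lies.
replies = ["balance=100", "balance=100", "balance=999"]
print(accept_result(replies, f=1))  # 'balance=100'
```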

PBFT is well adapted to fixed networks with known, trusted entities. As such, it may be useful for blockchains validated by a set of known, approved authorities, i.e., permissioned blockchains. The number of authorities defines the system’s resilience. The minimal size is four, which allows for one ill-behaving authority. Each authority has the same level of trust and shares trust with the other authorities.

PBFT is straightforward and robust. The theoretical background is well studied, has been known for a long time, and is used by the community. The trust is not linked to the ownership of resources or stakes. It is fast and computationally efficient. There is no notion of mining. It is deterministic. A permissioned blockchain using PBFT can have a very high throughput of transactions.

Unfortunately, the trust is not flexible. PBFT requires a pre-established list of collaborating trusted participants. Any new participant requires new settings. This adjustment cannot be dynamic. Therefore, the consensus of permissionless blockchains cannot employ PBFT. By participants, we mean the validation nodes and not the users of the blockchain. The system does not need to trust the users; it is the role of validation and consensus to screen bad transactions.

A known weakness is that the system is prone to Sybil attacks. An entity may attempt to stealthily register several participants that it controls. If the cheating entity controls more than f Sybils, then it can control the full system. Therefore, the initial trust in the participants is paramount. In most permissioned blockchains, the likelihood of Sybil attacks is negligible.

Conclusion: PBFT is a well-known, well-studied protocol. For decades, distributed systems have made their decisions using it. It is fast and computationally efficient. PBFT is an interesting consensus protocol for permissioned blockchains. Some variants of BFT are also available.

Bibliography

M. Castro and B. Liskov, “Practical Byzantine Fault Tolerance,” in Proceedings of the Third Symposium on Operating Systems Design and Implementation, New Orleans, USA, 1999