Blockchain: Permissionless versus Permissioned

This post is the second in a series dedicated to demystifying blockchains. The first post proposed a definition of blockchain. I intended the topic of this second post to be consensus, the cornerstone of blockchain. While starting to write it, I discovered that I first needed to introduce a fundamental characteristic of blockchains: permission.

Entities decide whether a block is valid and appended to the blockchain. They may be called blockchain nodes or validators. Validators are the pieces of software that determine which block becomes the new head of the chain. In Satoshi Nakamoto’s vision, everybody could (and should) be a validator. Thus, his blockchain has no central authority. It is claimed that the blockchain is ruled by everybody (or nobody, depending on your point of view). Bitcoin is a permissionless blockchain, as are most cryptocurrencies and many other systems; Ethereum is another example. In a permissionless blockchain, users delegate their trust to uncontrolled, unknown validators under the assumption that the consensus mechanism does not allow a misbehaving validator to cheat.

This delegation of trust is not always possible or desirable. Therefore, there is a second breed of blockchain that operates with a different configuration: the permissioned blockchain. Its validators are a finite set of known servers. A consortium manages this list following defined governance rules. You may have noticed that the validators are not necessarily trusted: depending on the chosen consensus mechanism, the level of expected trust may vary. The open-source projects of Hyperledger offer many such permissioned architectures.

Which one is the best?

The advantage of a permissionless blockchain is that there is no (at least claimed) central authority, and thus no single point of failure that may be attacked. This advantage comes at a price: the consensus mechanism is complicated and/or extremely power-consuming, and it will be slow. Furthermore, it requires that the nodes have a robust method to validate a transaction. When managing financial ledgers, this is easy: checking that Alice currently holds the number of tokens she wants to transfer to Bob is straightforward. With more complex transactions, it may be less obvious. Would you trust an unknown validator to check whether your land deed belongs to you and to register it on a land-registry blockchain? Or a copyright? Smart contracts are not the golden answer to that issue.

The advantage of a permissioned blockchain is that a set of entities sharing a common interest in the fulfillment of the transactions can manage it efficiently. The validators have the authority and implement the complex validation rules that some use cases may require. The consensus mechanisms are simpler and faster than the ones used by permissionless blockchains.

Many “purists” claim that permissionless blockchains are more secure than permissioned ones due to the absence of a central authority, arguing that the management of the validators is a weak point. As usual, the answer is more balanced. It mainly depends on the use case. Some industrial use cases may benefit from permissioned blockchains. Personally, I would argue that the trust model of a permissioned blockchain can usually be defined more accurately than that of a permissionless blockchain. I have not yet read a convincing, complete trust model of a permissionless blockchain.


Thus, a hyper-simplified definition: a permissionless blockchain neither knows nor trusts its validators, whereas a permissioned blockchain knows all its validators but does not need to trust all of them.

Blockchain: A Definition

This post is the first of a series dedicated to the blockchain. In the coming weeks, I will discuss many aspects of the blockchain. As some of my views may be perceived as pessimistic, a cautionary note is mandatory: I am a skeptical blockchain enthusiast. Blockchain has great potential but also many pitfalls. I hope that these posts will shed some light on the blockchain.

The first step is to propose a definition for blockchain.

A blockchain is a secure, shared, distributed ledger.

Let us examine the four elements of this definition.

  • A blockchain is a ledger. It stores the complete chronological record of transactions. The transactions are combined in a data structure called a block. Each block is cryptographically bound to its predecessor, thus creating a chain of blocks. The blockchain is well suited for transactions and time series. For instance, Bitcoin records the exchanges of bitcoins. Other types of information, for instance graphs, are not necessarily well suited for a blockchain. Nevertheless, much information can be transcribed into a set of transactions.
  • A blockchain is shared. Many entities use the same ledger. They may not all have the same access rights: some entities may be allowed to submit transactions to the blockchain, whereas other entities may only read them. The use case defines the rules for access control. If the ledger is not meant to be shared, then a traditional database is probably more suitable than a blockchain.
  • A blockchain is distributed. No central server holds the blockchain. Every node has the same complete copy of the ledger, and the nodes are connected through a peer-to-peer network. Therefore, the blockchain offers high availability and resilience. There is no single point of failure in the system.
  • A blockchain is secure. Each issuer signs its transactions. Each node validates every transaction according to validation rules defined by the blockchain governance. For instance, for a cryptocurrency, the validation of a transaction verifies that Alice owned the coins she transfers to Bob. In the case of a land registry or supply-chain tracking, the validation will most probably be more complicated.
    Once all the transactions of the block are validated, the nodes engage in a consensus protocol to decide whether the block is to be appended to the blockchain. Once consensus is reached, all nodes add the new block to their copy of the blockchain. Consensus is the most complex element of the blockchain. Many consensus protocols are available; the most famous one is the Proof of Work (PoW) designed by Satoshi Nakamoto for Bitcoin. Mining is how Bitcoin establishes consensus. In a coming post, we will study in detail PoW and other types of consensus. Consensus ensures that every node has the same ledger.
    Transactions are immutable. To alter an already recorded transaction, the attacker must modify the block containing the forged transaction and also adjust all the subsequent blocks to maintain the cryptographic link. Furthermore, the attacker must trick the consensus protocol into accepting the forged fork as the valid one. Thus, it is reasonable to assume that the transactions are carved in stone. As a side note, immutability may become an issue in case of an error or if the “right to be forgotten” is needed. What is in the blockchain stays in the blockchain.
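The hash-chaining and tamper evidence described above can be sketched in a few lines of Python. This is a toy model under simplifying assumptions (JSON-serialized blocks, no signatures, no consensus), not a production design:

```python
import hashlib
import json

def block_hash(block):
    # Hash the block's canonical JSON serialization (a toy stand-in
    # for a real block header hash).
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, transactions):
    # Each new block stores the hash of its predecessor, creating the chain.
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})

def verify_chain(chain):
    # Recompute every link; any altered block breaks all subsequent links.
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

chain = []
append_block(chain, [{"from": "Alice", "to": "Bob", "amount": 5}])
append_block(chain, [{"from": "Bob", "to": "Carol", "amount": 2}])
assert verify_chain(chain)

# Tampering with a recorded transaction invalidates the chain.
chain[0]["transactions"][0]["amount"] = 500
assert not verify_chain(chain)
```

Since each block embeds the hash of its predecessor, altering any recorded transaction breaks every subsequent link, which is what makes rewriting history detectable.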

This first post provides a broad definition of the blockchain. The next posts will explore the technical elements of a blockchain.


Nakamoto, Satoshi. “Bitcoin: A Peer-to-Peer Electronic Cash System,” 2008.

Symposium on Foundations and Applications of Blockchain 2018

The University of Southern California (USC) will host the first Symposium on Foundations and Applications of Blockchain on Friday, March 9, 2018. Its program is available online. Note the presence of Leonard Adleman at the discussion panel! I hope to meet some of you there.

Full disclosure: I am a member of its program committee.

NIST overview on Blockchain

There are not many excellent overviews of blockchain technologies available. Thus, when NIST issues a draft “Blockchain Technology Overview,” it is worth a look. It is a 57-page document open for public comment.

I like their description:

Blockchains are distributed digital ledgers of cryptographically signed transactions that are grouped into blocks. Each block is cryptographically linked to the previous one after validation and undergoing a consensus decision. As new blocks are added, older blocks become more difficult to modify. New blocks are replicated across all copies of the ledger within the network, and any conflicts are resolved automatically using established rules.

The document provides a high-level overview of blockchain. There are not many detailed technical descriptions. The document uses the Bitcoin structure and vocabulary as if all blockchains used them. Thus, a generic block necessarily has a nonce (for the Proof of Work) as well as a Merkle tree. I am sure that many blockchains will not have such elements. Similarly, it uses the term mining nodes for the validators, which is not suitable for consensus mechanisms other than Proof of Work. The sections dedicated to consensus (Section 4) and smart contracts (Section 6) are too light. The golden nugget is Section 9: Blockchain Limitations and Misconceptions.
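As a side note, the Merkle tree the document assumes is Bitcoin’s way of committing to all the transactions of a block with a single hash in the header. A toy sketch of that construction (double SHA-256, odd leaf duplicated, as Bitcoin does):

```python
import hashlib

def sha256d(data: bytes) -> bytes:
    # Bitcoin uses double SHA-256 in its Merkle tree.
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(leaves):
    # leaves: list of transaction hashes (bytes). Pair up hashes level
    # by level, duplicating the last hash when a level has an odd count.
    if not leaves:
        raise ValueError("empty block")
    level = list(leaves)
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])
        level = [sha256d(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

txs = [sha256d(t.encode()) for t in ["tx1", "tx2", "tx3"]]
root = merkle_root(txs)
# Changing any transaction changes the root, so the block header only
# needs to commit to this single 32-byte value.
assert merkle_root([sha256d(b"txX")] + txs[1:]) != root
```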

Nevertheless, it is worthwhile to read it and potentially to comment. Knowing NIST, I am confident that the final document will become a reference.

Meltdown and Spectre

In January 2018, security researchers disclosed two attacks dubbed Meltdown and Spectre. These attacks bypass the memory isolation of modern CPUs by mounting side-channel attacks on the hardware-based optimization features of these CPUs. Thus, Meltdown and Spectre can gain arbitrary access to confidential information in the memory of the computer.

Modern CPUs, so-called superscalar processors, no longer execute instructions strictly sequentially. They implement many hardware-based optimization techniques that modify the normal instruction flow. For instance, the CPU executes multiple instructions concurrently to keep the processor’s sub-units as busy as possible (see Eben Upton’s post). Out-of-order execution speculatively executes instructions further down the instruction flow as soon as all needed resources are available. Thus, the CPU may execute an instruction before it is sure that the instruction is needed. If the CPU later determines the instruction was not needed, it discards the corresponding results from its registers. This mechanism is sound at the architectural level but not at the microarchitectural level: the cache memory still holds the discarded results. Unfortunately, for many years, security researchers have designed side-channel attacks that leak confidential information from the cache. Similarly, modern CPUs’ branch predictors attempt to guess the future control flow and preemptively execute the instructions of the predicted path. If the prediction turns out to be wrong, the CPU discards the “results” of the speculative instructions. Once more, this mechanism is sound architecturally. Unfortunately, the results remain in the cache memory, where covert-channel cache attacks can retrieve them.
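The principle (the architectural state is rolled back, but the cache footprint survives and can be read out by timing) can be illustrated with a toy software model. This is a pure simulation of the covert channel’s logic, not an actual microarchitectural attack: the “cache” is just a Python set, and “timing” is set membership:

```python
# Toy model of a Flush+Reload-style covert channel. Pure simulation:
# in a real attack, the "cache" is the CPU cache and the probe measures
# access latency; here membership in a set plays both roles.

SECRET = 42  # a byte the victim is not supposed to reveal

def victim_speculative_access(cache):
    # Architecturally, the result of the transient load is discarded.
    # Microarchitecturally, the probe-array line indexed by the secret
    # byte is now cached, and that footprint survives the squash.
    cache.add(SECRET)

def attacker_probe(cache):
    # The attacker "times" accesses to all 256 probe lines; the cached
    # one (a fast access in a real attack) reveals the secret byte.
    hits = [i for i in range(256) if i in cache]
    return hits[0] if hits else None

cache = set()  # flush: start with an empty cache
victim_speculative_access(cache)
assert attacker_probe(cache) == SECRET
```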

The attacks

The goal of Meltdown is to dump the kernel memory space from a user-space process. Intel’s memory management maps privileged kernel memory into the user space; the usual security assumption is that this kernel memory is secure and not accessible without root access. In a simplified explanation, Meltdown operates in two steps. During the first step, Meltdown entices the CPU to access the kernel space through out-of-order instructions. When the architectural instruction flow reaches this execution point, the CPU detects the violation and triggers an exception that blocks the actual access to kernel space; the data, however, has already left a footprint in the cache. During the second step, Meltdown uses covert-channel cache attacks to retrieve this cached “inaccessible” data. Meltdown breaks the hardware-enforced isolation between kernel space and user space.

Meltdown may affect any CPU using out-of-order execution and is OS-independent. Meltdown has been successfully tested on Intel x86 and Xeon processors and on the ARM Cortex-A57. It was also successfully mounted from cloud containers such as Docker. The software countermeasure is KAISER, a patch that prevents the mapping of kernel memory into the user space, thus thwarting Meltdown. The KAISER patch is available for Windows 10, Linux, macOS, and iOS.

The goal of Spectre is to reach information belonging to another process. Spectre exploits branch prediction and speculative execution. It operates in three steps. During the first step, Spectre mistrains the branch predictor by repeatedly executing a given branch. During the second step, Spectre entices the branch predictor to mispredict the control flow; the CPU then speculatively executes the code that performs the “illegal” operations, such as reading unauthorized memory. As in Meltdown, the third step exfiltrates the cached data using a covert-channel cache attack. Spectre accesses, from a given user space, the memory of another user space. Spectre breaks the hardware-enforced isolation between processes.

Spectre has been successfully implemented on recent Intel processors, AMD Ryzen, AMD FX, and AMD PRO. Spectre was implemented on Windows and Linux-based OSes. It was written in C and also in JavaScript. The countermeasure would be to halt speculative execution on sensitive execution paths. This is a difficult task, as the current instruction sets are not fit for that purpose. The alternative is to implement in the code mechanisms that reduce the impact of the leaked information (for instance, combining conditional select and conditional move). In other words, developers must be aware of the covert-channel cache attack and implement adequate countermeasures. Compilers may also implement some mitigations.

As Spectre can be mounted with JavaScript, malicious adware may become the first in-the-field exploit using Spectre. Thus, browsers are receiving patches to mitigate the risk. The exploitability via JavaScript is worrying.

Google’s Project Zero concurrently disclosed three vulnerabilities, dubbed variants 1 to 3. These vulnerabilities are the same as Meltdown and Spectre: variants 1 and 2 correspond to Spectre, whereas variant 3 maps to Meltdown.


Meltdown and Spectre are not due to bugs. They are the consequence of a new breed of side-channel attacks exploiting information that leaks at the microarchitectural level as a by-product of speed optimizations.

It is interesting to notice that Paul Kocher is one of the researchers disclosing Meltdown and Spectre. In 1996, Paul designed the first side-channel attack: his timing attack broke the security of smart cards. Since 1996, side-channel attacks have been among the most prolific, complex fields of research in security.

We want and need CPUs to be faster, so silicon designers added these optimization features. Unfortunately, the most trivial countermeasures would defeat their benefit. For instance, cache attacks may be defeated by randomizing or equalizing the access time, which would annihilate the purpose of the cache. New hardware architectures, as well as new instruction sets, will help defend against these attacks. Nevertheless, we have a new class of side-channel attacks to take into account. No doubt variants will soon flourish.

Password complexity

Password complexity is one of the most contentious topics in security. According to NIST, many companies may be over-complicating their password policies.

In 2003, Bill Burr (NIST) established a set of guidelines asking for long, complex passwords. Since then, many policies have required these lengthy passwords mixing letters, digits, and special characters. Recently, he confessed that he regretted having written these guidelines. In June 2017, NIST published a new version of the NIST SP 800-63B document. These guidelines are far more user-friendly.

In a nutshell, if the user defines the password, it should be at least eight characters long. If the service provider generates the password, it should be at least six characters long and may even be numeric. The service provider must use a NIST-approved random number generator. The chosen or generated password must be checked against a blacklist of compromised values. There are no other constraints on the selection.
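A minimal sketch of these acceptance rules for user-chosen passwords, assuming a tiny illustrative blacklist (a real deployment would check against a large corpus of compromised passwords):

```python
# Sketch of the NIST SP 800-63B acceptance rules described above.
# COMPROMISED is a tiny illustrative stand-in for a real blacklist.
COMPROMISED = {"password", "123456", "qwerty", "letmein"}

def accept_user_password(password: str) -> bool:
    # User-chosen passwords: at least 8 characters, not on the
    # compromised-values blacklist; no composition rules beyond that.
    if len(password) < 8:
        return False
    if password.lower() in COMPROMISED:
        return False
    return True

assert not accept_user_password("short")
assert not accept_user_password("Password")  # 8 chars but blacklisted
assert accept_user_password("correct horse battery staple")
```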

On the user-friendly side, NIST recommends:

  • The user should not be required to change the password unless there is evidence that it may be compromised.
  • The user should be allowed to use the “paste” command, to favor the use of password managers.
  • The user should be able to request the temporary display of the typed password.

Additional constraints apply to the implementation of the verifier. The verifier shall not propose any hint. The verifier must implement a rate-limiting mechanism to thwart online brute-force attacks. The password shall be stored as a salted hash using an approved key derivation function such as PBKDF2 or Balloon, with enough iterations (at least 10,000 for PBKDF2).
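A minimal sketch of such salted-hash storage with Python’s standard library (the iteration count of 100,000 is an illustrative choice above the 10,000 floor):

```python
import hashlib
import hmac
import os

def hash_password(password):
    # Store a random per-password salt and the PBKDF2-HMAC-SHA256
    # derived key; NIST asks for at least 10,000 iterations.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    # Constant-time comparison to avoid a trivial timing side channel.
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, digest)
assert not verify_password("wrong guess", salt, digest)
```

The random salt makes precomputed rainbow tables useless, and the iteration count slows down offline brute-force attacks against a stolen database.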

Appendix A of the NIST document provides rationales for this simplification. For instance,

Many attacks associated with the use of passwords are not affected by password complexity and length. Keystroke logging, phishing, and social engineering attacks are equally effective on lengthy, complex passwords as simple ones.


Research has shown, however, that users respond in very predictable ways to the requirements imposed by composition rules [Policies]. For example, a user that might have chosen “password” as their password would be relatively likely to choose “Password1” if required to include an uppercase letter and a number, or “Password1!” if a symbol is also required.

A few cautionary notes: the addressed threat model is an online attack. It does not adequately cover offline attacks, where the attacker has gained access to the hashed passwords. The quality of the implementation of the salted-hash mechanism is paramount for resisting offline attacks. Furthermore, one should hope that a theft of the salted-hash database would be detected and would trigger the immediate modification of all passwords, thus mitigating the impact of the leak. NIST recommends using memorized secrets only for Authenticator Assurance Level 1 (AAL1), i.e.,

AAL1 provides some assurance that the claimant controls an authenticator bound to the subscriber’s account. 

Higher assurance levels require multi-factor authentication methods. The guidelines explore them in depth. It may be the topic of a future post.

NIST is a reference in security. We may trust their judgment. As we will not soon get rid of the password login mechanism, we may perhaps revisit our password policies to make them more user-friendly and implement the proper background safeguard mechanisms.

I wish you a happy, secure new year.

DolphinAttack or How To Stealthily Fool Voice Assistants

Six researchers from Zhejiang University published an excellent paper describing DolphinAttack, a new attack against voice-based assistants such as Siri or Alexa. As usual, the objective is to force the assistant to accept a command that the owner did not issue. The attack is more powerful if the owner does not detect its occurrence (except, of course, the potential consequences of the accepted command). The owner should not hear a recognizable command, or even better, hear nothing at all.

Many attacks try to fool the speech recognition system by crafting inputs that mislead the machine-learning model powering the recognition without using actual phonemes. The proposed approach is different: the objective is to fool the audio capturing system rather than the speech recognition.

Humans do not hear ultrasound, i.e., frequencies greater than 20 kHz. Speech usually ranges from a few hundred Hz up to 5 kHz. The researchers’ great idea is to exploit the characteristics of the acquisition system.

  1. The acquisition system comprises a microphone, an amplifier, a low-pass filter (LPF), and an analog-to-digital converter (ADC), regardless of the speech recognition system in use. The LPF filters out frequencies above 20 kHz, and the ADC samples at 44.1 kHz.
  2. Any electronic system creates harmonics due to non-linearity. Thus, if you modulate a signal at frequency fm with a carrier at fc, many components appear in the Fourier domain, such as fc − fm, fc + fm, and fc, as well as their multiples.

You may have guessed the trick: if the attacker modulates the command (at fm) with an ultrasound carrier fc, the resulting signal is inaudible. The LPF then removes the carrier frequency before the signal reaches the ADC, so a residual baseband command is present in the filtered signal and may be understood by the speech recognition system. Of course, real commands are more complex than a single frequency, but the principle stays valid.
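The frequency arithmetic behind the trick is the product-to-sum identity cos(a)·cos(b) = ½[cos(a−b) + cos(a+b)]: modulating a tone at fm onto a carrier at fc creates components at fc − fm and fc + fm. A quick numerical check (the fc and fm values are illustrative):

```python
import math

# Amplitude-modulating a tone at fm onto a carrier at fc yields
# components at fc - fm and fc + fm, per the product-to-sum identity.
# Frequencies in kHz, time in ms, so the products are dimensionless.
fc, fm = 23.0, 1.0  # ultrasonic carrier, audible command tone

for k in range(1000):
    t = k * 1e-3
    modulated = math.cos(2 * math.pi * fc * t) * math.cos(2 * math.pi * fm * t)
    mixed = 0.5 * (math.cos(2 * math.pi * (fc - fm) * t)
                   + math.cos(2 * math.pi * (fc + fm) * t))
    assert math.isclose(modulated, mixed, abs_tol=1e-9)
```

The low-pass filter keeps only the low-frequency part of what the microphone’s non-linearity produces, which is why the command reappears in the audible band.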

They modulated the amplitude of a carrier in the range of 20 kHz to 25 kHz with a vocal command. They experimented with many hardware devices and speech recognition systems. As one may guess, the attack is highly hardware-dependent: the optimal carrier frequency varies per device (due to the various microphones). Nevertheless, with the right parameters for a given device, they seem to have fooled most devices. Of course, the optimal equipment requires an ultrasound speaker and an adapted amplifier; usual speakers have a response curve that cuts off before 20 kHz.

I love this attack because it thinks outside the box and exploits “characteristics” of the hardware. It is also a good illustration of Law N°6: security is no stronger than its weakest link.

A good paper to read.


Zhang, Guoming, Chen Yan, Xiaoyu Ji, Taimin Zhang, Tianchen Zhang, and Wenyuan Xu. “DolphinAttack: Inaudible Voice Commands.” 103–117. Dallas, Texas, USA: ACM, 2017. arXiv:1708.09537 [cs].

