Intel SGX™ is dead

Intel announced that the next generations of CPUs (11th and 12th) will no longer support the SGX technology (see data sheet).  SGX is the secure enclave technology in Intel CPUs.  SGX isolates a program and its data from the insecure Rich Execution Environment (REE).  Thus, SGX-based applications could act as a Root of Trust.

At least, this was the promise.  Unfortunately, starting with Spectre-like attacks, SGX came under fire from many interesting exploits (for instance, VoltPillager).  Thus, it seems that in its current form, SGX cannot be a trusted secure enclave.

For most consumers, the main consequence is that future PCs will no longer support UHD Blu-ray.  Indeed, the content protection standard AACS2 mandates a Secure Execution Environment with a Hardware Root of Trust (HRoT).  For Microsoft Windows, the solution was SGX.  Some applications also based their security model on SGX.  They will have to find an alternative, which is not necessarily available.  TPM offers a valid HRoT but not a Secure Execution Environment.  Current tamper-resistant software and obfuscation technologies may not be sufficient.

Apple’s Find My

Apple disclosed an interesting feature at WWDC: “Find My.”  It will be possible to track the GPS location of your device if it is stolen or lost, and Apple will not know this location.  Here is how it works.

The prerequisite is that you have at least two Apple devices.  All your devices share a private key.  The trick is that instead of having one unique public key, the devices have a multitude of public keys linked to this private key.  This is possible, and there are numerous known cryptographic techniques that may fulfill this requirement.

The device broadcasts its current public key via Bluetooth.  The device broadcasts this beacon even while turned off.  Any Apple device nearby may catch the beacon.  The receiving device then encrypts its current GPS location with the broadcast public key.  It sends the encrypted location, as well as a cryptographic hash of the public key, to Apple’s server.  Of course, the public key changes periodically.  The rotation period has not been disclosed.

If you want to locate one of your devices, you trigger the request from another of your devices.  It sends the hash of the public key to the Apple server, which returns the encrypted location.  The device has the private key and thus can decrypt the location.  Et voilà.

Of course, under the assumption that Apple does not have the private key, only your devices can decrypt the location.  Normally, Apple can neither get the location nor link different related public keys together.
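The flow described above can be sketched in code.  This is a toy illustration only: Apple has not disclosed its actual cryptography, so the sketch uses a textbook ElGamal-style scheme over a small Mersenne prime (far too weak for real use), and the key-diversification trick is simplified to independent per-epoch key pairs.

```python
import hashlib
import secrets

# Toy ElGamal-style parameters. Illustration only: far too small and
# malleable for real security; Apple's actual scheme is undisclosed.
P = 2**127 - 1
G = 3

def keypair():
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

def xor_pad(shared_secret, data):
    # Derive a one-time pad from the shared secret (toy stand-in for a KDF + cipher).
    pad = hashlib.sha256(str(shared_secret).encode()).digest()
    return bytes(a ^ b for a, b in zip(data, pad))

# 1. The lost device pre-generates rotating key pairs, one per epoch
#    (a simplification of the undisclosed key-diversification mechanism).
epochs = [keypair() for _ in range(3)]

# 2. It broadcasts the public key of the current epoch over Bluetooth.
_, current_pub = epochs[1]

# 3. A passing finder encrypts its GPS location under that public key and
#    uploads (hash(public key), ciphertext) to the server.
location = b"48.8584N,2.2945E"   # at most 32 bytes in this toy version
k = secrets.randbelow(P - 2) + 1
ciphertext = (pow(G, k, P), xor_pad(pow(current_pub, k, P), location))
server = {hashlib.sha256(str(current_pub).encode()).hexdigest(): ciphertext}

# 4. The owner queries the server by the hash of one of its public keys
#    and decrypts the returned blob with the private key.
priv, pub = epochs[1]
c1, c2 = server[hashlib.sha256(str(pub).encode()).hexdigest()]
recovered = xor_pad(pow(c1, priv, P), c2)
print(recovered)  # b'48.8584N,2.2945E'
```

Note that the server only ever sees hashes and ciphertexts, which is the property the design aims for.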

Many questions were not answered in the presentation: what is the frequency of key rotation, is there a limited number of public keys, and how does the requesting device know which hash to send?  I am waiting for some publications to dive deeper.

The idea is interesting.  It is complex, thus subject to failures and vulnerabilities.  What would the system do if a beacon broadcast the same public key from many locations?  Would the collection of multiple related public keys not reveal some partial information, for instance, one of the exponents?

Security assessment: white or black box?

White or Black: Which is the best choice? Is white box testing cheating? To me, the answers seem trivial. Nevertheless, recent discussions highlighted that the answer is not clear to everybody.

White box testing means that the security assessor has access to the entire documentation, the source code and even sometimes an instrumented target of evaluation (TOE). Black box testing means that the security assessor has only access to the TOE and publicly available information. Thus, black box testing mimics the configuration encountered by hackers. Of course, hackers will use any possible means to collect related non-public information, for instance, using social engineering.

Many people believe that black box evaluation is the solution of choice. As it is similar to the hackers’ configuration, it should provide a realistic assessment of the security of the TOE. Unfortunately, this assumption is wrong. The main issue is the asymmetry of the situation. On one side, the black box evaluation is performed by one evaluator or a team of reviewers. On the other side, a legion of hackers probes the product. They may outnumber the evaluators by several orders of magnitude. They spend more time discovering vulnerabilities. Mathematically, their chance of finding an exploit is higher than the evaluators’ likelihood of finding the same vulnerability.

Do you evaluate your security to know whether an attacker may breach your system? According to Law 1, this will ineluctably happen. Then, if your evaluation team has not found any problem, you may only conclude that there were no blatant vulnerabilities. You have no idea whether (or rather when) a hacker will find a vulnerability.

Alternatively, do you evaluate the security to make your product more secure, i.e., with fewer vulnerabilities? In that case, you will give the evaluators the best tools to discover vulnerabilities. Therefore, white box testing will be your choice. For the same effort, white box testing will find more security issues than black box testing. Furthermore, the team will pinpoint the faulty part of the system, whereas black box testing will disclose how the attacker may succeed but not where the actual issue is. For instance, let us assume that the white box assessment discovered a vulnerability through code review. The white box tester just has to explain the mistake and its consequences. For the same vulnerability, the black box tester has to blindly explore random attacks until he/she finds the vulnerability. Then, the evaluator has to write the exploit that demonstrates the feasibility of the attack. The required effort is far greater. Thus, in terms of Return On Investment (ROI), white box testing is far superior to black box testing. More discovered vulnerabilities for the money!

Fixing a vulnerability discovered by white box testing may also be cheaper. It is well known that the earlier a bug is found during the development cycle, the cheaper it is to fix. The same goes for vulnerabilities. As some white box security testing can occur before final integration (design review, code review…), fixes happen earlier and are thus cheaper than with black box testing, which occurs only after final integration.

White box security testing is compliant with Kerckhoffs’s law. The selection of new cryptographic primitives uses white box testing. The algorithm is published for the cryptanalysts to break. ISO27001 is a kind of white box evaluation.

When a company claims that a third party audited the security of its product, in addition to the identity of this third party, it would be great if it disclosed whether the testing was white or black box. I would favor the former.

Thus, when possible, prefer white box security testing over black box testing. This is the wise, winning choice. Bug bounty programs are the only exception. They operate as black box testing. They should complement white box testing but never replace it. Their ROI is high, as you pay only for success.

Smart Bottle

Diageo and Thinfilm have recently demonstrated a smart bottle.  The seal of the bottle contains an NFC tag.  This tag not only carries the unique identity of the bottle, but also detects whether the seal has been opened or is still closed.  This smart tag enables interesting features:

  • As for traditional RFID tags, it enables tracking of the bottle along the delivery chain.
  • As it uses NFC, the seal allows a mobile phone app to identify the bottle and thus create a personalized experience.  (Interesting implications for privacy: it is possible to track who purchased the bottle at the point of sale with the credit card and to see who actually drinks it.  Was it a gift?)
  • As it detects if the seal has been broken, it is a way to detect tampering with the bottle during the distribution chain.  This may thwart some forms of piracy and counterfeiting.
  • The tag is also a way to authenticate the origin of the product.  It may have interesting applications for expensive, rare bottles to detect counterfeits.
  • It does not yet tell you if you drank too much.  This will be the next application, associated with the smart glass that will detect what you drink and how much.

See Thinfilm’s OpenSense brochure.

Tribler: a (worrying) P2P client

Tribler is a new P2P client that made the headlines last month.  It was claimed to make BitTorrent unstoppable and to offer anonymity.  I had a look at it and played with it.

This is an open source project from the University of Delft.  It has been partly funded by the Dutch Ministry of Economic Affairs.  The project started in January 2008.  Tribler is worrying to both content owners and users.

To content owners, Tribler is worrying because of its features.

  •  Tribler is more user-friendly than other P2P clients.  It integrates several functions into the client.  First, it allows searching for torrents from the client’s user interface among its currently connected clients.  In other words, it does not need a central tracker to host the torrent pointers.  Thus, it is more robust and also easier to use than other clients.  If the expected content is popular, the likelihood of finding it within the connected community is high.  Thus, it is unnecessary to leave the application to find torrents on trackers.  Of course, it can import torrents from any external tracker such as Mininova.  Thus, when content is not available in the community, the user may fall back on traditional trackers.
    The second interesting feature is that it emulates video streaming using standard torrents.  In this mode, it buffers the video and starts playing it within the application after a few seconds.  From the user’s point of view, it is similar to streaming from a cyberlocker (with the difference that, once viewing is complete, a full copy of the content remains on the user’s computer).
    These features are not new (eMule allowed searching within the client; BitTorrent Pro offers an HD player inside the application…).  However, Tribler packages them nicely.  The user experience is neat.
  • Tribler promises anonymity.  It uses a Tor-like onion structure to access the different peers.  Or at least, it should in the future.  The current version is clearly announced as beta.  Furthermore, all the current peers were directly connected; only an experimental torrent used the feature.  However, once validated and activated, it should become harder to trace back the seeders.

To users, Tribler is worrying for its security.  Tribler promises anonymity.  Unfortunately, this is not the case.  “Yawning Angel” analyzed the project.  Although the analysis was not thorough, it highlighted several critical flaws in the protocol used.  As it is possible to define circuits of arbitrary length, it would be possible to create congestion and thus a kind of DoS.  More worrying, there are several severe cryptographic mistakes, such as improper use of ECB mode and a fixed IV in OFB mode…  The conclusion was:

For users, “don’t”. Cursory analysis found enough fundamental flaws, and secure protocol design/implementation errors that I would be reluctant to consider this secure, even if the known issues were fixed. It may be worth revisiting in several years when the designers obtain more experience, and a thorough third party audit of the improved code and design has been done.
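The ECB weakness flagged by the analysis is worth illustrating: in ECB mode, each block is encrypted independently, so identical plaintext blocks yield identical ciphertext blocks, leaking the structure of the data.  A stdlib-only sketch with a hypothetical toy 8-byte block cipher (a key-derived byte substitution standing in for a real cipher such as AES, which is not in the Python standard library):

```python
import hashlib

def toy_block_encrypt(key, block):
    # Toy stand-in for a block cipher: a key-derived byte substitution
    # table. Deterministic per key, like a real block cipher, but NOT secure.
    seed = hashlib.sha256(key).digest()
    table = sorted(range(256), key=lambda b: hashlib.sha256(seed + bytes([b])).digest())
    return bytes(table[b] for b in block)

def ecb_encrypt(key, data, bs=8):
    # ECB: every block is encrypted independently with the same key.
    return b"".join(toy_block_encrypt(key, data[i:i + bs]) for i in range(0, len(data), bs))

key = b"secret"
plaintext = b"ATTACK!!ATTACK!!RETREAT!"   # blocks 1 and 2 are identical
ct = ecb_encrypt(key, plaintext)
blocks = [ct[i:i + 8] for i in range(0, len(ct), 8)]

# The repeated plaintext block produces a repeated ciphertext block,
# so an eavesdropper learns the repetition without any key material.
print(blocks[0] == blocks[1], blocks[0] == blocks[2])  # True False
```

The same leak occurs with any block cipher run in ECB mode, which is why the mode is considered improper for encrypting structured traffic.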

Lessons:

  • P2P is not dead yet.  Streaming emulation may change the balance with streaming cyberlockers.
  • Be very cautious about claimed anonymity.  Developing a robust Tor-like solution requires an enormous effort and deep knowledge of cryptography and secure protocols.  Tor is continuously under attack.
  • Universities may finance projects that will facilitate piracy.  “Openness of the Internet” to fight censorship does not require the ability to watch content within the client.  The illustrative screenshot of Tribler on the Delft university page clearly shows copyrighted movies offered for sharing.

Unlocking the phone with a tap on your wrist

This is the new phone unlocking mode that Vivalnk designed for the Moto X phone.  The system is rather simple.  You stick an NFC-based temporary skin tattoo on your wrist.  Once the tattoo is paired with your phone, you just need to bring the phone within range of the tattoo to unlock it.  It is possible to unpair a tattoo if it is lost or stolen.

According to vivalnk, the tattoo’s adhesive lasts about five days, even under water.   It costs one dollar per tattoo.  Currently, it is only available for the Moto X.

This tattoo is a wearable authenticator.  I forecast that this kind of NFC-based authentication method will start to spread.  It may come in smartwatches, rings, or key rings.  I believe the ring would be a good device: the mere act of taking your phone in your hand could unlock it.

Target and FireEye

At the beginning of December 2013, US retailer Target suffered a huge data leak: 40 million valid credit card records were sent to Russian servers. This leak will have a serious financial impact on Target, as more than 90 lawsuits have already been filed against it.

Target is undergoing a deep investigation to understand why this data breach occurred. Recently, an interesting fact emerged. On 30 November, a sophisticated commercial anti-malware system, FireEye, detected the spread of an unknown malware within Target’s IT system. It spotted the customized malware that was being installed on point-of-sale terminals to collect credit card numbers before sending them to three compromised Target servers. Target’s security experts based in Bangalore (India) reported it to the US Security Operations Center in Minneapolis. The alert level was FireEye’s highest. The center did not react to this notification. On 2 December, a new notification was sent, again without generating any reaction.

The exfiltration of the stolen data started after 2 December. Thus, had the Security Operations Center reacted to this alert, it might not have stopped the collection, but it would at least have stopped the exfiltration to the Russian servers.

As we do not have details on the daily volume of alerts reported from Bangalore to the Security Operations Center, it is difficult to blame anybody. Nevertheless, this is a good lesson, with the following conclusions:

  • Law 10: Security is not a product but a process. You may have the best tools (and FireEye is an extremely sophisticated one: it mirrors the system, runs the input data within the mirror, and analyzes the reactions in order to detect malicious activities). If you do not manage the feedback and alerts of these tools and take the proper decisions, then these tools are useless. Unfortunately, the rate of false positives is too high to let current tools take such decisions themselves.
  • Law 6: You are the weakest link. The Security Operations Center decided not to react. As FireEye was not yet fully deployed, we may suppose that the operators did not fully trust it. The human decision was wrong this time.