Black Hat 2024: Day 1

Jeff MOSS introduction

Jeff MOSS is the founder of Black Hat and Defcon. He always presents his latest thoughts.

New, probably unforeseen, threats have arisen in the geopolitical landscape in the last few years. For instance, what do you do if some of your development teams are in a war zone? What if the IP is stored in these zones? Can you stay neutral? What are the cyber consequences if you cannot?

Keynote: Democracy’s Biggest Year: The Fight for Secure Elections Around the World

BUECHEL E. (CISA), De VRIES H. (European Union Agency for Cybersecurity), EASTERLY J. (Cybersecurity and Infrastructure Security Agency), OSWALD F. (National Cyber Security Centre)

Nihil Nove Sub Sole. The usual expected stuff.

Practical LLM Security: Takeaways From a Year in the Trenches

HARANG R. (NVIDIA)

First, he provided a high-level explanation of Large Language Models (LLM). The interesting point is that although the candidate tokens are ranked by their highest probability, the sampling is random. Thus, an LLM sometimes makes bad/weird selections (hallucinations, …).

Sampled tokens are locked (no going back). Thus, a lousy selection persists and cannot be reversed, at least not by the LLM. The same is true for prompts (forgetting previous prompts is not going back).
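This ranked-then-random selection is easy to sketch. The following toy sampler (an illustration of generic top-k temperature sampling, not NVIDIA's stack; the vocabulary and logits are invented) shows why a low-probability token can still be emitted:

```python
import math
import random

def sample_next_token(logits, temperature=0.8, top_k=3):
    """Rank candidate tokens by probability, then sample randomly among
    the top-k. The ranking is deterministic; the final pick is not, which
    is why an LLM can occasionally emit a low-probability (weird) token."""
    # Softmax with temperature (numerically stabilized).
    scaled = {t: l / temperature for t, l in logits.items()}
    m = max(scaled.values())
    exps = {t: math.exp(s - m) for t, s in scaled.items()}
    total = sum(exps.values())
    probs = {t: e / total for t, e in exps.items()}
    # Keep the k most probable candidates...
    top = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
    # ...and sample among them, weighted by probability.
    tokens, weights = zip(*top)
    return random.choices(tokens, weights=weights)[0]

# Invented toy vocabulary and logits.
logits = {"Paris": 5.0, "London": 2.0, "Rome": 1.5, "banana": 0.1}
print(sample_next_token(logits))  # usually "Paris", but not always
```

Once a token is picked, it is appended to the context and conditions every later pick, which is the "no going back" behavior described above.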

This is why Retrieval-Augmented Generation (RAG) is used. RAG supplements the model with better, more fine-tuned knowledge.

He highlighted some RAG-related issues. RAG increases the attack surface: it is easier to poison a RAG dataset than the LLM training dataset. For instance, he described the Phantom attack, in which the attacker can direct the returned answer for a poisoned concept.

Therefore, the security and access control of the RAG are crucial. Furthermore, RAG is excellent at searching. Thus, if the document classification (and its enforcement) and access control are lax, it is game over. It is relatively easy to leak confidential data inadvertently.
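The access-control point can be illustrated with a toy retrieval step (the documents, labels, and helper below are invented; real systems use embeddings and a vector store):

```python
def retrieve(query, documents, user_clearance):
    """Return the most relevant documents the user is cleared to see.
    Filtering BEFORE the text reaches the LLM is the crucial step:
    anything placed in the prompt can leak into the answer."""
    def score(doc):
        # Naive relevance: count of query words present
        # (a stand-in for embedding similarity).
        return sum(w in doc["text"].lower() for w in query.lower().split())
    # Drop every document above the caller's clearance level.
    allowed = [d for d in documents if d["classification"] <= user_clearance]
    return sorted(allowed, key=score, reverse=True)[:3]

docs = [
    {"text": "Quarterly revenue projections", "classification": 2},
    {"text": "Public press release on revenue", "classification": 0},
]
# A clearance-0 user never sees the confidential projection,
# however relevant it is to the query.
print([d["text"] for d in retrieve("revenue", docs, user_clearance=0)])
```

If the classification labels are missing or wrong, the filter is useless, which is exactly the lax-classification failure mode described above.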

Using emails as a RAG source is promising but dangerous. It is an easily accessible poisoning point for an attacker and requires no penetration.

What is logged, and who can view the logs, is also a concern. Logging the prompts and their responses is very sensitive: confidential information may leak and, in any case, cross trust boundaries.

Do not rely on guardrails. They do not work and will not stop a serious attacker.

“Privacy Side Channels in Machine Learning Systems” (Debenedetti et al., 2023) is an interesting paper to read.

15 Ways to Break Your Copilot

EFRAT A. (Zenity)

Copilot is a brand name that encompasses all of Microsoft’s AI products. All Copilots share the same low-level layers (i.e., they use the same kernel LLM) and are specialized for a set of tasks.

Copilot Studio allows creating a Gen AI-based chatbot with no code. The speaker presented many default-configuration issues that opened devastating attacks. Meanwhile, Microsoft has fixed some of them to be less permissive. Nevertheless, there are still many ways to leak information. This is especially true as the tool targets non-experts and thus has a rudimentary security stance (if there is even a security stance).

Be careful whom you authorize to use such tools, and review the outcome.

Kicking in the Door to the Cloud: Exploiting Cloud Provider Vulnerabilities for Initial Access

RICHETTE N. (Datadog)

The speaker presented cross-tenant issues in AWS.  Datadog found some vulnerabilities in the policies managing `sts:AssumeRole`.

Lesson: when using `sts:AssumeRole`, add restrictive conditions to the policy based on the ARN, the source account, and so on.
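For illustration, such a hardened trust policy might look like the following sketch (expressed as a Python dict; the service principal, account ID, and ARN are placeholders, not the actual vulnerable policies Datadog found):

```python
import json

# Hypothetical trust policy: the role may only be assumed by the service
# on behalf of OUR account and OUR resource, which blocks the
# cross-tenant "confused deputy" pattern. IDs and ARNs are placeholders.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "cloudtrail.amazonaws.com"},
        "Action": "sts:AssumeRole",
        "Condition": {
            # Only honor requests originating from our account...
            "StringEquals": {"aws:SourceAccount": "111122223333"},
            # ...and from our specific resource.
            "ArnLike": {"aws:SourceArn": "arn:aws:cloudtrail:*:111122223333:trail/*"},
        },
    }],
}
print(json.dumps(trust_policy, indent=2))
```

Without the `Condition` block, any tenant able to drive the same service could have the service assume the role in your account.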

Compromising Confidential Compute, One Bug at a Time

VILLARD Maxime (Microsoft)

To isolate a tenant from the cloud provider, Intel proposes a new technology called TDX (Trust Domain Extensions). It will be present in the next generation of Intel chips. The host sends a set of commands to enter the TDX mode for a module. In this mode, the TDX module can launch its own VM that executes independently from the cloud hypervisor.[1]

The team found two vulnerabilities. One enabled a DoS attack from within the TDX that crashed all the other tenants executing on the host processor.


[1] TDX is not an enclave like SGX.


Black Hat 2021: my preferred talks

Last week, I attended Black Hat 2021. It was a hybrid conference, i.e., both on-site and virtual. As a consequence, there were only four concurrent “physical” talks at any moment. The number of attendees was far lower than in 2019. I attended the physical talks exclusively, with a focus on hacking.

The two talks I enjoyed most were:


Breaking the Isolation: Cross-Account AWS Vulnerabilities by Shir Tamari and Ami Luttwak
They explored the AWS cross-account services, such as CloudTrail or the Serverless Application Repository. Such services allow several accounts to store data in, or read data from, the same location. They discovered that the security policy configuration did not define the referenced accounts. Thus, it was possible to use CloudTrail to store files in an S3 bucket that you did not control.
AWS has fixed the issue. Unfortunately, it is up to the customer to update the policies correspondingly; otherwise, the holes remain.
Fixing a Memory Forensics Blind Spot: Linux Kernel Tracing by Andrew Case and Golden Richard
eBPF is a technology that makes access to the Linux kernel tracing facilities easy. The tracing system is mighty. It allows reading registers, hooking subsystem calls, etc. From userland!! Powerful but nasty.
They presented some extensions of their open-source tools to list the hooked calls and other stealthy operations.
I was not aware of eBPF. It opened my eyes and scared me. An earlier talk, “With Friends Like eBPF, Who Needs Enemies?”, presented a rootkit based on eBPF. Unfortunately, I did not attend it. Had I known about eBPF, I would have attended. It seems that there were three other eBPF-based talks at DEF CON 2021.
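The idea of listing hooked calls can be sketched by parsing the kprobe_events format that the kernel exposes under /sys/kernel/debug/tracing (a toy stand-in for the speakers' forensic tooling, operating here on captured text; the probe names are invented):

```python
def parse_kprobe_events(text):
    """Parse the kernel's kprobe_events format ("p:group/event symbol")
    and return the kernel symbols currently hooked by a probe."""
    hooked = []
    for line in text.splitlines():
        line = line.strip()
        # p = kprobe, r = kretprobe; skip anything else.
        if not line or line[0] not in "pr":
            continue
        parts = line.split()
        # e.g. "p:kprobes/rk_open do_sys_open" -> hooked symbol is field 2.
        if len(parts) >= 2 and ":" in parts[0]:
            hooked.append(parts[1])
    return hooked

# A live forensic pass would read /sys/kernel/debug/tracing/kprobe_events
# (root required); here we use a captured sample with invented probe names.
sample = "p:kprobes/rk_open do_sys_open\nr:kprobes/rk_ret tcp_v4_connect\n"
print(parse_kprobe_events(sample))  # ['do_sys_open', 'tcp_v4_connect']
```

A rootkit that hides its probes from this file is exactly the memory-forensics blind spot the talk addressed: the on-disk tracing view can lie, which is why the speakers work from memory images instead.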


In the coming weeks, I will listen to some virtual talks and report the ones I enjoyed.

Law 7 – You Are the Weakest Link

This post is the seventh in a series of ten. The previous post explored the sixth law: Security is no stronger than its weakest link. Although often neglected, the seventh law is fundamental. It states that human users are often the weakest element of security.

Humans are the weakest link for many reasons. Often, they do not understand security or have a distorted perception of it. For instance, security is often seen as an obstacle. Therefore, users will circumvent security when it obstructs the fulfillment of their tasks and will not apply security policies and procedures. They do not believe that they are a worthwhile target for cyber-attacks.

Humans are the weakest link because they do not grasp the full impact of their security-related decisions. How many people ignore the security warnings of their browser? How many people understand the security consequences and constraints of Bring Your Own Device (BYOD) or Bring Your Own Cloud (BYOC)? Employees put their company at risk through bad decisions.

Humans are the weakest link because they have intrinsic limitations. Human memory is often feeble; thus, we end up with weak passwords, or complex passwords written on a post-it. Humans do not handle complexity correctly. Unfortunately, security is too complex.
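The password limitation is easy to quantify: the entropy of a random password is its length times log2 of the character-set size, and memorable passwords rarely reach safe levels (a back-of-the-envelope sketch):

```python
import math

def entropy_bits(length, charset_size):
    """Entropy in bits of a uniformly random password:
    length * log2(charset_size)."""
    return length * math.log2(charset_size)

# An 8-character lowercase password vs. a 14-character password drawn
# from the 94 printable ASCII characters:
print(round(entropy_bits(8, 26), 1))   # 37.6 bits: weak against offline cracking
print(round(entropy_bits(14, 94), 1))  # 91.8 bits: strong, but hard to memorize
```

The gap between what is strong and what is memorable is precisely why users fall back on weak passwords or post-its.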

Humans are the weakest link because they can be easily deceived.  Social engineers use social interaction to influence people and convince them to perform actions that they are not expected to do, or to share information that they are not supposed to disclose.   For instance, phishing is an efficient contamination vector.

How can we mitigate the human risk?

  • Where possible, make decisions on behalf of the end user; as end users are not necessarily able to make rational decisions on security issues, the designer should make the decisions when possible. Whenever the user has to decide, the consequences of the decision should be made clear to him to guide it.
  • Define secure defaults; the default value should always be set to the highest, or at least an acceptable, security level. Security, not user-friendliness, should drive the default value.
  • Educate your employees; the best answer to social engineering is enabling employees to identify an ongoing social engineering attack. This detection is only possible by educating the employees about this kind of attack.  Training employees increases their security awareness and thus raises their engagement.
  • Train your security staff; the landscape of security threats and defense tools is changing quickly. Skilled attackers will use the latest exploits.  Therefore, it is imperative that the security personnel be aware of the latest techniques.  Operational security staff should have a significant part of their work time dedicated to continuous training.

Interestingly, with the current progress of Artificial Intelligence and Big Data analytics, will the new generation of security tools partly compensate for this weakness?

If you find this post interesting, you may also be interested in my second book, “Ten Laws for Security,” which will be available at the end of this month. Chapter 8 explores this law in detail. The book will be available, for instance, at Springer or Amazon.

Shared Responsibilities on the Cloud

Microsoft recently published a paper titled “Shared Responsibilities For Cloud Computing.” The aim is to explain that, when migrating to the cloud, achieving a secure deployment does not rest solely on the cloud provider. This reality is too often forgotten by cloud customers. Too often, when assessing the security of systems, I hear the statement: but cloud provider X is Y-compliant. Unfortunately, even if this declaration is true, it is only valid for the parts that the cloud provider believes are under its responsibility.

The golden nugget of this document is its figure. It graphically highlights the distribution of responsibilities. Unfortunately, I think there is a missing row: the security of the application executing in the cloud. If the application is poorly written and riddled with vulnerabilities, then it is game over. In the case of SaaS, this security is the responsibility of the SaaS provider. In the other cases, it is the responsibility of the entity that designed the service/application.

The explanations in the core of the document are not extremely useful, as many elements are advertising for Microsoft Azure (which is fair, as it is a Microsoft document).

The document can be used to increase the awareness of the mandatory distribution and sharing of responsibilities.

Some notes on the Content Protection Summit 2015

These notes are personal and reflect the key points that raised my interest. They do not report already-known issues, approved best practices, or security guidelines.

The conference was held on 7th December in Los Angeles. The audience was rather large for such an event (more than 120 attendees), with representatives of content owners, service and technology providers, and a few distributors. CPS is becoming the annual event in content protection. The event was as interesting as last year's.

A special focus was placed on cybersecurity rather than purely content protection.

Welcome remarks (ROSE M.)

The end of EU safe harbor is an issue.

CDSA: A focus on the right things at the right time (by ATKINSON R.)

A set of work streams for 2016 with nothing innovative. Some focus on training and education. A second focus on opportunity versus piracy.

IP security the creative perspective (by McNELIS B.)

An attack on YouTube, which does not take a strong enough stance against piracy. Google does not play the game even though it could (for instance, there is no porn on YouTube, proving the efficiency of curation). The difference between Apple and Google is the intent.

Creators usually do not want to bother with content protection. They want to communicate directly with consumers. The moderator explained that indie filmmakers are far more concerned, as piracy impacts their revenue stream more. The middle class of creators is disappearing.

The BMG v. Cox Communications legal decision is a promising sign.

Breakthrough in watermark (by OAKES G.)

NNSS (Nihil Nove Sub Sole, i.e., nothing new under the sun)

The move to digital pre-release screeners: DVD R.I.P. (panel with ANDERSON A., TANG E., PRIMACHENKO D.)

Pros:

  • Nobody uses DVDs exclusively at home anymore; they use additional media. The user experience of DVD is bad (dixit Fox).
  • E-screener is more eco-friendly than DVD distribution.
  • Less liability, since there is no need to dispose of the physical media.
  • Higher quality is possible.
  • According to Fox, on-line screeners are intrinsically more secure than DVD screeners.

Cons:

  • The challenge is the multiplicity of platforms to serve. Anthony pleads for 2FA.
  • Some guild members want to build a library.
  • Connectivity is still an issue for many members.

Suspicious behavior monitoring is a key security feature.

The global state of information security (by FRANK W.)

Feedback on the PwC annual survey of 40 questions.

  • Former employees are still the most cited sources. Third party related risk is rising.
  • Theft of employee and customer records rose this year.
  • Security budgets increased by 26% over 2014.
  • ISO27001 is the most used framework. 94% of companies use a security framework.
  • Top Cyber threats: vulnerabilities, social engineering and zero-day vulnerabilities.
  • Data traversal is becoming a visible issue (leaks via Dropbox, Google Drive, …).

Would you rather be red and blue, or black and blue (by SLOSS J.)

A highlight of high-profile attacks. A plea for having an in-house red team (attack team).

He advocates the stance of assuming that you’re already penetrated. This requires:

  • War game exercises
  • Central security monitoring
  • Live site penetration test (not really new)

Secrets to build an incident response team (panel with RICKELTYON C., CATHCART H., SLOSS J.)

An Incident Response Team is now mandatory together with real-time continuous monitoring.

Personalize the risk by making the consequences of a breach personal.

Hiring experts for a red team or IRT is tough.

Vulnerability scanning and penetration testing (panel with EVERTS A., JOHNSON C., MEACHAM D., MONTECILLO M.)

NNSS.

Best practice for sending and receiving content (by MORAN T.)

Taxonomy

  • Consumer grade cloud services: Dropbox, etc
  • Production. Media deal, signiant, mediafly, etc
    • Usually isolated system within a company
    • Owned by production rather than IT
  • Enterprise: Aspera
    • Owned by IT

Cooperation between IT and production staff is key.

Don’t tolerate shadow IT; manage it.

Monitor the progress of Network Function Virtualization (NFV) and Software-Defined Networking (SDN), as they may be the next paradigms.

Production in the cloud (panel with BUSSINGER B., DIEHL E., O’CONNOR M., PARKER C.)

CDSA reported about this panel at http://www.cdsaonline.org/latest-news/cps-panel-treat-production-in-the-cloud-carefully-cdsa/

Production security compliance (panel with CANNING J., CHANDRA A., PEARSON J., ZEZZA L.)

It is all about education. The most challenging targets are the creatives.

On the production of a TV show, New Regency tried providing all creatives with a computer, tablet, and phone. They also allocated a full-time IT person.

Cloud Security: a Metaphor

Last year, at the annual SMPTE Technical Conference, I presented a paper, “Is the Future of Content Protection Cloud(y)?” I explained that the trust model of the public cloud is theoretically weaker than that of a private cloud or private data center. The audience argued that, on the contrary, the security of the public cloud may be better than the security of most private implementations. As usual in security, the answer is never Manichean.

Metaphors are often good tools for introducing complex concepts. An analogy with the real world helps build proper mental models. The “pizza as a service” metaphor that explains IaaS, PaaS, and SaaS is a good example. In preparation for the panel on cloud security at the next Content Protection Summit, I was looking for a metaphor to illustrate the difference between the two trust models. I may have found one.

On one side, when using a private cloud (or a private data center), we can liken the trust model to your residential house. You control whom you invite into your home and what your guests are allowed to do. You are the only person (with your family) to have the keys. Furthermore, you may have planted a high hedge to enforce some privacy so that your neighbors cannot easily eavesdrop.

[Illustration: a house, representing the private cloud's trust model]

On the other side, the trust model of the public cloud is like a hotel. You book a room at the hotel. The concierge decides who enters the hotel and what the guests are allowed to do. The concierge provides you with the key to your room. Nevertheless, the concierge has a passkey (or can generate a duplicate of your key). You have to trust the concierge, as you have to trust your cloud provider.

[Illustration: a hotel, representing the public cloud's trust model]

The metaphor of the hotel can be extended to other aspects of security. You are responsible for access to your room. If you do not lock the room, a thief may enter easily, regardless of the vigilance of the hotel staff. Similarly, if your cloud application is not secured, hackers will penetrate it irrespective of the security of your cloud provider. The hotel may provide a vault in your room; nevertheless, the hotel manager has access to its key. Once more, you have to trust the concierge. The same situation occurs when your cloud provider manages the encryption keys of your data at rest.

The hotel is also a good illustration of the risks associated with multi-tenancy. If you forget valuable assets in your room when leaving the hotel, the next visitor may get them. Similarly, if you do not clean the RAM and the temporary files before leaving your cloud application, the next user of the server may retrieve them. This is not just a theoretical attack; multi-tenancy may enable it. Clean your space behind you: the cloud provider will not do it on your behalf.

The person in the room next to yours may eavesdrop on your conversation, and you do not control who occupies the contiguous rooms. Similarly, in the public cloud, if another user is co-located on the same server as your application, this user may extract information from your space. Several attacks based on side channels have recently been demonstrated on co-located servers. They enabled the exfiltration or detection of sensitive data such as secret keys. Adjacent hotel rooms sometimes have connecting doors. They are locked; nevertheless, they are potential weaknesses. A good thief may intrude into your room without passing through the common corridor. Similarly, a hypervisor may have some weaknesses or even trapdoors. The detection of co-location is a hot topic that interests the academic community (and, of course, the hacking community). This blog will carefully follow these new attacks.

Back to the question of whether the public cloud is more secure than the private cloud, the previous metaphor helps to answer. Let us look more carefully at the house in the first figure. Let us imagine that the house is like the following illustration.

[Illustration: a house with open windows and a cracked, weakly locked door]

The windows are wide open. The door is not shut. Furthermore, the door has cracks and a weak lock. Evidently, the owner does not care about security. Yes, in that case, the owner's assets would be more secure in a hotel room than in his house. If your security team cannot properly secure your private cloud (lack of money, time, or expertise), then you would be better off in a public cloud.

If the house is like the one in the next image, then it is another story.

[Illustration: a fortified house with armored windows, a reinforced door, and cameras]

The windows have armored grids protecting them. The steel door is reinforced. The lock requires a strong key and is protected against physical attacks. Cameras monitor access to the house. The owner of this house cares about security. In that case, the owner's assets would be less secure in a hotel room than in his house. If your security team is well trained and has sufficient resources (time, funds), then you may be better off in your private cloud.

Now, if you are rich enough to book an entire floor of the hotel for your own use, and add some access control to filter who can enter that level, then you mitigate the risks inherent in multi-tenancy, as you will have no neighbors. Similarly, if you take the option of having the public cloud's servers dedicated uniquely to your own applications, you are in a similar situation.

The house-versus-hotel metaphor is an interesting way to introduce the trust model of the private cloud versus that of the public cloud. I believe it may be a good educational tool. Can we extend it even more? Your opinion is welcome.

A cautionary note is mandatory: a metaphor always has limitations and should never be pushed too far.

 

The illustrations are from my son Chadi.

CANS 2015 submissions

The 14th International Conference on Cryptology and Network Security (CANS 2015) will be held in Marrakech in December. The submission deadline is 19 June 2015. The topics of interest are rather broad:

  • Access Control for Networks
  • Adware, Malware, and Spyware
  • Anonymity & Pseudonymity
  • Authentication, Identification
  • Cloud Security
  • Cryptographic Algorithms & Protocols
  • Denial of Service Protection
  • Embedded System Security
  • Identity & Trust Management
  • Internet Security
  • Key Management
  • Mobile Code Security
  • Multicast Security
  • Network Security
  • Peer-to-Peer Security
  • Security Architectures
  • Security in Social Networks
  • Sensor Network Security
  • Virtual Private Networks
  • Wireless and Mobile Security

The accepted papers will be published in Springer LNCS. The conference is held in cooperation with the IACR.