Virtual Black Hat: Rapid7 Experts Share Key Takeaways from Day 1 Sessions

Boy, oh boy, has Black Hat changed.

Where we once looked up at the neon lights of Las Vegas, we now gaze into the glow of our laptop screens. The bustle of the Business Hall has been replaced by the rustle of papers from our isolated desks. And we’ve gone from shaking hands with other security pros to shaking up cocktails from the comfort of our home offices—hey, pandemic or no pandemic, there were always going to be cocktails.

But even with all these changes, the core mission of Black Hat has remained the same: to share vital information, show off new tools and techniques, and connect with the cybersecurity community.

We know that even from home, it can be tough to catch everything you want to see at Black Hat, so we had our experts do the work for you as part of our Virtual Vegas event. In briefings held earlier this morning, they discussed their key takeaways from Black Hat sessions surrounding research, vulnerability management, and detection and response. Here’s what they had to say.

Want our Black Hat takeaways sent directly to your inbox each day? Sign up today.


Research takeaways from Black Hat 2020

Hacking the Supply Chain – The Ripple20 Vulnerabilities Haunt Tens of Millions of Critical Devices

In this session, presenters Shlomi Oberman, Moshe Kol, and Ariel Schön shared the story of how they found and exploited the Ripple20 vulnerabilities (a series of flaws in the Treck TCP/IP stack, including the critical CVE-2020-11901) affecting millions of IoT devices. Deral Heiland, Rapid7’s IoT Research Lead, said he was already familiar with this situation before attending the session, and had hoped to see more discussion of supply chain issues and how these types of vulnerabilities could play out in the future, rather than a readout of Ripple20.

His takeaway: vulnerabilities in widely used protocol stacks are a critical issue that could prove devastating because we lack good supply chain tracing. The industry needs better solutions for tracking software and hardware components through the supply chain, along with more effective ways to identify and patch issues like Ripple20 in the IoT/OT technology already deployed in our environments.

Based on this talk, Deral said he doesn’t plan on taking immediate action, but he will continue to push for and support work in the area of Software Bill of Materials (SBoM).
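
For readers newer to SBoMs, here’s a rough, hypothetical sketch of the kind of check an SBoM makes possible: given a CycloneDX-style component inventory for a device, flag anything that matches a watchlist of known-vulnerable components. The file name, component names, and versions below are invented for illustration; real tooling would match on identifiers like CPEs or package URLs.

```python
import json

# Hypothetical: a CycloneDX-style SBOM exported for a device firmware image.
SBOM_PATH = "device_firmware.cdx.json"

# Hypothetical watchlist of vulnerable components (name -> affected versions).
VULNERABLE = {"treck-tcpip": {"6.0.1.41", "6.0.1.66"}}

def flag_vulnerable_components(sbom_path: str) -> list[str]:
    """Return components in the SBOM that match the vulnerability watchlist."""
    with open(sbom_path) as f:
        sbom = json.load(f)
    findings = []
    for component in sbom.get("components", []):
        name = component.get("name", "").lower()
        version = component.get("version", "")
        if version in VULNERABLE.get(name, set()):
            findings.append(f"{name} {version}")
    return findings

if __name__ == "__main__":
    for finding in flag_vulnerable_components(SBOM_PATH):
        print("Vulnerable component found:", finding)
```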

CloudLeak: DNN Model Extractions from Commercial MLaaS Platforms

Rapid7 Principal Artificial Intelligence Researcher Erick Galinkin attended Wednesday’s “CloudLeak: DNN Model Extractions from Commercial MLaaS Platforms” session, hosted by Yier Jin, Honggang Yu, and Tsung-Yi Ho. In it, the presenters demonstrated a novel type of attack that allows adversaries to easily extract large-scale Deep Neural Network (DNN) models from various cloud-based Machine Learning-as-a-Service (MLaaS) platforms.

Erick said this session was his favorite because it was well aligned with his day-to-day work. His key takeaway: using adversarial examples to steal models is feasible and cheaper than brute-force querying, even though this use of adversarial examples is still virtually unheard of and makes for a unique type of attack. He added that adversarial examples can also be generated without computing the gradient, and that using MLaaS does not ensure your data is protected.

The one thing Erick wished for was more clarity around the presenters’ methodology: he found it incredibly clever, but wanted to understand more about how the adversarial examples were generated without access to the model. Even so, he plans to use this method to generate copycat models and perform model inversion.
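
To make the extraction idea concrete, here’s a heavily simplified sketch of the general pattern: query a hosted model, record its answers, and train a local substitute on them. The `query_mlaas` function is a stand-in (here it just simulates a victim model locally), and this sketch uses random queries rather than the adversarial examples that make the real attack so query-efficient.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def query_mlaas(batch: np.ndarray) -> np.ndarray:
    """Stand-in for calls to a commercial MLaaS prediction API.

    In the real attack, the inputs would be crafted adversarial examples
    near the victim's decision boundary, which keeps the query budget low.
    Here we just simulate a victim model locally.
    """
    return (batch.sum(axis=1) > 0).astype(int)  # pretend victim model

# 1. Generate query inputs (the presenters use adversarial examples instead).
queries = np.random.randn(2000, 20)

# 2. Label them with the victim's own predictions.
stolen_labels = query_mlaas(queries)

# 3. Train a local "copycat" substitute model on the stolen labels.
substitute = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500)
substitute.fit(queries, stolen_labels)

# The substitute now approximates the victim's decision boundary.
print("Agreement with victim:",
      (substitute.predict(queries) == stolen_labels).mean())
```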

Election Security: Securing America’s Future

Tod Beardsley, Director of Research at Rapid7, attended “Election Security: Securing America’s Future,” presented by CISA’s Christopher Krebs, and was surprised to hear the agency confidently claim that hacking activity is down, especially compared to the 2016 election. But while technical, traditional hacking is down, disinformation and propaganda are up.

He also reported that despite the pandemic, CISA still expects the majority of people to vote in person, alongside far more people voting by mail than usual. That means the worst of both worlds: increased risk of COVID-19 infection and the need to run in-person elections, while simultaneously dealing with the logistics of printing, mailing, and collecting mail-in ballots at a scale many counties have never seen. Tod did add that CISA has a ton of practice dealing with the types of attacks one might expect, thanks to the ransomware responses the organization has been involved with over the past three years, and that it has been helping local election offices test their preparedness with red team engagements.

In terms of next steps, Tod plans to get in touch with his election authority in Austin to see what the plan is for backup pollbooks, since he finds it pretty alarming how easily those systems can get knocked offline.

Vulnerability risk management takeaways from Black Hat 2020

Engineering Empathy: Adapting Software Engineering Principles and Process to Security

This session reinforced the necessary—yet elusive—cultural shift making waves amongst many modern security and development teams. Truly practicing what they preach, Salesforce team members Craig Ingram, Principal Security Engineer, and Camille Mackinnon, Principal Infrastructure Engineer, shared their experiences stepping into each other’s shoes and adopting and embedding each other’s methodologies. They also walked through examples of how shared language and values can uphold DevSecOps culture and empower engineers on both sides to self-service.

The session was a favorite for Garrett Gross, our Senior Technical Advisor for Vulnerability Risk Management, whose key takeaway is in the title: One of the best ways to improve yourself is to better understand others. Building a foundation of trust, maturity, and honesty opens the door to a mutually beneficial relationship between teams. More tactically, Garrett noted that repetitive code or manual remediation expands your attack surface through natural human error, and that it’s better to “start with failure”: write tests for failure and success conditions (such as vulnerability assessment checks) early in development, so those tests live on as automatic regression indicators.
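
As a loose illustration of “starting with failure,” a team might write the failing check first so it survives as an automated regression gate. The package and version floor below are hypothetical examples, not anything from the session.

```python
# test_dependency_floor.py -- written before the fix, kept forever after.
# Hypothetical example: the team finds that requests < 2.20.0 carries a
# known CVE, so the first step is a test that fails until the dependency
# is upgraded, then lives on as a regression indicator.
from importlib.metadata import version


def test_requests_is_patched():
    major, minor, *_ = (int(part) for part in version("requests").split(".")[:2])
    assert (major, minor) >= (2, 20), "requests must be >= 2.20.0 (known CVE)"
```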

Reversing the Root: Identifying the Exploited Vulnerability in 0-days Used In-The-Wild

In this session, Maddie Stone walked through various techniques and approaches leveraged by Google’s Project Zero to identify the root cause of zero-day vulnerabilities. While the case studies themselves were fascinating in their own right, our Lead Security Researcher for Metasploit Framework, Spencer McIntyre, identified three key learnings that a wide range of organizations can benefit from: First, when a zero-day is detected, it is because the attacker has made a mistake. Second, given the breadth of approaches one can take to analyze a root cause, it is important to be adaptable. Last but most certainly not least, it’s important to understand the three key roles in the process—the person who discovers and discloses a vulnerability, the vendor responsible for a vulnerability, and independent third-party researchers. This helps you locate the right information at the right time, and analyze a PoC more effectively.

Spencer is looking forward to applying the learnings at his next opportunity. “Basically the gist was to modify the PoC to remove sections to identify which were critical and relevant to the exploit procedure to reduce the data that needs to be analyzed.”
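
Here’s a rough sketch of that reduction idea in the spirit of delta debugging: repeatedly drop chunks of the proof-of-concept and keep only what the crash still depends on. The `still_crashes` callback is a placeholder for however you reproduce the bug (for example, replaying the input against the target under a debugger); this is our illustration, not Project Zero’s tooling.

```python
def minimize_poc(poc: bytes, still_crashes) -> bytes:
    """Greedily remove chunks of a PoC that aren't needed to reproduce the crash.

    `still_crashes(candidate)` must return True if the reduced input still
    triggers the bug -- typically by replaying it against the target.
    """
    chunk = max(1, len(poc) // 2)
    while chunk >= 1:
        i = 0
        while i < len(poc):
            candidate = poc[:i] + poc[i + chunk:]   # try dropping this chunk
            if candidate and still_crashes(candidate):
                poc = candidate                      # chunk wasn't needed
            else:
                i += chunk                           # chunk is relevant, keep it
        chunk //= 2
    return poc
```

Whatever survives is, roughly, the portion of the PoC that actually matters for root-cause analysis.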

The Devil’s in the Dependency: Data-Driven Software Composition Analysis

This session, led by Benjamin Edwards and Chris Eng, is a cautionary tale for modern-day developers and their security counterparts. It’s no wonder third-party code libraries have become an essential part of developers’ toolkits and are ubiquitous in nearly all software written today. But immense gains in agility and efficiency often come at the cost of security risk, and this is no exception.

As Garrett summarizes, “due to transitive dependencies on third-party libraries of third-party libraries, the true attack surface is much larger than developers know or can even report on with the way software is written.” If a security flaw exists in reused code, it can propagate boundlessly. He reassures, however, that “most fixes are relatively minor, and most organizations should be successful in managing the risk via discovery, tracking, and remediation rather than giant overhauls of code.” Another notable insight from the study is that the potential security impact of affected code is influenced more heavily by the size and composition of your IT environment than by the language used.
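
A small, hypothetical sketch of why that is: even a handful of direct dependencies can fan out into a much larger set once you walk the whole graph. In practice the graph would come from a lockfile or a software composition analysis tool, not a hand-written table like this one.

```python
from collections import deque

# Hypothetical dependency graph: package -> packages it pulls in.
DEPENDS_ON = {
    "my-app":        ["web-framework", "http-client"],
    "web-framework": ["template-engine", "yaml-parser"],
    "http-client":   ["tls-lib", "url-parser"],
    "yaml-parser":   ["c-extension-shim"],
    "tls-lib":       ["crypto-primitives"],
}

def transitive_dependencies(root: str) -> set[str]:
    """Breadth-first walk of everything the root package ultimately depends on."""
    seen, queue = set(), deque(DEPENDS_ON.get(root, []))
    while queue:
        pkg = queue.popleft()
        if pkg not in seen:
            seen.add(pkg)
            queue.extend(DEPENDS_ON.get(pkg, []))
    return seen

print(f"Direct dependencies: {len(DEPENDS_ON['my-app'])}, "
      f"true attack surface: {len(transitive_dependencies('my-app'))} packages")
```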

Hacking the Voter: Lessons from a Decade of Russian Military Operations

While the topic of election hacking and interference has become a mainstay in US current events, the tactics leveraged by the Russian military to achieve these ends date back to the 1920s. In this session, Nate Beach-Westmoreland covered the two-fold approach to election interference: building informational and psychological distrust in the system, and exercising technical control over information systems and data integrity.

Joshua Harr, our Senior Advisory Services Consultant (not to mention a Cyber Warfare Officer for the Air Force Reserves), geeked out on the topic (we’re talking eight written pages of notes) and appreciated how the session contrasted with many of Black Hat’s more technical talks, since “strategy informs the tactic.” Joshua’s main takeaway was for organizations to have a plan in place for the potential impact of election interference, such as reputational fallout. Think of the plan as being woven into the fabric of your organization, rather than as a band-aid response.

Detection and response takeaways from Black Hat 2020

Hiding Process Memory via Anti-Forensic Techniques

This session was a great start to a novel topic, but one our attendees agreed needs more time to evolve before it becomes top of mind. Frank Block, security researcher at ERNW Research GmbH, explored three methods for preventing malicious user space memory from appearing in analysis tools, including modifying memory characteristics and manipulating kernel structures. These tactics make memory inaccessible to security analysts and open the door to new subversion techniques.

Wade Woolwine, principal threat intelligence researcher at Rapid7, noted that while the research behind this presentation was significant and the techniques presented are unique, it isn’t relevant for the majority of security teams yet; we need to see these techniques used in the wild first.

Office Drama on macOS

Macro-based attacks aren’t anything new in the Windows world, but in this session, Patrick Wardle explored the macOS attacks that have historically received less attention from the security community, despite growing in popularity.

Attendees noted that this session was jam-packed with information. For Alan Foster, senior security solutions engineer at Rapid7, it changed his perception of macOS: he now knows not to underestimate an attacker’s commitment to breaching such an environment. While recent versions of the OS have security mechanisms in place to limit the execution of malicious documents delivered via phishing, for example, there are still creative ways around them (including the use of old file formats from the 1980s). Wade Woolwine noted that he’s been “waiting for when macOS and Apple need to focus on security like Microsoft has been forced to,” adding that Mac users have traditionally been underestimated as targets, and that as time goes on we’ll see how committed and creative attackers can be in getting around these security mechanisms. And with many small businesses and startups running macOS, the time is right to turn focus there as the attack surface widens. The good news: these attacks aren’t hard to detect with the right tools and processes in place.
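
As one hedged example of that kind of triage, the open-source oletools library (assuming its commonly used olevba API) can flag documents that carry macro code before anyone opens them; the file names below are made up.

```python
from oletools.olevba import VBA_Parser  # pip install oletools

# Hypothetical inbox of attachments pulled aside for triage.
SUSPECT_FILES = ["quarterly_report.doc", "invoice_2020.xls"]

for path in SUSPECT_FILES:
    parser = VBA_Parser(path)
    if parser.detect_vba_macros():
        print(f"[!] {path} contains macro code -- review before opening")
        for _, _, vba_filename, vba_code in parser.extract_macros():
            print(f"    macro module: {vba_filename} ({len(vba_code)} bytes)")
    else:
        print(f"[ok] {path}: no macros detected")
    parser.close()
```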

Policy Implications of Faulty Cyber Risk Models and How to Fix Them

This session was a favorite among Rapid7 attendees. Presenters Wade Baker and David Severski offered up a fresh take on data management and analysis as it relates to risk management. At its core: “Bad security data leads to bad security policies; better data enables better policies.”

There is plenty of risk research available today estimating the economic impact of a cyber event, including the oft-cited Ponemon Institute figure that the cost per record in a breach averages $150. The problem is that this single baseline isn’t representative of every industry or organization measured against it. This session broke down the data and offered alternative methods that use company size, revenue, and the probability of a breach to estimate the likely cost of a breach. For Meg Donlon, senior product marketing manager at Rapid7, it emphasized the importance—and implications—of research methodology in cybersecurity, particularly as it influences both regulation and third-party policy. Overall, our attendees believe this session highlighted the value of security program development, and the need for a mindset shift away from shaming organizations for breaches and toward information sharing so we can all make ourselves safer.
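
As a back-of-the-envelope illustration of the problem (all numbers below are invented, not from the talk): a flat per-record rate scales linearly with record count, while observed losses tend to grow far more slowly, so the flat model wildly overstates the cost of very large breaches.

```python
COST_PER_RECORD = 150  # the oft-cited flat average, in USD

def flat_model(records: int) -> float:
    """Flat cost-per-record estimate."""
    return COST_PER_RECORD * records

def sublinear_model(records: int) -> float:
    """Hypothetical log-log style model: cost grows with record count,
    but far more slowly than linearly (coefficients invented for illustration)."""
    return 2_500 * records ** 0.6

for records in (10_000, 1_000_000, 100_000_000):
    print(f"{records:>11,} records: flat ${flat_model(records):>15,.0f} "
          f"vs. sublinear ${sublinear_model(records):>13,.0f}")
```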

Practical Defenses Against Adversarial Machine Learning

Machine learning … so hot right now. Especially adversarial and bad machine learning. In this session, Ariel Herbert-Voss reviewed a series of attacks ranging from humorous and benign (e.g., driving a car with 90 cell phones to make Google Maps believe there’s a traffic jam) to more serious and damaging. Her examples all came from probing machine learning models, either by providing bad inputs or by taking advantage of model leakage to work around the system.

For Jason Hunsberger, senior product manager at Rapid7, a key takeaway was to resist the temptation to share detailed data about your model’s accuracy with your users, because it can easily be reverse-engineered. Returning rounded numbers, withholding detailed statistics, and giving simple zero-or-one answers were other mitigations against model leakage he took away. The general takeaway for our team: machine learning is based on the data it’s fed, and the value of the output is only as good as the data that goes in. Stay wary of algorithms.
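
A minimal sketch of that output-hardening advice, assuming a service that currently returns raw class probabilities to callers: round or threshold what you expose, so each query leaks less about the decision boundary.

```python
import numpy as np

def harden_prediction(probabilities: np.ndarray,
                      mode: str = "rounded") -> np.ndarray:
    """Reduce how much a prediction response reveals about the model.

    'rounded' -> coarse probabilities (e.g., 0.8 instead of 0.8173642)
    'label'   -> zero-or-one answer only, no confidence at all
    """
    if mode == "label":
        return (probabilities >= 0.5).astype(int)
    return np.round(probabilities, 1)

raw = np.array([0.8173642, 0.1049773, 0.4999113])
print("raw:    ", raw)                               # avoid returning this
print("rounded:", harden_prediction(raw))            # coarse confidences
print("labels: ", harden_prediction(raw, "label"))   # binary answers only
```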

Interested in hearing more takeaways from Black Hat sessions? Sign up to attend our day two debriefs on Friday, Aug. 7, and stay tuned for our blog post tomorrow afternoon with another slew of insights from our experts!

Want our Black Hat takeaways sent directly to your inbox each day? Sign up today.

