ASW #223 – Jeevan Singh
Full Audio
View Show IndexSegments
1. Redefining Threat Modeling – Security Team Goes on Vacation – Jeevan Singh – ASW #223
Threat modeling is an important part of a security program, but as companies grow you will choose which features you want to threat model or become a bottleneck.
What if I told you, you can have your cake and eat it too. It is possible to scale your program and deliver higher quality threat models.
Segment Resources: - Original blog: https://segment.com/blog/redefining-threat-modeling/ - Open Sourced slides: https://github.com/segmentio/threat-modeling-training
Announcements
Dive deeper into the world of cybersecurity with Security Weekly on Instagram! Follow us @SecWeekly to find exclusive clips, hilarious memes, behind-the-scenes sneak peeks, and more! Stay connected, stay informed, and join our growing community!
Guest
Jeevan Singh is the Director of Product Security at Twilio, where he is embedding security into all aspects of the software development process. Jeevan enjoys building security culture within organizations and educating staff on security best practices. Jeevan is responsible for a wide variety of tasks including: architecting security solutions, working with development teams to resolve security vulnerabilities and building out security features. Before life in the security space, Jeevan had a wide variety of development and leadership roles over the past 15 years.
Hosts
2. Another Ping of Death, Clever JSON Manipulation, iCloud Encryption, ChatGPT Threats – ASW #223
FreeBSD joins the ping of death list, exploiting a SQL injection through JSON manipulation, Apple's design for iCloud encryption, attacks against machine learning systems and AIs like ChatGPT
Announcements
Security Weekly listeners, we need to hear your voices! Leave us your feedback on Apple podcasts & submit a screenshot to our giveaway form for a chance to win a $100 gift card from Hacker Warehouse! This giveaway will be open until the end of the year. We appreciate your honest feedback so we can continue to make great content for our audience! Visit securityweekly.com/giveaway to enter!
Hosts
- 1. FreeBSD-SA-22:15.ping – Stack overflow in ping(8)
Highlighting this mostly as an item of curiosity and a vuln reminiscent of the ping of death in the late 90s.
As a quick recap, a buffer overflow can be triggered in FreeBSD's ping by a malformed packet. Crafting packets with incorrect length fields, conflicting flags, and overlapping fragments is a classic way to attack TCP/IP stacks. On the plus side, this looks like it's limited to a DoS since ping drops privileges once it opens a raw socket and it also executes within a sandboxed environment. Both of those are good practices for anything that operates on network data. On the other hand, it shows that C code remains plagued by memory safety issues.
- 2. {JS-ON: Security-OFF}: Abusing JSON-Based SQL to Bypass WAF
This doesn't feel like the type of vuln that's going to be prevalent or a new class to worry about. But it's still an article worth reading! It reads like a walkthrough of a CTF challenge -- having a hunch about a vuln, running into obstacles while trying to exploit it, tweaking techniques based on technologies within the target, and eventually finding success. Look at it in terms of process and technique.
- 3. Advanced Data Protection for iCloud
Apple adds encryption to iCloud backups. The guide shows why an apparently simple premise -- encrypt backups -- becomes complex in its edge cases, usability considerations, and policy impacts. This change expands the categories of data that Apple encrypts to user-managed keys, i.e. via end-to-end encryption. Apple enumerates the categories in this page.
Here's the Apple announcement
- 4. Exploring Prompt Injection Attacks
Everyone seems to be playing with chat AIs now. This article from NCC Group summarizes the prompt injection technique against such systems. The attack is essentially the mix of data and code that you'd see in SQL injection or command injection attacks. Only in this case you see more natural phrasing like, "Ignore the above directions and [do something unexpected]". It's a way of manipulating a model's instructions by interacting directly with the model rather than having it execute against the initial intent of a prompt. The article gives simple, clear examples.
The article includes several references (although several are just Twitter threads) that are worth reading should you find yourself intrigued by this class of attacks.
- 5. ChatGPT bid for bogus bug bounty is thwarted
Bug bounty programs already struggle with low-quality reports and reports taken directly from scanners with no followup analysis by the submitter. Here's a case where a researcher apparently used ChatGPT to create a vuln report out of thin air. ChatGPT asserted there was a vuln in a smart contract, created a writeup, and requested a reward for identifying an authorization bypass.
But the vuln didn't exist, either in theory or practice. It just sounded reasonable to a non-expert. To quote from the article, “I was most surprised by the mixture of clear writing and logic, but built off an ignorance of 101-level coding basics."
The Cyber Grand Challenge demonstrated the potential for machine learning in vuln detection and exploitation. ChatGPT and its ilk may eventually improve on that, but it doesn't seem like something to worry about any time soon.
- 6. ChatGPT, Galactica, and the Progress Trap
The prompt injection article this week covers one type of technical attack against AI implementations. Earlier this summer we also covered another paper from NCC Group about "Practical Attacks on Machine Learning Systems".
This article brings in the dimension of social impacts and the trust and safety issues that arise from AI-driven chatbots and image generators. Also read Wired's other article about Lensa from this week.
- 1. Evaluating Large Language Models Trained on Code
This paper goes in-depth into the how behind "Codex", a technology underpinning AI work like GitHub's Copilot. It also goes into analysis of the security implications of such technology, which share a similar threat landscape to Chat GPT.
- 2. On the Malicious Use of Large Language Models like GPT-3
A quote from this theoretical paper discussing AI and security is as follows; and describes best the essence of this article:
'Security is critical to ethical artificial intelligence not because it is a parallel property that we should seek alongside things like fairness, but because it is a prerequisite to it. Specifically, security is a prerequisite to ethical artificial intelligence because you cannot make meaningful guarantees about a system that you do not control.' -Jennifer Fernick
- 3. ChatGPT shows promise of using AI to write malware
This article examines the basics of security risks posed by Chat GPT. The main conclusion is that while current AI cannot write sophisticated offensive security code, however, there may come a time in the future where it can. Furthermore, the biggest risk of current code writing tools and security isn't so much exploitation of existing software, it is that much of the code that it does churn out is insecure. https://www.cyberscoop.com/chatgpt-ai-malware/
- 4. Jailbreaking ChatGPT on Release Day
A more lighthearted article on getting ChatGPT to provide all sorts of instructions for nefarious deeds.