No Starch Press's blog

Live Coder Jon Gjengset Gets into the Nitty-Gritty of Rust


Our always fascinating Author Spotlight series continues with Jon Gjengset – author of Rust for Rustaceans. In the following Q&A, we talk with him about what it means to be an intermediate programmer (and when, exactly, you become a Rustacean), how Rust “gives you the hangover first” for your code's own good, why getting over a language's learning curve sure beats reactive development, and how new users can help move the needle toward a better Rust.


A former PhD student in the Parallel and Distributed Operating Systems group at MIT CSAIL, Gjengset is a senior software engineer at Amazon Web Services (AWS), with a background in distributed systems research, web development, system security, and computer networks. At Amazon, his focus is on driving adoption of Rust internally, including building out internal infrastructure as well as interacting with the Rust ecosystem and community. Outside of the 9-to-5, he conducts live coding sessions on YouTube, is working on research related to a new database engine written in Rust, and shares his open-source projects on GitHub and Twitter.


No Starch Press: Congratulations on your new book! Everyone digs the title, Rust for Rustaceans – which is a tad more fitting than its original moniker, Intermediate Rust. I only bring this up because both names speak to who the book is for. Let’s talk about that. What does “intermediate” mean to you in terms of using Rust? Specifically, what gap does your book fill for those who may have finished The Rust Programming Language, and are now raring to become *real* Rustaceans?

Jon Gjengset: Thank you! Yeah, I’m pretty happy with the title we went with, because as you’re getting at, the term “intermediate” is not exactly well-defined. In my mind, intermediate encapsulates all of the material that you wouldn’t need to know or feel comfortable digging into as a beginner to the language, but not so advanced that you’ll rarely run into it when you get to writing Rust code in the wild. Or, to phrase it differently, intermediate to me is the union of all the stuff that engineers working with Rust in real situations would pick up and find continuously useful after they’ve read The Rust Programming Language.

I also want to stress that the book is specifically not titled "The Path to Becoming a Rustacean," or anything along those lines. It’s not as though you’re not a real Rustacean until you’ve read this book, or that the knowledge the book contains is something every Rustacean knows. Quite the contrary – in my mind, you are a Rustacean from just before the first time you ask yourself whether you might be one, and it’s at that point you should consider picking up this book, whenever that may be. And for most people, I would imagine that point comes somewhere around two thirds through The Rust Programming Language, assuming you’re trying to actually use the language on the side.
 

NSP: Rust has been voted “the most loved language” on Stack Overflow for six years running. That said, it's also gained a reputation for being harder to learn than other popular languages. What do you tell developers who are competent in, say, Python but hesitant to try Rust because of the perceived learning curve?

JG: Rust is, without a doubt, a more difficult language to learn compared to its various siblings and cousins, especially if you’re coming from a different language that’s not as strict as Rust is. That said, I think it’s not so much Rust that’s hard to learn as it is the principles that Rust forces you to apply to your code. If you’re writing code in Python, to use your example, there are a whole host of problems the language lets you get away with not thinking about – that is, until they come back to bite you later. Whether that comes in the form of bugs due to dynamic typing, concurrency issues that only crop up during heavy load, or performance issues due to lack of careful memory management, you’re doing reactive development. You build something that kind of works first, and then go round and round fixing issues as you discover them.

Rust is different because it forces you to be more proactive. An apt quote from RustConf this year was that Rust “gives you the hangover first” – as a developer you’re forced to make explicit decisions about your program’s runtime behavior, and you’re forced to ensure that fairly large classes of bugs do not exist in your program, all before the compiler will accept your source code as valid. And that’s something developers need to learn, along with the associated skill of debugging at compile time as opposed to at runtime, as they do in other languages.

It’s that change to the development process that causes much of (though not all of) Rust’s steeper learning curve. And it’s a very real and non-trivial lesson to learn. I also suspect it’ll be a hugely valuable lesson going forward, with the industry’s increased focus on guaranteed correctness through things like formal verification, which only pushes the developer experience further in this direction. Not to mention that the lessons you pick up often translate back into other languages. When I now write code in Java, for instance, I am much more cognizant of the correctness and performance implications of that code because Rust has, in a sense, taught me how to reason better about those aspects of code.
 

NSP: In the initial 2015 release announcement, Rust creator Graydon Hoare called it “technology from the past come to save the future from itself.” More recently, Rust evangelist Carol Nichols described it as “trying to learn from the mistakes of C, and move the industry forward.” To give everyone some context for these sentiments, tell us what sets Rust apart safety-wise from “past” systems languages – in particular, C and C++ – when it comes to things like memory and ownership.

JG: I think Rust provides two main benefits over C and C++ in particular: ergonomics and safety. For ergonomics, Rust adopted a number of mechanisms traditionally associated with higher-level languages that make it easier to write concise, flexible, (mostly) easy-to-read, and hard-to-misuse code and interfaces – mechanisms like algebraic data types, pattern matching, fairly powerful generics, and first-class functions. These in turn make writing Rust feel less like what often comes to mind when we think about system programming – low-level code dealing just with raw pointers and bytes – and makes the language more approachable to more developers.
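The mechanisms Jon lists can be sketched in a few lines. The example below is purely illustrative (the types and names are invented, not from the book): an algebraic data type, exhaustive pattern matching, a generic function, and a first-class function passed as an argument.

```rust
// Illustrative only: an enum (algebraic data type) with two variants.
#[derive(Debug)]
enum Shape {
    Circle { radius: f64 },
    Rect { w: f64, h: f64 },
}

// Pattern matching must handle every variant; forgetting one is a
// compile error rather than a runtime surprise.
fn area(s: &Shape) -> f64 {
    match s {
        Shape::Circle { radius } => std::f64::consts::PI * radius * radius,
        Shape::Rect { w, h } => w * h,
    }
}

// A generic function that takes a first-class function as an argument.
fn total_area<F: Fn(&Shape) -> f64>(shapes: &[Shape], measure: F) -> f64 {
    shapes.iter().map(measure).sum()
}

fn main() {
    let shapes = [
        Shape::Rect { w: 2.0, h: 3.0 },
        Shape::Rect { w: 1.0, h: 1.0 },
    ];
    println!("{}", total_area(&shapes, area)); // prints 7
}
```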

As for safety, Rust encodes more information about the semantics of code, access, and data in the type system, which allows it to be checked for correctness at compile time. Properties like thread safety and exclusive mutability are enforced at the type level in Rust, and the compiler simply won’t let you get them wrong. Rust’s strong type system also allows APIs to be designed to be misuse-resistant through typestate programming, which is very hard to pull off in less strict languages like C.
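Typestate programming, mentioned above, encodes a value's state in its type so that invalid operations fail to compile. Here is a minimal sketch; all the type and method names are hypothetical, invented for illustration:

```rust
use std::marker::PhantomData;

// Marker types for the two states; they exist only at compile time.
struct Unauthenticated;
struct Authenticated;

// Hypothetical connection whose state lives in its type parameter.
struct Connection<State> {
    _state: PhantomData<State>,
}

impl Connection<Unauthenticated> {
    fn new() -> Self {
        Connection { _state: PhantomData }
    }

    // Consumes the unauthenticated connection; the old value can no
    // longer be used, so the state transition is one-way.
    fn authenticate(self, _token: &str) -> Connection<Authenticated> {
        Connection { _state: PhantomData }
    }
}

impl Connection<Authenticated> {
    // `send` exists only in the authenticated state.
    fn send(&self, msg: &str) -> usize {
        msg.len()
    }
}

fn main() {
    let conn = Connection::new().authenticate("secret");
    println!("{}", conn.send("hello")); // prints 5
    // Connection::new().send("hi"); // compile error: no `send` in this state
}
```

Because authenticate takes self by value, there is no way to keep using the unauthenticated handle after the transition – misuse simply has no spelling in the language.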

Rust’s choice to have an explicit break-the-glass mechanism in the form of the unsafe keyword also makes a big difference, because it allows the majority of the language to be guaranteed-safe while also allowing low-level bits to stay within the same language. This avoids the trap of, say, performance-sensitive Python programs where you have to drop to C for low-level bits, meaning you now need to be an expert in two programming languages! Not to mention that unsafe code serves as a natural audit trail for security reviews!
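As one small sketch of this break-the-glass pattern (not code from the book), an unsafe block can be confined inside a safe function whose surrounding check upholds the invariant, so callers stay in guaranteed-safe Rust and the unsafe keyword marks exactly where an audit should look:

```rust
/// Returns the first element of `slice`, or `default` if it is empty.
/// Callers stay in safe Rust: the emptiness check here upholds the
/// invariant that the unchecked indexing relies on.
fn first_or<'a>(slice: &'a [u8], default: &'a u8) -> &'a u8 {
    if slice.is_empty() {
        default
    } else {
        // SAFETY: the branch above guarantees the slice is non-empty,
        // so index 0 is in bounds.
        unsafe { slice.get_unchecked(0) }
    }
}

fn main() {
    assert_eq!(*first_or(&[7, 8, 9], &0), 7);
    assert_eq!(*first_or(&[], &0), 0);
}
```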
 

NSP: Along those same lines, Rust (like Go and Java) prevents programmers from introducing a variety of memory bugs into their code. This got the attention of the Internet Security Research Group, whose latest project, Prossimo, is endeavoring to replace basic internet programs written in C with memory-safe versions in Rust. Microsoft has also been very vocal about their adoption of Rust, and Google is backing a project bringing Rust to the Linux kernel underlying Android. As Rust is increasingly embraced and used for bigger and bigger projects, are there any niche or large-scale applications, or certain technology combos you’re most excited about?

JG: Putting aside the discussion about whether Rust prevents the same kinds of bugs in the same kinds of ways as languages like Go and Java, it’s definitely true that the move to these languages represents a significant boost to memory safety. And I think Rust in particular unlocked another segment of applications that would previously have been hard to port, such as those that would struggle to operate with a language runtime or automated garbage collection.

For me, some of the most exciting trajectories for Rust lie in its interoperability with other systems and languages, such as making Rust run on the web platform through WASM, providing a performance fallback for dynamic languages like Ruby or Python, and allowing component-by-component rewrites of established systems like cURL, Firefox, and Tor. Rust in the kernel also has enormous potential, especially if it makes kernel development more approachable than it currently is – kernel C programming can be very scary indeed, which means fewer contributors dare to try.
 

NSP: In the book’s foreword, David Tolnay – a prolific contributor to the language, who served as your technical reviewer – says that he wants readers to “be free to think that we got something wrong in this book; that the best current guidance in here is missing something, and that you can accomplish something over the next couple years that is better than what anybody else has envisioned. That’s how Rust and its ecosystem have gotten to this point.” The community-driven development process he’s referencing is somewhat unique to Rust and its evolution. Could you briefly explain how that works?

JG: I’m very happy that David included that in his foreword, because it resonates strongly with me coming from a background in academia. The way we make progress is by constantly seeking to find new and better solutions, and questioning preconceived notions of what is and isn’t possible, or how things “should” be done. And I think that’s part of how Rust has managed to address as many pain points as it does. The well-known Rust adage of “fast, reliable, productive, pick three” is, in some sense, an embodiment of this sentiment – let’s not accept the traditional wisdom that this is a fundamental trade-off, and instead put in a lot of work and see if there’s a better way.

In terms of how it works in practice, my take is that you should always seek to understand why things are the way they are. Why is this API structured this way? Why doesn’t this type implement Send? Why is 'static required here? Why does the standard library not include random number generation? Often you’ll find that there is a solid and perhaps fundamental underlying reason, but other times you may just end up with more questions. You might find an argument that seems squishy and soft, and as you start poking at it you realize that maybe it isn’t true anymore. Maybe the technology has improved. Maybe new algorithms have been developed. Maybe it was based on a faulty assumption to begin with. Whatever it may be, the idea is to keep pulling at those threads in the hope that at the other end lies some insight that allows you to make something better.
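Some of those "why" questions can even be put to the compiler directly. As an illustration (the helper function here is invented for this example), a simple trait-bound check shows why Arc can cross threads while Rc cannot:

```rust
use std::rc::Rc;
use std::sync::Arc;

// Invented helper: compiles only for values that may move between
// threads, i.e. types implementing `Send`.
fn assert_send<T: Send>(_val: &T) {}

fn main() {
    let shared = Arc::new(42); // atomic reference count
    assert_send(&shared); // fine: `Arc<i32>` is `Send`

    let local = Rc::new(42); // non-atomic reference count
    // assert_send(&local); // compile error: `Rc<i32>` is not `Send`
    println!("{}", *shared + *local);
}
```

Uncommenting the failing line produces an error message that explains the underlying reason: Rc's reference count is not updated atomically, so sharing it across threads would be a data race.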

The end result could be an objectively better replacement for some hallmark crate in the ecosystem, an easing of restrictions in the type system, or a change to the recommended way to write code – all of which move the needle along towards a better Rust. That sentiment's best summarized by David Tolnay’s self-quote from 2016: “This language seems neat but it's too bad all the worthwhile libraries are already being built by somebody else.”
 

NSP: Alumni of the Rust Core team have said that it’s a systems language designed for the next 40 years – quite an appealing hook for businesses and organizations that want their fundamental code base to be usable well into the future. What are some of the key design decisions that have made Rust, in effect, built to last?

JG: Rust takes backwards compatibility across versions of the compiler very seriously, and the intent is that (correct) code that compiled with an older version of the compiler should continue to compile indefinitely. To ensure this, larger changes to the language are tested by re-building all versions of all crates published to crates.io to check that there are no regressions. Of course, the flip side of backwards compatibility is that it can be difficult to make improvements to the language, especially around default behavior.

The Rust project’s idea to bridge this divide is the “edition” system. At its core, the idea is to periodically cut new Rust editions that crates can opt into to take advantage of the latest non-backwards-compatible improvements, but with the promise that crates using different editions can co-exist and interoperate, and that old editions will continue to be supported indefinitely. This necessarily limits what changes can be made through editions, but so far it has proven to be a good balance between “don’t break old stuff” and “enable development of new stuff” that is so vital to a language’s long-term health.

The Rust community’s commitment to semantic versioning also underpins some of Rust’s long-term stability promises – that is, by allowing crates to declare through their version number when they make breaking changes, Rust can ensure that even as dependencies change, their dependents will continue to build long into the future (though potentially losing out on improvements and bug fixes as old versions stop being maintained).
 

NSP: One of the goals listed on the Rust 2018 roadmap was to develop teaching resources for intermediate Rustaceans, which I believe is what spurred you to start streaming your live-coding sessions on YouTube. Developers have really embraced them as a way of learning how to use Rust “for real.” Why is it useful, in your view, for newcomers to see an experienced Rust programmer go through the whole development process and see real systems implemented in real time?

JG: Learning a language on your own is a daunting task that requires self-motivation and perseverance. You need to find a problem you’re interested in solving; you need to find the will to get through the initial learning curve where you’ll get stuck more often than you’ll make meaningful progress; and you have to accept the inevitable rabbit holes that you’ll go down when it turns out things don’t work the way you thought they did. That’s not an insurmountable challenge, and some people really enjoy the journey, but it is also time-consuming, humbling and, at times, quite frustrating. Especially because it can feel like you’re infinitely far from what you really wanted to build.

Watching experienced developers build something, especially if you’re watching live and can ask questions, provides a shortcut of sorts. You get to be directly exposed to good development and debugging processes; you get exposure to language mechanisms and tools that you may otherwise not have found for a while on your own; and you spend less time stuck searching for answers, since the experienced developer can probably explain why something doesn’t work shortly after discovering the problem. Of course, it’s not a complete replacement. You don’t get as much of a say in what problem is being worked on, which means you may not be as invested in it, and you won’t get the same exposure to teaching resources that you may later need as you’re trying to work things out on your own. Ultimately, I think of it as a worthwhile “experience booster” to supplement a healthy and steady diet of writing code yourself.
 

NSP: The popularity of your videos notwithstanding, you’ve said that part of what inspired you to write the book is that “they’re not for everyone,” and that some people – yourself included – have a different learning style. Given both mediums cover advanced topics (pinning, async, variance, and so on), would you say the book is an alternative to the live coding sessions, or is it designed to complement them? In other words, would a developer who’s watched your videos still benefit from the book (and vice versa)?

JG: It’s a bit of a mix. The "Crust of Rust" videos cover topics that also appear in the book, and the book covers topics from my videos, but often in fairly different ways. I think it’s likely that consuming both still leads to a deeper understanding than consuming either in isolation. But I also think that consuming either of them should be enough to at least give you the working knowledge you need to start playing with a given Rust feature yourself.

For readers of the book, I would actually recommend watching one of the longer live-coding streams on my channel (over the Crust videos), because they cover a lot of ground that’s hard to capture in a book. Topics like how to think about an error message or how to navigate Rust documentation work best when demonstrated in practice. And who knows – you may even find the problem area interesting enough that you watch the whole thing to the end!

And with that… std::process::exit
 

*Use code SPOTLIGHT35 to get 35% off your copy of Rust for Rustaceans through Dec. 16th.

Break It Till You Make It: Q&A with Hardware Hackers Colin O'Flynn and Jasper van Woudenberg


To kick off the Cybersecurity Awareness Month edition of our ongoing Author Spotlight series, we're joined by Colin O’Flynn and Jasper van Woudenberg, co-authors of The Hardware Hacking Handbook (available November, 2021). In the following Q&A, we talk with Colin (@colinoflynn) and Jasper (@jzvw) about the perils of proprietary protocols being replaced with network devices, the problem of having too many interesting targets to test your tools on, the beauty of AI-designed attack systems, the indisputable power of “hammock hacking,” and why nobody cares about fault injection until they get hacked with fault injection.


Colin runs NewAE Technology, Inc., a startup based on his ChipWhisperer project that designs tools to make hardware attacks more accessible, and teaches engineers about embedded security – a topic he frequently speaks about at conferences and on tech podcasts.

Jasper is CTO of Riscure North America, where he leads the company’s pentesting teams, and has a special interest in integrating AI with security. His research has been published in various academic journals, and he’s a regular speaker at educational and hacking conferences.

No Starch Press: I’ll start by saying that your book is timely! Hardware hacking, once a niche field of the exploit world, has become far more relevant amidst the proliferation of embedded devices all around us. What do you think accounts for this, and why are side-channel attacks in particular becoming increasingly common (and difficult to prevent)?

Colin O'Flynn: Hardware hacking has been a niche field, but one with an extensive and long history. Most of the powerful attacks we’re discussing today have been demonstrated for 20 years, so I’d say they should be “well-known.” But the truth seems to be that, until recently, advanced hardware attacks weren’t needed for most IoT devices. Default passwords and unlocked debug interfaces were the norm, so most hardware hackers never needed to dig deeper. Many people I’ve talked to at events have told me they were interested in side-channel and similar advanced attacks but never had time to actually learn them, as they were always able to break devices with easier and faster attacks!

The good news is that device manufacturers seem to be taking security more seriously these days, which means side-channel attacks have become a real threat. So I guess we’re seeing the industry fast-forwarding that 20-year lag of security research to catch up.

Jasper van Woudenberg: Hacking always moves with interesting targets. Once pinball machines started requiring money to play, people “hacked” them by just tilting the whole machine. Nowadays physical pinball machines have a tilt sensor – if you tilt the machine in order to affect the ball, it ceases operation. Of course, we’re talking about digital hardware in our book, but bypassing security systems is as old as security systems. So, the abundance of digital devices naturally increases the amount of hacking going on. Side-channel attacks are fascinating if you’re into the intersection between electronics, signal processing and cryptography. Beyond being fascinating though, they only become relevant when more straightforward attacks are mitigated.

NSP: Fault injection (FI) attacks – which inject a glitch into a target device that alters its behavior so you can bypass security mechanisms – used to be too “high end” for most hackers to bother with, often requiring expensive tools and intimate technical knowledge of the specific system under attack. But those days are over. Not only are low-cost FI toolkits readily available, the explosion of IoT has led to the rise of new defensive features, like Secure Boot, that can be easily subverted by a well-timed FI attack. What are the potential risks to a larger IoT network once a device is compromised this way?

CO: In the past we’ve seen end devices used as a pivot point into a more sensitive network. When it comes to commercial devices, we’re seeing many proprietary protocols replaced with network devices. For example, recent access-control readers are now simply PoE devices that talk back to a central server. With many of these devices, the original designers haven’t considered what happens if an end node becomes compromised. While the network may be correctly secured, you still see sensitive credentials stored in end devices become accessible to an attacker. And if an attacker is able to access these credentials, it means they may be able to pivot off the external network and into more sensitive internal networks.

JVW: I think the cost of the tools is a common misunderstanding – they can be really inexpensive. In our lab, we’ve done attacks literally by soldering a single wire to a bus and connecting it to a button; when we pressed the button at the right time, the system booted our own code. The cost usually comes from the many days and weeks spent trying to figure out how to carry out the attack. And yes, some attacks do require high-end equipment, or at least equipment that can bring down the time used to figure out the attack.

One common stepping-stone attack we see is the firmware dump. Typically, embedded-device firmware does not receive a lot of scrutiny, and may have lingering vulnerabilities that can be exploited. This usually means gaining control over a single device, but there have been wormable firmware issues in the past.

NSP: What measures can be taken to harden embedded systems against FI attacks, and do you see this happening throughout the industry (why or why not)?

JVW: We always advise our customers to threat model and see if it makes sense to consider FI in scope. Usually that’s the case for embedded systems that are out in the field and have some sensitive assets to protect. Next is the question of whether faults can be mitigated in hardware and/or software. Both is ideal, but that’s not always feasible. Our book contains a chapter on countermeasures that also has a lab, so people can try out some ideas for FI countermeasures. Finally, verification of countermeasures early and often is critical. It’s virtually impossible, as a human, to predict all the ways a system can fault. Pre-silicon fault simulation and post-silicon fault injection, without exception, turn up surprises. Iteration and adaptation are key.

And then the million-dollar question: why is the hardening not happening throughout the industry? It’s a combination of cost and human nature. There is a real engineering cost to these countermeasures, so typically we only see customers that have had their devices compromised requiring FI resistance. If a compromise hasn’t happened, it’s very easy to write the attacks off as unrealistic or irrelevant. Nobody cares about fault injection until they get hacked with fault injection.

CO: Fault injection can be tricky to prevent, as we see countermeasures applied that aren’t effective. For instance, Jasper and I demonstrate a few examples in the book where compilers might remove the effect of your clever countermeasures. There seems to be a lot more interest in this now – for many companies, they just need some “end customer” to ask about it. I talked to silicon vendors a few years ago who were tracking countermeasure ideas, but basically none of their customers (people who actually build products) cared about FI attacks. So that meant they weren’t going to pay for engineering efforts to add those countermeasures. We seem to be seeing a very fast shift in the last couple of years though, so people who were tracking this early on are in a good position to quickly offer solutions.

NSP: Speaking of low-cost fault-injection toolkits, Colin, you developed one of the most popular models out there, the ChipWhisperer, and built a company around it (NewAE Technology). Given that just about everything we use in our homes and offices has embedded computing systems and could be vulnerable to attack, how do you pick which devices to test your boards and analysis algorithms on? An example from your book would be smart toothbrushes – are you ever doing something like brushing your teeth when it suddenly occurs to you, “Wow, I could totally hack this thing”?

CO: This is actually a big problem! Unfortunately I tend to buy a lot of devices (microcontrollers, IoT products, industrial control systems, etc.) because I think they will be interesting to poke at! As a result, I’ve got a storage cabinet full of various devices along these lines… I’m slowly working through some of them, and when we get some time at the company, we’ll pick away at one or two of those devices as well.

But as more devices include embedded security, there are more “interesting” targets than there is hope of having time to deal with them. Part of why we design many different target-board devices (our “UFO targets” for ChipWhisperer) is actually to help out other researchers by giving them an easier platform to work with.

NSP: Once you successfully exploit a commonly used product, do you let the manufacturer know or is that generally considered an exercise in futility?

CO: If I plan on talking about the issue publicly I’ll reach out, even if I don’t think it’s a serious issue. Sometimes it takes a bit of time to reach the correct person (or team), but so far they have all generally spun it into positive experiences all around.

With one ongoing disclosure, for example, the engineering team had internally flagged that there could be some issues related to a relatively insecure microcontroller that they were using in a product, and my report had validated their internal concerns. In this case they were already working on a new design, but I’m sure my report was a nice bonus for the people involved, as they can point to it as proof that the issue would be found eventually “in the wild.” In the meantime it gave them the opportunity to provide an interim fix via a firmware update for existing customers.

NSP: Jasper, one of your specialized areas of interest is combining AI with security research. Would you explain what this entails? And looking into the future, how could AI applications be leveraged to improve hardware and embedded security at the design level?

JVW: What I love about AI is also what I love about hacking: making a computer device do more than the original designer put in. With AI, this is tying a couple of artificial neurons together and getting a cat-and-dog image detector. With hacking, this is sending some weird input into a program and all of a sudden it executes arbitrary code.

The combination, I find fascinating. For instance, we’ve used neural networks to do side-channel analysis and outperform human-designed algorithms. We created an algorithm with colleagues that automatically optimizes fault injection parameters. I’ll work very hard to create some automation so I can be – paradoxically – lazy afterwards.

I firmly believe that most if not all cognitive activities, such as designing or attacking systems, will be better performed using AI rather than brains – the big question is when. I prefer to be on the side of making systems more secure through AI, so my research is going towards automating both the detection and mitigation of vulnerabilities, at scale. For instance, a big push we have currently is in pre-silicon security – detecting side-channel and fault issues before they make it into products. I wouldn’t say we’ve arrived at using AI yet, but the first steps are being made.

NSP: Both of you have advanced degrees, which makes sense given all of the academic knowledge involved with embedded security. Yet, The Hardware Hacking Handbook makes very little assumption about a reader’s background. What was your approach to making this challenging field accessible to novices and newcomers, and why was it important enough that you wrote an entire book on this premise?

CO: My career path on paper seems relatively full of academic love – I was an assistant professor for several years in the Electrical & Computer Engineering department at Dalhousie University. But back at the start, when I was considering applying to start my undergraduate degree in electrical engineering, I came relatively close to not attending university at all. I had taught myself a fair amount about electronics in high school, and managed to get a summer job that was effectively an electrical engineering internship, and was considering just continuing to grow with the “on the job” experience instead. In the end I fell onto the academic path, but I’ve always believed that it is not the only path, and part of that experience shapes my desire to make the field as accessible as possible.

While many readers may be undergraduate or grad students, it’s clear that a classic academic textbook would cut out readers coming from other backgrounds (including everyone from high school students to professionals interested in looking at other careers). Practically, what we write down isn’t the only consideration – one of the great things about working with No Starch Press is that the pricing of the books makes them more accessible as well. From academic publishers, this book would have been $150+. And there would never be Humble Bundle sales that make it completely accessible on the level that NSP does!

JVW: I’ve taught courses on side-channel and fault injection for years, and it has taught me that the group of people that has to defend against these attacks is not necessarily interested in all the theory and all the research in this field. They want to focus on their goals of creating a system.

Then there’s the group of people like teenage me. I started hacking software before I had an internet connection, so I know the struggle of having to figure out everything by yourself. Looking around at the amazing blogs, videos, tutorials, etc. that exist for the software space currently, it really made me realize what a gap there is in the hardware space.

So, for both these groups, it’s really about breaking things down into practical tips and tricks, and then some of the unavoidable theoretical background. I really would like to show people that this space isn’t daunting, and that even someone like me – who comes from a software background – can learn and enjoy it.

NSP: I’ll end with an easy one (I think) – what is your favorite hacking tool, and has that changed since you first got interested in hardware hacking when you were young?

CO: I should probably say my favourite tool is one of my own more-advanced products. But really, a good DMM is the most important tool! And in that regard, it hasn’t changed much over the years – one of my first “dream gifts” (back when Santa would be responsible for it) was a Fluke 12 multimeter, long before I knew about hardware hacking. I’ve since upgraded to a nicer meter (Fluke 179/EDA2 kit), but as we talk about in the book, there is so much you can do with this tool! Finding where pins go, checking the state of logic levels and voltages – it’s still my most used tool when I’m looking at a new device.

JVW: I started being “creative with technology” in the mid-’90s. What has changed is the amount of information available, and the fact that security is now an actual career – I still don’t always believe people are willing to pay me to do this. What hasn’t changed is my curiosity, and the rush that comes with solving a complex problem.

Favorite hacking tool? Hah. Although I use devices for a significant portion of the day, they are also a source of frustration. So, those are out. I’m going to say: my hammock. When I get stuck on a problem and I sense no more new ideas are being produced, or I get frustrated, I drop the problem for a few hours or days. Then I hop in my hammock for what I call “hammock hacking.” This is where I hang back and relax. I’ll almost always have a new view on the problem, or another way of connecting some dots that I hadn’t considered before. Or I fall asleep. But it’s a win in either case.

InfoSec Warrior Vickie Li: From Hunting Bugs to Helping Developers

Vickie Li is the resident developer evangelist at the application security firm ShiftLeft, and a self-described “professional investigator of nerdy stuff.” Her new book, Bug Bounty Bootcamp, leverages her expertise in offensive web security as well as her background in vulnerability research to introduce beginners to all aspects of web hacking, showing readers how to find, exploit, and report bugs through “bounty” programs. In her free time, when she’s not podcasting, speaking at conferences, or dropping infosec and cybersecurity knowledge on YouTube, she’s writing articles and blog posts about nipping security problems in the bug.

Bug Bounty Bootcamp Cover Vickie Li

For the September edition of our ongoing Author Spotlight series, we talk with Vickie about her first bug bounty payout, how her success hacking apps made her a passionate advocate for secure development, and why she means it quite literally when she tells you that becoming a good web hacker is like learning to ride a unicycle.

No Starch Press: First of all, that’s a pretty impressive intro for someone in their mid-twenties! But let’s go back a few years. You graduated with a CS degree from Northwestern, then worked as a freelance web developer before getting into infosec, pentesting, and offensive-security content creation, which – correct me if I’m wrong – led to your current full-time gig as a developer evangelist. So where did your foray into bug hunting come into play, and how did you get started with bounty programs?

Vickie Li: I got interested in security through my university courses, and started bug bounties as a way to learn more about infosec. Hacking on bug bounty programs helped me learn a lot about web hacking and web application security in general. But sitting in front of my laptop all day, I started to lose motivation because I really wanted my work to connect me with other people, and doing bug bounties all alone was quite lonely. That’s why I started my technical blog, where I wrote about whatever I was learning at the moment. I really tried to make the blog posts easy to understand, because I hoped people who were studying the same thing would find it helpful.

My blog actually kickstarted my career in infosec. Because of it, I was able to get some freelance penetration testing and technical writing jobs, and eventually landed my current job at ShiftLeft. Knowing how to explain complex technical concepts also helped me with writing Bug Bounty Bootcamp and making it an approachable web-hacking book.

NSP: What was your first real catch, and what was it like earning your first paid bounty?

VL: I found my first paid bug – a CSRF – about a week into hunting for bugs. The bounty was just a hundred dollars, but it was amazing to be able to earn a bit of money as I learned about the field. The most memorable part about the experience was when [the company’s] security team triaged the bug I found, and fixed it on the website. It was very motivating to know that I could contribute to the security of a widely used site through my work!

NSP: Over the past year you’ve gone from working as a freelancer/bug hunter to a full-time gig as a “developer evangelist” – a job focused on bridging communications between external dev teams and your internal app-security colleagues. Can you elaborate on what exactly your day-to-day is like, and how it satisfies your infosec interests?

VL: My primary role at ShiftLeft involves making secure coding practices approachable for developers, and spreading the word about how static analysis can help in this process. Every day is different: I might be writing a blog post, preparing to speak at a conference, or helping my team understand the needs of developers during the security process. I really enjoy the work because it fits into my original motivation for getting into infosec: helping make the internet a safer place for everyone.

NSP: The name of your company refers to shifting security to the left – or, introducing security checks earlier in the development life cycle rather than at the end. You underscored this in a blog post, comparing app security to wearing a facemask during the pandemic (“Building a Security-First Culture”). At the same time, your book is about hunting for zero-day vulnerabilities and getting started in bounty programs. Do you ever worry that if you’re too good at your job there won’t be any more bugs to hunt?

VL: I am not worried about that. Shifting left and bug bounties are not an either-or situation. These practices work together to help organizations become more secure. Bug bounty hunters are creative and are constantly coming up with new ways to attack an application. Organizations can use bug bounty programs to tackle new and inventive attack vectors before malicious attackers discover them. But most bugs should still be discovered early in the development cycle, when they are the easiest to fix. Shifting left will help eliminate most security vulnerabilities in your applications, and bug bounties can help you catch the rest.

NSP: To take this question in the opposite direction, has your bug-hunting experience helped or informed your current work advocating for better security practices?

VL: During my time as a bug bounty hunter, I helped lots of developer teams fix security issues in their applications. That’s when I noticed that many serious security vulnerabilities stem from small programming mistakes that could be easily discovered with static analysis. It’s easier to find and fix vulnerabilities early in the development process because you do not risk an attacker exploiting them in production.
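To make that concrete, here's a minimal sketch – in Python, and emphatically not ShiftLeft's actual tooling – of the kind of check a static analyzer runs: walking a program's syntax tree to flag SQL queries built with string formatting, exactly the sort of small, easily detected programming mistake Li describes. The snippet being scanned and the function name are illustrative.

```python
import ast

# A toy program to scan. Line 3 builds a query with %-formatting (an
# injection risk); line 4 uses a parameterized query (the safe pattern).
SNIPPET = '''
def get_user(cursor, name):
    cursor.execute("SELECT * FROM users WHERE name = '%s'" % name)
    cursor.execute("SELECT * FROM users WHERE name = ?", (name,))
'''

def find_risky_queries(source):
    """Return line numbers where execute() is called with a formatted string."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Attribute)
                and node.func.attr == "execute"
                and node.args):
            arg = node.args[0]
            # '%'-formatting parses as a BinOp; f-strings parse as JoinedStr.
            if isinstance(arg, (ast.BinOp, ast.JoinedStr)):
                findings.append(node.lineno)
    return findings

print(find_risky_queries(SNIPPET))  # → [3]
```

A real analyzer does far more (data-flow tracking, framework awareness), but the principle is the same: the mistake is visible in the code's structure long before an attacker ever sees it running.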

This experience made me a really passionate advocate for secure development and security education. Offensive security practices like penetration testing or bug bounties are a great way to secure your applications, but they should only be used as a fail-safe to catch novel bug classes and vulnerabilities that slip past security protocols during the development cycle.

NSP: The AppSec space, and the cybersecurity industry as a whole, lives in a constant state of change, with new types of exploits emerging every day. How do you keep up with the ever-evolving landscape?

VL: I’m known to be quiet on social media, but I actually use Twitter a lot – mostly to get informed on the latest security news and understand the security challenges people are currently facing. In other words, I am the classic Twitter lurker. I also read a lot of infosec books, and follow a few well-written security blogs and YouTube channels.

NSP: Are there any online resources (besides your own) that you can recommend to aspiring web hackers, bug hunters, or security researchers?

VL: I am a big fan of reading security books to gain in-depth knowledge about a topic, and then reading blog posts for the latest infosec techniques – Orange Tsai’s blog is one of my favorites. He is a really creative hacker and has been a big inspiration for me ever since I started. Also, Web Security Academy by PortSwigger is a great starting point for web hackers who want to get some hands-on experience.

NSP: Okay, I’ve saved the most pressing question for last. You recently posted on Twitter that you were having a hard time selling your unicycle. This implies that 1) you own a unicycle, and 2) that you know how to ride a unicycle. Do tell.

VL: Happy to announce that I have sold my unicycle to a new loving owner! I learned to unicycle in college because I’ve always thought it’s cool to have an uncommon skill like unicycling. Unicycling is really hard to learn! It took me countless falls and months of practice to finally learn to ride it in a straight line.

But, this experience really boosted my confidence in learning. Like web hacking, learning to unicycle is hard but possible if you put your mind to it and persist. Now when I am trying to learn something difficult, I know I can ‘cause hey – I learned to unicycle! 10/10 would recommend unicycling as a sport. There are few things in this world cooler than a unicycling hacker.

Cracking Cybercrimes with Threat Analyst Jon DiMaggio


Our illuminating Author Spotlight series continues this month with Jon DiMaggio – author of The Art of Cyberwarfare: An Investigator's Guide to Espionage, Ransomware, and Organized Cybercrime (March 2022). In the following Q&A, we talk with him about the difference between traditional threats and nation-state attacks, the reasons that critical infrastructure is an easy target for threat actors, the emerging "magic formula" for defeating ransomware, and the fact that just because you're paranoid doesn't mean they aren't targeting you on social media.

Art of Cyberwarfare cover Jon DiMaggio headshot

DiMaggio is a recognized industry veteran in the business of “chasing bad guys,” with over 15 years of experience as a threat analyst. Currently he serves as chief security strategist at Analyst1, and his research on Advanced Persistent Threats (APTs) has identified enough new tactics, techniques, and procedures (TTPs) to garner him near-celebrity status in the cyber world. A fixture on the speaker circuit and at conferences, including RSA (and this month’s CYBERWARCON), DiMaggio has also been featured on Fox, CNN, Bloomberg, Reuters TV, and in publications such as WIRED, Vice, and Dark Reading. He continues to write professional blog posts, intel reports, and white papers on his research into cyber espionage and targeted attacks – insights that have been cited by law enforcement and used in nation-state indictments.


No Starch Press: You’re known as one of the first intelligence analysts to focus on attacks executed by nation-state hacking groups – referred to as Advanced Persistent Threats. What’s the difference between traditional cyberattacks and APTs?

Jon DiMaggio: Traditional cybercriminals conduct attacks relying on a user to click a link in an email or visit a specific website. If the attack fails or security mechanisms defeat the threat before it can successfully infect a victim, the attack is over. That's why, with some exceptions, traditional attacks are geared at targets of opportunity, and not tailored to a specific victim.

Nation-state attacks, however, are the exact opposite. Nation-state attackers target specific victims, and are not only motivated but well-resourced. These advanced attackers have the backing of a government, and often develop their own malware and infrastructure to use in their attacks. Also, unlike traditional threats, nation-state attackers are rarely motivated by financial gain. Instead, they seek to steal intellectual property, sensitive communications, and other data types to advance or provide an advantage to their sponsoring nation.
 

NSP: Governments and militaries are no longer the only targets of nation-state hackers – private-sector companies are now under attack as well. Most of them already have automated security mechanisms, but are those an adequate defense against APTs?

JD: No. Due to the human element behind nation-state attacks, automated security defenses are not enough. Human-driven attacks simply return to the system through another door. And unlike other threats, nation-state attackers are in it for the long game, which is why the attacks continue even if initially defeated by automated defenses. For these reasons, you must handle nation-state attacks differently than any other threat your organization will face – ideally, by deploying human threat hunters.
 

NSP: Another disturbing trend is the growing list of advanced cyber threats targeting the industrial control systems (ICS) of critical infrastructure, like the U.S. power grid. In terms of cyberwarfare, are we getting closer to seeing intrusion campaigns against our electrical, water, and transportation systems escalate from espionage or reconnaissance missions to highly disruptive attacks that could paralyze entire cities?

JD: Not only are attackers getting closer to our critical infrastructure – it has already happened in other countries. In 2015, the Russian government conducted cyberattacks that shut down power across critical areas of Ukraine.

In 2017, when I worked at Symantec, our team discovered a Russian-based nation-state attacker we dubbed "Dragonfly," who infiltrated the U.S. power grid. The group was very close to gaining access to critical systems responsible for powering cities across the United States. In this case, security companies and the federal government worked together to mitigate the threat. This was a close call, but it shows that nation-states are targeting our power grid – and likely will continue the effort moving forward.
 

NSP: In early October, the FBI and Cybersecurity Infrastructure and Security Agency (CISA) issued a warning that ransomware attackers, in particular, have been targeting water treatment and wastewater facilities. Do you have any insight into why ransomware attackers have recently moved from banks, local governments and healthcare systems to utility companies? Moreover, why are these critical facilities still so vulnerable to compromise given what we know about the threat and what’s at stake?

JD: Critical infrastructure appeals to ransomware attackers because they likely feel there is a greater chance the victim will pay. Additionally, the breach will be very apparent to the public, like in the Colonial Pipeline attack, when fuel stopped flowing and it resulted in a gas shortage across the East Coast. The effect of this type of attack is meant to be dramatic, and attackers know there will be high pressure from the general public to recover quickly. Usually, the fastest way to recover is to pay the ransom and obtain the decryption key.

Also, critical infrastructure often provides an easy target to savvy attackers. For example, when a cybercriminal attacked the water system in Florida last year, he did so by taking advantage of technology and infrastructure that allowed workers to remotely access the critical controls used to regulate the system. In short, the ease of access for city workers was more important than the system's security. This, unfortunately, is a common problem. Addressing many of these existing vulnerabilities will require building systems around security – not ease of use. While that tradeoff may be less important to a retail provider, it should not be an option for industries involved with our infrastructure.
 

NSP: Over the past year you’ve focused your expertise on nation-state ransomware. One thing I’ve learned from your work is just how long sophisticated intruders spend in a victim’s network before kidnapping their data and sending a ransom notice, often lurking for weeks if not months. Why is attacker “dwell” time an important security metric?

JD: Yes, that's a point many security analysts are unaware of. Enterprise ransomware gangs spend between 3 and 21 days on a victim network, with the average time being around 10 days. During this time, the attacker enumerates the network, obtains and escalates privileges, disables security services, deletes backups, and steals the victim's sensitive data. Finally, once the staging and data theft phase is complete, they execute the ransomware payload throughout the victim's network.

The reason this timeframe is so important is that the human attacker is active on your network. The takeaway is that the longer the attacker engages within your network, the better chance a good threat-hunting team will have to find them. This is why I keep emphasizing that you really need a human team to hunt for advanced threats, not simply rely on automated defenses.
 

NSP: As ransomware has evolved and diversified, AI has found its way into the mix, turbo-charging attacks that can automatically scan networks for weaknesses, exploit firewall rules, find open ports that have been overlooked, and so on. But machine learning works both ways. What role could AI tools play in threat hunting?

JD: The combination of artificial intelligence and human threat hunters creates the magic formula necessary to defeat ransomware attacks. AI is one of the fastest and most accurate ways to identify suspicious or malicious activity, and make quick mitigation decisions.

Based on the level of success ransomware gangs have had in recent years, current identification and mitigation capabilities are not working. At least, not consistently. In fact, several security vendors already base their technologies on artificial intelligence to mitigate threats. For example, the cybersecurity company DarkTrace recently used their tech – which relies on AI – to defeat a LockBit ransomware attack. (LockBit is a particularly pernicious ransomware-as-a-service gang that specializes in fast encryption speeds.) Using AI, DarkTrace identified and mitigated the attack within mere hours of its appearance in the environment.
 

NSP: Sounds like the AI future is nigh! Shifting tracks, let's wrap this Q&A up in the present. You chase bad guys for a living. And not just any bad guys – the kind who could bring an entire nation to its knees. But you’re also a dad. Do you talk to your kids about what you do? If so, how do you explain things like nation-state attacks, ransomware gangs, or cyberwarfare on their level (or at least in a way that sounds less scary) when they ask about your day?

JD: I do talk to my kids about what I do. I actually try to get them involved, and spend time teaching them and explaining some of the work I do at a high level. My youngest son Damian and I even did a podcast together on ransomware. My oldest son Anthony is a freshman in high school and just started taking cyber security classes this year.

They think what I do is more like what they see in the movies, so they will be in for a disappointment when they figure out it’s more research, analysis, and writing than hacking bad guys. However, it’s very rewarding that they have an interest in what I do, and often brag to their friends about it. At the same time, they've seen me working with encoded text and malware, and make comments that I stare at “gibberish” all day and pretend to be working! But overall they are really proud of me and think what I do is “cool.”

NSP: Part of your objectively "cool" job entails thinking like the adversary. While it seems unlikely a nation-state actor would hijack a home webcam or set up a fake WAP attack at the local cafe, are there any lessons you've learned from a career spent analyzing cyber criminals that inform your personal online security habits outside of work, or that you try to instill in your children?

JD: Yes, due to my work I have a very different, limited online life. For example, outside of work-related social media, I have no personal accounts. And even with my limited social-media presence, I do not ever connect with family members – only work colleagues. I've used social media to map out relationships with adversary accounts, and know that someone could do the same to me. For that reason, I don’t use personal social media and, unfortunately for them, at least for now, my kids don't either. It’s not that I'm over-protective, but I don’t want them targeted by an attacker in an effort to get to me. And, to be honest, I think it's healthier at this point to let them just be kids – they'll have an entire lifetime to be engulfed in social media.

As for my personal habits online, I use three different identity monitoring and protection services to keep an eye on my accounts. I never use the same password twice, nor do I use real “dictionary words” – and I always use two-factor authentication in addition to a hardware key (YubiKey). I am religious about updating my passwords frequently, and you will never find a device in my home with a camera that is not covered. I also do not use traditional cloud-based services from vendors like Apple and Google.

To be honest, I live a pretty paranoid life because of the work I do and the fact that I put my name out there. At the same time, I think I need to be a bit paranoid, because if there is anything my job has taught me it is that anyone and anything can be hacked and compromised.


*Use coupon code SPOTLIGHT35 to get 35% off your pre-order of The Art of Cyberwarfare from now until Nov. 12 at midnight (PT).

Cyber Defender Bryson Payne Takes Us to School


We continue the Cybersecurity Awareness Month edition of our ongoing Author Spotlight series with Bryson Payne, PhD – author of Go H*ck Yourself: An Ethical Approach to Cyber Attacks and Defense (January 2022). In the following Q&A, we talk with him about training the next generation of cyber defenders, why there's never been a better time to get a job in infosec, the security benefits of thinking like an adversary, and whether ransomware could soon be coming for your car. (Spoiler alert: it's already here!)

Go Hck Yourself Cover bryson Payne

Dr. Payne (@brysonpayne) holds the elite CISSP, GREM, GPEN, and CEH certifications, and is an award-winning cyber coach, author, TEDx speaker, and founding director of the Center for Cyber Operations Education at the University of North Georgia (an NSA-DHS Center for Academic Excellence in Cyber Defense). He's also a tenured professor of computer science at UNG, teaching aspiring coders and cyber professionals since 1998 – including coaching UNG’s champion NSA Codebreaker Challenge cyber ops team. His previous No Starch Press titles include the bestsellers Learn Java the Easy Way (2017) and Teach Your Kids to Code (2015).


No Starch Press: Cybersecurity Awareness Month is a great time to talk with you, because your career's been dedicated to making people aware of common and emerging security vulnerabilities. Recently though, high-profile hacks have hit the headlines like never before, with attacks on public utilities, government agencies, and customer databases causing real alarm among the general public. Are we starting to see a shift in the way mainstream society thinks about cybersecurity? If so, how can this be harnessed to make infosec stronger across the board?

Bryson Payne: All of us are seeing cyberattacks and breaches in the news, in the companies we do business with, and even in our own families. It’s a scary time to be so dependent upon technology, but there’s a bright side, yes – regular people are becoming smarter about how they use their devices, how they secure their information, and what information they share.

By understanding the threats that are out there, and how cybercriminals and cyberterrorists perform simple to complex attacks, you and I can protect ourselves and our families from cybercrime (or worse). And by training a new generation of cyber defenders, we can better protect our nation and our economy from future cyber threats.

NSP: You’re the founding director of the Center for Cyber Operations Education at UNG, where you’re also a tenured professor of computer science. So perhaps it’s no surprise that in 2018 UNG began offering a bachelor’s degree in cybersecurity – one of the nation’s first. Considering there are already a number of academic pathways that can lead to successful careers in the infosec world, what’s the benefit of pursuing such a specific major?

BP: The hands-on experience our students gain from real-world ethical hacking, forensics, network security, and reverse engineering in the classroom, in competitions, or in industry certifications, is more like what they’ll see in industry, government, and military cyber roles than traditional computer science or IT programs. In fact, the NSA and Department of Homeland Security are certifying more National Centers of Academic Excellence in Cyber Defense, like UNG, each year in order to give students the real-world skills needed to fight cybercrime, cyber terrorism, and even cyberwarfare for the next generation.

NSP: Does the addition of this degree program reflect a growing demand for cybersecurity pros in the workforce? And, for anyone reading this who’s considering going into the field (or going back to school to get credentialed), what are some of the career options you encourage students to explore?

BP: According to cyberseek.org, there are over 400,000 positions in cybersecurity open right now in the U.S. alone, with tens of thousands of new postings appearing every month. If you’re considering going into cyber, there’s never been a better time to get a certification, take a course, or study on your own.

If you like police dramas or mysteries, forensics could be a good fit. If you like taking things apart and (sometimes) putting them back together, reverse engineering or ethical hacking might be fun for you. If you like making sure everything works like it’s supposed to, you might make a great network operations or security operations center analyst. There’s a job for everyone, from trainers to managers to technicians – and the pay is growing faster than for many positions in non-security fields.

NSP: Studies have shown that at least half of college-age adults don’t pursue tech-related careers because they believe the subjects are too difficult to learn. What do you say to people who are interested in cybersecurity but don’t think they have what it takes?

BP: There are so many paths into cyber, whether you start out in psychology, journalism, international affairs, criminal justice, business, math, science, engineering, even health sciences. Cyber is a team sport, and we need people who understand not just the technology, but the people, processes, and even the cultures and languages involved in cybercrime, cyberattacks, and cyberwarfare. Every organization, from Fortune 500 companies to city governments, schools, and healthcare institutions, needs people like you and me thinking about cybersecurity and how to protect employees or customers.

But, while it's important to know that not every cyber job is a technical role, the more comfortable you are with the technology, the farther you can go.

NSP: Your upcoming book, Go H*ck Yourself, teaches readers how to perform just about every major type of attack, from stealing and cracking passwords to launching phishing attacks, using social engineering tactics, and infecting devices with malware. Some critics might find it ironic that a champion of cyber defense would write a book that literally teaches people how to execute malicious hacks. Explain yourself!

BP: Just like in a martial arts class, you have to learn to kick and punch while you’re learning to block kicks and punches – you have to understand the offense to be able to defend yourself. By thinking like an adversary, you’ll see new ways to protect yourself, your company, your family, and the devices and systems you rely on in your daily life.

For too long we’ve been told what to do, but not why we need to do it. A great example is the password cracking you mentioned. When a reader sees how quickly and easily they can crack a one- or two-word password, even with numbers and symbols added to it, they finally have the mental tools to understand why we’re advocating for passphrases of four or five words. It’s the same with all the other attacks – once you see what a hacker can do, you understand how important good cyber hygiene is, and how small steps to secure your devices can really pay off.
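The arithmetic behind that passphrase advice is easy to sketch. The guess rate below is an assumption (roughly what a single modern GPU manages against a fast hash), and the keyspace figures are back-of-the-envelope, but they show why a padded dictionary word falls in a blink while a four- or five-word passphrase holds up:

```python
GUESSES_PER_SECOND = 1e10  # assumed: one modern GPU against a fast hash

def crack_time_seconds(keyspace):
    """Average brute-force time: half the keyspace at the assumed rate."""
    return keyspace / 2 / GUESSES_PER_SECOND

# A dictionary word with two digits and a symbol bolted on (a common
# "strong password" pattern): ~100k words x 100 digit pairs x 32 symbols.
one_word = 100_000 * 100 * 32
# Passphrases of random words drawn from a 7,776-word Diceware-style list:
four_words = 7776 ** 4
five_words = 7776 ** 5

print(f"word+digits+symbol : {crack_time_seconds(one_word):.3f} seconds")
print(f"four-word phrase   : {crack_time_seconds(four_words) / 86_400:.0f} days")
print(f"five-word phrase   : {crack_time_seconds(five_words) / 31_536_000:.0f} years")
```

Under these assumptions the padded word lasts a fraction of a second, four random words last a couple of days, and five words last decades – which is the mental model Payne wants readers to walk away with.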

NSP: One type of attack that's really skyrocketed lately is ransomware. Your home state of Georgia is just one example – city and county governments, state agencies, hospital systems, even local election systems have fallen victim to ransom demands. With hackers hammering away at our institutional weak spots, something as simple as not installing a security patch right away, or clicking on a link in a socially engineered email can usher in a potentially devastating attack. What do you think can be done to prevent the human errors arguably fueling the current ransomware rage?

BP: Ransomware is definitely one of the most serious threats to your business, your family, and your own financial security. But the good news is that you can keep yourself from being an easy target. While the news often refers to humans as the weakest link, I actually see us as the best first line of defense. Employees and leaders who can spot phishing emails, who install updates and patches regularly, and who use good cyber hygiene can block more than 99% of known attacks before they get into your organization! And smart security-minded computer users can also apply these practices at home to protect themselves and their loved ones from online adversaries on the prowl for easy vulnerabilities.

NSP: Along those same lines, a lot of organizations have started backing up their files as a failsafe. But hackers being hackers, they’ve already adapted: double-extortion ransomware is now the norm, where the data’s exfiltrated before it’s encrypted so it can be released online if the ransom is not paid. How bad is the problem, and what's the solution?

BP: Double-extortion malware can have the most devastating financial impact short of cyber-physical attacks (and by that I mean when malware takes over a manufacturing facility, critical infrastructure, or medical facility and causes real-world, physical damage to real equipment or even endangers human life). It's true that backups used to be enough to recover from ransomware without paying the ransom, but these double-extortion attacks can steal data for months before locking down systems and demanding payment.

The best defense, in addition to those backups, is having well-trained cyber professionals doing what we call "active threat hunting" – looking for suspicious activity, like small file transfers overnight or to unknown networks, and tracking down systems that show indicators of attack or compromise. That’s why it’s important that we train more cyber defenders. Every organization needs cyber heroes now, so it's the perfect time to develop these skills.
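As a toy illustration of one such hunt, here is a hedged sketch in Python – the event fields, trusted-network allowlist, and overnight window are all made up for the example – that flags transfers happening overnight or leaving trusted networks, the two indicators Payne mentions:

```python
from datetime import datetime

# Illustrative allowlist of internal network prefixes (an assumption).
TRUSTED_PREFIXES = ("10.", "192.168.")

# Mock transfer log; a real hunt would pull these from SIEM or flow data.
transfers = [
    {"time": "2021-10-12T03:14:00", "dest": "203.0.113.77", "bytes": 48_000},
    {"time": "2021-10-12T10:05:00", "dest": "10.0.4.2",     "bytes": 9_000_000},
    {"time": "2021-10-12T02:40:00", "dest": "10.0.4.2",     "bytes": 52_000},
]

def suspicious(event):
    """Flag overnight (midnight-6am) transfers, or ones leaving trusted nets."""
    hour = datetime.fromisoformat(event["time"]).hour
    overnight = hour < 6
    external = not event["dest"].startswith(TRUSTED_PREFIXES)
    return overnight or external

flagged = [e["dest"] for e in transfers if suspicious(e)]
print(flagged)  # → ['203.0.113.77', '10.0.4.2']
```

The point isn't the specific rules – real threat hunters iterate on far richer indicators – but that a human asking "what would look odd here?" catches activity an automated signature never will.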

NSP: Dr. Payne, you have arrived at your final destination. (Well, my last question anyway.) Over the past decade you’ve done some very cool conference presentations on car hacking, and have since turned them into a tutorial on your blog. The cool factor aside, this is an increasingly relevant skill set for aspiring white hats – since 2016 there’s been a 94% year-over-year increase in automotive cybersecurity incidents, including remote attacks that can control your steering, pump your brakes, shut down the engine, unlock your doors, open the trunk, etc.

1) Is it only a matter of time before ransomware infects this realm of life, with people, say, unable to start their car until they pay a hacker? 2) In the future, should automakers be pentesting cars at the level they perform crash tests? 3) Does this keep you up at night, or are you optimistic that your UNG graduates will have a solution?

BP: It is only a matter of time before we see ransomware and similar attacks regularly affecting smart cars. Today’s automobiles can have more than 40 computer chips, dozens of systems, and networks and connections from USB to 5G, Wi-Fi, Bluetooth, GPS, satellite radio, and more. We call that the “attack surface” of a system, and with so many ways for hackers to try to get into your vehicle, we’ve actually already seen successful remote attacks in the wild – and we’ll continue to see new ones. The good news is that every make and model is slightly different, so a hack that works on a Honda might not work on a Ford, and vice versa.

That being said, auto manufacturers have a responsibility to secure the networks and computer systems inside your vehicle and mine from malicious hackers, which is why I happen to believe that teaching young people how to test and secure these systems – starting within a virtual environment like we do in the book – is one of the best ways to protect our vehicles and our personal safety from ransomware on the roadway.