No Starch Press Blog

Live Coder Jon Gjengset Gets into the Nitty-Gritty of Rust

Our always fascinating Author Spotlight series continues with Jon Gjengset – author of Rust for Rustaceans. In the following Q&A, we talk with him about what it means to be an intermediate programmer (and when, exactly, you become a Rustacean), how Rust “gives you the hangover first” for your code's own good, why getting over a language's learning curve sure beats reactive development, and how new users can help move the needle toward a better Rust.


A former PhD student in the Parallel and Distributed Operating Systems group at MIT CSAIL, Gjengset is a senior software engineer at Amazon Web Services (AWS), with a background in distributed systems research, web development, system security, and computer networks. At Amazon, his focus is on driving adoption of Rust internally, including building out internal infrastructure as well as interacting with the Rust ecosystem and community. Outside of the 9-to-5, he conducts live coding sessions on YouTube, is working on research related to a new database engine written in Rust, and shares his open-source projects on GitHub and Twitter.

No Starch Press: Congratulations on your new book! Everyone digs the title, Rust for Rustaceans – which is a tad more fitting than its original moniker, Intermediate Rust. I only bring this up because both names speak to who the book is for. Let’s talk about that. What does “intermediate” mean to you in terms of using Rust? Specifically, what gap does your book fill for those who may have finished The Rust Programming Language, and are now raring to become *real* Rustaceans?

Jon Gjengset: Thank you! Yeah, I’m pretty happy with the title we went with, because as you’re getting at, the term “intermediate” is not exactly well-defined. In my mind, intermediate encapsulates all of the material that you wouldn’t need to know or feel comfortable digging into as a beginner to the language, but not so advanced that you’ll rarely run into it when you get to writing Rust code in the wild. Or, to phrase it differently, intermediate to me is the union of all the stuff that engineers working with Rust in real situations would pick up and find continuously useful after they’ve read The Rust Programming Language.

I also want to stress that the book is specifically not titled "The Path to Becoming a Rustacean," or anything along those lines. It’s not as though you’re not a real Rustacean until you’ve read this book, or that the knowledge the book contains is something every Rustacean knows. Quite the contrary – in my mind, you are a Rustacean from just before the first time you ask yourself whether you might be one, and it’s at that point you should consider picking up this book, whenever that may be. And for most people, I would imagine that point comes somewhere around two thirds through The Rust Programming Language, assuming you’re trying to actually use the language on the side.

NSP: Rust has been voted “the most loved language” on Stack Overflow for six years running. That said, it's also gained a reputation for being harder to learn than other popular languages. What do you tell developers who are competent in, say, Python but hesitant to try Rust because of the perceived learning curve?

JG: Rust is, without a doubt, a more difficult language to learn compared to its various siblings and cousins, especially if you’re coming from a different language that’s not as strict as Rust is. That said, I think it’s not so much Rust that’s hard to learn as it is the principles that Rust forces you to apply to your code. If you’re writing code in Python, to use your example, there are a whole host of problems the language lets you get away with not thinking about – that is, until they come back to bite you later. Whether that comes in the form of bugs due to dynamic typing, concurrency issues that only crop up during heavy load, or performance issues due to lack of careful memory management, you’re doing reactive development. You build something that kind of works first, and then go round and round fixing issues as you discover them.

Rust is different because it forces you to be more proactive. An apt quote from RustConf this year was that Rust “gives you the hangover first” – as a developer you’re forced to make explicit decisions about your program’s runtime behavior, and you’re forced to ensure that fairly large classes of bugs do not exist in your program, all before the compiler will accept your source code as valid. And that’s something developers need to learn, along with the associated skill of debugging at compile time as opposed to at runtime, as they do in other languages.
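To make that "hangover first" idea concrete, here is a minimal sketch (ours, not from the interview) of the borrow checker rejecting a whole class of bugs before the program ever runs; the commented-out line is one the compiler simply refuses to accept:

```rust
fn main() {
    let mut data = vec![1, 2, 3];
    let first = &data[0]; // immutable borrow of `data`

    // data.push(4); // compile error: cannot borrow `data` as mutable
    //               // while `first` still borrows it immutably --
    //               // a push could reallocate and invalidate `first`

    println!("first = {first}"); // last use of the immutable borrow
    data.push(4); // fine now: the borrow has ended
    assert_eq!(data.len(), 4);
}
```

In a garbage-collected or looser language, the equivalent mistake would surface (if at all) as a subtle runtime bug; here, the debugging happens at compile time, exactly as described above.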

It’s that change to the development process that causes much of (though not all of) Rust’s steeper learning curve. And it’s a very real and non-trivial lesson to learn. I also suspect it’ll be a hugely valuable lesson going forward, with the industry’s increased focus on guaranteed correctness through things like formal verification, which only pushes the developer experience further in this direction. Not to mention that the lessons you pick up often translate back into other languages. When I now write code in Java, for instance, I am much more cognizant of the correctness and performance implications of that code because Rust has, in a sense, taught me how to reason better about those aspects of code.

NSP: In the initial 2015 release announcement, Rust creator Graydon Hoare called it “technology from the past come to save the future from itself.” More recently, Rust evangelist Carol Nichols described it as “trying to learn from the mistakes of C, and move the industry forward.” To give everyone some context for these sentiments, tell us what sets Rust apart safety-wise from “past” systems languages – in particular, C and C++ – when it comes to things like memory and ownership.

JG: I think Rust provides two main benefits over C and C++ in particular: ergonomics and safety. For ergonomics, Rust adopted a number of mechanisms traditionally associated with higher-level languages that make it easier to write concise, flexible, (mostly) easy-to-read, and hard-to-misuse code and interfaces – mechanisms like algebraic data types, pattern matching, fairly powerful generics, and first-class functions. These in turn make writing Rust feel less like what often comes to mind when we think about system programming – low-level code dealing just with raw pointers and bytes – and makes the language more approachable to more developers.
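A small, hypothetical example of those ergonomic mechanisms working together – an algebraic data type, exhaustive pattern matching, and a plain function passed first-class to a generic iterator adapter:

```rust
use std::f64::consts::PI;

// An algebraic data type: a Shape is exactly one of these variants.
enum Shape {
    Circle { radius: f64 },
    Rect { width: f64, height: f64 },
}

// `match` must cover every variant -- forget one and it won't compile.
fn area(shape: &Shape) -> f64 {
    match shape {
        Shape::Circle { radius } => PI * radius * radius,
        Shape::Rect { width, height } => width * height,
    }
}

fn main() {
    let shapes = vec![
        Shape::Rect { width: 2.0, height: 3.0 },
        Shape::Circle { radius: 1.0 },
    ];
    // `area` is handed to `map` as a first-class function.
    let total: f64 = shapes.iter().map(area).sum();
    println!("total area = {total:.2}");
}
```

None of this reads like raw pointers and bytes, yet it compiles down to the kind of code a systems language is expected to produce.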

As for safety, Rust encodes more information about the semantics of code, access, and data in the type system, which allows it to be checked for correctness at compile time. Properties like thread safety and exclusive mutability are enforced at the type level in Rust, and the compiler simply won’t let you get them wrong. Rust’s strong type system also allows APIs to be designed to be misuse-resistant through typestate programming, which is very hard to pull off in less strict languages like C.
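Typestate programming is worth a quick sketch. The idea (illustrated with a hypothetical `Conn` type below) is to encode an object’s state in its type, so that calling a method in the wrong state isn’t a runtime error but a compile error:

```rust
use std::marker::PhantomData;

// Marker types representing the connection's possible states.
struct Closed;
struct Open;

// The state is part of the type: Conn<Closed> vs. Conn<Open>.
struct Conn<State> {
    _state: PhantomData<State>,
}

impl Conn<Closed> {
    fn new() -> Conn<Closed> {
        Conn { _state: PhantomData }
    }
    // Opening consumes the closed connection and yields an open one.
    fn open(self) -> Conn<Open> {
        Conn { _state: PhantomData }
    }
}

// `send` only exists on an open connection.
impl Conn<Open> {
    fn send(&self, msg: &str) -> usize {
        msg.len() // pretend we transmitted the bytes
    }
}

fn main() {
    let conn = Conn::new().open();
    assert_eq!(conn.send("hello"), 5);
    // Conn::new().send("hi"); // compile error: no `send` on Conn<Closed>
}
```

The misuse isn’t caught by a runtime check someone remembered to write; the wrong call simply has nowhere to land.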

Rust’s choice to have an explicit break-the-glass mechanism in the form of the unsafe keyword also makes a big difference, because it allows the majority of the language to be guaranteed-safe while also allowing low-level bits to stay within the same language. This avoids the trap of, say, performance-sensitive Python programs where you have to drop to C for low-level bits, meaning you now need to be an expert in two programming languages! Not to mention that unsafe code serves as a natural audit trail for security reviews!
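A tiny, hypothetical example of that break-the-glass mechanism: raw-pointer dereferencing is legal only inside an `unsafe` block, which makes every such site easy to grep for in a security review:

```rust
fn main() {
    let values = [10u32, 20, 30];
    let ptr = values.as_ptr();

    // Dereferencing a raw pointer is forbidden in safe Rust; the
    // `unsafe` block is our promise that `ptr.add(1)` stays in bounds.
    let second = unsafe { *ptr.add(1) };

    assert_eq!(second, 20);
    println!("second = {second}");
}
```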

NSP: Along those same lines, Rust (like Go and Java) prevents programmers from introducing a variety of memory bugs into their code. This got the attention of the Internet Security Research Group, whose latest project, Prossimo, is endeavoring to replace basic internet programs written in C with memory-safe versions in Rust. Microsoft has also been very vocal about their adoption of Rust, and Google is backing a project bringing Rust to the Linux kernel underlying Android. As Rust is increasingly embraced and used for bigger and bigger projects, are there any niche or large-scale applications, or certain technology combos you’re most excited about?

JG: Putting aside the discussion about whether Rust prevents the same kinds of bugs in the same kinds of ways as languages like Go and Java, it’s definitely true that the move to these languages represents a significant boost to memory safety. And I think Rust in particular unlocked another segment of applications that would previously have been hard to port, such as those that would struggle to operate with a language runtime or automated garbage collection.

For me, some of the most exciting trajectories for Rust lie in its interoperability with other systems and languages, such as making Rust run on the web platform through WASM, providing a better performance-fallback for dynamic languages like Ruby or Python, and allowing component-by-component rewrites in established existing systems like cURL, Firefox, and Tor. The potential for adoption of Rust in the kernel is also very much up there if it might make kernel development more approachable than it currently is – kernel C programming can be very scary indeed, which means fewer contributors dare try.

NSP: In the book’s foreword, David Tolnay – a prolific contributor to the language, who served as your technical reviewer – says that he wants readers to “be free to think that we got something wrong in this book; that the best current guidance in here is missing something, and that you can accomplish something over the next couple years that is better than what anybody else has envisioned. That’s how Rust and its ecosystem have gotten to this point.” The community-driven development process he’s referencing is somewhat unique to Rust and its evolution. Could you briefly explain how that works?

JG: I’m very happy that David included that in his foreword, because it resonates strongly with me coming from a background in academia. The way we make progress is by constantly seeking to find new and better solutions, and questioning preconceived notions of what is and isn’t possible, or how things “should” be done. And I think that’s part of how Rust has managed to address as many pain points as it does. The well-known Rust adage of “fast, reliable, productive, pick three” is, in some sense, an embodiment of this sentiment – let’s not accept the traditional wisdom that this is a fundamental trade-off, and instead put in a lot of work and see if there’s a better way.

In terms of how it works in practice, my take is that you should always seek to understand why things are the way they are. Why is this API structured this way? Why doesn’t this type implement Send? Why is static required here? Why does the standard library not include random number generation? Often you’ll find that there is a solid and perhaps fundamental underlying reason, but other times you may just end up with more questions. You might find an argument that seems squishy and soft, and as you start poking at it you realize that maybe it isn’t true anymore. Maybe the technology has improved. Maybe new algorithms have been developed. Maybe it was based on a faulty assumption to begin with. Whatever it may be, the idea is to keep pulling at those threads in the hope that at the other end lies some insight that allows you to make something better.
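That “why doesn’t this type implement Send?” question can even be put to the compiler directly. A small sketch (our example, not from the interview): a helper function whose trait bound turns the question into a compile-time check – uncommenting the `Rc` line makes the build fail, because `Rc`’s non-atomic reference count isn’t safe to move across threads:

```rust
use std::rc::Rc;

// A function that compiles only if T can be moved to another thread.
fn assert_send<T: Send>() {}

fn main() {
    assert_send::<i32>();         // fine: plain integers are Send
    assert_send::<Vec<String>>(); // fine: owned collections are Send

    let local = Rc::new(5); // fine to use on a single thread
    assert_eq!(*local, 5);
    // assert_send::<Rc<i32>>(); // compile error: `Rc<i32>` cannot be
    //                           // sent between threads safely
}
```

The error message the compiler emits for the commented line is often the start of exactly the thread-pulling exercise described above.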

The end result could be an objectively better replacement for some hallmark crate in the ecosystem, an easing of restrictions in the type system, or a change to the recommended way to write code – all of which move the needle along towards a better Rust. That sentiment's best summarized by David Tolnay’s self-quote from 2016: “This language seems neat but it's too bad all the worthwhile libraries are already being built by somebody else.”

NSP: Alumni of the Rust Core team have said that it’s a systems language designed for the next 40 years – quite an appealing hook for businesses and organizations that want their fundamental code base to be usable well into the future. What are some of the key design decisions that have made Rust, in effect, built to last?

JG: Rust takes backwards compatibility across versions of the compiler very seriously, and the intent is that (correct) code that compiled with an older version of the compiler should continue to compile indefinitely. To ensure this, larger changes to the language are tested by re-building all versions of all crates published to crates.io to check that there are no regressions. Of course, the flip side of backwards compatibility is that it can be difficult to make improvements to the language, especially around default behavior.

The Rust project’s idea to bridge this divide is the “edition” system. At its core, the idea is to periodically cut new Rust editions that crates can opt into to take advantage of the latest non-backwards-compatible improvements, but with the promise that crates using different editions can co-exist and interoperate, and that old editions will continue to be supported indefinitely. This necessarily limits what changes can be made through editions, but so far it has proven to be a good balance between “don’t break old stuff” and “enable development of new stuff” that is so vital to a language’s long-term health.

The Rust community’s commitment to semantic versioning also underpins some of Rust’s long-term stability promises – that is, by allowing crates to declare through their version number when they make breaking changes, Rust can ensure that even as dependencies change, their dependents will continue to build long into the future (though potentially losing out on improvements and bug fixes as old versions stop being maintained).

NSP: One of the goals listed on the Rust 2018 roadmap was to develop teaching resources for intermediate Rustaceans, which I believe is what spurred you to start streaming your live-coding sessions on YouTube. Developers have really embraced them as a way of learning how to use Rust “for real.” Why is it useful, in your view, for newcomers to see an experienced Rust programmer go through the whole development process and see real systems implemented in real time?

JG: Learning a language on your own is a daunting task that requires self-motivation and perseverance. You need to find a problem you’re interested in solving; you need to find the will to get through the initial learning curve where you’ll get stuck more often than you’ll make meaningful progress; and you have to accept the inevitable rabbit holes that you’ll go down when it turns out things don’t work the way you thought they did. That’s not an insurmountable challenge, and some people really enjoy the journey, but it is also time-consuming, humbling and, at times, quite frustrating. Especially because it can feel like you’re infinitely far from what you really wanted to build.

Watching experienced developers build something, especially if you’re watching live and can ask questions, provides a shortcut of sorts. You get to be directly exposed to good development and debugging processes; you get exposure to language mechanisms and tools that you may otherwise not have found for a while on your own; and you spend less time stuck searching for answers, since the experienced developer can probably explain why something doesn’t work shortly after discovering the problem. Of course, it’s not a complete replacement. You don’t get as much of a say in what problem is being worked on, which means you may not be as invested in it, and you won’t get the same exposure to teaching resources that you may later need as you’re trying to work things out on your own. Ultimately, I think of it as a worthwhile “experience booster” to supplement a healthy and steady diet of writing code yourself.

NSP: The popularity of your videos notwithstanding, you’ve said that part of what inspired you to write the book is that “they’re not for everyone,” and that some people – yourself included – have a different learning style. Given both mediums cover advanced topics (pinning, async, variance, and so on), would you say the book is an alternative to the live coding sessions, or is it designed to complement them? In other words, would a developer who’s watched your videos still benefit from the book (and vice versa)?

JG: It’s a bit of a mix. The "Crust of Rust" videos cover topics that are covered in the book, and the book covers topics in my videos, but often in fairly different ways. I think it’s likely that consuming both still leads to a deeper understanding than consuming either in isolation. But I also think that consuming either of them should be enough to at least give you the working knowledge you need to start playing with a given Rust feature yourself.

For readers of the book, I would actually recommend watching one of the longer live-coding streams on my channel (over the Crust videos), because they cover a lot of ground that’s hard to capture in a book. Topics like how to think about an error message or how to navigate Rust documentation work best when demonstrated in practice. And who knows – you may even find the problem area interesting enough that you watch the whole thing to the end!

And with that… std::process::exit

*Use code SPOTLIGHT35 to get 35% off your copy of Rust for Rustaceans through Dec. 16th.

Cracking Cybercrimes with Threat Analyst Jon DiMaggio

Our illuminating Author Spotlight series continues this month with Jon DiMaggio – author of The Art of Cyberwarfare: An Investigator's Guide to Espionage, Ransomware, and Organized Cybercrime (March 2022). In the following Q&A, we talk with him about the difference between traditional threats and nation-state attacks, the reasons that critical infrastructure is an easy target for threat actors, the emerging "magic formula" for defeating ransomware, and the fact that just because you're paranoid doesn't mean they aren't targeting you on social media.


DiMaggio is a recognized industry veteran in the business of “chasing bad guys,” with over 15 years of experience as a threat analyst. Currently he serves as chief security strategist at Analyst1, and his research on Advanced Persistent Threats (APTs) has identified enough new tactics, techniques, and procedures (TTPs) to garner him near-celebrity status in the cyber world. A fixture on the speaker circuit and at conferences, including RSA (and this month’s CYBERWARCON), DiMaggio has also been featured on Fox, CNN, Bloomberg, Reuters TV, and in publications such as WIRED, Vice, and Dark Reading. He continues to write professional blog posts, intel reports, and white papers on his research into cyber espionage and targeted attacks – insights that have been cited by law enforcement and used in nation-state indictments.

No Starch Press: You’re known as one of the first intelligence analysts to focus on attacks executed by nation-state hacking groups – referred to as Advanced Persistent Threats. What’s the difference between traditional cyberattacks and APTs?

Jon DiMaggio: Traditional cybercriminals conduct attacks relying on a user to click a link in an email or visit a specific website. If the attack fails or security mechanisms defeat the threat before it can successfully infect a victim, the attack is over. That's why, with some exceptions, traditional attacks are geared at targets of opportunity, and not tailored to a specific victim.

Nation-state attacks, however, are the exact opposite. Nation-state attackers target specific victims, and are not only motivated but well-resourced. These advanced attackers have the backing of a government, and often develop their own malware and infrastructure to use in their attacks. Also, unlike traditional threats, nation-state attackers are rarely motivated by financial gain. Instead, they seek to steal intellectual property, sensitive communications, and other data types to advance or provide an advantage to their sponsoring nation.

NSP: Governments and militaries are no longer the only targets of nation-state hackers – private-sector companies are now under attack as well. Most of them already have automated security mechanisms, but are those an adequate defense against APTs?

JD: No. Due to the human element behind nation-state attacks, automated security defenses are not enough. Human-driven attacks simply return to the system through another door. And unlike other threats, nation-state attackers are in it for the long game, which is why the attacks continue even if initially defeated by automated defenses. For these reasons, you must handle nation-state attacks differently than any other threat your organization will face – ideally, by deploying human threat hunters.

NSP: Another disturbing trend is the growing list of advanced cyber threats targeting the industrial control systems (ICS) of critical infrastructure, like the U.S. power grid. In terms of cyberwarfare, are we getting closer to seeing intrusion campaigns against our electrical, water, and transportation systems escalate from espionage or reconnaissance missions to highly disruptive attacks that could paralyze entire cities?

JD: Not only are attackers getting closer to our critical infrastructure, but such attacks have already happened in other countries. In 2015, the Russian government conducted cyberattacks that resulted in shutting down power across critical areas of Ukraine.

In 2017, when I worked at Symantec, our team discovered a Russian-based nation-state attacker we dubbed "Dragon Fly," who infiltrated the U.S. power grid. The group was very close to gaining access to critical systems responsible for powering cities across the United States. In this case, security companies and the federal government worked together to mitigate the threat. This was a close call, but it just shows that nation-states are targeting our power grid – and likely will continue the effort moving forward.

NSP: In early October, the FBI and Cybersecurity and Infrastructure Security Agency (CISA) issued a warning that ransomware attackers, in particular, have been targeting water treatment and wastewater facilities. Do you have any insight into why ransomware attackers have recently moved from banks, local governments and healthcare systems to utility companies? Moreover, why are these critical facilities still so vulnerable to compromise given what we know about the threat and what’s at stake?

JD: Critical infrastructure appeals to ransomware attackers because they likely feel there is a greater chance the victim will pay. Additionally, the breach will be very apparent to the public, like in the Colonial Pipeline attack, when fuel stopped flowing and it resulted in a gas shortage across the East Coast. The effect of this type of attack is meant to be dramatic, and attackers know there will be high pressure from the general public to recover quickly. Usually, the fastest way to recover is to pay the ransom and obtain the decryption key.

Also, critical infrastructure often provides an easy target to savvy attackers. For example, when a cybercriminal attacked the water system in Florida last year, he did so by taking advantage of technology and infrastructure that allowed workers to remotely access the critical controls used to regulate the system. In short, the ease of access for city workers was more important than the system's security. This, unfortunately, is a common problem. Addressing many of these existing vulnerabilities will require building systems based on security – and not ease of use. While this may be less important to a retail provider, it should not be an option for industries involved with our infrastructure.

NSP: Over the past year you’ve focused your expertise on nation-state ransomware. One thing I’ve learned from your work is just how long sophisticated intruders spend in a victim’s network before kidnapping their data and sending a ransom notice, often lurking for weeks if not months. Why is attacker “dwell” time an important security metric?

JD: Yes, that's a point many security analysts are unaware of. Enterprise ransomware gangs spend between 3 and 21 days on a victim network, with the average time being around 10 days. During this time, the attacker enumerates the network, obtains and escalates their privileges, disables security services, deletes backups, and steals the victim's sensitive data. Finally, once the staging and data theft phase is complete, they execute the ransomware payload throughout the victim's network.

The reason this timeframe is so important is that the human attacker is active on your network. The takeaway is that the longer the attacker engages within your network, the better chance a good threat-hunting team will have to find them. This is why I keep emphasizing that you really need a human team to hunt for advanced threats, not simply rely on automated defenses.

NSP: As ransomware has evolved and diversified, AI has found its way into the mix, turbo-charging attacks that can automatically scan networks for weaknesses, exploit firewall rules, find open ports that have been overlooked, and so on. But machine learning works both ways. What role could AI tools play in threat hunting?

JD: The combination of artificial intelligence and human threat hunters creates the magic formula necessary to defeat ransomware attacks. AI is one of the fastest and most accurate ways to identify suspicious or malicious activity, and make quick mitigation decisions.

Based on the level of success ransomware gangs have had in recent years, current identification and mitigation capabilities are not working. At least, not consistently. In fact, several security vendors already base their technologies on artificial intelligence to mitigate threats. For example, the cybersecurity company DarkTrace recently used their tech – which relies on AI – to defeat a LockBit ransomware attack. (LockBit is a particularly pernicious ransomware-as-a-service gang that specializes in fast encryption speeds.) Using AI, DarkTrace identified and mitigated the attack within mere hours of its appearance in the environment.

NSP: Sounds like the AI future is nigh! Shifting tracks, let's wrap this Q&A up in the present. You chase bad guys for a living. And not just any bad guys – the kind who could bring an entire nation to its knees. But you’re also a dad. Do you talk to your kids about what you do? If so, how do you explain things like nation-state attacks, ransomware gangs, or cyberwarfare on their level (or at least in a way that sounds less scary) when they ask about your day?

JD: I do talk to my kids about what I do. I actually try to get them involved, and spend time teaching them and explaining some of the work I do at a high level. My youngest son Damian and I even did a podcast together on ransomware. My oldest son Anthony is a freshman in high school and just started taking cyber security classes this year.

They think what I do is more like what they see in the movies, so they will be in for a disappointment when they figure out it’s more research, analysis, and writing than hacking bad guys. However, it’s very rewarding that they have an interest in what I do, and often brag to their friends about it. At the same time, they've seen me working with encoded text and malware, and make comments that I stare at “gibberish” all day and pretend to be working! But overall they are really proud of me and think what I do is “cool.”

NSP: Part of your objectively "cool" job entails thinking like the adversary. While it seems unlikely a nation-state actor would hijack a home webcam or set up a fake WAP attack at the local cafe, are there any lessons you've learned from a career spent analyzing cyber criminals that inform your personal online security habits outside of work, or that you try to instill in your children?

JD: Yes, due to my work I have a very different, limited online life. For example, outside of work-related social media, I have no personal accounts. And even with my limited social-media presence, I do not ever connect with family members – only work colleagues. I've used social media to map out relationships with adversary accounts, and know that someone could do the same to me. For that reason, I don’t use personal social media and, unfortunately for them, at least for now, my kids don't either. It’s not that I'm over-protective, but I don’t want them targeted by an attacker in an effort to get to me. And, to be honest, I think it's healthier at this point to let them just be kids – they'll have an entire lifetime to be engulfed in social media.

As for my personal habits online, I use three different identity monitoring and protection services to keep an eye on my accounts. I never use the same password twice, nor do I use real “dictionary words” – and I always use two-factor authentication in addition to a hardware key (YubiKey). I am religious about updating my passwords frequently, and you will never find a device in my home with a camera that is not covered. I also do not use traditional cloud-based services from vendors like Apple and Google.

To be honest, I live a pretty paranoid life because of the work I do and the fact that I put my name out there. At the same time, I think I need to be a bit paranoid, because if there is anything my job has taught me it is that anyone and anything can be hacked and compromised.

*Use coupon code SPOTLIGHT35 to get 35% off your pre-order of The Art of Cyberwarfare from now until Nov. 12 at midnight (PT).

Cyber Defender Bryson Payne Takes Us to School

We continue the Cybersecurity Awareness Month edition of our ongoing Author Spotlight series with Bryson Payne, PhD – author of Go H*ck Yourself: An Ethical Approach to Cyber Attacks and Defense (January 2022). In the following Q&A, we talk with him about training the next generation of cyber defenders, why there's never been a better time to get a job in infosec, the security benefits of thinking like an adversary, and whether ransomware could soon be coming for your car. (Spoiler alert: it's already here!)


Dr. Payne (@brysonpayne) holds the elite CISSP, GREM, GPEN, and CEH certifications, and is an award-winning cyber coach, author, TEDx speaker, and founding director of the Center for Cyber Operations Education at the University of North Georgia (an NSA-DHS Center for Academic Excellence in Cyber Defense). He's also a tenured professor of computer science at UNG, teaching aspiring coders and cyber professionals since 1998 – including coaching UNG’s champion NSA Codebreaker Challenge cyber ops team. His previous No Starch Press titles include the bestsellers Learn Java the Easy Way (2017) and Teach Your Kids to Code (2015).

No Starch Press: Cybersecurity Awareness Month is a great time to talk with you, because your career's been dedicated to making people aware of common and emerging security vulnerabilities. Recently though, high-profile hacks have hit the headlines like never before, with attacks on public utilities, government agencies, and customer databases causing real alarm among the general public. Are we starting to see a shift in the way mainstream society thinks about cybersecurity? If so, how can this be harnessed to make infosec stronger across the board?

Bryson Payne: All of us are seeing cyberattacks and breaches in the news, in the companies we do business with, and even in our own families. It’s a scary time to be so dependent upon technology, but there’s a bright side, yes – regular people are becoming smarter about how they use their devices, how they secure their information, and what information they share.

By understanding the threats that are out there, and how cybercriminals and cyberterrorists perform simple to complex attacks, you and I can protect ourselves and our families from cybercrime (or worse). And by training a new generation of cyber defenders, we can better protect our nation and our economy from future cyber threats.

NSP: You’re the founding director of the Center for Cyber Operations Education at UNG, where you’re also a tenured professor of computer science. So perhaps it’s no surprise that in 2018 UNG began offering a bachelor’s degree in cybersecurity – one of the nation’s first. Considering there are already a number of academic pathways that can lead to successful careers in the infosec world, what’s the benefit of pursuing such a specific major?

BP: The hands-on experience our students gain from real-world ethical hacking, forensics, network security, and reverse engineering – in the classroom, in competitions, or through industry certifications – is closer to what they’ll see in industry, government, and military cyber roles than what traditional computer science or IT programs offer. In fact, the NSA and Department of Homeland Security are certifying more National Centers of Academic Excellence in Cyber Defense, like UNG, each year in order to give students the real-world skills needed to fight cybercrime, cyber terrorism, and even cyberwarfare for the next generation.

NSP: Does the addition of this degree program reflect a growing demand for cybersecurity pros in the workforce? And, for anyone reading this who’s considering going into the field (or going back to school to get credentialed), what are some of the career options you encourage students to explore?

BP: By current estimates, there are over 400,000 positions in cybersecurity open right now in the U.S. alone, with tens of thousands of new postings appearing every month. If you’re considering going into cyber, there’s never been a better time to get a certification, take a course, or study on your own.

If you like police dramas or mysteries, forensics could be a good fit. If you like taking things apart and (sometimes) putting them back together, reverse engineering or ethical hacking might be fun for you. If you like making sure everything works like it’s supposed to, you might make a great network operations or security operations center analyst. There’s a job for everyone, from trainers to managers to technicians – and the pay is growing faster than for many positions in non-security fields.

NSP: Studies have shown that at least half of college-age adults don’t pursue tech-related careers because they believe the subjects are too difficult to learn. What do you say to people who are interested in cybersecurity but don’t think they have what it takes?

BP: There are so many paths into cyber, whether you start out in psychology, journalism, international affairs, criminal justice, business, math, science, engineering, even health sciences. Cyber is a team sport, and we need people who understand not just the technology, but the people, processes, and even the cultures and languages involved in cybercrime, cyberattacks, and cyberwarfare. Every organization, from Fortune 500 companies to city governments, schools, and healthcare institutions, needs people like you and me thinking about cybersecurity and how to protect employees or customers.

But, while it's important to know that not every cyber job is a technical role, the more comfortable you are with the technology, the farther you can go.

NSP: Your upcoming book, Go H*ck Yourself, teaches readers how to perform just about every major type of attack – stealing and cracking passwords, launching phishing attacks, using social engineering tactics, and infecting devices with malware. Some critics might find it ironic that a champion of cyber defense would write a book that literally teaches people how to execute malicious hacks. Explain yourself!

BP: Just like in a martial arts class, you have to learn to kick and punch while you’re learning to block kicks and punches – you have to understand the offense to be able to defend yourself. By thinking like an adversary, you’ll see new ways to protect yourself, your company, your family, and the devices and systems you rely on in your daily life.

For too long we’ve been told what to do, but not why we need to do it. A great example is the password cracking you mentioned. When a reader sees how quickly and easily they can crack a one- or two-word password, even with numbers and symbols added to it, they finally have the mental tools to understand why we’re advocating for passphrases of four or five words. It’s the same with all the other attacks – once you see what a hacker can do, you understand how important good cyber hygiene is, and how small steps to secure your devices can really pay off.

NSP: One type of attack that's really skyrocketed lately is ransomware. Your home state of Georgia is just one example – city and county governments, state agencies, hospital systems, even local election systems have fallen victim to ransom demands. With hackers hammering away at our institutional weak spots, something as simple as not installing a security patch right away, or clicking on a link in a socially engineered email can usher in a potentially devastating attack. What do you think can be done to prevent the human errors arguably fueling the current ransomware rage?

BP: Ransomware is definitely one of the most serious threats to your business, your family, and your own financial security. But the good news is that you can keep yourself from being an easy target. While the news often refers to humans as the weakest link, I actually see us as the best first line of defense. Employees and leaders who can spot phishing emails, who install updates and patches regularly, and who use good cyber hygiene can block more than 99% of known attacks before they get into your organization! And smart security-minded computer users can also apply these practices at home to protect themselves and their loved ones from online adversaries on the prowl for easy vulnerabilities.

NSP: Along those same lines, a lot of organizations have started backing up their files as a failsafe. But hackers being hackers, they’ve already adapted: double-extortion ransomware is now the norm, where the data’s exfiltrated before it’s encrypted so it can be released online if the ransom is not paid. How bad is the problem, and what's the solution?

BP: Double-extortion malware can have the most devastating financial impact short of cyber-physical attacks (and by that I mean when malware takes over a manufacturing facility, critical infrastructure, or medical facility and causes real-world, physical damage to real equipment or even endangers human life). It's true that backups used to be enough to recover from ransomware without paying the ransom, but these double-extortion attacks can steal data for months before locking down systems and demanding payment.

The best defense, in addition to those backups, is having well-trained cyber professionals doing what we call "active threat hunting" – looking for suspicious activity, like small file transfers overnight or to unknown networks, and tracking down systems that show indicators of attack or compromise. That’s why it’s important that we train more cyber defenders. Every organization needs cyber heroes now, so it's the perfect time to develop these skills.

NSP: Dr. Payne, you have arrived at your final destination. (Well, my last question anyway.) Over the past decade you’ve done some very cool conference presentations on car hacking, and have since turned them into a tutorial on your blog. The cool factor aside, this is an increasingly relevant skill set for aspiring white hats – since 2016 there’s been a 94% year-over-year increase in automotive cybersecurity incidents, including remote attacks that can control your steering, pump your brakes, shut down the engine, unlock your doors, open the trunk, etc.

1) Is it only a matter of time before ransomware infects this realm of life, with people, say, unable to start their car until they pay a hacker? 2) In the future, should automakers be pentesting cars at the level they perform crash tests? 3) Does this keep you up at night, or are you optimistic that your UNG graduates will have a solution?

BP: It is only a matter of time before we see ransomware and similar attacks regularly affecting smart cars. Today’s automobiles can have more than 40 computer chips, dozens of systems, and networks and connections from USB to 5G, Wi-Fi, Bluetooth, GPS, satellite radio, and more. We call that the “attack surface” of a system, and with so many ways for hackers to try to get into your vehicle, we’ve actually already seen successful remote attacks in the wild – and we’ll continue to see new ones. The good news is that every make and model is slightly different, so a hack that works on a Honda might not work on a Ford, and vice versa.

That being said, auto manufacturers have a responsibility to secure the networks and computer systems inside your vehicle and mine from malicious hackers, which is why I happen to believe that teaching young people how to test and secure these systems – starting within a virtual environment like we do in the book – is one of the best ways to protect our vehicles and our personal safety from ransomware on the roadway.

Break It Till You Make It: Q&A with Hardware Hackers Colin O'Flynn and Jasper van Woudenberg

To kick off the Cybersecurity Awareness Month edition of our ongoing Author Spotlight series, we're joined by Colin O’Flynn and Jasper van Woudenberg, co-authors of The Hardware Hacking Handbook (available November, 2021). In the following Q&A, we talk with Colin (@colinoflynn) and Jasper (@jzvw) about the perils of proprietary protocols being replaced with network devices, the problem of having too many interesting targets to test your tools on, the beauty of AI-designed attack systems, the indisputable power of “hammock hacking,” and why nobody cares about fault injection until they get hacked with fault injection.

The Hardware Hacking Handbook cover Colin O’Flynn Jasper van Woudenberg headshots

Colin runs NewAE Technology, Inc., a startup based on his ChipWhisperer project that designs tools to make hardware attacks more accessible, and teaches engineers about embedded security – a topic he frequently speaks about at conferences and on tech podcasts.

Jasper is CTO of Riscure North America, where he leads the company’s pentesting teams, and has a special interest in integrating AI with security. His research has been published in various academic journals, and he’s a regular speaker at educational and hacking conferences.

No Starch Press: I’ll start by saying that your book is timely! Hardware hacking, once a niche field of the exploit world, has become far more relevant amidst the proliferation of embedded devices all around us. What do you think accounts for this, and why are side-channel attacks in particular becoming increasingly common (and difficult to prevent)?

Colin O'Flynn: Hardware hacking has been a niche field, but one with an extensive and long history. Most of the powerful attacks we’re discussing today have been demonstrated for 20 years, so I’d say they should be “well-known.” But the truth seems to be that, until recently, advanced hardware attacks weren’t needed for most IoT devices. Default passwords and unlocked debug interfaces were the norm, so most hardware hackers never needed to dig deeper. Many people I’ve talked to at events have told me they were interested in side-channel and similar advanced attacks but never had time to actually learn them, as they were always able to break devices with easier and faster attacks!

The good news is that device manufacturers seem to be taking security more seriously these days, which means side-channel attacks have become a real threat. So I guess we’re seeing the industry fast-forwarding that 20-year lag of security research to catch up.

Jasper van Woudenberg: Hacking always moves with interesting targets. Once pinball machines started requiring money to play, people “hacked” them by just tilting the whole machine. Nowadays physical pinball machines have a tilt sensor – if you tilt the machine in order to affect the ball, it ceases operation. Of course, we’re talking about digital hardware in our book, but bypassing security systems is as old as security systems. So, the abundance of digital devices naturally increases the amount of hacking going on. Side-channel attacks are fascinating if you’re into the intersection between electronics, signal processing and cryptography. Beyond being fascinating though, they only become relevant when more straightforward attacks are mitigated.

NSP: Fault injection (FI) attacks – which inject a glitch into a target device that alters its behavior so you can bypass security mechanisms – used to be too “high end” for most hackers to bother with, often requiring expensive tools and intimate technical knowledge of the specific system under attack. But those days are over. Not only are low-cost FI toolkits readily available, the explosion of IoT has led to the rise of new defensive features, like Secure Boot, that can be easily subverted by a well-timed FI attack. What are the potential risks to a larger IoT network once a device is compromised this way?

CO: In the past we’ve seen end devices used as a pivot point into a more sensitive network. When it comes to commercial devices, we’re seeing many proprietary protocols replaced with network devices. For example, recent access-control readers are now simply PoE devices that talk back to a central server. With many of these devices, the original designers haven’t considered what happens if an end node becomes compromised. While the network may be correctly secured, you still see sensitive credentials stored in end devices become accessible to an attacker. And if an attacker is able to access these credentials, it means they may be able to pivot off the external network and into more sensitive internal networks.

JVW: I think the cost of the tools is a common misunderstanding – they can be really inexpensive. In our lab, we’ve done attacks literally by soldering a single wire to a bus and connecting it to a button; when we pressed the button at the right time, the system booted our own code. The cost usually comes from the many days and weeks spent trying to figure out how to carry out the attack. And yes, some attacks do require high-end equipment, or at least equipment that can bring down the time used to figure out the attack.

One common stepping-stone attack we see is the firmware dump. Typically, embedded-device firmware does not receive a lot of scrutiny, and may have lingering vulnerabilities that can be exploited. This usually means gaining control over a single device, but there have been wormable firmware issues in the past.

NSP: What measures can be taken to harden embedded systems against FI attacks, and do you see this happening throughout the industry (why or why not)?

JVW: We always advise our customers to threat model and see if it makes sense to consider FI in scope. Usually that’s the case for embedded systems that are out in the field and have some sensitive assets to protect. Next is the question of whether faults can be mitigated in hardware and/or software. Both is ideal, but that’s not always feasible. Our book contains a chapter on countermeasures that also has a lab, so people can try out some ideas for FI countermeasures. Finally, verification of countermeasures early and often is critical. It’s virtually impossible, as a human, to predict all the ways a system can fault. Pre-silicon fault simulation and post-silicon fault injection, without exception, turn up surprises. Iteration and adaptation are key.

And then the million-dollar question: why is the hardening not happening throughout the industry? It’s a combination of cost and human nature. There is a real engineering cost to these countermeasures, so typically we only see customers that have had their devices compromised requiring FI resistance. If a compromise hasn’t happened, it’s very easy to write the attacks off as unrealistic or irrelevant. Nobody cares about fault injection until they get hacked with fault injection.

CO: Fault injection can be tricky to prevent, as we see countermeasures applied that aren’t effective. For instance, Jasper and I demonstrate a few examples in the book where compilers might remove the effect of your clever countermeasures. There seems to be a lot more interest in this now – many companies just need some “end customer” to ask about it. I talked to silicon vendors a few years ago who were tracking countermeasure ideas, but basically none of their customers (people who actually build products) cared about FI attacks. So that meant they weren’t going to pay for engineering efforts to add those countermeasures. We seem to be seeing a very fast shift in the last couple of years though, so people who were tracking this early on are in a good position to quickly offer solutions.

NSP: Speaking of low-cost fault-injection toolkits, Colin, you developed one of the most popular models out there, the ChipWhisperer, and built a company around it (NewAE Technology). Given that just about everything we use in our homes and offices has embedded computing systems and could be vulnerable to attack, how do you pick which devices to test your boards and analysis algorithms on? An example from your book would be smart toothbrushes – are you ever doing something like brushing your teeth when it suddenly occurs to you, “Wow, I could totally hack this thing”?

CO: This is actually a big problem! Unfortunately I tend to buy a lot of devices (microcontrollers, IoT products, industrial control systems, etc.) because I think they will be interesting to poke at! As a result, I’ve got a storage cabinet full of various devices along these lines… I’m slowly working through some of them, and when we get some time at the company, we’ll pick away at one or two of those devices as well.

But as more devices include embedded security, there are more “interesting” targets than there is hope of having time to deal with them. Part of why we design many different target-board devices (our “UFO targets” for ChipWhisperer) is actually to help out other researchers by giving them an easier platform to work with.

NSP: Once you successfully exploit a commonly used product, do you let the manufacturer know or is that generally considered an exercise in futility?

CO: If I plan on talking about the issue publicly I’ll reach out, even if I don’t think it’s a serious issue. Sometimes it takes a bit of time to reach the correct person (or team), but so far they have all generally spun it into positive experiences all around.

With one ongoing disclosure, for example, the engineering team had internally flagged that there could be some issues related to a relatively unsecure microcontroller that they were using in a product, and my report had validated their internal concerns. In this case they were already working on a new design, but I’m sure my report was a nice bonus for the people involved, as they can point to that as proof that the issue would be found eventually “in the wild.” In the meantime it gave them the opportunity to provide an interim fix via a firmware update for existing customers.

NSP: Jasper, one of your specialized areas of interest is combining AI with security research. Would you explain what this entails? And looking into the future, how could AI applications be leveraged to improve hardware and embedded security at the design level?

JVW: What I love about AI is also what I love about hacking: making a computer device do more than the original designer put in. With AI, this is tying a couple of artificial neurons together and getting a cat-and-dog image detector. With hacking, this is sending some weird input into a program and all of a sudden it executes arbitrary code.

The combination, I find fascinating. For instance, we’ve used neural networks to do side-channel analysis and outperform human-designed algorithms. With colleagues, we created an algorithm that automatically optimizes fault injection parameters. I’ll work very hard to create some automation so I can be – paradoxically – lazy afterwards.

I firmly believe that most if not all cognitive activities, such as designing or attacking systems, will be better performed using AI rather than brains – the big question is when. I prefer to be on the side of making systems more secure through AI, so my research is going towards automating both the detection and mitigation of vulnerabilities, at scale. For instance, a big push we have currently is in pre-silicon security – detecting side-channel and fault issues before they make it into products. I wouldn’t say we’ve arrived at using AI yet, but the first steps are being made.

NSP: Both of you have advanced degrees, which makes sense given all of the academic knowledge involved with embedded security. Yet, The Hardware Hacking Handbook makes very little assumption about a reader’s background. What was your approach to making this challenging field accessible to novices and newcomers, and why was it important enough that you wrote an entire book on this premise?

CO: My career path on paper seems relatively full of academic love – I was an assistant professor for several years in the Electrical & Computer Engineering department at Dalhousie University. But back at the start, when I was considering applying for my undergraduate degree in electrical engineering, I came relatively close to not attending university at all. I had taught myself a fair amount about electronics in high school, and managed to get a summer job that was effectively an electrical engineering internship, and was considering just continuing to grow with the “on the job” experience instead. In the end I fell onto the academic path, but I’ve always believed that it is not the only path, and that belief shapes my desire to make this material as accessible as possible.

While many readers may be undergraduate or grad students, it’s clear that a classic academic textbook would cut out readers coming from other backgrounds (including everyone from high school students to professionals interested in looking at other careers). Practically, what we write down isn’t the only consideration – one of the great things about working with No Starch Press is that the pricing of the books makes them more accessible as well. From academic publishers, this book would have been $150+. And there would never be Humble Bundle sales that make it completely accessible on the level that NSP does!

JVW: I’ve taught courses on side-channel and fault injection for years, and it has taught me that the group of people that has to defend against these attacks is not necessarily interested in all the theory and all the research in this field. They want to focus on their goals of creating a system.

Then there’s the group of people like teenage me. I started hacking software before I had an internet connection, so I know the struggle of having to figure out everything by yourself. Looking around at the amazing blogs, videos, tutorials, etc. that exist for the software space currently, it really made me realize what a gap there is in the hardware space.

So, for both these groups, it’s really about breaking things down into practical tips and tricks, and then some of the unavoidable theoretical background. I really would like to show people that this space isn’t daunting, and that even someone like me – who came from a software background – can learn and enjoy it.

NSP: I’ll end with an easy one (I think) – what is your favorite hacking tool, and has that changed since you first got interested in hardware hacking when you were young?

CO: I should probably say my favourite tool is one of my own more-advanced products. But really, a good DMM is the most important tool! And in that regard, it hasn’t changed much over the years – one of my first “dream gifts” (back when Santa would be responsible for it) was a Fluke 12 multimeter, long before I knew about hardware hacking. I’ve since upgraded to a nicer meter (Fluke 179/EDA2 kit), but as we talk about in the book, there is so much you can do with this tool! Finding where pins go, checking the state of logic levels and voltages – it’s still my most used tool when I’m looking at a new device.

JVW: I started being “creative with technology” in the mid-’90s. What has changed is the amount of information available, and the fact that security is now an actual career – I still don’t always believe people are willing to pay me to do this. What hasn’t changed is my curiosity, and the rush that comes with solving a complex problem.

Favorite hacking tool? Hah. Although I use devices for a significant portion of the day, they are also a source of frustration. So, those are out. I’m going to say: my hammock. When I get stuck on a problem and I sense no more new ideas are being produced, or I get frustrated, I drop the problem for a few hours or days. Then I hop in my hammock for what I call “hammock hacking.” This is where I hang back and relax. I’ll almost always have a new view on the problem, or another way of connecting some dots that I hadn’t considered before. Or I fall asleep. But it’s a win in either case.

InfoSec Warrior Vickie Li: From Hunting Bugs to Helping Developers

Vickie Li is the resident developer evangelist at the application security firm ShiftLeft, and a self-described “professional investigator of nerdy stuff.” Her new book, Bug Bounty Bootcamp, leverages her expertise in offensive web security as well as her background in vulnerability research to introduce beginners to all aspects of web hacking, showing readers how to find, exploit, and report bugs through “bounty” programs. In her free time, when she’s not podcasting, speaking at conferences, or dropping infosec and cybersecurity knowledge on YouTube, she’s writing articles and blog posts about nipping security problems in the bug.

Bug Bounty Bootcamp cover Vickie Li headshot

For the September edition of our ongoing Author Spotlight series, we talk with Vickie about her first bug bounty payout, how her success hacking apps made her a passionate advocate for secure development, and why she means it quite literally when she tells you that becoming a good web hacker is like learning to ride a unicycle.

No Starch Press: First of all, that’s a pretty impressive intro for someone in their mid-twenties! But let’s go back a few years. You graduated with a CS degree from Northwestern, then worked as a freelance web developer before getting into infosec, pentesting, and offensive-security content creation, which – correct me if I’m wrong – led to your current full-time gig as a developer evangelist. So where did your foray into bug hunting come into play, and how did you get started with bounty programs?

Vickie Li: I got interested in security through my university courses, and started bug bounties as a way to learn more about infosec. Hacking on bug bounty programs helped me learn a lot about web hacking and web application security in general. But sitting in front of my laptop all day, I started to lose motivation because I really wanted my work to connect me with other people, and doing bug bounties all alone was quite lonely. That’s why I started my technical blog, where I wrote about whatever I was learning at the moment. I really tried to make the blog posts easy to understand, because I hoped people who were studying the same thing would find it helpful.

My blog actually kickstarted my career in infosec. Because of it, I was able to get some freelance penetration testing and technical writing jobs, and eventually landed my current job at ShiftLeft. Knowing how to explain complex technical concepts also helped me with writing Bug Bounty Bootcamp and making it an approachable web-hacking book.

NSP: What was your first real catch, and what was it like earning your first paid bounty?

VL: I found my first paid bug – a CSRF – about a week into hunting for bugs. The bounty was just a hundred dollars, but it was amazing to be able to earn a bit of money as I learned about the field. The most memorable part about the experience was when [the company’s] security team triaged the bug I found, and fixed it on the website. It was very motivating to know that I could contribute to the security of a widely used site through my work!

NSP: Over the past year you’ve gone from working as a freelancer/bug hunter to a full-time gig as a “developer evangelist” – a job focused on bridging communications between external dev teams and your internal app-security colleagues. Can you elaborate on what exactly your day-to-day is like, and how it satisfies your infosec interests?

VL: My primary role at ShiftLeft involves making secure coding practices approachable for developers, and spreading the word about how static analysis can help in this process. Every day is different: I might be writing a blog post, preparing to speak at a conference, or helping my team understand the needs of developers during the security process. I really enjoy the work because it fits into my original motivation for getting into infosec: helping make the internet a safer place for everyone.

NSP: The name of your company refers to shifting security to the left – or, introducing security checks earlier in the development life cycle rather than at the end. You underscored this in a blog post, comparing app security to wearing a facemask during the pandemic (“Building a Security-First Culture”). At the same time, your book is about hunting for zero-day vulnerabilities and getting started in bounty programs. Do you ever worry that if you’re too good at your job there won’t be any more bugs to hunt?

VL: I am not worried about that. Shifting left and bug bounties are not an either-or situation. These practices work together to help organizations become more secure. Bug bounty hunters are creative and are constantly coming up with new ways to attack an application. Organizations can use bug bounty programs to tackle new and inventive attack vectors before malicious attackers discover them. But most bugs should still be discovered early in the development cycle, when they are the easiest to fix. Shifting left will help eliminate most security vulnerabilities in your applications, and bug bounties can help you catch the rest.

NSP: To take this question in the opposite direction, has your bug-hunting experience helped or informed your current work advocating for better security practices?

VL: During my time as a bug bounty hunter, I helped lots of developer teams fix security issues in their applications. That’s when I noticed that many serious security vulnerabilities stem from small programming mistakes that could be easily discovered with static analysis. It’s easier to find and fix vulnerabilities early in the development process because you do not risk an attacker exploiting them in production.

This experience made me a really passionate advocate for secure development and security education. Offensive security practices like penetration testing or bug bounties are a great way to secure your applications, but they should only be used as a fail-safe to catch novel bug classes and vulnerabilities that slip past security protocols during the development cycle.

NSP: The AppSec space, and the cybersecurity industry as a whole, lives in a constant state of change, with new types of exploits emerging every day. How do you keep up with the ever-evolving landscape?

VL: I’m known to be quiet on social media, but I actually use Twitter a lot – mostly to get informed on the latest security news and understand the security challenges people are currently facing. In other words, I am the classic Twitter lurker. I also read a lot of infosec books, and follow a few well-written security blogs and YouTube channels.

NSP: Are there any online resources (besides your own) that you can recommend to aspiring web hackers, bug hunters, or security researchers?

VL: I am a big fan of reading security books to gain in-depth knowledge about a topic, and then reading blog posts for the latest infosec techniques – Orange Tsai’s blog is one of my favorites. He is a really creative hacker and has been a big inspiration for me ever since I started. Also, Web Security Academy by PortSwigger is a great starting point for web hackers who want to get some hands-on experience.

NSP: Okay, I’ve saved the most pressing question for last. You recently posted on Twitter that you were having a hard time selling your unicycle. This implies that 1) you own a unicycle, and 2) that you know how to ride a unicycle. Do tell.

VL: Happy to announce that I have sold my unicycle to a new loving owner! I learned to unicycle in college because I’ve always thought it’s cool to have an uncommon skill like unicycling. Unicycling is really hard to learn! It took me countless falls and months of practice to finally learn to ride it in a straight line.

But, this experience really boosted my confidence in learning. Like web hacking, learning to unicycle is hard but possible if you put your mind to it and persist. Now when I am trying to learn something difficult, I know I can ‘cause hey – I learned to unicycle! 10/10 would recommend unicycling as a sport. There are few things in this world cooler than a unicycling hacker.

Engineer Angel Sola Explains How Math is in Everything (Including Beer)

For Mathematics and Statistics Awareness Month, we’ve been spotlighting some of our favorite math/stats books – and the brilliant number ninjas who write them.

Angel Sola Orbaiceta Hardcore Programming for Mechanical Engineers

As we enter the final week of #MathStatMonth, our focus is on software engineer and author Angel Sola Orbaiceta, whose new book, Hardcore Programming for Mechanical Engineers, comes out in June (and is available now for pre-order). Angel earned his university degree in Industrial Engineering, but has worked in the software industry since graduating over a decade ago. He currently serves as Senior Software Engineer at Glovo, a Barcelona-based quick-commerce startup that provides on-demand delivery services via its popular mobile app. Angel is also the creator of InkStructure, a software application for architecture and engineering students that solves 2D structural problems.

In the wonderfully illuminating conversation below, we talk with him about why math is so key to engineering (and everything else), how his IE background gives him an edge in software development (and why all engineers should learn to code), how statistics keep projects on budget, what data analysts have in common with gold miners, and how getting the math wrong when you home-brew beer is – literally – the stuff of nightmares.

1) As most of our readers probably know, mathematics is the foundation for many science and engineering disciplines. Given that you majored in industrial engineering (IE) – with an intensification in mechanics – at university, would you please explain why math is so important to engineering in general?

In the words of the genius Galileo Galilei: “Mathematics is the language in which God has written the universe.” Thanks to math, we can describe physical phenomena, measure events, and make predictions. Most of the principles our engineering designs rely on are given in the form of mathematical equations. Take for instance the well-known Newton’s law of universal gravitation:

F = G · M · m / r²   (Newton’s law of universal gravitation)

This simple and beautiful equation describes the gravitational force that any two bodies with masses “M” and “m” lying anywhere in our universe, a distance of “r” apart, exert on each other. Thanks to this piece of math, engineers have been able to do the calculations necessary to launch satellites into space. How cool is that? Fun fact: we know this formula isn’t exactly right thanks to Einstein’s general relativity ideas; nature is a bit more nuanced than that. But when the bodies we study aren’t moving at speeds close to the speed of light, the results yielded by Newton’s formula are so incredibly accurate that engineers still use them.
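As a quick illustration of putting the formula to work, here’s a minimal Python sketch; the satellite numbers are rough, illustrative values, not anything from the book:

```python
# Newton's law of universal gravitation: F = G * M * m / r**2
G = 6.674e-11  # gravitational constant, in N*m^2/kg^2

def gravitational_force(M, m, r):
    """Force in newtons between masses M and m (kg) a distance r (m) apart."""
    return G * M * m / r**2

# Illustrative: Earth (~5.97e24 kg) pulling on a 1,000 kg satellite
# in low orbit, roughly 6.77e6 m from Earth's center.
force = gravitational_force(5.97e24, 1_000, 6.77e6)  # on the order of 8.7 kN
```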

Math is everywhere, behind every technological advancement we know. Even behind the best kept secrets of big companies like Google or Uber. In my current company, Glovo (we’re similar to Uber Eats, DoorDash or Deliveroo), we use math for everything! For instance, to make the optimal assignment of the customers’ orders to the available couriers, such that the time customers have to wait to get their order is minimized and the amount of money couriers make is maximized, we solve a linear optimization problem. There is a lot of math going on behind our algorithm. If we weren’t good at math, couriers would earn little money and customers would get their food cold.
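The courier matching he describes is, at its core, an assignment problem. The numbers and the brute-force search below are purely illustrative – Glovo’s real system is far more sophisticated – but they sketch the idea of picking the pairing that minimizes total delivery time:

```python
from itertools import permutations

# Toy cost matrix: rows are orders, columns are couriers,
# entries are estimated minutes for that courier to deliver that order.
# (Made-up numbers; the real problem has far more constraints.)
cost = [
    [12,  7, 25],
    [ 9, 14,  8],
    [20,  6, 11],
]

def best_assignment(cost):
    """Brute-force the order->courier pairing minimizing total delivery time."""
    n = len(cost)
    best = min(permutations(range(n)),
               key=lambda perm: sum(cost[o][perm[o]] for o in range(n)))
    total = sum(cost[o][best[o]] for o in range(n))
    return list(enumerate(best)), total

pairs, total = best_assignment(cost)
# pairs == [(0, 0), (1, 2), (2, 1)], total == 26 minutes
```

Brute force is only feasible for tiny instances; at any real scale you would hand this to a proper solver, such as scipy.optimize.linear_sum_assignment.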

It’s also important to note that engineering syllabi usually have a very high load of mathematics, some of it a bit abstract for most students. It’s only when you start solving real engineering problems that those abstract mathematical concepts turn into useful tools. I remember my first year in university, we had this algebra course and I was incredibly lost; I couldn’t understand anything. We learned about linear and affine transformations; I would simply complete the assignments mechanically, without pondering what they meant. Some years later, I started working on InkStructure (an application I developed to do structural analysis, which includes a CAD-like drawing canvas) and found out that, to zoom the drawing in or out, I needed to apply affine transformations. That was a eureka moment for me. As I explain in my book, the first versions of the application had a weird zoom in/out behaviour, and that is because it took me some time to fully grasp the math behind affine transformations. The YouTube channel 3Blue1Brown did a much better job of explaining those concepts to me in a graphical manner than my university professors did (thanks, Grant!).
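The zoom he mentions boils down to one affine transformation: a scaling about a fixed point, p′ = c + s·(p − c), where c is the zoom center (say, the mouse position) and s the scale factor. A hypothetical minimal sketch of the idea (not InkStructure’s actual code):

```python
# Zoom about a fixed point as an affine transform: p' = c + s * (p - c).
# Scaling about the origin alone would make the drawing "slide away" --
# the classic buggy zoom behaviour -- so we scale about the zoom center.

def zoom_point(x, y, cx, cy, scale):
    """Scale the point (x, y) about the center (cx, cy) by the given factor."""
    return cx + scale * (x - cx), cy + scale * (y - cy)

# Zooming in 2x about (100, 100): a point at (110, 100) moves to (120, 100),
# while the zoom center itself stays put.
```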

Follow-up Q: Which math applications do you need to study (and understand) to excel in these fields? Alternatively, is engineering still a good career choice for people who are not good at math – like, could a computer simply solve all the math-related problems you’d encounter?

Math is key in engineering, one of the most powerful tools at our disposal, but you don’t need to be a math genius to become a good engineer. At least, this is how I see it. But I also think it depends on the specific engineering discipline. In the software field, for instance, math might not be strictly necessary; you can get along just with high school math. I know many good software engineers who don’t have a strong math background, yet produce some of the best designed software I’ve seen. But for civil or mechanical engineers, a strong math background is more important. A good understanding of linear algebra, calculus, differential equations and statistics is key for these engineering disciplines. Electrical engineers make extensive use of complex numbers, so complex analysis is necessary for them. And in the case of industrial engineering, linear and non-linear optimization is important as well.

In any case, we have at our disposal very good software that does most of the tedious math for us; and yet, we still need to know about how that math works, so the results of that software don’t appear like some kind of magic and we’re able to correctly interpret them. Hardcore Programming for Mechanical Engineers is precisely about writing such software: software that automates the resolution of engineering problems, which usually involve a lot of math. If we have computers which do billions of operations in a single second, why not hand them the tedious, automatable calculations?

2) You’ve noted that you taught yourself programming while at university in order to solve engineering problems. Post-graduation, you took these skills and went right into the software industry. How do you think your IE training has helped you succeed in software development? And based on your experience, do you see a need for more STEM graduates (who understand basic scientific principles) in software engineering?

What I think is key about the Industrial Engineering program I took is that, on top of the algebra, math and physics foundations, we touched on a lot of disciplines: structural analysis, mechanics, design of machines, electric circuits, electronics, energy production and distribution, microprocessors, manufacturing, statistics and many more. This rich knowledge foundation is invaluable to write software that solves a broad range of problems. Because if you think about it, software is basically a way of automating the solution to a problem through leveraging the power of computing. Knowing how to find an effective solution to a problem in the first place is also a great skill for a Software Engineer to have. Also, project management plays a big role in the study of industrial engineering, and creating software requires big doses of project management.

I think it’s for this exact same reason that we need more STEM graduates to get acquainted with Software Engineering principles, so they write the next generation of scientific and engineering software. Most of us STEM graduates learn coding at university, but writing good software goes beyond mastering “for” loops and “if” statements. When I graduated, I was naive enough to think that, since I knew how to program, I was ready to write good software. It took me some years of industry experience and sleepless nights reading software engineering books to start understanding how real software is produced. I recently
wrote a blog post
about two important lessons I’ve learnt in this respect.

I would definitely encourage everyone in the STEM field to learn coding plus some software engineering principles and practices. And hey – engineering students and professionals alike, give my book a read, as it has some of the most important lessons I’ve learnt along the way!

3) Your book is about learning to solve engineering problems with Python – from scratch – using math concepts, like linear algebra and geometry. But there’s also a focus on writing clean code, building libraries, and automated unit testing. How do all of these disciplines (math, statistics, programming, etc.) fit together in engineering applications, and why is it important for mechanical engineering students and professionals to learn how to code?

Exactly! And let me start by saying you’ve summarized the intent of the book quite nicely. Computers have become the main and most important tool for us engineers. In the past, that might have been the abacus, a compass, or a pocket calculator. But today, a computer with specialized software installed is the most effective tool we have. Nobody in their right mind would draw the blueprints for a bridge or skyscraper by hand anymore; now we use AutoCAD, and that makes us hundreds of times more productive than the engineers from a century ago who did all this by hand. By the same token, no civil engineer works out the stresses on every structural member of a bridge by hand anymore, either – that would take months of work and can be error-prone. We have great software today that we use instead, capable of doing that type of analysis in a couple minutes and with no calculation errors.

So if computers are the modern engineer’s best tool, why not learn to give them instructions directly (that is, learn programming)? In the end, someone needs to design and write the software applications engineers use, right? And who better than the engineers who will use them? For me, this is the most important reason why all engineers should learn how to code, even if they don’t plan to work in the software industry directly. And if they decide they want to write software of their own, they’ll need to be acquainted with some important Software Engineering topics, such as how to architect their code, distribute it in reusable libraries, and write automated tests to know it remains bug-free as they code. I believe all of this is so important that I wanted to bundle it all in a single book that could become a good reference for engineers who want to write software. I wish someone had done this earlier so I could have learned all this when I was an engineering student. Most books on programming targeted at engineers teach you about programming, but skip the “code quality” and software engineering side, which is every bit as important.

4) Now let’s talk about statistics. There’s an old engineering axiom that goes, “You can’t improve what you don’t measure.” Why are statistical concepts so useful in engineering?

Statistics show up in most engineering disciplines, in many different ways. There’s one example that I find very insightful, and it’s about structural analysis. Construction structures in some parts of the world (such as Japan or Taiwan) are designed to withstand strong typhoons and powerful earthquakes. But the probability of having the strongest of the typhoons and the deadliest of the earthquakes happening at the same time is quite low. In fact, we can compute – using statistical models – the most likely combination of those forces of nature, so that the probability of the building collapsing is almost negligible and the usage of construction material is minimized.
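As a toy illustration of the load-combination idea – with made-up probabilities, not anything from a real building code – if the design-level typhoon and earthquake are roughly independent events, the chance of both striking in the same year is the product of their individual annual probabilities:

```python
# Why designing for the simultaneous worst case is wasteful: for (roughly)
# independent extreme events, the joint annual probability is the product
# of the individual ones. (Illustrative probabilities only; real building
# codes use far richer probabilistic load-combination models.)
p_typhoon = 0.02      # assumed annual probability of the design-level typhoon
p_earthquake = 0.01   # assumed annual probability of the design-level earthquake

p_both = p_typhoon * p_earthquake  # a 1-in-5,000-year coincidence
```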

These days, one of the best-paid jobs in the tech industry is a Data Analyst. These professionals have a strong statistics background and are capable of, among other things, making predictions based on studies done with large amounts of data. I find it really remarkable that there’s so much valuable information hidden in large amounts of data – which, using the right statistical tools, can be brought to the surface and used to make more-educated business decisions. I see this in my current company. Our data analysts are amazing! We have regular sessions with them where they show us their findings, and it’s so incredible to see how the answers to many of our “unanswered questions” were always there, lying right in front of our eyes but hidden behind petabytes of data. It only required the right set of statistics skills to reveal them. It’s as if the answers are gold inside a mine, and statistics are the pick and shovel necessary to extract them.

Follow-up Q: What is the difference between how statistics and statistical principles are used in IE or mechanical engineering versus how they’re used in software engineering?

For industrial engineering, to name one example, statistics are key to designing effective manufacturing processes: Preventive maintenance can be put in place thanks to statistical studies that predict when a given machine might break down or fail, so it can be acted upon before that happens and without the need to halt the entire assembly line. Another example can be risk management for any given engineering project. Any risky conditions that could put the project at stake need to be accounted for, but designing a solution that’s safe in every possible scenario usually increases the cost of a project considerably. So, instead of assuming all of those conditions will be a problem 100% of the time – and to the maximum extent of their risk potential – you can run statistical experiments to determine the most likely level of risk the project should account for. These types of statistical analyses keep the project budget reasonable, while ensuring the engineering solution is as safe as necessary.

In the software field, the usages are not that different from the other cases. Software projects are like any other engineering project – they have risks to manage, and those can be analyzed using statistical models. But besides that, there is a very interesting technique we use often in software that allows us to know whether a change brings value to users or not: A/B tests. In an A/B test, we show randomly chosen users a new version of a feature (what we call “version A”). The rest of the users still see “version B,” which is how the app looks without the proposed change. We then measure whether version A performs better with users than version B. If the improvement does indeed have statistical significance, we implement the change for all users. This technique has a special place in my heart because I was part of the team that built the A/B test infrastructure at Glovo.
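One common way to run the significance check behind such a test is a two-proportion z-test. A minimal sketch with made-up conversion numbers (not Glovo’s actual infrastructure):

```python
import math

# Two-proportion z-test on hypothetical A/B test results.
conv_a, n_a = 620, 5000   # conversions and users shown version A (the change)
conv_b, n_b = 540, 5000   # conversions and users shown version B (the control)

p_a, p_b = conv_a / n_a, conv_b / n_b
p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled conversion rate
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
z = (p_a - p_b) / se                              # ~2.5 for these numbers

significant = abs(z) > 1.96  # ~95% confidence threshold for a two-sided test
```

In practice you would reach for a ready-made routine (e.g. statsmodels’ proportions_ztest) rather than compute this by hand, and you would fix the sample size before peeking at the result.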

5) One of your hobbies outside of designing software (and writing a book) is brewing your own beer. Salud! Are there any engineering skill sets, such as the use of math and statistics, that help you in endeavoring to craft the perfect pint?

This is the best question I’ve been asked in a long time! Let me start with an anecdote about how not getting the math right during the brewing process can yield “interesting” results.

Since I’m not a professional brewer, I typically follow other brewers’ beer recipes. I like to brew Imperial IPAs (aka double IPAs), which use a large amount of hops and happen to be my favorite style of beer. Some time ago, I was feeling adventurous and decided that I’d design my own recipe. I’m not sure what exactly went wrong – perhaps I calculated the proportion of hops incorrectly – but that beer gave me weird nightmares every time I drank one before going to sleep (100% of the time, thus making it statistically significant). And not just any kind of nightmares; I’m talking about “how can my brain be so messed up” sort of nightmares. Most people aren’t aware of this, but hops come from a plant belonging to the Cannabaceae family – a fact that might have something to do with the nightmares, but who knows? I think this story of me getting the math wrong and brewing a nightmare-factory beverage exemplifies how important math can be, even for simply creating a liquid that’s pleasurable to drink.

Apart from that anecdote, the truth is there actually is math behind the art of brewing; maybe just elementary math, but math nonetheless. For instance, recipes usually refer to quantities as proportions with respect to the amount of water used, so you need to calculate the amount of each ingredient depending on your desired volume of brewed beer. And, to measure the amount of alcohol present in the final product, we rely on the density difference between the wort both before and after the fermentation process – which, when plugged into a mathematical expression, gives us the alcohol percentage by volume. But this is just me being a brewer noob. I’m sure professional brewers could give us many more examples!
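The density-difference calculation he mentions has a well-known homebrewing approximation: ABV ≈ (OG − FG) × 131.25, where OG and FG are the specific gravities of the wort before and after fermentation. A quick sketch:

```python
# Common homebrewing approximation for alcohol by volume (ABV) from the
# wort's specific gravity before (OG) and after (FG) fermentation:
#     ABV ~= (OG - FG) * 131.25
def abv(original_gravity, final_gravity):
    """Approximate percent alcohol by volume from two gravity readings."""
    return (original_gravity - final_gravity) * 131.25

# A typical double IPA might ferment from 1.080 down to 1.015,
# giving roughly 8.5% ABV.
```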

You can also do statistical analyses with homebrewed beers. For instance, you could check whether there is any correlation between, say, drinking your newly brewed double IPA and having nightmares. Or compute how much more likely your friends are to come over for a barbecue when you brew a certain style of beer… I’ll conclude by noting that I’m quite sure there are some mathematicians who have demonstrated their best theorems after a few beers. So, yes, you could say beer and math go hand-in-hand.

Tech Entreprenerd Tracy Osborn Wants Women to Know What They’re Worth (and Ask For a Raise!)

For Women’s History Month, No Starch Press is spotlighting the contributions and individual achievements that female authors have made in the world of tech and on our bookshelves.

Each week we'll shine a spotlight on just a few of these remarkable women in tech – along with a 30% discount on their books.

Use promo code WOMEN30 at checkout!

Hello Web Design Tracy Osborn

This week, the focus is on designer-developer-author-“entreprenerd” Tracy Osborn, who is program director for TinySeed, a year-long accelerator for bootstrapped businesses. She built websites and worked with startups for two decades, launching her own (WeddingLovely) along the way, but eventually pivoted to helping other tech-based seed-stage businesses grow. All the while, she’s been churning out uniquely accessible “Hello” web-dev e-books for beginners (in addition to speaking about female entrepreneurship and design on the conference circuit).

Her first book with No Starch Press, Hello Web Design, comes out this month. Fortunately, she made time to talk with us about making coding tutorials more inclusive, her battles with social anxiety, finding joy in a career change, her biggest mistake, and how women need to face their fears in order to ask for the salary they deserve.

1) A decade ago you taught yourself how to code in Python and Django in order to build the web platform for your former startup, WeddingLovely. Yet much of the impetus for your Hello web-dev book series was the frustration you felt with all of the programming tutorials you came across in the process. What were the shortcomings that you identified, and how are the tutorials that you created in response different?

It all goes back to when I went to university — I originally majored in Computer Science. Growing up with the advent of the web, I always loved building websites (yes, those funky table layouts) and I thought my love would translate well to CSC. Unfortunately, the way that the university taught programming (and how most tutorials and resources teach programming) did not fit how my brain worked. It was very conceptual and theoretical, when I just wanted to build something and see others use it. Consequently, I thought I was terrible at programming and I switched majors to Art.

I realized later on when I learned programming in a way that worked for me (project-based, step-by-step building, reduced theory, a “win” at the end) that there was an opportunity to share how I learned with others who shared my struggle. Web development was first, but this way of teaching can apply to most beginner subjects — I believe theory can come later once you have a feel for how to do the basics.

Follow-up: Do you believe your approach to teaching design makes the craft more inclusive?

Absolutely. While my tutorials work universally, I find that folks who don’t identify as white and male are drawn to them more. And the more we can teach these beginner subjects to a wide range of folks, the more diverse these areas will be once folks get to intermediate and advanced levels.

2) Women suffer from social anxiety at nearly twice the rate of men. In fact, you’ve noted that it was your own social anxiety that, regrettably, kept you from utilizing the mentors, networking opportunities, and career connections offered by the San Francisco startup accelerator you took part in for WeddingLovely. Fast forward to today, and you not only speak at conferences all over the world, you’re the Program Director at a (different) accelerator program. How did you conquer your social fears, and what advice do you have for other aspiring female tech entrepreneurs who struggle with this type of anxiety?

Oh gosh, I am still working on this. I wouldn’t call my social anxiety conquered, but greatly reduced. I was able to identify my triggers — lack of confidence and feeling isolated and alone — and once identified, I was able to figure out tactics that would boost my confidence and surround me with folks I felt comfortable with. Speaking at conferences was huge for this, as being a “speaker” raised my confidence and gave me a set of other people at a conference (other speakers) that I could latch onto when I was feeling alone. It’s odd saying that one of my greatest tools against social anxiety is public speaking, but it really did wonders.

I’m aware this might not work for others (as public speaking is a whole other set of anxieties); for anyone else, I would recommend looking at the past and identifying those pain points and triggers, as once you have a specific action that you can pinpoint (like lack of confidence), you can better form tactics to improve that area rather than working on nebulous “anxiety.”

3) Sort of along these same lines is the fear of rejection, or of the unknown, that seems to disproportionately affect women’s career trajectories, particularly when navigating the tech industry. In your case, you spent your entire life excelling because of your technical prowess – from a kid creating websites, to getting a BFA in graphic design, to running a web-based startup and selling programming tutorials. Then you shut down WeddingLovely in 2018 and went in a completely different direction, taking on a Program Director role at TinySeed, where you work with people, not computers. What led to the departure from your wheelhouse, and how does it square with your career trajectory up to that point? More importantly, what can other women facing a major change in their career path learn from your experience?

My joy comes from helping others, and every role I’ve had has had an aspect of that in some way: with WeddingLovely, my goal was to help small and local wedding vendors receive more business; with my books, I wanted to help folks learn new skills that they could apply to their career. TinySeed is yet another way I can work with folks and improve their lives, as my day-to-day role is being the main person our founders interact with, and I guide the direction of the accelerator program. I find that I feel the most fulfilled when I am able to improve someone else’s life in some small way, and I love that my day job now revolves around being a people-person.

Career changes are immensely difficult but I hope that any career change comes with a better understanding of what brings a person joy. It’s scary taking on something completely new, but I find that being stuck in an unhappy role or position is even scarier. Plus, if you’re like me, something completely new can be invigorating.

4) Changing tracks (no pun intended) let’s talk about women getting paid. The gender wage gap persists, with women still earning 22% less than men for the same work more than 50 years after the Equal Pay Act. The World Economic Forum estimates it will take 217 years to close that gap – which actually went up from 170 last year. One thing women can do to effect change is simple: Ask for more money. But the vast majority don’t. You’ve directly addressed this issue, noting that the scariest moment in your career was orchestrating a 55% raise. What advice do you have for women who can’t imagine taking a bold approach to salary negotiation?

The doesn’t-feel-great-but-it’s-true answer is “what’s the worst that could happen?” Ideally, the worst that could happen is a “no.” Folks fear that the worst that could happen would be a demotion or even having an employment offer rescinded or being fired — and in that case, I would ask those folks, “Would you want to work at this company if that is their reaction to you having this conversation?” Probably not. In fact, the most likely outcome is your manager coming back with a counter-offer. So it comes down to internalizing that the worst that could happen is finding out that the company you’re working for isn’t on your side and that you should find something different, the best that could happen is that you get your raise, and the likely outcome is that you get some sort of salary increase. Breaking it down like this makes salary negotiation easier to do (but unfortunately it’ll never be easy.)

For folks already in a role and wanting a raise, there is something you can do if you have the time. It can be immensely helpful to interview at other companies while you’re in your role and before you ask for your raise, as it gives you negotiation practice and, if you get an offer, a real number you can anchor your salary negotiations to (not to mention, if you have an offer, it gives you something to fall back on if your current employer does indeed decide to let you go). This involves a lot of juggling, but the process of interviewing for another job can be incredibly helpful for boosting your own confidence.

5) Back when you were running a startup and heavily involved in the programming world, you wrote about attending conferences, events and meetups where you were, literally, the only woman in the room. Sure, we’ve seen positive developments – Crunchbase reports that the number of female founders doubled from 10% of global startups in 2009 to 20% in 2019. And yet there remains a huge disparity in access to venture capital. Fortune recently found that even though the startup world raised 13% more from VCs in 2020 compared to 2019, companies founded solely by women received less investment than in 2019, accounting for just 2.2% of the $150 billion raised last year. Does this track with your own experience as a female entrepreneur, and – now that you work on the investment side of things – what options do aspiring female founders have for getting a foothold in Silicon Valley?

Absolutely. I feel that diversity has made huge strides in technology in the last decade, but sometimes it can feel as though venture capital is still in the dark ages. Traditional venture capital revolves around folks looking for a pattern, and when you don’t look like the Mark Zuckerbergs of the world, you have to work 10x harder to prove that you deserve investment.

One of my biggest mistakes when looking for venture capital for my last startup was not understanding that venture capitalists say they want to change the world and invest in those who are, but they also really need to see actionable ways you’ll be returning their investment multiple times over. I hate talking about money, and I find that a lot of female founders are the same – we want to talk more about the future and how our product will be, but not about how the product is working right now (it feels like bragging?) and how that directly leads to money money money. It’s gross, but folks who can lean into the gross can have better venture capital outcomes.

However (and this is one of the reasons I work at TinySeed), I don’t want to encourage female founders to learn how to play the game if they don’t want to. Bootstrapping (growing your company without outside investment) or finding ways to fund your company with just an initial bit of money – like what TinySeed does – means more sustainable companies for folks who don’t want to ride the VC rollercoaster. They don’t have to work 80+ hour work weeks; they can have families, side projects, other passions. I want more women to be funded, but I feel like we need more alternatives to just VC, as this will also increase diversity.

Software Guru Marianne Bellotti Doesn't Have Time for Big Tech's BS

For Women’s History Month, No Starch Press is spotlighting the contributions and individual achievements that female authors have made in the world of tech and on our bookshelves.

Kill It with Fire Marianne Bellotti

This week, the focus is on software engineer extraordinaire and legacy-systems expert Marianne Bellotti (@bellmar). Her new book, Kill It with Fire (April 2021), reflects her internationally known work on some of the oldest, messiest computer systems in the world, and is rich with historical contexts for advancements in technology, fascinating case studies, flexible modernization frameworks, and her trademark wit. Bellotti currently runs Identity and Access Control at Rebellion Defense; prior to that, she oversaw platform services at Auth0, served on a technical SWAT team in the U.S. Digital Service, and built data infrastructure for the United Nations.

Below, we talk with Marianne about following social science into Big Tech, why a learning disability became her biggest career strength, how diversity affects software output, and the best advice she ever got.

No Starch Press: You took a fairly unconventional path to software engineering. For one, you’re completely self-taught; and, even though you were a fairly prolific hacker in high school, you blew off Silicon Valley to travel, study anthropology, and pursue a career in international development. Eventually, of course, you did get lured into the tech industry, where you’re now pretty much at the top of your field. What led to the pivot?

Marianne Bellotti: I was very interested in social systems. Actually, I still am! A lot of Kill It with Fire can be described as organization theory. So I didn’t really change my mind about what I wanted to do; the stuff I wanted to do just moved into the tech industry. At the time I went to college, being in technology on the East Coast meant working for the banks, or maybe a Fortune 500 company. There was no Facebook. There was no Twitter. Google was just a search engine. Ten years later, the data science movement was ramping up, and all the interesting activity around social science was shifting toward the way the ubiquity of computers was changing how people organized and interacted.

NSP: Given that it’s often difficult for women to “make it” in tech via traditional means (advanced degrees, networking, etc.), is there something to be said – based on your experience – for going your own way?

MB: The biggest advantage I had making it in tech was actually an experience I had in grade school. When I was eight years old I was diagnosed with a sensory processing disorder, which is a type of learning disability. I struggled in school a lot at first, and then as I found my rhythm I had to deal with the social stigma of having a disability. The attitude of people around me was that I couldn’t possibly compete with “normal” children and if I did better than the “normal” children it was clearly because I cheated. It broke my spirit for a while. I didn’t want to try, because what was the point? It was an awful time, but it came in handy when I entered technology.

Women in technology get treated exactly the same. Many people will assume you can’t possibly be as good as the men, and if you’re in the room it’s because you somehow cheated or because standards have been lowered. By the time I got into technology I had realized that those experiences I had in school were bullshit and I regretted giving up on myself. So when people tried to convince me I couldn’t be successful here because I was a girl, I didn’t internalize it. I just ignored it and kept going.

Some of the best software engineers I know come from non-traditional backgrounds. It’s not about degrees or credentials, it’s about giving yourself a chance. For some people the structure and credibility of a degree will give them the confidence to fight for a place in the room where things are happening. For people like me it was more about having other skills that could be super useful to gatekeepers. But one approach isn’t more effective than the other. You need to be resilient.

NSP: There’s a lot of talk these days about closing tech’s gender gap. After all, women make up less than a quarter of the technology workforce, and even fewer are in government tech roles. Are there solutions for making the field more inclusive, regardless of sector? And, as someone who’s literally written the book on the subject (Hiring Engineers), in what way can recruitment efforts or hiring processes help resolve this?

MB: I really believe that you don’t recruit diverse talent, you grow it. I think more engineering managers need to understand the dynamics of experience level. People assume that you should always hire the most experienced, most talented person you can – that if you can fill a team with All-Star talent, you should. But when you study how teams actually work, you learn that isn’t a recipe for success. All-Star talent needs to own big projects. So if you fill your team with All-Stars, they will end up burned out trying to run all the big projects they’ve started by themselves. My rule of thumb: when I need my team to deepen their expertise, I fill out junior and mid-career roles; when I need to increase my team’s responsibilities, I hire senior engineers.

When junior roles are not an afterthought, it becomes much easier to plug in diverse recruiting efforts. I watch a lot of teams try to hire all senior, then someone comes through the pipeline who is just below the standard for senior and all of a sudden the team hires them for a more junior role that didn’t exist before. That’s a strategy that’s going to result in homogeneous teams, because the people who we see “great potential” in tend to be people who remind us of ourselves. So an equal candidate from an under-represented group doesn’t get a junior role created for them.

The other thing that kills diversity in recruiting is trying to hire and grow fast. It’s just math. Under-represented groups are going to be rare; if your goal is to hire the first qualified person who makes it through your pipeline, that person is probably going to be a white or Asian guy. If you want a more diverse group to choose from, you need to factor in some lead time to source candidates. Right now I’m working on finding leads for the roles I expect to be hiring for in late Q3. I do a lot of my own sourcing; I don’t just sit back and wait for recruiting to send me candidates. (For that reason, people who are interested in working with me should definitely reach out over Twitter and ask for a 30-minute coffee date! 😉)

NSP: To take this in a more granular direction, and one that speaks to your roots in anthropology, let’s talk about the gender gap in terms of Conway’s Law – the old adage that software mirrors the shape of the organization that makes it (e.g., the org chart, team structures, etc.). Is there a parallel here: is the lack of diversity in most tech settings not only a matter of equitable representation, but also a way in which technology’s impact on society is adversely shaped by a perpetually male-dominated perspective?

MB: This is a super interesting question! We know from various studies that diverse teams make better decisions not because of different perspectives but because diversity makes people uncomfortable – it puts them on edge – which leads to more critical thinking. At the same time, I think the minority/majority experience does create its own set of assumptions that influence what problems you think need solving. It’s possible. And certainly it’s true that people from marginalized groups will see a different context around proposed approaches, particularly in civic tech.

NSP: You’ve talked about experiencing “imposter syndrome” early in your tech career – that little voice in our heads that says we’re not nearly as competent or talented as the people around us think we are. Struggling with feelings of professional inadequacy can either hold women back from taking on leadership roles in tech, or cause them to avoid/drop out of the field altogether. From an organizational standpoint, what can be done to better support and nurture young women entering the technology industry?

MB: I still experience imposter syndrome all the time. I find that the more of a reputation I develop as an “expert,” the more I enjoy playing the neophyte. It keeps me grounded by putting on display, for others, how difficult it is to learn things. It seems counterintuitive, but if I hid behind my reputation, I feel like my imposter syndrome would get worse. By exploring things I don’t understand and being vocal about not understanding them, I feel much more comfortable with my place in the industry. For example, lately I’ve been learning programming language design, and podcasting about my experiences trying to understand that stuff in a series called “Marianne Writes a Programming Language” – twenty-minute episodes of me missing lots of obvious things and needing experts to explain things multiple times just so that I know what to look up via Google later.

In general, I find that women tend not to believe any praise they receive for their technical skills. The best advice I ever got on this was from Mikey Dickerson. He interrupted me while I was in the middle of explaining how I wasn’t qualified for something and just said “you didn’t trick us, we know who we hired.” Pumping people up is the wrong approach. People need to feel seen. When he said this, it gave me a lot of confidence, even though it was a gentle admission of my flaws as a software engineer. And throughout my career, these moments of vulnerability have been the most valuable in fighting back imposter syndrome. When a very senior person confides in you that they don’t understand something, or are afraid to do something, it changes your perspective on your own self-doubt. I try to do the same for others. It’s not about being critical, it’s about breaking down the assumption that smart people are smart without trying.

Article 19 Activists Mallory Knodel and Ulrike Uhlig Reimagine the Internet

For Women’s History Month, No Starch Press is spotlighting the contributions and individual achievements that female authors have made in the world of tech and on our bookshelves.

Throughout the month we'll shine your attention on these remarkable women—along with a 30% discount on all their books. We'll also be posting a special Q&A on one new or forthcoming title each week.

Use promo code WOMEN30 at checkout!

Mallory Knodel headshot How the Internet Really Works cover Ulrike Uhlig headshot

This week, the focus is on two of the co-authors behind How the Internet Really Works (Dec. 2020), a collaborative work produced by Article 19 activists. Mallory Knodel is the CTO of the Center for Democracy & Technology, co-chair of the Human Rights and Protocol Considerations group of the Internet Research Task Force, an advisor to the Freedom Online Coalition, and former head of digital for ARTICLE 19, where she integrated a human rights-centred approach to communications and technology work for social justice movements. Ulrike Uhlig is a (comic) artist, graphic designer, front-end web developer, and Debian Developer. She works with non-profit organizations at the intersection of technology, arts and human rights.

No Starch Press: You both work in technology and human rights – and gender equality is one of the most fundamental guarantees of human rights. Given that ICT is an area where women commonly experience discrimination, exclusion, and harassment, what role do internet governance and/or protocol standards play in achieving a more equitable and inclusive global cyberspace?

Mallory Knodel: Gender discrimination is present in internet governance, too. That is to say that, while setting standards and building governance mechanisms presents the opportunity to provide guidance on best practice, efforts to address inequality are undervalued. Rather than getting trapped in the endless loop that starts and ends with the demographics of participant data, there are two things that should be ubiquitously understood by now: 1) inclusion is everyone’s responsibility, and 2) participation is, in some part, related to interest. I hope that my work on human rights and the public interest in standards bodies and internet governance is sufficiently interesting to attract experts who are also feminists, anti-racists, and social justice advocates.

NSP: Your work has brought attention to the theme of the “digital gender divide.” How does gender affect the way women access, use the web, and benefit from internet technology?

MK: The digital gender divide is the result of compounded inequalities that all derive from unequal access to the internet. There is inequality in access to literacy, devices, mobile data, and in-home internet subscriptions, and in access to information that is censored, paywalled, filtered, and blocked. On the other hand, networked mobile devices can exacerbate stalking, police surveillance, harassment, and economic harms such as theft and private data brokerage. At the same time, the offline world of bookstores, government services, and public spaces is disappearing. For initiatives like e-commerce and remote jobs aimed at women to work, ubiquitous, cheap, and quality internet access is a fundamental requirement.

NSP: “Alice & Bob” – fictional characters used in discussions about cryptography to make complex concepts more understandable – have been a popular archetype in CS since the ‘70s. But in How the Internet Really Works, you made a conscious decision not to use the “couple” for explaining cryptographic protocols and systems; instead, Alice talks about these topics with a friendly dragon. What inspired you to do this, and what was your intent by recasting “Bob”?

MK: I credit Ulrike with the creative side to our book. She was able to bring to life the everyday objects, characters and mythologies from technology and retell them through her brilliant illustrations of reimagined and more contemporary archetypes.

Ulrike Uhlig: While making the book, we came across a very interesting and well-researched work by Quinn DuPont and Alana Cattapan: “Alice & Bob: A History of the World’s Most Famous Cryptographic Couple.” There we learned, for example, that Eve – the person who listens to, and eventually tampers with, Alice and Bob’s conversations – has sometimes been depicted as Bob’s rejected ex-wife. We also understood that Alice and Bob are not merely longer names standing in for “A” and “B” in cryptographic transmissions; there was an assumption about their roles, related to their gender. We found this to be a bit too heteronormative. We want all sorts of people to be able to identify with our characters, and to, sort of, pass the (non-existent but self-imposed) Bechdel test for books.

However, when writing the text for our book, we noticed that there seemed to be one advantage to using gendered characters: it makes it a little bit easier to explain complex systems, because we can use two different pronouns, so readers can more easily follow who is doing what. At first, we had the idea of simply inverting the assumption that Bob is a man and Alice is a woman, and wanted to call those characters “Aob and Blice.” Later we had the idea that Alice could just be talking to her friends, Catnip and Dragon – as if we had, by accident, gotten rid of her assigned role as Bob’s partner. Finally, we realized that we didn’t need gendered pronouns to make the text easy to understand; we could simply repeat the characters’ names.

Actually, creating the characters of our book has been a challenge for similar reasons. While we knew from the start that we wanted the main character to be a cat by the name of Catnip – an acronym for Censorship, Access, Telecommunications, Networks, and Internet Protocols – we initially thought our secondary characters would be the commonly used ones for explaining cryptography. When we did the first sketches though, it became clear that it would be hard to be inclusive and diverse using “human” characters. So we turned to the imaginary animal world to represent Eve, Mallory, and Catnip’s friend, Dragon. There is now only one human character in How the Internet Really Works who has a name: Alice. With the image of Alice, I encoded a bit of ourselves – women in tech – into the book.

NSP: Women were key to the development of computing in its early history. But in the decades since, they’ve been increasingly marginalized throughout the industry. Are there any notable actions being undertaken right now to help solve the persistent problem of gender inequality in tech and governance?

UU: That’s a good question. I have the impression that ever since I started working in tech, this question has been turned in all possible directions, with a bit of change, but not change significant enough that I would call it progress. I personally think that we need to open up the gender inequality discussion and talk about diversity. To me, this means first of all questioning ourselves: How do we encode inequality in our systems? How do we (often unconsciously) reproduce patterns of classism, sexism, ableism, racism, and oppression? How are privilege and social reproduction part of our spaces, organizations, and perceptions? Those questions are collective ones, not questions that can be solved on an individual level.

We can ask ourselves similar questions about the technologies that we produce. Technologies, such as internet protocols, are inherently political, as they shape how we interact with each other. How do we encode bias and power into those technologies and how can we do it differently? I would even dare to ask: How can we bring empathy into the technologies that we create? To that end, the Human Rights Protocol Considerations Research Group at the IRTF, that Mallory is a chair of, aims at researching whether standards and protocols can enable, strengthen (or threaten) human rights, and therefore gender diversity.

CS Curriculum Developer Sam Taylor Drops Some Knowledge

For Women’s History Month, No Starch Press is spotlighting the contributions and individual achievements that female authors have made in the world of tech and on our bookshelves.

Throughout the month we'll shine your attention on these remarkable women in tech—along with a 30% discount on all their books. We'll also be posting a special Q&A on one new or forthcoming title each week.

Use promo code WOMEN30 at checkout!

Sam Taylor headshot The Coding Workbook cover

This week, the focus is on Sam Taylor, M.Ed., a Bay Area curriculum developer, computer-science education advocate, and author of The Coding Workbook (Jan. ’21). While teaching STEM to middle school students, she taught herself how to code and build websites, then began blogging about what she had learned as a way to help other beginners with HTML and CSS. Those tutorials became the basis for her book, which guides grade schoolers and teachers alike through the basics of web development without the need for a computer or high-speed internet – resources that aren’t available in many low-income areas of the country.

In the following Q&A, Sam talks with us about the future of women in the technology workforce, making computer science (CS) more accessible to girls, the benefits of learning to code at an early age, and how the tech industry can work with schools to help close the “digital divide.”

No Starch Press: You started out in the teaching profession, a field in which the vast share of practitioners are women, then transitioned into the male-dominated tech industry. From that vantage point, do you feel hopeful that we will someday see the end of such stark gender imbalances in the workforce – particularly as it relates to STEM?

Sam Taylor: One of the first teams I worked on when I made the transition into tech had just one other woman and five men. At the time I didn’t realize that this was a common trend, until I started to make friends with other women in the tech industry. Luckily, even in the past few years, I’ve met and seen more and more amazing women take on various technical roles in the industry (software engineers, data scientists, product managers, etc.) or even just become more visible in the roles they’d already assumed! I have a ton of optimism for the female future of the tech workforce as we see more and more incredibly smart and diverse women taking on technical and leadership positions.

NSP: The number of women in computer science has actually decreased since the 1980s, when nearly 40% of CS majors were women. Today that figure has dipped below 20%. You yourself got your bachelor's degree in psychology and it was only later, while you were working as a teacher, that you learned to code completely on your own. Do you think more girls will pursue STEM careers if they are taught CS at an early age?

ST: As coding and technical skills become more in-demand, I think it is critical that we give as many young girls access to computer-science education as possible. That means creating clubs, after-school programs, and other opportunities to engage girls in STEM-related fields of study and show them all the career possibilities. Another important thing to do is expose them as much as possible to the many real-life role models they can be inspired by in tech, as well as everyday women making a difference in the world through their work in various STEM roles.

NSP: Most of the top 10 highest-paying college majors are in STEM. Research has even shown that one year after graduation, male and female coders were earning the same salary – meaning that more women in tech could help eliminate the gender wage gap. That aside, what are other benefits to be gained from girls learning to code?

ST: I think there are so many benefits to learning to code at an early age, such as learning how to collaborate with others, figuring out solutions to complex problems, and learning ‘how to fail’ and quickly bounce back through perseverance. Coding also allows you to explore your creativity in new and exciting ways! And to be honest, I just love to see and hear about women in tech getting paid what they’re worth, asking for and earning raises, and ultimately succeeding.

NSP: Technical knowledge and skills are now considered vital for full participation in 21st-century life, yet most states are only just now beginning to adopt CS learning standards – and with little in the way of federal support. As a professional curriculum developer and the new author of an offline coding workbook for grade schoolers, do you think the tech industry itself has a role to play in helping the public-education sector close its digital divide and ensure every student is taught essential computational skills?

ST: One of the results of the coronavirus pandemic is that more light has been shined on the digital divide – or, the divide in learning between those who have access to the internet and modern technology tools and those who don’t. I’ve seen different tech companies work to help get laptops, high-speed WiFi, and other resources into schools to help students who would otherwise lack those resources. But I think the tech industry can do more to support computer-science literacy in high-need areas by finding ways to provide mentorship, technological support, and just hands-on learning experiences in general – rooted in equity.