
Tales from DAC: A Meeting of Security's Heroes at the Accellera Luncheon (Part 1 of 2)

Figure 1: The panel and crowd

Citizens—the tech world is in trouble. With the ever-expanding size and complexity of chip designs, security hasn’t kept up. Old techniques for securing a design are no longer sufficient—and with IoT devices expanding into every facet of a person’s life, security is more important than ever. The often-joked-about case of someone hacking your refrigerator isn’t strictly a joke—without proper security on the systems your fridge uses, someone could actually do that. So how do we stop this before it starts? How does one secure modern designs?

These are the questions asked at the Accellera Luncheon at DAC 2019. In support of standards development in the Accellera IP Security Assurance Working Group, Accellera assembled a panel of the top minds in IP security today to assuage some of the worries regarding security, and to discuss what the future of security-aware design and verification might hold.

Assembled that day were Andrew Dauman, VP of engineering at Tortuga Logic; Lei Poo, technical director of secure platforms at Analog Devices; and Brent Sherman, a security researcher at Intel.

In this post, you’ll hear what the panel had to say about the questions from the moderator, Cadence’s own Adam Sherer. Next time, I’ll review what the audience’s concerns were, and how the panel addressed those.

The session began with the question: What is the state of IP design and security verification today? The panel agreed that secure architectures are quite advanced now, but the verification used in conjunction with those architectures is lagging. Security verification is very expensive, and not many engineers have a good handle on what exactly comprises it. That’s partially due to its nebulous definition: what exactly is security verification? The truth is that it covers a huge variety of potential vulnerabilities, exploits, and techniques, but the only ones that truly matter are the ones we haven’t found yet. The people who want to exploit hardware tend to be one step ahead; if we know about a security hole, odds are they have already found it and moved on to the next thing.

It’s also really expensive. Good threat analysis is required to make sure a company doesn’t spend too much money securing something that doesn’t matter. From a macro-economic perspective, it’s unlikely anyone will hack your internet-enabled microwave, but automotive and defense applications require a higher level of security and scrutiny. Of course, that microwave does need some level of security to protect you from a neighbor with a grudge.

The strategies used for automating security verification are improving, but progress is slow, and the processes that exist aren’t very flexible, leaving manual security review as something engineers, frustratingly, still have to do.

If this is so vital, though, why is security verification still lacking?

Research is coming out of academia, and the tools are starting to catch up, but security verification still represents a big hole in most designers’ expertise. There’s a need for both positive and negative testing, but verification engineers rarely do both. Make no mistake: this is an unsolved problem. Until we figure out how to defend against unknown threats, security verification will never be “solved.” Andrew Dauman offered an economic view: the reason security is lacking is that it’s not yet cost-efficient to ensure that systems are as secure as they can possibly be. Until something changes in terms of economic incentives, security will always be lacking, even if the research gets up to speed.
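To make the positive/negative distinction concrete, here is a minimal, hypothetical Python sketch (nothing shown at the panel; the RegisterModel, read_key, and privileged names are invented for illustration). The positive test checks that the intended access works; the negative test checks that the forbidden access is actually blocked.

# Hypothetical toy model, not any real tool flow or IP:
# a key register that should only be readable in privileged mode.
class RegisterModel:
    def __init__(self):
        self._key = 0xDEADBEEF

    def read_key(self, privileged: bool) -> int:
        if not privileged:
            raise PermissionError("key register is not readable in user mode")
        return self._key

def test_positive_privileged_read():
    # Positive test: the intended, privileged access returns the key.
    assert RegisterModel().read_key(privileged=True) == 0xDEADBEEF

def test_negative_unprivileged_read():
    # Negative test: the forbidden, unprivileged access must be blocked.
    try:
        RegisterModel().read_key(privileged=False)
    except PermissionError:
        return
    raise AssertionError("unprivileged read of the key register succeeded")

if __name__ == "__main__":
    test_positive_privileged_read()
    test_negative_unprivileged_read()
    print("both checks passed")

Most functional verification stops at the positive half; the panel's point is that the negative half, proving that what must not happen cannot happen, is where security verification lives.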

Even if the research is there, and even if security verification matures with an acceptable, “mostly-secure” standard, how do we ensure that verification engineers implement it? Changes in the automotive development flow to incorporate standards like ISO 26262 provide some insights into how security might affect development, but these things need to trickle down to the engineers to get done. How does one make a verification engineer care about security? Throughout the panel, the phrase “culture of security” was stressed—an environment needs to develop where security becomes as much of a consideration as basic functionality. It has to be at the forefront of a verification engineer’s mind.

How do we make people care?

Lei Poo suggested that this movement to form a culture of security must be deliberate and disruptive. Change is hard for engineers stuck in their ways. If there’s a way to make it fun, we’ve got to do that: security verification needs to be rewarding beyond just the satisfaction of contributing to the ever-elusive “big picture.” Hardware security needs this especially badly. There is a body of work addressing software exploits and partially preventing them with tool assistance, but that kind of functionality isn’t yet pervasive in hardware tools. That gap represents a big barrier to entry for verification engineers: why should they devote so much of their valuable time to something that could be fixed in software? The answer to that question needs to include better awareness of hardware exploits and better security-oriented functionality in hardware development tools to alleviate this apathy.

Beyond that, there are security holes masked by misuse. It’s a common saying that the point of failure for a lot of programs is the person at the keyboard trying to tell them what to do, and this is especially true for IP developers, who cannot necessarily foresee all the ways their IP might be used. This raises the question: how do we quantify security? When an IP vendor says their IP is secure in such-and-such cases and suitable for this-and-that applications, what does that really mean? Does the same sentence from two different IP vendors mean the same thing? Good metrics for security don’t really exist yet. There’s a world of difference between “secure” and “not secure,” but no good vocabulary for the gray area in the middle that most IP occupies.

To conclude the moderator questions, the panelists stressed that security needs to be a part of the optimization paradigm of development at every stage. Engineers need to think of it as often as they think of power or performance. It has to be ubiquitous.

This is just some of the opening discussion that went on at the Accellera Luncheon. Next time, we’ll talk about the questions the audience asked, and how the panelists responded to those.

