Interviewee: Ben Chelf
In this interview, we talk with Ben Chelf from Coverity. Specifically, we talk about:
- Scanning the worldwide open source code base for vulnerabilities
- The necessity of automating defect identification
- Building scalability into code analysis
- Overcoming the limitations of automatic scanning
- The hubris of believing too much in your code
- The difficulty of appearing objective
- The changing face of code security and analysis
Sean Campbell: Hi, Ben. Could you start us off with some background of yourself and of Coverity?
Ben Chelf: Sure. I’m the CTO and one of the founders of Coverity. We started the company in late 2002, with technology that I worked on with a few colleagues while we were graduate students at Stanford.
The idea behind the technology is to make static source code analysis practical for everyday commercial software developers. It was a bit of a departure from previous academic attempts at static analysis, in the sense that our approach seeks to scale to millions of lines of code, so it can encompass systems that people are actually developing today.
To fine-tune the technology, we applied it to the Linux kernel, and the nature of the open source community made it fairly simple to work with developers to correct the defects we found. As more and more people found out about the defects we had been able to find automatically–as opposed to the old-fashioned “hard way”–a lot of buzz started to develop around the technology.
Some companies started approaching us, saying, “Obviously this stuff works because it’s being applied to a real system like Linux and getting some traction there, so can we try it on our code bases?” Eventually, a customer offered to pay us, and we decided that it was the right time to get the company started.
We saw a rapid adoption of our product, and we learned a lot in those first few months about bringing the raw technology into product form. We also started to address some of the other issues around making static source code analysis practical for commercial development, in terms of integrating it into existing build systems and making it so that defects are automatically distributed to the right developers when they’re discovered.
To encourage adoption, it was also important for us to work our technology into the existing workflow so that the defects we discovered automatically could be treated in a similar fashion to those discovered in the QA process and so forth.
Over the course of that learning process, we acquired a lot of customers–we have more than 400 now. Because the technology is so valuable, the company has been cash-flow-positive since our inception. And that’s a testament to the real value, I think, that we’re providing to folks in discovering defects earlier in the software development process with this new, disruptive kind of technology.
Scott Swigart: Coverity showed up in relatively recent news with a huge splash in making the announcement that certain open source projects are now in Coverity’s rung 2. Can you speak a bit to the significance of that announcement and what those different rungs signify?
Ben: Obviously, we started our work with the open source community when we were at Stanford, but under the Coverity umbrella, a new wave of work with the open source community really started about two years ago. We have been providing our technology as a service to open source developers, specifically to as many of the widely known packages as possible, through the site scan.coverity.com.
What led to this effort was that, a couple months earlier, we had been awarded a contract with the U.S. Department of Homeland Security, in conjunction with Stanford and Symantec, essentially to harden open source, since it is being used more and more in our nation’s infrastructure.
Software like Linux and Apache is everywhere, and as a matter of public security, it’s important to maximize the extent to which these systems are secure and reliable, work as they’re meant to, and are resistant to attack.
When we launched the site, we weren’t sure how it would be received. Being a proprietary software company, we sell a product, so we were a little bit nervous about the open source community’s reaction.
So we tried to be very respectful of their code, and when we launched the site, I personally sent a note to the developer list saying, “OK. Here’s the deal. Here’s this site that we’re about to launch. I’m not letting anybody have access except for you. This is for the developers of these projects. This isn’t for press people to log in and troll through the bugs, or hackers to log in and look for vulnerabilities. This really is for the developers of the open source package.”
Today, we’ve kept that mantra of, “This is really for the developers.” Yes, we post statistics on the site, at a high level. But it’s really meant to try to improve the open source software as a development tool to help these folks.
I think it’s because of that cautious approach that the site has been well received. Developers started flocking to the site, registrations came flooding in, and defects started to get fixed as a result of the site.
In fact, to date, almost 8,000 defects have been fixed as a result of our scanning efforts through this site, across the board from open source to commercial packages. We’re now seeing tens of millions of lines of code on a regular basis.
As for the rung question: about a year after the site launched, we introduced the idea of the Scan ladder and the different rungs on it to give the packages an incentive to use the product more. To acknowledge and reward those who have been diligently using the product and fixing all the defects that we find, we offer upgrades to the latest and greatest from Coverity, from a product and feature standpoint, as well as additional access to us so they can give us their feedback.
There were 11 packages that met the criteria we set up for rung 2, in terms of cleaning out defects on a regular basis. We kind of stepped up our relationship with those packages and our commitment to them to help them get the latest and greatest out of the technology. We anticipate, over time, that we will add more and more rungs, as we have more and more capabilities in our product.
Scott: The press coverage seemed to stretch between two ends of a spectrum. One side seemed to assert that these projects that achieved rung 2 are bug free. Obviously, Coverity never suggested that–you were just rewarding projects that have made good use of your technology. On the other end of the spectrum, some people thought it pointed out an inherent weakness of open source, because there were vulnerabilities to be addressed.
My impression is the truth lies somewhere in the middle. Large open source products like the Linux kernel or Apache have very high quality, and they’re worked on by very good developers. But I think it does show that certain jobs are best done by a machine. It stands to reason that scanning code is best left to automation, which never gets tired or bored.
Where do you think the limits are of what eyeballs can do in fixing defects? And as an extension of that idea, where do you think the technology can continue to go? What are some of the opportunities for things that could potentially be scanned for that maybe are still hard problems to solve today?
Ben: We started this project with the idea that we were not up to the task of debugging our own software; debugging is something that’s plagued developers forever, right?
In 1949, Maurice Wilkes–a pioneer in creating the EDSAC–said that the first time he had to debug his own programs, he realized that most of the time he spent programming would go to debugging the mistakes he had made.
The observation, of course, is that the existing techniques are not sufficient. Testing can’t cover everything, and humans are subject to frailty, no matter how good we are. First, we’re not perfect. Even if I only had 1,000 lines of code and I could look at it day in and day out for months and months and months, I would still make mistakes.
Layer on top of that the fact that systems now are millions or tens of millions of lines of code, and a single developer has no chance of keeping the complexity of that kind of system in his or her head all at once. It’s just impossible.
We’re at the limit. There is no way to uncover these defects with the traditional software development process. So you look to automation technology that can cover all the different possibilities in code, and this is where static analysis shines, especially compared to testing or dynamic techniques. It isn’t tied to the execution of the program.
When you execute the program, maybe you get five percent code coverage, meaning that five percent of the lines of code are touched at least once in the execution of a test suite. And that doesn’t speak at all about the different contexts that those lines could be executed in.
So when you’re talking about real coverage, it’s a minuscule percentage: hundredths or thousandths or ten-thousandths of a percent, in terms of the billions and trillions of different possible execution paths through the code.
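To make the path explosion concrete, here is a minimal C sketch (illustrative only, not from the interview or from Coverity): every independent branch roughly doubles the number of execution paths, so even a small function quickly exceeds what any test suite can exercise.

```c
/* Illustrative sketch: each independent branch doubles the number of
 * execution paths, so n branches give about 2^n paths. Thirty flags
 * like these would yield over a billion distinct paths through one
 * small function, far more than any test suite will cover. */
void handle_request(int a, int b, int c)   /* imagine 30 such flags */
{
    if (a) { /* 2 paths so far   */ }
    if (b) { /* 4 combinations   */ }
    if (c) { /* 8 combinations, and so on */ }
}
```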
Static analysis affords us the luxury of being able to try to cover all of that without running it. And so you can find those corner cases that every human and test case overlooked. Now, obviously the promise of that sounds great, but there are some very practical limitations, in terms of analysis.
Analysis is not easy, partly because the systems are so complicated. This is where a lot of the breakthrough comes in from our technology–addressing that scalability while retaining enough analysis heft to find interesting things in the code.
It’s one thing to analyze ten million lines of code, but if you’re not doing any deep analysis, you’re not going to find any of those real defects, those real security vulnerabilities that really resonate with developers. On the other hand, if you are doing deep analysis, it might take you the age of the universe to analyze a code base of reasonable size.
Scott: [laughs] Right.
Ben: So that’s no good either. There’s a very elusive sweet spot that, for decades, literally, people hadn’t quite figured out in static analysis, and that’s what kept it from really delivering on the promise it has held ever since the days of Lint in the late 70s.
There are always tradeoffs that have to be made to get that scalability, and those tradeoffs manifest themselves as potential weaknesses, in terms of both false negatives and false positives.
False negatives, of course, are instances where static analysis did not report a defect that’s actually in the system. Of course, we’ll never have static analysis that can find every single defect, but if it doesn’t find anything, then it’s not going to be very useful. Therefore, in a nutshell, you need to find enough things to make the testing process valuable, in conjunction with your other mechanisms for testing.
The other issue is false positives, which tend to be a lot more deadly. When a static analysis tool reports to you something that is not actually a defect, it wastes a bit of the developer’s time. If you have too many false positives, then the developers will start ignoring even the good results. It’s the boy who cried wolf phenomenon.
If the tool cries wolf nine times in a row, even if that tenth one is a nasty bug, you’re probably going to gloss over it. I think the testament to the fact that we solved that problem was that open source developers gravitated to the product. They used it and they kept using it.
No manager told them, “You better go use this now,” or anything like that. It was just out there and they kept using it. We’ve measured the false positive rate to be very low, and that’s an important part of what makes it practical–having enough analysis heft to keep the false positive rate low, yet still finding a high enough rate of real bugs to make it interesting and valuable to the end customer.
All of that is really background to answer your question about where we take the product next. You want to add more analysis heft and still scale, so you can push that false positive rate even lower and find more and more bugs.
If you were to run Coverity 1.0 circa 2003 and compare it to now, we find two, three, four times as many defects, or more, on any given code base at that same low false positive rate, because we’re learning. We’re getting examples from our customers–they’re giving us more and more bugs, and asking us to find ways to uncover them.
We’ve now analyzed close to two billion lines of code between the open source community and all of our commercial customers. That’s an awful lot of great data for us in terms of learning about software systems, how they fail, and specifically in the code, what goes wrong that makes developers pull their hair out.
The obvious question is how to do that. How do you increase your bug-finding rate and make sure you’re doing it in a reasonable fashion? Recently, we’ve introduced a new kind of analysis technology based on Boolean satisfiability.
You take a Boolean formula, which only deals in values of true and false–variables that can be true or false, and the logical operators “and,” “or,” and “not”–and you try to figure out whether it’s possible for that formula to be satisfied. In other words, you try to determine whether there’s some assignment of the variables that makes the whole thing true.
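As a toy illustration of the satisfiability question (this brute-force sketch is ours, not Coverity's solver, and the formula is made up), the following C program enumerates every assignment of three Boolean variables and reports whether the formula can be made true:

```c
#include <stdio.h>
#include <stdbool.h>

/* Toy illustration: enumerate all assignments of x, y, z and ask whether
 *     (x OR y) AND (NOT x OR z) AND (NOT z)
 * can be made true. Real SAT solvers answer the same question far more
 * cleverly than this brute force. */
int main(void)
{
    for (int bits = 0; bits < 8; bits++) {
        bool x = bits & 1, y = bits & 2, z = bits & 4;
        if ((x || y) && (!x || z) && !z) {
            printf("satisfiable: x=%d y=%d z=%d\n", x, y, z);
            return 0;
        }
    }
    printf("unsatisfiable\n");
    return 0;
}
```

Here the formula is satisfied by x = false, y = true, z = false; a formula like (x AND NOT x), by contrast, has no satisfying assignment.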
This has actually been used in the hardware industry for a long time by the people who are making the tools for chip design. Of course, chips are very, very expensive to patch if a defect hits in the field. We all remember the Intel Pentium FDIV bug–it cost Intel half a billion dollars.
Obviously, that’s a pretty big mistake, and it’s not surprising that the hardware guys generally apply more rigor on the analysis front than the software guys do. But the fact is, static analysis for hardware verification has been around for a long time. No one had really translated this for software before, because there hasn’t been as much of a push for software verification.
Now we have introduced that kind of technology to complement our traditional dataflow analysis, or abstract interpretation, or lattice simulation–there are a number of different names for the notion of pushing down all the different possible paths through the code.
But we want to marry that with a bit-level representation of the software system. That helps us reduce false positives by eliminating false paths, meaning paths that can never be executed, as well as paths that shouldn’t be analyzed.
It also helps us find new kinds of bugs like integer overflows and enhance the buffer overflow capabilities of the traditional static analysis technologies. So there are lots of different technologies out there that can do analysis of systems at various different levels.
But again, we always have an eye toward the scalability of the system and a low false positive rate, because that’s how you get the thing to be used. And of course that’s the name of the game. It doesn’t matter how good your analysis is, if developers don’t want to use it.
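A minimal C sketch of the "false path" idea described above (our illustration, not an actual Scan report; the function and variable names are invented): a checker that ignores how the two branches are correlated would warn about the dereference, but treating the branch conditions as a Boolean formula shows the dangerous path can never execute.

```c
#include <stddef.h>

/* Illustrative "false path": a checker that ignores branch correlations
 * sees that p can become NULL at line A and is dereferenced at line B,
 * and reports a bug. Encoding the branch conditions as a Boolean formula,
 * roughly (debug AND NOT debug), shows no assignment satisfies it: the
 * path is infeasible, so the report would have been a false positive. */
void log_message(int debug, char *buf)
{
    char *p = buf;

    if (debug)
        p = NULL;      /* line A: p is NULL only when debug is nonzero */

    if (!debug)
        *p = '\0';     /* line B: reached only when debug is zero, so p == buf */
}
```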
Scott: One of the things I did for a little homework was to look at the Linux kernel mailing list to see what the initial response was when Coverity started to show up on the list.
I remember seeing a thread where somebody was submitting a bunch of patches, and other people were arguing against checking them in. Their view was that they needed to actually look at why the tool reported a problem, and that it isn’t enough just to modify the code so that the tool doesn’t complain any more. They wanted to verify that there was really a vulnerability.
What are the safeguards, once the tool has output a lot of things, to make sure they actually get fixed the right way? And do you think there is a problem, or the potential for a problem, where people just try to make the code pass through cleanly without really spending the time to do a root cause analysis and ask whether there is truly a vulnerability there?
Ben: Yes, that’s always a danger with any kind of technology, in terms of a developer who isn’t quite in the right mindset, who maybe is unfortunately not careful enough about the changes they’re making. They might just try to either make the code compile without throwing errors or warnings, or make it run through the static analysis tool without reporting anything.
What we do to discourage that kind of behavior is to present the defect through as rich an interface as possible. What we communicate is pretty much everything we learned about the defect in the analysis of the path that uncovered the problem.
So, for example, if we observe one event on line 23, and a number of different conditions had to be true or false to get another key event to occur, say, on line 53, we try to step the developer from one condition to the next, outlining the path as exactly as possible.
We try to present all of the information that we have to encourage a deeper look into the issue. Of course, you can lead a horse to water, but you can’t make him drink.
For instance, we might say, “OK, this is where you have a memory leak.” And they think, “Oh, memory leak. I should put a free in there.” Or a null pointer dereference: “Oh, I guess I should just check that pointer to see if it’s null or not.”
Absolutely, you can end up introducing things that aren’t the right fix. And, of course, that’s a fact of life. It’s just as if a QA person told the developer, “Hey, I noticed that the system crashed.” Then the developer just made a bunch of arbitrary changes. Then it doesn’t crash anymore, but they never understood exactly why that crash happened in the first place. Chances are, they just masked the problem with maybe another problem that’s going to rear its head later on.
So the problem isn’t specific to static analysis, but certainly these tools can be susceptible to that.
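To make that failure mode concrete, here is a hypothetical C example (ours, not a real Scan finding; the names are invented): the developer silences a possible-NULL-dereference report with a blanket check, which makes the warning disappear without answering why the value could be NULL in the first place.

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical example of silencing a report without a root-cause fix:
 * the analyzer warns that 'user' can be NULL when it reaches strcmp(). */
static int is_admin(const char *user)
{
    if (user == NULL)      /* reflexive "fix": the warning goes away...      */
        return 0;
    return strcmp(user, "admin") == 0;
}

int main(void)
{
    const char *user = NULL;   /* ...but the real question is why 'user' was */
                               /* ever NULL here, e.g. a lookup upstream     */
                               /* that failed silently. The check hides that */
                               /* failure instead of surfacing it.           */
    printf("%d\n", is_admin(user));
    return 0;
}
```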
Scott: We were talking to some people on Firefox, and they said the dynamic nature of their product, the way they use pointers, and so on causes Coverity to generate excessive false positives. Do you see that type of issue in other environments, and if so, how do you respond to that?
Ben: One of the things that we unfortunately don’t necessarily get to have with the open source community, as opposed to our commercial customers, is the implementation phase of the static analysis.
For the most part, users are going to get good scalability and a low rate of false positives right out of the box, but there are a lot of things you can do to tune it, as well. If you do happen to have a higher than average false positive rate, chances are good that we can address it by encapsulating just a couple of idioms in some configuration for the analysis engine.
For our commercial customers, part of licensing the technology includes our professional services, to come in and make sure everything looks exactly right for that environment. You need to tune things to the application that you’re analyzing, since every code base is a little different. Tuning the analysis engine can not only keep the false positive rate low, but it can also find more and more kinds of defects, if you add additional bits of knowledge.
Sean: When you looked at open source projects, I understand that PHP and some others came out as having particularly high rates of vulnerabilities. Was there any particular class of vulnerability that seemed to show up more often than others? And I guess this is really a separate question, but in cases like that, how do you provide the next level down of analysis for customers?
Ben: For the open source packages, we decided not to do that deeper kind of digging. Of course, the data’s there, and we could certainly look at the different kinds of vulnerabilities and defects that we discovered, but there was generally nothing that jumped out as so obvious that we thought it was either newsworthy or worth exposing in that way.
On the other hand, with X.org, there was one security vulnerability that could have caused them a great amount of grief. They were very glad that we found it, because they said it was literally one of their worst-case scenarios–a root exploit that anybody could take advantage of.
It was actually just a simple matter of missing parentheses in a function call, where they meant to call a function but didn’t. That kind of issue is hard to categorize. It was just a simple coding error, but it led to a root exploit.
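The pattern described here is easy to reproduce in a small sketch (this is our reconstruction of the general idiom, not the actual X.org code): leaving the parentheses off a call compares the function's address rather than its return value, and since the address is never zero, a privilege check quietly stops checking anything.

```c
#include <unistd.h>

/* Reconstruction of the general idiom only, not the actual X.org source:
 * the author meant to call geteuid(), but without the parentheses the
 * expression compares the function's address, which is never zero, so the
 * test always succeeds and unprivileged users pass a check meant to
 * restrict a dangerous option. */
static int user_may_set_unsafe_options(void)
{
    return getuid() == 0 || geteuid != 0;   /* BUG: missing parentheses */
}
```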
That’s another thing with static analysis: it’s sometimes hard to predict the impact of a coding flaw. For instance, we might discover a place where you’re going to dereference a NULL pointer. Well, that could translate into an innocuous crash that hardly ever happens, or a huge denial of service that every single attacker can take advantage of.
Sean: Once you’ve got a mistake, there’s no way to really know what the ramifications of that mistake are going to be. A curly brace in the wrong place, or, like you said, lack of parens could be nothing, or it could be a worst-case scenario.
Ben: Exactly. So we try to stay away from getting too sensational about claiming that a certain type of defect suggests a certain level of vulnerability. We prefer to let the data speak for itself, and we let the developers take care of the issues and get them fixed.
Sean: At this point, you guys have looked at a lot of code across commercial and open source projects, projects of different sizes, and projects of different ages.
Do you find any correlation between the number of defects that you find with the size or age of the project? Are there other things where there seems to be a correlation, or is it really not correlated to much of anything?
Ben: I would say it isn’t correlated to any of those obvious things like age or size. Certainly the bigger the project, the more susceptible it’ll be to the complexities of code interacting with other code, which leads to the kinds of things we find.
So there might be a slight correlation in terms of defects per thousand lines of code being a little bit higher on the larger code bases, but what I think dominates more than that is the mindset of the development organization.
Those who focus on quality, in terms of good processes and tools in that regard, have better code, and we find fewer bugs in that kind of code.
Sean: I’ll just go ahead and lay some personal bias out there for the reader to judge me on, but there seem to be a lot of organizations that have a lot of hubris around the security of their products because they haven’t necessarily had a lot of high profile exploits.
Not to throw stones at any one company, but I read an article just today about enterprise organizations that have a lot of concerns about the iPhone, because of vulnerabilities like buffer overruns from people loading unsupported apps on them.
You’ve been looking at this for a long time. Do you feel like companies are sometimes a little overconfident, so they haven’t necessarily really done the work to make it secure, and if they really started looking under the rug they’d find a lot of unpleasantries?
Ben: Obviously, I can’t comment on any particular companies, but I think software makers in general–whether commercial or open source–need to be paranoid.
They need to recognize that every system is attackable, and that there is no such thing as finding all the bugs. There is no such thing as finding all the security vulnerabilities, and that means you’re susceptible to malicious users.
I think software makers who have not been the victims of a lot of attacks should consider themselves lucky. It’s a game of probabilities, in many respects, and you can control some of the factors but not all of them.
You can control the degree to which you test your product, how good your secure coding practices are, whether you leverage the latest and greatest static analysis, find and fix all those defects, and do the penetration testing and so forth. All of that reduces the probability that malicious users will discover remaining security vulnerabilities to be attacked.
On the other hand, software builders don’t have direct control over factors like how many attackers are out there trying to break the system. I think the companies that have the highest profile security problems are the ones the attackers are really going after, because it’s their products that are on most of the hardware out there.
There are a lot of different mindsets for attackers, but one motivation is certainly fame or the pervasiveness of the exploit. If I have a choice of hacking something that a million people have or five people have, I’m going to pick the one that applies to a million different systems.
Sean: What do you think are some of the most common misconceptions that people have about announcements that you guys push out, whether it’s the homeland security one or whatever?
When Microsoft or Mozilla goes and puts out an announcement about bug counts, the other one comes and spanks it down. In the end, you’re always left wondering. I’ve been wanting to put a certain great Mark Twain phrase into an interview transcript, and this looks like the right place–”Lies, damn lies, and statistics.”
What are the common misconceptions you see about the meaning of statistics you publish?
Ben: The most common misconception, especially with the open source work we’re doing, is that we are somehow trying to make a judgment between the quality of open source versus the quality of proprietary software.
I think if you look at the debates and the Slashdot dialog and the comments to stories and so on, you’ll see people responding to a report by saying, “Oh, is Coverity saying that open source is bad?”, “No, Coverity is saying open source is good!”, “No, Coverity is saying open source is bad!”
Sean: Your PR department must hate it when you put out an analysis, since they know they’re going to get hit by two sides. Both sides are going to pull the numbers and stretch it like taffy.
Ben: Exactly. I think that’s the number one thing that people misconstrue. I think the second misconception, unless people are really trying to understand what we’re doing with these projects and so on, is they think that we’re claiming that static analysis is the be-all and end-all to software testing and the ultimate yardstick for software quality.
We never make that claim, and we’ve never fed it when other people make it. The reason we put these reports out and publicize them is so that people are aware of technologies that can help make software better.
Static analysis in this practical form hasn’t been around for long, and there isn’t broad awareness of what it can do, but in the last two years, 8,000 defects have been fixed in open source packages because of this new technology. That says something.
The fact that 400 customers have signed up, many of them across their enterprise, to have our defect detection as part of their development process also says something.
There are still many billions of lines of code out there that aren’t going through this new way of checking. To get this stuff implemented used to be hard, but not anymore. There’s no excuse.
When we make these announcements and talk to people about the technologies, we want them to realize that the development world has improved, in terms of the tools and technologies that help them make the quality of software better. We do that in part by pointing to living proof that open source developers are using it and benefiting from it.
Scott: For whatever reason, it’s hard to simply present data and have people believe that you’re not trying to subtly pass a judgment.
It must have been tremendously valuable to you that tens of millions of lines of open source code were available to throw at your product for analysis.
Ben: Absolutely. I wrote an article about two years ago, and the title was something on the order of, “Open Source Software and Static Source Code Analysis: A Perfect Match.”
The article was really about how the open source movement enabled this innovation in static analysis. Before that, not enough companies would have let us come in and play around with their code for months on end. Now that we have all this code out there, it’s a great playground.
Sean: It seems to me that what hackers do has sort of changed over the past five, six, seven years. It used to be, if you had a virus, worm, or whatever, you knew it, because your corporate network came to its knees.
Today, the hackers are far more subtle, because they’ve figured out that it’s a lot more beneficial for them to build a botnet that nobody knows they’re a part of. Do hackers’ changing techniques affect the kind of scanning that you do?
Ben: I don’t think there’s too much of a correlation there. Not all hackers are even really ‘hacking’ so much, because there are so many social ways to take control of systems, just by getting people to install stuff.
I do think that the vulnerabilities that allow them to automatically have a worm or a bot spread across a network look the same as ever, though. Even if they hide their tracks a lot better than they used to, they’re attacking the same kinds of problems in the code.
Sean: Do you think that track-hiding helps perpetuate a false sense of security, as opposed to the old malware back in the day, like “Code Red,” “Nimda,” and “Slammer,” where you definitely knew you had them?
Ben: That’s a good question. Admittedly, I don’t have my finger on the pulse of the consumer side of things, but my impression is that people are not as concerned about security as they are frustrated by a different aspect of the same issue. They buy a computer, and six months later it’s loaded up with all kinds of different services and applications that they have no idea about.
And some of those are probably malware, and some of them are legitimate, but I don’t think people see it as a security threat, they just see it as frustration with computing.
Scott: It seems that, even on the server side, nobody in the data center wants to install any more patches than they have to. By using static analysis in combination with other tools, like dynamic analysis and fuzz testing, the number of defects per line of code can go down, but on the other hand, the total number of lines of code keeps going up. Do you see it as a zero-sum game, or can defect density be driven down faster than the total volume of code grows?
Ben: You can definitely make progress, for the simple reason that people write code, and that’s slow, whereas analyzing code is fast. We can analyze code a lot faster than people can generate it, and when we innovate in terms of finding ways to analyze code and so on, we’re pushing that out to a billion and a half or two billion lines of code per year.
It doesn’t take very much innovation out of the Coverity Research and Development arm to drastically impact the quality of billions of lines of code, so I think we’re going to be able to stay ahead of the game in that regard.
It’s the exploding complexity in software that has people listening again to the messages of static analysis and disruptive technology, because they don’t have any other answers. They’re trying to make high-quality, secure code, but the volume is increasing geometrically. There are 10 million lines of code in a premium automobile today, where 10 years ago, there were almost none.
There’s code in everything; by the time you get to work in the morning, you’ve probably interacted with over 100 million lines of code, if you count all the devices you’ve touched. And since that code is coming from all over–outsourcing, interactions with third-party libraries, new infrastructure frameworks, you name it–the complexity of code is just too great not to leverage these technologies.
Scott: Do you see any sort of a trend where when a company gets components from external sources, they use tools like yours as sort of a quality gate? For example, they might go out to some component vendor and say, “Look, you don’t have to open source your component, but I want to run some tools on your code base,” so they could compare the outcomes among competitors.
Ben: We’ve got a great case study on exactly that topic. One of our customers recently was trying to come to an agreement with their outsource vendor as to when some code should be accepted, in terms of whether it had been debugged sufficiently.
We didn’t suggest this to them, but they looked at Coverity as third-party arbitration, if you will, where they could say, “This is a gate, and we both can agree that when Coverity says there are no defects in it, we accept that development is complete.”
That approach cleaned up their relationship a lot, because they finally had some kind of commonly agreed upon measurement for acceptance. They also said that using the product in that capacity and also just fixing defects early in the development process let them shave a significant percentage off of the time it took them to release a product. Of course, that’s the ultimate ROI here–you want to get products out the door faster.
I think that’s something that we will see more and more of. People will finally have some way of making a reasonable gate for the code that they accept into their products.
Scott: Well, I want to be sensitive to the time, so do you have any final thoughts to add?
Ben: We have talked a lot about problems in software development and finding defects in code with static analysis, but we view this as just the beginning.
To do static analysis the right way, you have to understand how the software is built, how every file is compiled, and how it’s all assembled together. In the technology that we’ve generated over the last five years at Coverity, we have all of this information.
If you look at the marketing literature out there on Coverity, we talk about the software DNA map, the fundamental building blocks of everything in your software system–not just the source code. Versions, how every file was compiled, your build system, and all of that information is sitting right there.
The future for Coverity is to branch out from static analysis and to identify the places that are painful in the software development process, from build systems through the compilation process to finding defects, both statically and dynamically.
All of those are areas where having a beautiful, complete representation of exactly what’s going on in the software system could benefit the developers, the product managers, and the software development managers. We want to help them answer questions like, “What’s going on in my code base? How is it changing? Am I ready for release? Is this code in good shape?”
Right now, no one knows how to answer those questions, so for us, the next few years will be spent exploring that. We’re really trying to find the tools and technologies that will enable high software integrity, and that encompasses much more than just finding a few bugs or a few security vulnerabilities–it means making sure that you’re confident in what you’re doing from a software development standpoint.
That’s how I’ll conclude–a little bit of a teaser for what’s coming next out of the pipe.
Scott: Excellent. Thanks for taking the time to talk with us today.
Ben: You’re welcome, and thank you.