Interviewee: Ryan Waite
In this interview, we talk with Ryan Waite, Group Program Manager for High-Performance Computing at Microsoft. We talk about:
- Microsoft’s entry into the High Performance Computing area.
- Microsoft’s support of the open source Message Passing Interface (MPI).
- MPI has become bloated, and there’s no usability data (like there is with Microsoft Office) to indicate what should be deprecated.
- At Microsoft, select customers help shape the features for the next product release.
- Microsoft has always focused on ease of use, simplified administration, and uncomplicated setup. HPC is no different.
- Instrumentation helps find reliability issues. If partners are affecting reliability (with unstable drivers, for example) certification helps raise the bar.
- Parallel programming is now the only way to do more computation in the same amount of time. Processors just aren’t getting faster like they used to.
- It’s easier for open source to be incremental. Proprietary software has to make radical changes.
- Hard drives + overnight shipping = really high bandwidth.
Scott Swigart: We are conducting a series of interviews to explore the premise that open source software and closed source proprietary software are built differently — that the software development life-cycle is different, and that difference is likely to manifest itself in concrete ways in the final product.
That’s not to say that one’s better than the other, but simply that there are certain expectations you can have from either open source or closed source software because of the way it’s built, as well as certain inherent limitations.
In the past, ISVs pretty much just wrote closed source proprietary software, but today they have a little more of a choice in how they build a product, and we want to look at some of the decision points around that. And if you’re looking at bringing software into your organization, and you have a choice between something open source and something closed source proprietary, again, what are some of the common decision points or expectations around that?
To get us started, tell us a little about yourself and what you do, to give us a little bit of context.
Ryan Waite: I’m the Group Program Manager for High-Performance Computing at Microsoft. What that means is that my team is responsible for writing the technical specifications for our products; we design which features are going to be necessary for the product, then we write up how those features should work.
We work in close partnership with the software development and software test teams. The development team, as you can imagine, writes the code itself, and the test team tests that code. I should also mention that my product actually combines both closed and open source software — both Microsoft-designed as well as open source software.
My staff is comprised mostly of computer scientists or computer engineers of one kind or another. I’ve been at Microsoft for over 15 years, and I’ve worked mostly on server products, including things like Exchange Server, BackOffice Server, Windows Server, and Small Business Server — all sorts of different pieces.
And the product I work on right now is called Compute Cluster Server. As an HPC product, Compute Cluster Server allows you to bind 50 or 100 or 1,000 machines together to work on a single computational problem or a single set of closely related computational problems. So it’s basically a category of supercomputing.
The idea is to take very large computational problems, like proteomics problems, genomic problems, problems in mechanical engineering, weather simulations, and computational finance, and run these against this huge set of machines. And the great thing about these kinds of clusters is that they cost a fraction of what traditional supercomputers cost. But clusters are very powerful, and they’re very well suited to certain types of programming problems. These kinds of clusters are based on the concept of what are called Beowulf clusters, developed in the mid-’90s at NASA. The idea is to bind together a bunch of commodity servers with a high-speed network, and you can run some very interesting computational problems on them.
Most problems can be designed to work on a cluster, although there is still a category of supercomputing problems that you really do need a traditional supercomputer for.
Scott: Thanks. You mentioned that you have closed source and open source solutions; can you talk a little bit about what you do on each side?
Ryan: Loosely speaking, you can think of our product as having three major components. One is what’s called a job scheduler, which schedules work across this big cluster of resources. If you have 1,000 machines, it’s not like every job takes 1,000 machines; some take 16 machines, some need 64 machines, some need 132 machines. And then they have time limits for how long they want the machines to go — for example, they might need eight hours of computational time. So we have a job scheduler.
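The allocation problem Ryan describes can be sketched in a few lines. This is a toy illustration under stated assumptions, not Compute Cluster Server’s actual scheduler: jobs request a node count, run when enough nodes are free, and otherwise wait in a FIFO queue. The job names and node counts are made up for the example.

```python
from collections import deque

class JobScheduler:
    """Toy FIFO scheduler: jobs ask for a number of nodes
    and start only when enough nodes are free."""

    def __init__(self, total_nodes):
        self.free_nodes = total_nodes
        self.queue = deque()     # (job_id, nodes_needed), in arrival order
        self.running = {}        # job_id -> nodes held

    def submit(self, job_id, nodes_needed):
        self.queue.append((job_id, nodes_needed))
        self._dispatch()

    def finish(self, job_id):
        # Return the job's nodes to the pool, then try to start waiters.
        self.free_nodes += self.running.pop(job_id)
        self._dispatch()

    def _dispatch(self):
        # Start queued jobs in order while the head of the queue fits.
        while self.queue and self.queue[0][1] <= self.free_nodes:
            job_id, n = self.queue.popleft()
            self.free_nodes -= n
            self.running[job_id] = n

sched = JobScheduler(total_nodes=1000)
sched.submit("fluid-sim", 64)
sched.submit("genome-scan", 16)
sched.submit("weather", 960)   # must wait: only 920 nodes are free
```

A real scheduler adds the time limits Ryan mentions (for example, an eight-hour reservation), priorities, and backfill, but the core bookkeeping is this: a pool of nodes, a queue of requests, and a dispatch loop.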
One of the gentlemen on my team was an architect on the LSF job scheduler at a company called Platform Computing; he’s a rocket scientist. The second thing we’ve built is systems management functionality, and this allows us to manage this big cluster of machines. There’s a woman named Cathy Palmer on my team who was at Cray and Tera Computing for 11 years and has a PhD in job scheduling; she’s a rocket scientist, too. These are some of the best people I’ve ever worked with, so I’m just going to glow a little bit about how great they are.
The third area is about high-speed networking, and the parallel programming model itself. That’s headed up by another gentleman on my team named Eric Lantz. Part of that is high-speed networking. On the high-speed networking end of things, you need to enable these machines to talk to each other very quickly, because sometimes the speed of your computation is dependent on how fast the nodes can communicate with each other as they’re running their computations. It’s not just about the CPU inside that machine, but sometimes the I/O, or the communication speed between those machines.
For high-speed, tightly coupled communication there’s something called MPI, which stands for the Message Passing Interface. This is basically the communication interface people use to write these kinds of massively parallel applications that run on clusters. MPI itself is a standard, and the reference implementation comes from Argonne National Labs. Argonne distributes that code under a BSD-style open source license.
So Microsoft is a licensee for that code. We’ve taken that open source code from Argonne, and we’ve made modifications to it and we ship it as part of our product. The folks at Argonne are genius UNIX and Linux developers — really great UNIX developers. They’ve been doing this kind of stuff for years and years. And message passing is a tremendously complicated area of computer science, and they’ve done a great job at building this code. They call it MPICH and MPICH2.
The problem is that they’re not Windows developers. They don’t have a lot of experience with that. They have one person over there who is their Windows developer, but we have a lot of experience programming Windows and can certainly help. And Windows and Linux are different operating systems, so you write programs for them in slightly different ways. Not radically different ways; in both cases you use C or Java or C# or whatever to write your applications.
But the thing to keep in mind with them is that you have different efficiencies in different kinds of ways. And so one of the things we’ve done is to implement those efficiencies in the Windows version of MPICH. All of this allows us to have a very high-performance solution.
We’re about to contribute those changes back. We’ve just been working through some of the final issues with how we transfer the code back. It’s just logistical stuff. In the next month or two, you’ll see a press announcement go out that talks about our contributions to this code, these modifications that we’ve made back to Argonne. As far as I can tell, this will be the largest contribution Microsoft has ever made to the open source community.
With this particular code, we want anybody running Windows to have access to a high-performance MPI library. If they buy it through my product, that’s great; if they are just running Windows and they want it, then they can get it from Argonne.
Scott: You’re in kind of an interesting position, because you’re one of the few Microsoft people developing both closed source proprietary and open source projects. What are some of the things that stand out to you as key differences between how those two processes work?
Ryan: What’s interesting about MPI is that there are a large number of features that have been driven because somebody in a standards body or open source group thought it was an interesting feature for what they were doing. They’ve implemented particular APIs.
Now, that API may never be used by anybody else except them. And yet it becomes part of this code base. The really great thing about that is that if you need to do a very specialized API for the application that you’re writing, you can go ahead and contribute it to Argonne — and if anybody else in the world has that same kind of problem, they might be able to use that same piece of code that you wrote.
The disadvantage of that is that this is now a very large code set with over 200 APIs. It’s a bit of a joke, and I don’t have any proof that this is true, but the joke is that most developers use six or eight of those 200 MPI commands for the bulk of their programming, so there’s a whole bunch of stuff that just isn’t used. And part of the problem is that you have to maintain all of that code as it marches forward, making sure all those APIs work.
They’re all documented, they’ve all been shipped, and ideally, they all continue to be well-supported. I think that’s the struggle with that kind of model — the code has exploded pretty quickly. It exploded because everybody gets to contribute the little things that they need into that code.
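The handful of calls that joke refers to is usually the classic core six: MPI_Init, MPI_Comm_size, MPI_Comm_rank, MPI_Send, MPI_Recv, and MPI_Finalize. As a rough Python analogy of that pattern (this is not real MPI; in practice you would use the C API or a binding such as mpi4py), each “rank” knows its id and the world size, and communicates only through explicit send and receive:

```python
import threading
import queue

# Rough analogy for the core MPI pattern. The real C calls would be
# MPI_Init, MPI_Comm_rank, MPI_Comm_size, MPI_Send, MPI_Recv,
# and MPI_Finalize; here a per-rank mailbox stands in for the runtime.

world_size = 2
inboxes = [queue.Queue() for _ in range(world_size)]  # one mailbox per rank
results = {}

def send(dest, payload):
    inboxes[dest].put(payload)          # like MPI_Send to rank `dest`

def recv(rank):
    return inboxes[rank].get()          # like MPI_Recv on `rank` (blocking)

def worker(rank):
    if rank == 0:
        send(1, "work item from rank 0")
        results[0] = recv(0)            # wait for rank 1's reply
    else:
        msg = recv(1)
        send(0, msg.upper())            # do some "work" and reply

threads = [threading.Thread(target=worker, args=(r,)) for r in range(world_size)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The other 190-odd MPI APIs layer collectives, datatypes, one-sided operations, and more on top of this same send/receive foundation, which is why so much of the surface area can sit unused.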
Scott: That’s interesting, because in other open source projects, the maintainers tend to be pretty strict and conservative about what they’ll let in.
Ryan: There is no single person for MPI saying, “this is OK, this is not OK.” You don’t really have the same kind of maintainer making things cross a very high bar before they get included. And those bars can take many forms, other than just the quality of the code.
Sometimes people establish a quality bar for when code gets included, and sometimes it’s a customer requirement bar: you want to make sure that if people are going to use something, it works properly. We all have different reasons for what we decide to include in software and what we leave out.
I think it’s just the nature of these projects and the way software projects mature. They get bigger. Take the Linux kernel — it’s huge compared to what it was five or ten years ago.
As these products become more and more general purpose, they have to be able to service the needs of a greater and greater community. And we have an enormous amount of experience with this at Microsoft with products like Office, for example.
We have more usability data about how people use Office than you can imagine. Because we shipped instrumented versions of Office during the beta period, we know exactly which features people use and we know exactly how they use them.
So, even though there may be a thousand features inside of Office, we know how every one of them is used. Everybody may use a different 20 percent, but we know which features are well-used. And features that aren’t used, we deprecate or remove from the product. We have very concrete usability data to tell us that.
Scott: Contrast that with the closed source work that you do. It seems like Microsoft goes through a process where there’s a big wish list of features, and people start coding on things as the project marches toward release. Things get cut, and developer resources get put on the more critical things.
How do you see the closed source work that you do comparing with the way you’ve seen the open source projects developed, in terms of how features are spec’d, how they’re built, how features are cut, and how things make it into the release product?
Ryan: That’s a hard question to answer, because it’s so different depending on which open source group you’re working with. I’d say for us, the first step early on in a project is to figure out whether a product is going to be more driven by a release date, or more driven by a feature set.
We do both at Microsoft. If you’re more driven by a release date, you really make sure that you scope that product so you’re going to do well at hitting that release date. And Office is a good example of a product that really drives to release dates.
The video game industry is an even better example. If you look at Xbox or you look at any of the game title providers that are building stuff for Xbox, they have to hit Christmas. No ifs, ands or buts. Otherwise, they’re not going to recoup any of their development costs, and those companies could fail. It’s really important for those organizations to scope their features to a particular date.
For other projects, you might look at customer requirements and pick a date, but you’re really going to drive to implementing that set of features and making sure that they hit the appropriate level of quality. I think that as those projects move on, that’s where you may begin to see some features get cut in order to hit a release date, or just because the team realizes that they’re less important.
We spend a lot of time sitting with customers. We have a customer advisory board called the Technology Adoption Program, and they come out here for a couple of days. These are big technology decision makers at government national labs, biotech companies, and financial institutions like banks and stock trading houses. We spend two days presenting to them on what we’re planning to build, and they give us feedback about what they’d like to see.
We release public betas. In the Technology Adoption Program, people actually deploy beta code and run it in production prior to the release of our software. That lets us know which features are being used and which ones aren’t.
I also spend a lot of time with people asking them which things they think are dumb or badly implemented, with an eye toward cutting those and focusing on other stuff instead.
The overall process takes the entire universe of things that you could possibly build, and you continually cut, as you move through the software development process, until you have what is really important at the end.
Scott: Again, this varies from open source project to open source project, but a lot of them are built by developers for developers, so the way a feature gets into an open source product is that somebody just sits down and writes it. Their first interaction with the process is by submitting code for a feature.
If they can’t code it, then that feature isn’t going to get included, so the downside is that, to some degree, only coders spec features. The upside is that nobody writes something that they aren’t personally going to use.
At a company like Microsoft, when you’re building the next version of a product, you’re doing a certain amount of work that’s fairly speculative. You’re building what you think people are going to need and use, and sometimes that’s more successful than others.
I mean, sometimes Microsoft delivers stuff that’s wildly popular and everybody uses it, and some things are dead on arrival, even though there was a lot of effort put into building them.
Ryan: Right — do you remember Microsoft “Bob”? It wasn’t a successful product.
Commercial software companies have to determine what people want, and what will be successful. That’s why we spend a lot time doing things like customer research, usability research — really trying to figure out what we should build. I talk a lot about market opportunity documents, which is a fancy title for “what do people want?”
It tells us first of all, what set of users we are going to help. We segment the market, and we figure out where we’re going to prioritize. That’s the first round of cuts — we’re not building something for everybody; we’re building something for a particular group.
Then we ask, “What do those people really want?” and we visit customers to try to answer that. One of the first questions I ask is an open-ended one: “What drives you most crazy about your HPC system?” Windows or Linux, it doesn’t matter; what’s the thing that drives you most nuts?
If you have some really open-ended questions, you hear some really interesting things come back from customers. Remember that HPC was entirely a UNIX market in the past. A lot has moved to Linux, but there’s still a fair amount of Unix out there, and some of them are beginning to adopt Windows.
And because so much of it is driven by the open source people, you have a whole set of computer scientists really driving at issues that they think are important in high performance computing.
But when you go talk to the system administrators, they tell you things like, “I’m struggling with power and cooling — the systems just run too hot.” The second problem that they talk about is, “It’s so hard to get the cluster up and running in the first place.”
Sean Campbell: Do you think that either closed source or open source software reaches out a little more effectively to some of those audiences and meets their needs?
Ryan: Take systems administration software for example. How do I get servers up and running, and deployed, and get the whole system going? We do really well with that, and it’s a good thing to do well at since a lot of people complain about it.
This is where things get a little bit touchy, because there are a lot of UNIX administrators who are very passionate about UNIX or Linux. And Windows administrators like to talk about how great Windows is. People throw mud all the time about this. I’m on a bunch of user group mailing lists, and people there say, “Well, the problem with Windows administrators is that they’re inexpensive, and they’re all the same.”
On the other hand, I have heard people say, “Every time we get a call from a Unix customer, or a Linux customer, their whole network is set up in a unique way. Every time I get a call from one of our Windows customers they’re easier to support because all Windows networks are traditionally set up the same.” It’s true that there are better prescriptions of how to do things in Windows than there are in the Linux world, or the Unix world, and so that has some benefit.
I think we do a good job of servicing the needs of the system administrators. Certainly in the database space there are a lot of debates about it. I think MySQL has actually come a long way, and is doing pretty well. I’m not a big database expert, but SQL Server really changed the database market.
I don’t know if you guys have ever set up an Oracle database before, but it’s hard [laughs]. The first one I set up was about 10 years ago on a Solaris box, and oh my God. I consider myself to be a pretty technically competent guy. I’m not a database expert, but I spent a weekend just trying to get through the ten volumes of manuals about how to get the thing up and running and performing correctly.
With SQL Server, you walk through a wizard for setup, and you’re pretty much up and running. So, I think we do really well at servicing particular user needs, and servicing a broad general market as well, even though that market may not be as technically savvy as the people making contributions to the open source community.
Scott: Another area that I’d like to dig into involves some of the differences around identification of bugs, and support, and so forth. I’d guess that when you’re dealing with high performance computing and different architectures, you see all kinds of strange things that might be hard to reproduce.
On the closed source side, how do you handle that? What is the process for a customer to come to you with a problem and get a resolution, whether it’s just sending knowledge back to the customer, or whether it’s an actual change to the product to resolve it?
Ryan: One of the things that’s important for my group to keep in mind is that we’re brand new in this space. Windows is eight to ten percent of the HPC market right now, which is really pretty good. But if somebody has an HPC cluster, it’s probably a Linux or Unix cluster, so what has been important for us is that we understand the way that people are using their existing clusters, and that we fit into the model.
We provide support through our traditional Microsoft product support services, and we also have newsgroups that we support. But we also behave a lot more like open source organizations do.
We have a site called WindowsHPC.net, and there are a set of bulletin boards up there that people can search to look for problems that they run into. Our development team here in Redmond reads those groups all the time.
That helps us plug in to that type of traditional model for how people support these systems. And that’s how people can report bugs as well. We also get bug reports from a lot of our partners. In the HPC market, it’s not like building a packaged product like Office, where you kind of ship it out there and you’re done.
I have to get OEMs like Dell and HP and IBM and SGI all lined up to ship our software. We have network hardware vendors, software vendors, critical applications like Mathematica and MatLab, that are all integrated in with our products. We have to get all of that put together.
So, we get bugs from those customers or partners as well. And those come, usually, directly in to us. They talk to somebody that’s kind of an account representative for them at Microsoft, and we get bugs filed directly. We get on the phone with them, and partners get a really close level of support in that way. When end users run into problems, they post to the newsgroups, and we help them out there.
So, the interesting thing here is that we do both. We do stuff that’s a lot more like the open source community as far as finding out about bugs as well as stuff that is more traditional.
The bug reporting service built into Windows gives us really great quantitative data about where things crash. I forget the exact numbers, but something like 50 percent of crashes in Windows used to be caused by video device drivers.
And then we have something called WHQL, the Windows Hardware Quality Labs, and they run tests on drivers. In order for a driver to get WHQL certification, it has to pass a battery of tests. So, we would look at the ways these different video drivers were crashing, and create tests that we would then put into WHQL, and that would improve the quality of those drivers continuously. Because even if it’s a particular driver that crashes, you still blame Microsoft, because Windows crashed. And so, it behooves us to make sure that all of that code is performing correctly.
Scott: In other words, you’re able to regulate the ecosystem to some degree by saying, “If you want to be certified for Windows, submit your driver to our lab.” And then if your lab is running tests and they’re getting the driver to crash, you go back to the vendor and tell them to fix it?
Ryan: Actually, the qualification labs are run by a separate agency. They can contact us to get a waiver for a particular company, and under some circumstances that’s worth doing, to grant a waiver if they’re not able to pass a particular test.
It could also be, as hardware continually evolves, that the tests may not always keep up. If a hardware vendor implements a totally new piece of hardware, the older tests may not work correctly, so you have to be flexible, of course. But everyone wants to go through that qualification process, because the OEMs — like Dell and HP and IBM, etcetera — all want WHQL-qualified drivers on their systems. Otherwise, they’re going to be getting support calls as well.
Scott: One of the things that Microsoft’s been really focused on for about the last five years is security — just increasing the security of the stuff that you guys ship. Is any of that included in your certification process?
Ryan: There are parts. We’ve been releasing a lot of our security tools to the community, for people to use the same security tools that we use in their own products as they build their own software. We have a lot of very sophisticated software analysis tools that help identify security bugs before they even pop up. We published a book called “Writing Secure Code.”
Scott: We interviewed Michael Howard, so we’ve got a pretty good background on that.
Ryan: Actually, Michael and I have worked really closely. As he was doing the first round of the security development life-cycle, I was responsible for security on mobile devices like Pocket PC and Smartphone.
I would say that we do really well on the security end of things. And if you talked to Michael, you know that the issue is that if you miss one thing, there’s some hacker out there that has years to try and find that one bug that somebody missed.
Scott: He also pointed out that as Microsoft hardens their products, hackers are focusing more on the ecosystem because they’re softer targets.
Ryan: I’ve read that same data and I think that’s fascinating, the way that these kinds of ecosystems change over time.
Scott: So, the third party that does the certification runs tests, but I’m guessing it hasn’t evolved to the point where they say to a driver vendor, “Well, submit your threat models.” Do you see things evolving in that direction, or is that more intrusive into how an organization does what it does than Microsoft would want to get?
Ryan: I would say it leans toward your second point, that when you hand over your threat models, you’re basically handing over the architecture of your product. And I’m not sure that a lot of people want to do that.
Scott: One of the things that Michael also pointed out is that certain APIs have just been banned, things as simple as strcpy(), because they’re vulnerable to buffer overruns.
It seems like you guys would be in a difficult position because high-performance computing, on one hand, is all about speed, but I’m guessing that there’s a security aspect to it, too. So, how do you balance security and performance, or do you see that there’s any tension between those at all?
Ryan: I don’t think it’s black and white. I think there’s a little bit of tension, but not a lot. Here’s what people traditionally do with HPC systems: you have a cluster — let’s say you have 500 machines — and you have what’s called a head node. And a head node is where you submit computational jobs.
That head node will be sitting on the public network, and when you submit your job and your job is ready to run against all the compute nodes in the cluster, those compute nodes are sitting behind a firewall on a private network.
In a sense, all of those compute nodes can have a very open relationship with each other. They may not be hardened in the same way that every other system on the network is. And that’s OK, because they’re all behind a firewall. And all the jobs, when they’re submitted, are validated. You know what user submitted them, and they have permission to submit jobs. And if they don’t have permission, the job is rejected, and so forth. Those jobs are submitted into that cluster and they run on the cluster at that point.
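The validation step described here amounts to an access check at the head node before anything reaches the private compute network. As a toy sketch (the allow-list, user names, and function names below are made up for illustration; a real head node would authenticate against a directory service rather than a hard-coded set):

```python
# Hypothetical allow-list; in practice this would be a directory lookup,
# e.g. group membership in Active Directory.
AUTHORIZED_USERS = {"alice", "bob"}

job_queue = []  # jobs accepted for scheduling on the compute nodes

def submit_job(user, job):
    """Head-node gatekeeping: reject unauthorized submissions before
    the job ever touches the firewalled compute nodes."""
    if user not in AUTHORIZED_USERS:
        return "rejected"
    job_queue.append((user, job))
    return "queued"

print(submit_job("alice", "protein-fold"))   # queued
print(submit_job("mallory", "anything"))     # rejected
```

Everything behind that check runs on the trusted private network, which is why the compute nodes themselves can afford a more open relationship with each other.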
There’s a pretty well-established security model for how clusters run computational jobs already. In the military installations that we’ve talked to, they actually hook them up in separate rooms. They’re not plugged into any kind of public network at all.
Scott: I would assume that just for performance reasons if nothing else, you don’t want your cluster sharing a network, and it’s also probably on a much higher speed network than the rest of your organization. I would assume that it’s pretty isolated.
Ryan: Here’s where it gets really interesting. The thing that my group is really pushing for is moving high-performance computing into more mainstream areas. We don’t focus on share-shift of moving people from Linux to Windows in the existing market; our job is really to grow the HPC market. And we expect that there’s going to be a little bit of share-shift, and that’s great for us, but we’re really focused on growing that market in new spaces.
One of the areas that’s very interesting is departmental and workgroup clusters. Those people may set up the whole cluster on a public network. In those systems we need to make sure that that system is secure by default when it comes up.
Scott: Do you ever see a day where every computer in the organization is potentially also part of a high performance cluster? Or do you think there will always be dedicated machines that are separate from machines running the user-workloads?
Ryan: Are you talking about a situation where my computer in front of me would actually be plugged into a cluster on the back-end?
Scott: Right. Kind of like the SETI@home thing, where my machine is idle, so it starts working on this job.
Ryan: In some cases, I do see that happening. People call it “cycle stealing,” or “workstation clusters.”
Those systems only work well for a particular set of computing problems, though. For computational problems that have a high number of cycles of computation per byte of data, those kinds of clusters work great. So SETI@home is a great example; there’s also a protein folding project which works this way. When your screensaver kicks on, you start doing your computation as a part of the screensaver. There’s a whole set of other problems that don’t actually work that well. Seismic processing is a great example of a problem that doesn’t work very well, because it has a very low number of cycles of computation per byte of data. I talked to one of the companies that does this kind of work on a 10,000-node cluster, and when they get customer data in from an oil company, it’s a whole pallet of tapes.
The whole thing gets loaded into a gigantic tape array, and then they farm it out across a 10,000-node cluster for computational analysis. They’re dealing with just ridiculous amounts of data, and that stuff wouldn’t work well with cycle stealing. When you really get these systems working in the enterprise phase, you’ve got to figure out how to make sure that data that’s residing in London doesn’t get shuffled off to China for computation, because it’s so inefficient to move it all the way around the world. You may actually spend more time moving data than you do computing it.
Scott: So, in other words, ‘high volume of data, low computation per byte’ is bad for that cycle stealing scenario. ‘Low volume of data, high computation per byte’ is good for cycle stealing, because you ship a little bit of data out to a machine, and it just crunches on it for quite some time and submits, again, a fairly small result set back.
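That rule of thumb can be made concrete with a back-of-the-envelope model. The numbers below are illustrative assumptions, not measured figures: a 1 MB/s link to volunteer machines, a 3 GHz remote CPU, a SETI-sized work unit of a few hundred kilobytes with millions of cycles per byte, and a seismic-style terabyte with only about a hundred cycles per byte.

```python
def worth_shipping(data_bytes, cycles_per_byte, link_bytes_per_s, cpu_hz):
    """Crude model: shipping work to a remote machine pays off only if
    the compute time dwarfs the time spent moving the data."""
    transfer_s = data_bytes / link_bytes_per_s
    compute_s = data_bytes * cycles_per_byte / cpu_hz
    return compute_s > transfer_s

# SETI-style: tiny chunk, enormous computation per byte -> ship it.
seti_ok = worth_shipping(350e3, 1e7, 1e6, 3e9)

# Seismic-style: a terabyte with few cycles per byte -> keep it local.
seismic_ok = worth_shipping(1e12, 100, 1e6, 3e9)
```

Under these assumptions the SETI-style job spends seconds in transfer and many minutes in computation, while the seismic-style job would spend far longer moving data than crunching it, which matches Ryan’s point that you can spend more time moving data than computing on it.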
Ryan: Yes. Here’s what I think is a more interesting direction, in terms of your question about what’s happening with my PC and my cluster in the background. We’ve hit a point where CPUs aren’t getting faster any longer. 4GHz is pretty fast, and the amount of heat generated by a processor running at 4GHz is enormous. The reason we can’t go faster is that they just get too hot.
So the silicon vendors have started moving in a multi-core direction, and so we see four cores right now, we’re going to see eight cores, and pretty soon a bazillion cores are going to be on these machines.
And this is an issue for the software industry as well as for Microsoft. What’s been so great about the software industry is, as processors get faster, we can add more and more features, right? The Linux kernel gets bigger, the Windows operating system gets bigger, we have richer features, we can do more sophisticated stuff than ever before. But now that processors have hit kind of their Gigahertz speed limit, we can’t just willy-nilly add features into our products any longer.
The bad thing that could happen is that the software industry would become very similar to the white goods industry. You don’t buy a new washing machine because ooh wow, this one has this great new spin cycle! No, you wait until your washing machine breaks, and then you buy another one.
What has to happen in the software industry is that all software development needs to start moving in the direction of being aware of concurrency.
That concurrency could be across multiple cores on your local machine, or it could be across multiple machines running in a cluster. And so, the thing that I predict we’re going to see in the future, is that programming models and the kind of programs that are being written will not only be better at taking advantage of multiple cores, but those programs will be better able to run on a cluster as well.
Scott: I’m sure that this is an area of just absolutely intense research, but right now, it’s just too hard for the average developer to write a good parallel program.
It’s too hard to write a multi-threaded program; it’s too hard to write a program that’s going to run on a cluster. Event-driven programming became easy when it was just something the languages natively knew about. And right now, the languages don’t really natively know about threads, and concurrency, and all that stuff. It’s done through libraries; it’s not really done through core concepts in the language. You have to write the thread synchronization logic yourself, and you can’t really just declare a synchronized integer, for example.
What do you see as the future of the tools, so that it’s not so insanely difficult to do it right? Do you see the way it’s done, with high performance computing, where you’re sending off batches of work and getting results back, do you see that as being a viable model for a multi-core system? Or do you think there’s some new paradigm on the horizon that might unify both?
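The ‘synchronized integer’ Scott wishes for doesn’t exist in today’s mainstream languages; the locking is wired up by hand through libraries. A minimal, hypothetical sketch of what that boilerplate looks like:

```python
import threading

class Counter:
    """No mainstream language lets you declare a 'synchronized int';
    the developer writes the locking by hand, as sketched here."""
    def __init__(self):
        self._value = 0
        self._lock = threading.Lock()

    def increment(self):
        with self._lock:  # omit this and concurrent updates can be lost
            self._value += 1

    def value(self):
        with self._lock:
            return self._value

counter = Counter()

def worker():
    for _ in range(10_000):
        counter.increment()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter.value())  # 40000, correct only because of the explicit lock
```

All of the concurrency safety lives in library calls the programmer must remember to make, which is exactly the gap Scott is describing.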
Ryan: For general purpose programming, it’s the latter. We start moving to new programming models; not radically new programming models, because we can’t retrain everybody in totally new programming constructs, but you’ll see subtle shifts. Languages like C# will continue to evolve to better understand concurrency, and it will become simpler for developers.
I think we have to get there, because I agree with you, trying to teach average software developers about how to do concurrent programming just isn’t going to work, as it stands right now. And if you think that’s hard, you should look at the MPI programming that people write. This is akin to Assembly level programming — it’s just outrageously complicated.
You get this 500-node cluster, and the communications are not handled automatically. One node switches to receive mode, the other node switches to send mode, and then it sends data to the machine that’s in receive mode. If they both switch to send simultaneously, or they both switch to receive simultaneously, it deadlocks! And your whole 500-node job halts.
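The deadlock Ryan describes, and the standard way around it, can be sketched without a real cluster. The toy SyncChannel below stands in for a blocking send/receive pair; it is an illustration of the pattern, not actual MPI code. If both ranks called send first, each would block forever waiting for the other to receive, so the exchange is ordered by rank:

```python
import threading
import queue

class SyncChannel:
    """Rendezvous channel: send() blocks until the peer calls recv(),
    mimicking a blocking send/receive pair (illustration only, not MPI)."""
    def __init__(self):
        self._data = queue.Queue(maxsize=1)
        self._ack = queue.Queue(maxsize=1)

    def send(self, value):
        self._data.put(value)
        self._ack.get()  # block until the receiver has taken the value

    def recv(self):
        value = self._data.get()
        self._ack.put(None)  # release the blocked sender
        return value

def exchange(rank, peer_rank, to_peer, from_peer, payload, results):
    # Deadlock avoidance: order the operations by rank, so the two sides
    # never both block in send (or both in recv) at the same time.
    if rank < peer_rank:
        to_peer.send(payload)
        results[rank] = from_peer.recv()
    else:
        results[rank] = from_peer.recv()
        to_peer.send(payload)

ch01, ch10 = SyncChannel(), SyncChannel()  # rank 0 -> 1 and rank 1 -> 0 links
results = {}
t0 = threading.Thread(target=exchange, args=(0, 1, ch01, ch10, "data-from-0", results))
t1 = threading.Thread(target=exchange, args=(1, 0, ch10, ch01, "data-from-1", results))
t0.start(); t1.start(); t0.join(); t1.join()
print(results[0], results[1])  # data-from-1 data-from-0
```

On a real 500-node job the same discipline has to hold across every pair of communicating ranks, which is why Ryan compares it to assembly-level programming.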
Scott: Wow. Getting back to some of the differences between closed source and open source, to follow up on your statement that “you can’t just change everything because it’s hard to train everybody to do things differently,” open source seems to move very incrementally, because of the way it’s built, because things are submitted as 100 lines of code to a mailing list where they get scrutinized, and maybe it makes it in and maybe it doesn’t. And a lot of times, things are worked on in fairly small pieces.
It seems like it’s fairly impossible for an open source project to make a radical leap. There seem to only be two ways that happens: one is that somebody forks the source and does a fairly major change to it, or a whole new thing comes out of nowhere; you know, Ruby on Rails shows up out of nowhere, and it’s all the rage for building web stuff. People look at it, and they have to decide whether to stick with PHP or to learn a whole new thing.
In closed source, it seems to be a little different. It seems that there’s a lot higher barrier to just releasing something completely new that has no connection to an existing product. There’s a lower barrier to taking an existing product and maybe doing something a little bit more radical with it. Like in Office, they came out with the Ribbon, a whole new UI paradigm. I don’t know that something like OpenOffice would have ever evolved something like that.
Vista is obviously built on the same code base as previous operating systems, but there were some pretty radical departures there that had some impacts on application compatibility and issues like that. And again, it would be kind of a stretch for open source to do that. But on the other hand, you might come out with a whole new shell that competes with Gnome or KDE or their peers.
Talk about that a little bit. Do you think I’m onto something, or do you see things that run counter to my observation?
Ryan: I think it’s a very good point, and it’s something that I’ve also thought about and wondered about. Part of what’s powerful about the open source community is that it is incremental. And as they make their incremental changes, if they’re good, people use them; if they don’t, nobody uses them, and they eventually are discarded. A good example is user interface models. We have one, Apple has one, and those are very popular user interface models.
The X-Windows model was never really that popular. It’s popular among a particular group of people, but not like the Macintosh and Windows paradigms are. And they’ve continually competed and evolved with each other. And I think you see Gnome and KDE coming in and pulling the best of what they like out of those models and evolving that, and maybe keeping a little bit of some of the X-Windows stuff that they like as well.
Those are highly evolutionary systems. Things like OpenOffice can really pick out what has been successful, like Office, for example, and take that and put it into their products. At the same time, and I think this will be true for a lot of commercial software companies, because we spend so much time trying to figure out what people would pay for, if we think something is going to be paid for, we’ll invest the money there.
Scott: It seems like there’s almost a pressure not to do things that are incremental, because unless it’s a big enough delta between this version and the previous version, people won’t pay for it.
Ryan: That’s right. We’re constantly in this business of having to generate value on a huge scale, because it’s not like we can build something that’s interesting for a hundred or a thousand people. It has to be interesting for millions of people. And so that’s why we can spend so much time figuring out the next big shift that’s going to come along.
Sometimes that means that we look at stuff that’s been popular in niche markets and figure out how to make that go really big. And sometimes it’s radical new things. In some ways, for example, you can say that the original Windows NT operating system was an evolution of traditional systems, based on work that was done at DEC, and work that was done at some other software companies, to create this new kind of kernel.
But NT was a fascinating concept. If we go back and think about it, the idea was that you could write a kernel that would span from the smallest laptop computer up to gigantic SMP systems.
Scott: The whole Hardware Abstraction Layer was an unproven theory and a pretty radical idea.
Ryan: Right, and it’s now the foundation for Vista.
You know, HPC is a really interesting market to study from an open source point of view, because it’s a full ecosystem, and it’s still pretty heavily driven by the academic community, which is, by nature, fairly open source-oriented.
We fund work at something like 10 universities and research labs around the world, like Jiaotong University in China, and Pacific Northwest National Labs, and the University of Tennessee where Jack Dongarra does all of his HPC research. And we do that because it’s still critical to really be working well with the academic community, in order to be successful in this market.
Scott: They’re the ones running the climate simulations, things like that. They’re the ones who, largely, are running the kind of models that need high-performance computing. One of the things that’s kind of interesting, though it’s not really high-performance computing, is that Amazon has in beta what they call their Elastic Compute Cloud.
Ryan: Yeah. My friend Marvin Theimer is an architect at Amazon.
Scott: Do you see a future for hosted high-performance computing? Do you see a future for maybe ‘high-performance computing lite,’ where it is more tailored to the kind of jobs where you’re not moving enormous quantities of data, the computation per byte is higher, and you can have simplified communication between the nodes?
I wouldn’t have to have 500 nodes, but I could push a job up to some hosted environment that could put 10 nodes or 10,000 nodes on it. Does that kind of thing exist today?
Ryan: It does exist today. Sun has been doing this for a few years now, at least two. They have this model where they charge on a CPU-hour basis. We talk to those guys, and they’re very open about what they’ve been up to, or at least they used to be. And the way that those jobs work is that you actually ship all the disks over there.
Jim Gray did some really interesting research here at Microsoft. He computed how much it costs to move a certain amount of data and how long it would take to move that much data over a particular kind of pipe, because the question is: how long does it take you to get a terabyte of data from one site to another?
Scott: DVDs and FedEx have an incredibly high bandwidth.
Ryan: Yeah. That was the result of the research, years ago: “Just put it on DVDs and ship it that way.” And so what Sun does is that you actually ship them a cabinet of hard drives, and they plug that into the cluster and use it to compute.
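The arithmetic behind ‘ship the disks’ is simple enough to sketch. The link speeds and 24-hour courier window below are illustrative assumptions in the spirit of Gray’s analysis, not his published numbers:

```python
TB = 1e12  # bytes

def transfer_hours(num_bytes, link_mbps):
    """Hours to push num_bytes through a link of link_mbps megabits/second."""
    return num_bytes * 8 / (link_mbps * 1e6) / 3600

for mbps in (10, 100, 1000):
    print(f"1 TB over {mbps:>4} Mbps: {transfer_hours(TB, mbps):7.1f} hours")

# An overnight courier takes ~24 hours no matter how much you put in the
# box, so the effective bandwidth scales with the number of disks shipped.
disks, ship_hours = 10, 24  # ten 1 TB drives, one day in transit
effective_mbps = disks * TB * 8 / (ship_hours * 3600) / 1e6
print(f"10 x 1 TB disks overnight: ~{effective_mbps:.0f} Mbps effective")
```

Ten terabytes shipped overnight works out to roughly 900 Mbps of effective bandwidth, which at the time beat most wide-area links by a wide margin; that is the result Ryan is summarizing.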
Your point is very good though, that a lot of customers will typically do small computational jobs and occasionally they’ll need a really big cluster. I think that is true going forward, that we’ll see more and more of that, and people will occasionally need to run a big job.
If part of what we believe is true, that this revolution will happen in workgroup and departmental clusters, it’s because workgroups and departments will need those smaller clusters for some of their work, but every now and then they’re going to want to hand jobs to a huge cluster for computation.
And that may be handled using grid standards from the Open Grid Forum, or by just handing the job off to a place like Amazon or Sun.
In closing on the discussion of open source, the HPC market, and the commercial software we’re building for HPC: I’m really hoping that our group at Microsoft will help pave the way for how commercial groups at Microsoft can better work with open source projects.
Scott: Thanks, Ryan, for all your insight. This was a great conversation.