Actually, this is a really good example of how even robust vulnerability testing can fail to find all issues. Quoting the article:
<BLOCKQUOTE><font size="1" face="Verdana, Arial, Helvetica">quote:</font><HR>Lipner confirmed that the faulty software code was created years ago and included in every successive generation of Windows software without programmers ever realizing it was so seriously flawed...<HR></BLOCKQUOTE>
I have done, and still do, a lot of vulnerability testing as well as simulated "hack attacks," and I can tell you that even in programs much simpler (relatively speaking) than an operating system, certain flaws can remain completely unknown even after a full suite of vulnerability tests, and even when those tests have been run over an extended period rather than in just one shot. Also quoting the article:
<BLOCKQUOTE><font size="1" face="Verdana, Arial, Helvetica">quote:</font><HR>"I would have hoped this would have been caught," Lipner said. "Clearly it's one of those things we'll be looking at."<HR></BLOCKQUOTE>
As we all know as testers, some things simply get missed. Sometimes mistakes are made, or foresight that, in hindsight, should perhaps have been applied was not. Sometimes the flaws are not obvious, or do not manifest until long after their creation because of other intervening circumstances. Sometimes new code interacts with legacy code in an unforeseen manner that causes problems, even when neither the new code nor the legacy code would cause any problem on its own. Any tester who has done vulnerability or security testing to any degree is all too cognizant of these unfortunate facts, particularly once configuration dependencies are thrown into the mix.
One bright spot is that Microsoft's TechNet Security site clearly delineates problems and how to fix them. I also find it an instructive way to teach new testers to think about security issues. In fact, when demonstrating techniques to new testers, I like to keep a Windows system that does not have the patches installed, show them how the exploits are carried out, and then discuss what they can do or consider when testing the vulnerable areas of their own applications.
This particular issue is also an interesting example of how such problems can surface. In this case, the Windows Script (JScript) engine processes a script but does not correctly size a buffer during a memory operation: a seemingly innocuous mistake, yet one that leads to a systemic vulnerability. Learning how these vulnerabilities manifest in one application can give you ideas about how they might manifest in another. This is an exercise I always recommend testers undertake, not only for their own edification, but also to become more effective testers for their organizations.