Kevin and I had a paper accepted at ISSTA, a software analysis and testing conference. It's been a very interesting experience, primarily because we're outsiders - the analysis we do in Dyninst is fairly basic, and we're not strongly associated with the testing community. The short summary? "Software testing is hard" :)
I'll start with the keynotes (there were two of them). The first was by Laurie Hendren, who is known for her involvement in aspect-oriented Java. She spoke on testing programs written in Matlab, a numerical computation language. In short, Matlab is awful, and her work primarily focused on adding such fundamental things as, say, type checking to a language that allows on-the-fly variable creation. Interesting stuff overall. I came away with an interesting hypothesis, though: code writers (not necessarily programmers) vastly prefer such "sloppy" languages because they let the writer write faster. Realistically, time spent defining variables and interfaces, or adding assertions and requirements, is time wasted from a "getting it done" perspective. What that extra effort does help with is solving problems - but you only hit those problems after you finish writing the initial code. So my hypothesis is this: sloppy languages let you complete a task faster (even if incorrectly), so they gain users, and that popularity feeds on itself. Of course, it never works out in the long term, because you eventually have to debug, and without the tools we are used to, debugging is hell. The language stays popular anyway.
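To make the failure mode concrete, here is a minimal sketch of the kind of bug that on-the-fly variable creation invites. I'm using Python rather than Matlab (both create variables on first assignment), and the function and names are my own illustration, not from the talk:

```python
def running_total(values):
    total = 0
    for v in values:
        # Typo: "totl" silently creates a brand-new variable instead of
        # raising an error, because no declaration was ever required.
        totl = total + v
    return total  # always returns 0; the bug only shows up at debug time

print(running_total([1, 2, 3]))  # prints 0, not 6
```

A language with declared variables (or even a basic static checker) rejects this at write time, which is exactly the kind of check her work retrofits onto Matlab.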
The second keynote was by John Regehr, who focused on testing embedded systems. He primarily made the case for why this is an area ripe for exploration, which it is - massively distributed, low resource, and lacking good tools. Although I'm not really a testing person, it was very interesting, and hopefully there will be further exploration.
On to the talks. A major focus of testing research is automatically generating tests for programs, which in general relies on some amount of symbolic execution to find program inputs that exercise as much of the program as possible. Here I ran into my greatest frustration with testing and analysis: scale and applicability. I understand scale and complexity - trust me, I do - but it is still frustrating when I see good results on small Java programs when I would kill for a good tool on large C++ programs.
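As a rough illustration of the core idea (my own sketch, not from any particular talk): a symbolic executor treats inputs as symbols, records the branch conditions along a path, and asks a constraint solver for a concrete input that satisfies them. Using the Z3 solver's Python bindings, with a made-up one-branch program:

```python
from z3 import Int, Solver, sat

# Program under test, in spirit:
#   void f(int x) { if (x * 3 + 1 == 28) bug(); }
# Symbolic execution tracks x as a symbol and records the path
# condition required to reach the buggy branch.
x = Int("x")
path_condition = x * 3 + 1 == 28

s = Solver()
s.add(path_condition)
if s.check() == sat:
    # The model is a concrete test input that drives execution
    # down the branch of interest.
    print("triggering input:", s.model()[x])  # x = 9
```

Real tools do this for every feasible path, which is precisely where scale bites: the number of paths explodes with program size, hence the small-Java-program results.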
As I write this, the speaker is presenting a survey of real programmers using automated debugging tools on real applications, and the results were not encouraging: the tools didn't match how the programmers wanted to work. The open question is this: do we need to educate developers, or redesign the tools? As an Apple fan, I'm leaning towards the latter - we in the computer science field need to match our tools to how people want to work, rather than forcing people into our mold.