Tuesday, January 3, 2012

"QA" vs. "QE"

I've been doing software testing for close to twenty years now. Throughout my career, I've generally had "QA" somewhere in my title, followed by the word "engineer" or "manager". Once it was preceded by "Director", but that was one of those puffed-up start-up titles that are intended to impress the rubes (and maybe myself) rather than to really mean anything.

So, for example, I might be a Senior QA Engineer or a QA Manager or what-have-you -- "QA" meaning "Quality Assurance", of course.

I left the MathWorks (the MATLAB folks) in 1994, after having three different titles with "QA" in them. Some time after that, they changed the title of the group, and the identifier common to its practitioners, from "QA" to "QE" ("Quality Engineering"). I understand that that nomenclature has gained some traction out there in the wider world, too.

I really don't like that terminology.

I have to admit, part of it is just a negative first reaction. I have negative first reactions in a big way in general, and I definitely had one to this name change in particular. "OK, you're going to change what you call my job? There has to be some ulterior motive. And $%^# you, by the way," says my inner voice. (My inner voice is a real %^#$*@, in case you need that explanation.) If I slow down and try to think it through a little more, changing the title of the function from "QA" to "QE" smacks of the old joke about the trash guy calling himself a "sanitation engineer". The first step goes "everybody knows that people with low-status jobs try to bling up their titles to impress people, and everybody sees right through it." The second step goes "why would people want to start calling the function 'Quality Engineering' unless they want to ensure that it remains perceived as a junior-varsity discipline? Everyone is going to see the blinging-up of the title as exactly the same thing the trash guy is doing."

It doesn't help that I have to hear people outside of software testing trot out some condescending-sounding argument like "Well, now that you quality guys are doing some real engineering, you should have a real-engineering title." Excuse me, I've actually been doing hypothesis testing and applied philosophy all this time, says my inner voice, so take your "real engineering" and ... well, you can probably imagine the general idea of what my inner voice says.

I mean, really. It's bad enough that almost everyone in software thinks they know what QA is supposed to do, and when we're not doing exactly what they think we should be doing, they assume it's some kind of competence or learning-curve problem and "explain" their (often incomplete, inefficient, and/or just plain screwed-up) idea in a condescending way, as if they're helping me out. OK, so if you're not going to let me own the job I'm doing, at least let me own its $%^&*^ title.

Monday, April 26, 2010

More on the Challenger Slide at Atlantis

The Challenger Slide is a pair of side-by-side water slides. The top is about 60 feet up. The slide is not straight; it initially drops at, oh, maybe 45 degrees, then it flattens out briefly, then it pitches back down to 45 degrees, and then it flattens out again at the bottom. There is an optical sensor across the slide a few feet from where you start; breaking it starts the clock. There is another sensor across the slide at the bottom; breaking it stops the clock.

Usually the people who work there want you to sit down in the slide and start by grabbing the sides of the slide, pulling yourself along, and lying back to let gravity take over. If you're a full-grown adult, pull as hard as you can, help by sliding your butt the right way, and avoid hitting the sides of the slide on the way down, and you can get a little under five seconds with this technique. Most adults were getting between 5.0 and 5.4 seconds. Kids generally clocked in closer to 6 seconds.

I tried the "two pull" strategy: I sat back as far as I could on the top of the slide, pulled a little to get myself started, and then I grabbed the sides and pulled myself again (before my feet got to the starting optical sensor). A real good second pull followed by good and lucky technique down the slide allowed me to hit 4.6 seconds regularly.

I was all kinds of proud of my 4.6 seconds until the afternoon I met a group of guys from Vermont. The Atlantis personnel were letting these guys go down more aggressively, because they (the Vermonters) were sober, respectful, and probably pushed the envelope a little at a time. By the time I showed up and joined them, they were being allowed to brace their feet on the back wall of the slide, hold on to the wall on each side of the slide entrance, and start by pushing hard off the feet and pulling hard with the hands. This start lets you actually catch a little air before you land on the slide. The times these guys were getting ranged from 4.35 seconds, for a fellow who was in pretty good shape for his apparent sixty or so years of age, to 4.15 seconds for the sixteen-year-old in their group.

I joined them and we started egging each other on. I did manage 4.24 seconds with this aggressive starting technique, but then I slowed down (I was probably getting tired). One of the Vermonters popped a 3.93 while I was racing against him. He totally smoked me. It was excellent. We suspect that he pulled/jumped hard enough that his butt, or even his back, was what actually broke the top beam, instead of his feet as usual.

The people working there tell stories, at various levels of probable exaggeration, about the times they've gotten or seen. The young woman who was lifeguarding at the bottom of the slides while the Vermonters were in full form said the fastest she'd ever seen was 3.90. She and the guy working the top were certainly very impressed with the 3.93. Maybe that's the fastest they'd seen by a hotel guest rather than a local. An extremely athletic man working at the neighboring kid pool said he'd once gotten a 3.6-something, but that he had taken a running jump. I believe him, in part because he explained that, when you take a running jump, you have to be very careful to land straight on your butt and back, because twisting around and landing crooked really hurts and risks injury. There was a vague claim about someone getting close to three seconds flat, but I kind of have a hard time believing that. If I'm doing the math right, a mass would go down a frictionless 60-foot-high slide, pitched at 45 degrees throughout, in just under three seconds. The fact that there is friction on the slide, and it is not 45 degrees throughout, makes me believe that 3 seconds flat is just plain physically impossible. I think the curves and the friction much more than compensate for the fact that really aggressive jumping technique might give you 8 fps of speed along the slide by the time you break the top beam.
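For the curious, here's a rough sketch of that back-of-the-envelope calculation. All the numbers are my own guesses rather than official specs: a 60-foot vertical drop, a constant 45-degree pitch, no friction, and standard gravity.

    import math

    g = 32.17                  # gravitational acceleration, ft/s^2
    height = 60.0              # guessed vertical drop of the slide, ft
    theta = math.radians(45)   # guessed constant pitch

    # Length of the slide surface, and the acceleration along it
    length = height / math.sin(theta)   # about 84.9 ft
    accel = g * math.sin(theta)         # about 22.7 ft/s^2

    # Starting from rest: length = (1/2) * accel * t^2
    t_rest = math.sqrt(2 * length / accel)
    print(f"frictionless, from rest: {t_rest:.2f} s")    # ~2.73 s

    # Spotting the jumper 8 fps along the slide at the top beam:
    # length = v0 * t + (1/2) * accel * t^2, solved for t
    v0 = 8.0
    t_jump = (-v0 + math.sqrt(v0**2 + 2 * accel * length)) / accel
    print(f"frictionless, 8 fps start: {t_jump:.2f} s")  # ~2.40 s

Even spotting the jumper 8 fps, the frictionless best is about 2.4 seconds, so the real slide's friction and two flat sections would have to cost less than about 0.6 seconds for 3 flat to be reachable. I don't believe they're that cheap.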

Of course, I could be wrong about the pitch and height of the slide. But I'm pretty sure I'm pretty close. They make a big deal about the Leap of Faith slide being 60 feet high, and the Challenger is the same height.

Anyway, I'm still reasonably proud of my 4.24 seconds, and that random Vermont guy can be very proud of his 3.93.

Any inferences drawn about my inherently competitive nature, and my propensity to bring the geekitude even when on vacation, are probably justified.

--JMike

A software testing metaphor, or why I hate the literature so much

My father was (and is) a competent golfer. My brother was good enough to flirt with the bottom rung of pro status. I have always been terrible. I never learned how to control the swing. I whack way too hard at the ball. I don't have any "feel" for the club, or the swing. The rare times I make reasonably good contact, my typical ball flight is somewhere between a high fade and an outright slice.

With me so far?

In an effort to help me out, someone bought me a golf technique book by Ben Hogan. I really tried to read through it, think about what Hogan was trying to explain, practice, and so on. But it just didn't work.

I think Hogan's book failed to help me for one specific reason and one general reason. The specific reason is that Ben Hogan was a fairly short and very strong man whose main problem was a hook, so the grip, timing, and other techniques he used were primarily there to prevent a hook. The general reason is that I am so bad a golfer that I still have no idea what a good golf swing should feel like -- what is the best tempo, how should the club feel in my hands, what does it feel like to bring the club face squarely into contact with the ball, and so on. I also have no idea what it feels like to miss this ideal swing in any of the several different ways that I can miss. How can I tell that I am lunging my shoulders forward, when not only do I not really know where my shoulders should be, but I also don't really feel small differences in where my shoulders are?

A little while ago, we bought Wii Fit Plus. One of the things on it is a golf simulator. I doubt that it is particularly precise, and I suspect that it might not even be accurate. (Obligatory digression: the important difference between "accuracy" and "precision" is lost on many. My high school chemistry teacher John Belk taught it to me and I urge you to go research it. Maybe finish reading this post first.) But the point is that I think I am beginning to learn, in a very broad sense, what a reasonable golf swing feels like, and what it feels like to miss reasonableness in different ways. Maybe if I continue practicing with the Wii for a while, I might be able to go out to the real physical driving range and react properly to mistakes in my swing. The thing is, even when I get to this point, I am quite sure that Ben Hogan's book is not going to be the right book for me, because of our physical differences.

I got my hands on "Think Like a Grandmaster" by Kotov fairly early in my chess career. The first time I read it, it was clearly over my head. I put it down and didn't pick it back up for a long time. Now I can tell that I would benefit from it -- and in fact it would be one of the best books I could study at this point, if I had the time or the interest.

OK, let me bring this discussion over to software testing where it belongs.

One of the most interesting (and fraught) things about software testing and quality assurance is that you have to think "outside the box", and at many different levels, to provide information to the people who are trying to build a software product. Just for starters on why it's so fraught: I have been doing this job for at least fifteen years now, and have had some success at it, and I still do not know where "testing" ends and "quality assurance" begins. I think that some people think they know, but I have never been able to agree with any particular definition, or even to find the effort at definition useful.

But anyway, in general, what you usually do as a software QA engineer (whether you know you are doing this or not) is conduct experiments. A hypothesis comes to mind: this line of code is not doing what it's supposed to, this component is not doing what we say it's doing, this subsystem is written in a way that is insufficiently generalizable, this feature is not delivering enough value, this way we're doing work is inefficient. You design and execute an experiment. The result of the experiment is increased knowledge about the product, and this increased knowledge is the primary output of your work. There are many side effects: the line of code is fixed to do what it's supposed to, the component (or its documentation) is changed so that it is doing what we say it is doing, the subsystem is refactored, the feature's requirements are re-thought, the process is improved. But the doing of those things is usually someone else's job. The QA engineer's responsibility usually ends with the discovery and analysis of the problem.
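To make that shape concrete, here's a toy sketch. The parse_price function, its "spec", and the inputs are entirely made up by me for illustration; the only point is the hypothesis-prediction-experiment structure.

    # Hypothesis: parse_price() does not handle a trailing currency symbol,
    # even though the (made-up) spec says "19.99$" is legal input.

    def parse_price(text):
        # Supposed to accept "$19.99", "19.99", and "19.99$".
        return float(text.strip().lstrip("$"))

    def run_experiment():
        # Prediction if the hypothesis is true: the call below either
        # blows up or returns the wrong number.
        try:
            result = parse_price("19.99$")
        except ValueError:
            return "hypothesis confirmed: trailing '$' raises ValueError"
        if result != 19.99:
            return f"hypothesis confirmed: got {result!r} instead of 19.99"
        return "hypothesis refuted: trailing '$' is handled fine"

    if __name__ == "__main__":
        print(run_experiment())

The output of the experiment is knowledge -- in this toy case, that the trailing symbol blows up. Actually fixing parse_price() is, per the above, usually somebody else's job.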

Now let me bring up something that I have just hated for years. There are hundreds of books out there that tell you how to do software QA. There are organizations that will, for the right price, sit you in classes that indoctrinate you in how to think about the QA job, hand you a certificate, and encourage you to use it to impress the rubes and convince yourself that you're Doing Something Professional. But -- and here's my point, finally -- most of those books and courses are like Ben Hogan's book, only worse. Not only does most software QA advice apply only if your situation is enough like the author's, but it's also much harder than in golf to determine whether the author is really someone whose advice you ought to be taking. Going back to the golf books for a moment: if you read a book by Ben Hogan, or Jack Nicklaus, or Tiger Woods, you'd probably start off giving the author the benefit of the doubt and assume his advice was worthwhile until proven otherwise. If you read a book by someone whose name you did not know, but who had coached several professional golfers, you might think that the advice was probably going to be good, but you might want to find out whether the golfers' success was due to their own abilities or the author's coaching before you really put faith in the advice. In software, there are so many more factors and people involved in a product's success than there are in golf that, even if an author was the QA Director of a hugely successful project, it is very difficult to determine whether the project succeeded because of, or in spite of, that person's contribution.

I have read dozens of books that were like bad versions of Ben Hogan's golf book: of uncertain value, and almost definitely not relevant to my situation.

But I have also discovered some software testing/QA books that are like "Think Like a Grandmaster", written by people who are like Grandmaster Kotov. These people generally identify with a crowd calling itself the "Context-Driven School of Software Testing". See Satisfice, the site run by a guy named James Bach, for example. I mean this comparison to be not just glib, but also deep: most of what Bach talks about is "meta" -- more about the thought process than about any specific thing you're doing. The book "Lessons Learned in Software Testing" (by Bach, Cem Kaner, and Bret Pettichord) is a lot like "Think Like a Grandmaster" -- and it is therefore hugely more useful and relevant than the 95% of the literature that is like bad versions of Ben Hogan's golf book. It was a real revelation to run across the Context-Driven crowd.

I have a whole lot more to say about software testing and QA -- both in general and in many specifics. It's safe to say I have a lot more to say than I can actually say. I should have started blogging eight or ten years ago, but I was always writer's-blocked. So I'm hoping that the posts labeled "softwaretesting" will mainly serve me as therapy. Maybe I'll eventually be able to say something clear and useful as well!

Back to the grind

I just got back from a week's vacation with the family at Atlantis, near Nassau in the Bahamas.

This is the third time we've gone there in the winter or spring. So that's a vote of confidence in the place.

But if we take an "escape to somewhere warm" vacation next spring break, we're going to do a little more comparison shopping. We're probably going to look at:
  • All-inclusive Caribbean resorts - in the hopes of spending about the same amount of money as we would at Atlantis and having either a better, or at least a different, experience.
  • Southeastern U.S. beach resort hotels - in the hopes of having almost as good an experience as at Atlantis for a lot less money.
I want to make it clear that I had a perfectly good time at Atlantis. I especially liked the Challenger Slide, where the 4.24 seconds I achieved this time beat my previous best by more than half a second -- but wasn't even in the top three times recorded that day! I'll get into this in a little more detail if anyone asks (and probably even if nobody asks).

So now it's back to the grind.