
Friday, July 27, 2018

Testing Encryption - 3 years of Dan Boneh's Online Cryptography Course

Three years ago in July, I completed Dan Boneh's online cryptography course, Coursera's Cryptography I, with distinction.  Since then, I've had the opportunity to use and test cryptographic systems at work and for hobbies.  Here are a few lessons learned while testing encryption.

I have found my fair share of bugs in the crypto we chose to use at work.  I've gotten into a routine when testing encryption used for message authentication:
  • Test the same plaintext multiple times.  Does the MAC need to be different each time?  How much of the MAC changes each time?  It can help to explore the data your hashing function spits out, as it can tell you how your hash function does what it does.
  • Replay it.  How can a user abuse identical MAC'd data if they replay it at a later date?  For a different user?  Can you add items to the plaintext that will allow you to validate not only the data but the source or timeframe as well?
  • Ensure your hashes are detecting changes. Is your MAC rejected if you change the data at various places within the message?
  • Rotate the key. Do you need a hash to survive a key change?  Usually you can just regenerate the data and re-MAC it, so figure out if you really need to use MACs over long lifetimes.  They're easy to compute.
  • Generate a bunch at once.  Is performance an issue with the service?  Most hashes are built for speed, but is yours?
For each of these failure modes, I'm looking mostly for hints of weakness.  I'm expecting pseudo-random noise, but how does my brain distinguish that from almost random noise?
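
To make the first two checks concrete, here is a minimal Python sketch of how I'd poke at a MAC, assuming an HMAC-SHA256 construction; the key and message are made up for illustration:

    import hashlib
    import hmac

    KEY = b"not-a-real-key"  # hypothetical key, for illustration only

    def mac(message: bytes) -> bytes:
        """Compute an HMAC-SHA256 tag over the message."""
        return hmac.new(KEY, message, hashlib.sha256).digest()

    msg = b'{"user": "alice", "amount": 100}'  # made-up payload

    # Same plaintext, same key: the tag is deterministic, so a replayed
    # message carries an identical and still-valid MAC.
    assert mac(msg) == mac(msg)

    # Flip a single byte anywhere in the message: verification must fail.
    tampered = bytearray(msg)
    tampered[5] ^= 0x01
    assert not hmac.compare_digest(mac(msg), mac(bytes(tampered)))

    # Folding a timestamp or user ID into the signed data narrows the replay window.
    print(mac(msg + b"|2018-07-27T12:00:00Z|alice").hex())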

There are many times when you need to generate a unique but random value and don't have the space to use a GUID.  To evaluate whether a solution will be "unique enough", check out the Birthday problem Wikipedia page, and its table of probabilities in particular.  Find out how many possible values exist (9 numeric digits = 10^9 ~= 2^30).  On the table, use that number as the hash space size and compare it against the number of times you'll be generating the value.  This will tell you if the algorithm you want to use is sufficient.  If you are making long-term IDs that can only be created once, you obviously want the probability of collision to be extremely low.  If you can recover from a collision by creating a new transaction fairly readily, you might not need as much assurance.  I've used this to help drive a decision to increase a unique token size from 13 to 40 characters, to guide switching from SQL auto-numbers to random digits to hide transaction volumes, and to ensure internal transaction IDs are unique enough to support troubleshooting and reporting.
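
If the Wikipedia table isn't handy, the standard approximation p ≈ 1 - e^(-n(n-1)/(2N)) is easy to compute directly.  A minimal Python sketch, using the 10^9-value space from the example above and a few made-up generation volumes:

    import math

    def collision_probability(n_draws: int, space_size: int) -> float:
        """Approximate chance of at least one collision when drawing
        n_draws values uniformly at random from space_size possibilities."""
        return 1 - math.exp(-n_draws * (n_draws - 1) / (2 * space_size))

    SPACE = 10 ** 9  # 9 numeric digits, roughly 2^30 possible values
    for n in (1_000, 10_000, 100_000, 1_000_000):
        print(f"{n:>9} IDs -> {collision_probability(n, SPACE):.4%} chance of a collision")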

Time and again, the past three years have taught me that cryptography must be easy to use before it will be used widely.  I've stayed with Signal for text messaging because it just works.  I can invite friends and not be embarrassed by its user interface.  It doesn't tick all the boxes (anonymity is an issue, since it's a centralized solution), but it has enough features to be useful and few shortcomings.  This is the key to widespread adoption of encryption for securing communications.  Since Snowden revealed the extent of the NSA's data collection capability, sites everywhere have switched on HTTPS through Let's Encrypt.  Learning more about each implementation of SSH and TLS in the course was both informative and daunting.  I was anxious to get HTTPS enabled without rehosting the site on my own.  In early 2018, Blogger added the ability to do just that through Let's Encrypt.  It requires zero configuration once toggled on.  I can't sing its praises enough.  The content of this blog isn't exactly revolutionary, but this little move toward a private and authentic web helps us all.

Dan Boneh's Cryptography course continues to inform my testing.  The core lesson still applies: "Never roll your own cryptography."  And the second is how fragile these constructs are.  Randomness is only random enough given the time constraints.  Secure is only secure enough for this defined application.  Every proof in the course is only as good as our understanding of the math, and every implementation is vulnerable at the hardware, software, and user layers.  In spite of this, it continues to work because we test it and prove it hasn't broken yet.  I'm looking forward to another three years of picking it apart.

Tuesday, June 12, 2018

Quotes from Dan Kaminsky's Keynote at DEF CON China


Above is Dan Kaminsky's keynote at the inaugural DEF CON China.  It was nominally about Spectre and Meltdown, and I thought it was immediately applicable to testing at all levels.  Here are some moments that jumped out at me:

On Context

"There's a problem where we talk about hacking in terms of only software...What does hacking look like when it has nothing to do with software." 1:55

"But let's keep digging." Throughout, but especially 5:40

"Actual physics encourages 60 frames per second. I did not expect to find anything close to this when I started digging into the number 60...This might be correct, this might not be. And that is a part of hacking too." 6:10

"Stay intellectually honest as go through these deep dives. Understand really you are operating from ignorance. That's actually your strong point. You don't know why the thing is doing what it is doing...Have some humility as you explore, but also explore." 7:40

"We really really do not like having microprocessor flaws...and so we make sure where the right bits come in, the right bits come out. Time has not been part of the equation...Security [re: Specter/Meltdown] has been made to depend on an undefined element. Context matters." 15:00

"Are two computers doing the same thing?...There is not a right answer to that. There is no one context. A huge amount of what we do in hacking...is we play contexts of one another." 17:50

[Re: Spectre and Meltdown] "These attackers changed time which in this context is not defined to exist...Fast and slow...means nothing to the chip but it means everything to the users, to the administrators, to the security models..." 21:00

"Look for things people think don't matter. Look for the flawed assumptions...between how people think the system works and how it actually does." 35:00

"People think bug finding is purely a technical task. It is not because you are playing with people's assumptions...Understand the source and you'll find the destination." 37:05

"Our hardest problems in Security require alignment between how we build systems, and how we verify them. And our best solutions in technology require understanding the past, how we got here." 59:50

On Faulty Assumptions

"[Example of clocks running slow because power was not 60Hz] You could get cheap, and just use whatever is coming out of the wall, and assume it will never change. Just because you can doesn't mean you should...We'll just get it from the upstream." 4:15

"[Re: Spectre and Meltdown] We turned a stability boundary into a security boundary and hoped it would work. Spoiler alert: it did not work." 18:40

"We hope the design of our interesting architectures mean when we switch from one context to another, nothing is left over...[but] if you want two security domains, get two computers. You can do that. Computers are small now. [Extensive geeking out about tiny computers]" 23:10

"[RIM] made a really compelling argument that the iPhone was totally impossible, and their argument was incredibly compelling until the moment that Steve Jobs dropped an iPhone on the table..." 25:50

"If you don't care if your work affects the [other people working on the system], you're going to crash." 37:30

"What happens when you define your constraints incorrectly?... Vulnerabilities. ...At best, you get the wrong answer. Most commonly, you get undefined behavior which in the presence of hacking becomes redefinable behavior." 41:35

"It's important to realize that we are loosening the assumption that the developer knows what the system is supposed to do...Everyone who touches the computer is a little bit ignorant." 45:20

On Heuristics

"When you say the same thing, but you say it in a different time, sometimes you're not saying the same thing." 9:10

"Hackers are actually pretty well-behaved. When hackers crash code...it does really controlled things...changing smaller things from the computer's perspective that are bigger things from a human's perspective." 20:25

"Bugs aren't random because their sources aren't random." 35:25

"Hackers aren't modeling code...hackers are modeling the developers and thinking, 'What did [they] screw up?' [I would ask a team to] tell me how you think your system works...I would listen to what they didn't talk about. That was always where my first bugs came from." 35:45

On Bug Advocacy

"In twenty years...I have never seen stupid moralization fix anything...We're engineers. Sometimes things are going to fail." 10:30

"We have patched everything in case there's a security boundary. That doesn't actually mean there's a security boundary." 28:10

"Build your boundaries to what the actual security model is...Security that doesn't care about the rest of IT, is security that grows increasingly irrelevant." 33:20

"We're not, as hackers, able to break things. We're able to redefine them so they can't be broken in the first place." 59:25

On Automation

"The theorem provers didn't fail when they showed no leakage of information between contexts because the right bits went to the right places They just weren't being asked to prove these particular elements." 18:25

"All of our tools are incomplete. All of our tools are blind" 46:20

"Having kind of a fakey root environment seems weird, but it's kind of what we're doing with VMs, it's what we're doing with containers." 53:20

On Testing in the SDLC

"We do have cultural elements that block the integration of forward and reverse [engineering], and the primary thing we seem to do wrong is that we have aggressively separated development and testing, and it's biting us." 38:20

"[Re Penetration Testing]: Testing is the important part of that phrase. We are a specific branch of testers that gets on cooler stages...Testing shouldn't be split off, but it kinda has been." 38:50

Ctd. "Testing shouldn't be split off, but it kinda has to have been because people, when they write code, tend to see that code for what it's supposed to be. And as a tester, you're trying to see it for what it really is. These are two different things." 39:05

"[D]evelopers, who already have a problem psychologically of only seeing what their code is supposed do, are also isolated from all the software that would tell them [otherwise]. Anything that's too testy goes to the test people." 39:30

"[Re: PyAnnotate by @Dropbox] 'This is the thing you don't do. Only the developer is allowed to touch the code.' That is an unnecessary constraint." 43:25

"If I'm using an open source platform, why can't I see the source every time something crashes? ...show me the source code that's crashing...It's lovely." 47:20

"We should not be separating Development and Testing... Computers are capable of magic, and we're just trying to make them our magic..." 59:35

Misc

"Branch Prediction: because we didn't have the words Machine Learning yet. Prediction and learning, of course they're linked. Kind of obvious in retrospect." 27:55

"Usually when you give people who are just learning computing root access, the first thing they do is totally destroy their computer." 53:40 #DontHaveKids

"You can have a talent bar for users (N.B.: sliding scale of computer capability) or you can make it really easy to fix stuff." 55:10 #HelpDesk
"[Re: Ransomware] Why is it possible to have all our data deleted all at once? Who is this a feature for?!... We have too many people able to break stuff." 58:25

Sunday, June 10, 2018

Postman Masterclass Pt. 2

During my second Postman meetup as part of the Las Vegas Test Automation group, we were able to cover some of the more advanced features of Postman. It's a valuable tool for testing RESTful services (stronger opinions on that also exist), and they are piling on features so fast that it is hard to keep track. If you're a business trying to add automation, Postman is easily the lowest barrier to entry. And with a few tweaks (or another year of updates), it could probably cover most of your API testing.

The meetup covered the Documentation, Mock Server and Monitor functionality. These are pieces that can fit into your dev organization to smooth adoption, remove roadblocks, and add automation with very little overhead. In particular, the Mock Servers they offer can break the dependency on third-party integrations quite handily, which keeps Agile sprints moving in the face of outside roadblocks. The Monitors seem like a half-measure: they give you a GUI for setting up external monitors of your APIs, but you still need Jenkins and their Newman node package to do the same within your dev environment. The big caveat with each of these is that they are most powerful when bought in conjunction with the Postman Enterprise license.  Still, at $20 a head, it's far and away the least expensive offering on the market.

Since the meetup, I've found a few workarounds for the features I wish it had that aren't immediately accessible from the GUI. As we know from testing in general, there is no one-size-fits-all solution.  The new features are nice, but they don't offer some of the basics I rely on to make my job easier.  Here is my ever-expanding list of add-ons and hidden things you might not know about.  Feel free to comment or message me with more:

Postman has data generation in requests through Dynamic Variables, but they're severely limited in functionality. Luckily, someone dockerized npm faker into a RESTful service. This is super easy to slipstream into your Postman Collections to create rich, real-enough test data. Just stand it up, query it, save the results to global variables, and reuse them in your tests.
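
The flow is easy enough to sketch outside Postman, too. Here's a rough Python equivalent of the same idea; the faker endpoint and the service under test are hypothetical, so adjust the URLs to whatever you actually stand up:

    import requests

    # Hypothetical endpoint on a locally running, dockerized faker service.
    FAKER_URL = "http://localhost:3000/api/v1/name/findName"

    # Query the faker service and stash the result, the way a Postman
    # pre-request script would save it to a global variable.
    fake_name = requests.get(FAKER_URL, timeout=5).json()

    # Reuse the generated data in the request under test (hypothetical API).
    response = requests.post("http://localhost:8080/customers",
                             json={"customerName": fake_name}, timeout=5)
    print(response.status_code, response.text)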

The integrated JavaScript libraries in the Postman Sandbox are worth a fresh look. The bulk of my work uses lodash, crypto libraries, and tools for validating and parsing JSON. These turn your simple requests into data-validation and schema-tracking wonders.

  • Have a Swagger definition you don't trust? Throw it in the tv4 schema validator (see the sketch after this list).
  • Have a deep tree of objects you need to be able to navigate RESTfully? Slice and dice with lodash, pick objects at random, and throw it up into a Monitor. Running it every ten minutes should get you down into the nooks and crannies.
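
The same schema check works outside the sandbox as well; here's a minimal Python sketch using the jsonschema package instead of tv4, with a made-up schema fragment and response body:

    from jsonschema import ValidationError, validate

    # Made-up fragment of a Swagger/OpenAPI response schema.
    schema = {
        "type": "object",
        "required": ["id", "email"],
        "properties": {
            "id": {"type": "integer"},
            "email": {"type": "string"},
        },
    }

    # Made-up API response body; id is a string here, so validation should fail.
    response_body = {"id": "42", "email": "test@example.com"}

    try:
        validate(instance=response_body, schema=schema)
        print("Response matches the documented schema")
    except ValidationError as err:
        print("Schema mismatch:", err.message)
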
This article on bringing in the big list of naughty strings (https://ambertests.com/2018/05/29/testing-with-naughty-strings-in-postman/amp/) is another fantastic way to fold interesting data into otherwise static tests. The key is to ensure you investigate failures: to get the most value, you need good logs, and you need to pay attention to the results in your Monitors.

If you have even moderate coding skills among your testers, they can work magic on a Postman budget. If you were used to adding your own libraries in the Chrome App, beware: the move to a packaged app means you no longer have the flexibility to add that needed library on your own (faker, please?).

More to come as I hear of them.

Tuesday, September 1, 2015

Failing Faster, Succeeding...Soon?

I listened to a good podcast about having and executing on ideas.  Here was the gist of it:


  1. Have an Idea: Gather info directly from customers.  Implement now or Punt for later
  2. Once implemented, get a Minimum Viable Product to a Website, county fair, etc.  Fulfillment can be slow at first.  Persevere and refine or Punt
  3. Once it is selling, enter a Customer Validation Loop and handle their concerns first.  New ideas?  Start at top.  
  4. Once major customer concerns are addressed, enter a Product Design Loop: Change design or manufacturing as needed.

The core of the idea is to fail faster in the hopes that you succeed sooner.  Your backlog of unvalidated ideas is there to experiment on and validate.  Then you Implement, Persevere, Resolve and Redesign, or Punt and wait until you've churned through your good ideas.


Another formulation of this is the 2-2-2-2-2 method.  When you are trying to determine if an idea is feasible, first spend 2 minutes getting it down on paper.  If it still captures your interest, spend 2 hours fleshing it out.  As it grows, time box your commitment to the project.  See it through or bin it.  By the time you're spending 2 weeks or months on an idea, it should be clear whether it can bear fruit or not.  I cannot find an online version of this idea.  If you can place it, let me know in the comments.

While this applies to product development, it can also apply to hobbies, chores and other activities.  Have an idea for homemade Christmas presents?  Try it out on a small batch before you become consumed with a monster of a project with no practical timeline for delivery. Have a request from a friend to help you with a project?  Spend a few minutes talking logistics.  If you get down to a trip to the hardware store, make sure you can finish that phase with results in an afternoon.  Re-evaluate before committing to future efforts: is the benefit still worth your collective time?


Tuesday, August 11, 2015

Responding to Vulnerabilities and Weaknesses in your Product: A Tester's Perspective

Compare and Contrast: Tesla's response to researchers: 

With Oracle's: 

To be fair, no company should have to sift through an automated report from a static analysis tool.  It's not worth their time.  In fact, the only tone in the Oracle blog post that isn't completely unproductive is, "Do the research for yourself!  Give me exploits or give me death!"  As a tester, this is the core of bug advocacy: I want to destroy the trust that lazy researchers put in automated scanners and that lazy managers put in automated checking, and to push back against the lack of human interaction endemic in development in general.


That being said, chiding someone for spending their own coin to find an exploit with, "But you really shouldn't have broken the EULA.  Nanny Nanny Boo Boo," is unproductive at best and an invitation to become the target of malicious actors at worst.  No one cares about your EULA.  Not even the government gives it the time of day.  Your tantrum just makes that many more people want to do things to piss you off.

Monday, February 16, 2015

The Uncanny Valley of User Interfaces

Of all the things to get peeved about, you wouldn't think commercials would be one of them.  In spite of avoiding commercials by cutting the cable cord, I have found myself employed at a place that runs ESPN 24/7.  While it's not as bad as commercials on Fox News, certain commercials have begun to irk me for reasons that only a software guy can understand.  In general, I believe this revulsion can teach us how to test user experiences.

To start, allow me to present two exhibits: Jillian Michaels for Nordic-track and the Trivago Guy for Trivago.  Have you watched them?  Good.  Are ads for them already populating your sidebar?  Allow me to apologize.  So, did anything bother you?

Now, here is a second exhibit: Fiddler on the Roof.  I'm sorry if I ruin this for you, but the fiddler is not actually playing his instrument.  In high school, I saw this movie once in class, and again after I spent a year in orchestra.  The second time through, I knew as little about playing the violin as one can, and it still bothered me that the fiddler wasn't successfully playing in sync with the music.  The distraction caused by bad miming is well documented.  Recent Amazon series Mozart in the Jungle has it.  Other movies aren't immune either.  Even a talented musician can be affected by it without equal attention to production (try not to notice the out-of-sync miming and white truck in the background 52 seconds in).  The core problem is that you need to train someone who is good at acting to fake a highly skilled activity convincingly enough to keep the audience's focus.  If the audience knows even a little, their belief is unsuspended, and you can lose all credibility.

Normally, this is fine.  In a two-hour film or a season of a television show, the viewer is pulled in by the drama.  The story isn't about how well the artist can play the violin; it is about the character's journey.  Also, most directors can rely on the majority of their audiences not knowing what skilled activities like playing the violin and hacking actually look like.  Camera tricks can make some instruments easier to fake than others.  Furthermore, some miming has become acceptable and celebrated through parody.  We know it's hard to convey on film.  Should that mean we don't tell this story?

Unfortunately, there is a class of film that cannot rely on the social contract to ignore non-experts acting out difficult skills.  Its short format leaves no time to pull a person back in after having been jarred from their witless stupor.  What pitiless medium is this?  Commercials.  In 30 seconds or less, the disingenuous spokesman is called out.  The thankless medium that pays for most of our entertainment is reviled for cheesy effects, leaps of credulity, and now one more heresy: poorly mimed user interfaces.

Let's return to our exhibits, shall we?  Nordic-track wants to sell us another piece of exercise equipment that we will only use as long as we are paying for it.  To do this, they've strapped on a touchscreen to navigate workouts.  What does Jillian Michaels use to navigate?  Two fingers, and she watches the screen as she swipes left.  Thinking back to your own interactions with iPads, touchscreen kiosks and mobile phones, when have you ever watched the old screen disappear?  And when have two fingers ever been used to navigate a touchscreen?

The second is the Trivago Guy.  Not only has he drawn attention for being creepy, but his ham-handed interaction with the user interface makes me cringe.  Poking at points on a calendar, full-handed presses of the Search button, and similar miming make me look up from my desk and gag.  Similar interactions with after-effects, like those in the Endurance Extended Warranty commercial, make me wonder if anyone thought to proof the commercial before buying ad spots day after day, year after year.  An alternative explanation would be that the producers honestly think this is the way people interact with computers.  Either explanation disarms the viewer and casts the product unfavorably in their eyes.

I would like to propose that each of the above cases can be grouped together as potential examples of the Uncanny Valley.  As a movie viewer familiar with how a violin is played, I connect notes and the movement of bow in a way that the uninitiated cannot.  I reject the characterization as invalid for a brief period, but my emotions pull me back in with other human interactions elicited by the actor's performance.  For these commercials, this does not happen.  The terrible user interface interactions remove focus from the message of the commercial, and it is judged as unfit just long enough that I reject the product on offer.  Worse, subsequent viewings reinforce my first impressions.

Generalized lessons in the User Experience design space are many.  After testing a new user interface, I have found it helpful to let the uninitiated take it for a spin.  While I have been long desensitized to bad interaction by other considerations ("It actually works once you get used to it!" being one, "I just want to be done with this." being another), these newcomers see the unnatural interaction, and one can read the revulsion in their faces even when they don't come right out and say it.  This commercial for the new product is a failure, and I don't wait around to see them lulled into the same sense of security.  It stinks, they know it, and now I do too.  While users might not be expected to be fluent with a new interface right away, something that is counter-intuitive from the start should be avoided.  Depending on the medium, counter-intuitive steps can be overcome, but not without good decisions elsewhere to draw the user back in.

Wednesday, February 11, 2015

Bike Rentals, An Adventure

The local subreddit received a post requesting one of us rent a bike to a visitor while their significant other was at a conference.  This spawned an interesting adventure, and I learned a ton.

The post provided me with a list of expectations, and we quickly moved to Private Messages to hash out the details.  The end result was $20 per day for a bike, helmet and tools needed to keep you going.

I quickly found out I wasn't as prepared to rent as I had presumed.  For the bike, I had a $150 Wal-Mart special with a good amount of wear, baskets on the pedals, and some upgrades like a headlight.  The back tire on the bike was completely shot, so any money from this venture was going to go right back into it.  I didn't have patches, a portable pump, and my bike tool was nowhere to be found.  A trip to JT's and I was set.

The renter was staying at the Green Valley Ranch, a local hotel/resort, and their bell desk was endlessly accommodating.  I dropped it off with a note for the person staying in the hotel.  I communicated the tag ID to the renter, and I was off.  I even did it on the way to work, so it was relatively painless.

Until I got the phone call.

As a software tester, you would think I would learn to test my own stuff before I deploy.  Unfortunately, I forgot this step and ended up handing over a bike with a disabled chain.  I got the phone call the morning after the renter's arrival, and I was frantic and embarrassed.  I rushed over on an early lunch, fixed the mangled chain, gave it a spin around the parking lot, and kicked the tires for good measure.  Again, the bell staff was extremely accommodating, and the bike was stowed securely in time for lunch.  The rest of the experience was relatively painless.  I picked it up after the renter had flown home.  I paid the bell desk a tip on pick-up.  All the kit was there and intact.

Could I streamline and improve this service?  Here are some ideas:

  • The sign-up process could be accomplished online.  
  • Several waivers should be added to make sure the lawyers don't come calling after our first injury.  
  • Accident insurance and similar services could be added on as well.  Neither renter nor owner wants to be caught unawares.  
  • A service level could also be established: will work on delivery (oops), service calls available within X hours, deposits or charges for repairs, and so forth.  
  • Instructions for the bell desk, advertisements, and similar services could also be bolted on.  Making it easy for the staff engenders trust and is good advertising.
  • The kit was mostly good, but delivery could have been more glamorous (kit bag attached to bike instead of in a plastic grocery sack).  
  • I would make people bring their own helmets or have them available for purchase.  It is very hard to gauge whether a helmet has gone bad, so why risk the lawsuit if an injury does occur?

So, was it worth it?  That is a definite no.  Could I make it worthwhile?  Maybe.

The cost to take the bike, if everything went smoothly, would be gas and time for delivery.  Spread over enough hotels, this could be accomplished relatively easily once the service hit critical mass.  The repair was a huge hit to profitability (driving there and back on lunch), but careful testing and integration with deliveries/pickups would make it something that could be priced in with some research.  Theft could be mitigated by insurance, but it would need to be managed carefully and included in the cost.  Finally, payment was through PayPal, which took a sizable cut.  Cash might be better, but since the ideal rental involves never meeting your customer, it is impractical.  Credit would slice the charges in half.

An attractive alternative is to offer rental services to the hotels/resorts themselves and only deal with them.  It would be a simple way to attract business, and they could take advantage of existing infrastructure for payments, renting, waivers, etc.  With enough coverage, it might just make a profit.

Tuesday, January 6, 2015

Context-Driven Testing, An Education

Coming into my fifth year of Software Testing, I began to rethink it as a discipline. The current debate is between traditional methods of testing and more modern schools of thought:
  • Entrenched methods are represented by the International Software Testing Qualifications Board (ISTQB) and its many local equivalents. Certification is their path to expertise, with accompanying wares of training sessions, books, tests and standards.
  • Context-driven Testing focuses more on a set of tools and skills that typify testing.  Advocates offer classes at conferences, but certs and best practices are four-letter words.  They state that there are no best practices, and that a tester knows best how to apply the tools to explore, experiment, and learn about the system under test.
The conflict seemed from the sidelines like a pitched battle over the future of testing. The ISTQB and affiliated consultants had history and 300,000 certifications on its side.  The context-driven school was relatively young, but it had a few charismatic evangelists and professional results that could not be ignored.  It was plain that I needed to sort this out.

Something Just Didn't Feel Right

I strode onto this battleground in 2011 as a new manager and new tester.  Promoted from an application integration team, I was used to working with outside developers while using and abusing buggy product.  This did little to prepare me for the reality of testing: limited time and endless defects!  I dove into the online community in the hopes that it would help sort the good from the bad.  What I found to be the central influences were conferences, consultants and blogs.

At testing conferences, half the talks were advertisements for bug trackers, test case repositories and automation frameworks.  These were great for managing a project, but they didn't support the essence of testing: finding more defects.  Expensive tutorials before the conferences showed similar taint: how to use this tool, best practices for testing this specific type of thing, certification in a process and not a portable technique.  The cost was the most surprising thing: Thousands of dollars for something I couldn't justify to myself, the eager tester with a training budget to burn.

Delving into webinars brought more despair.  A demo of the latest automation tool invariably led to a pitch to get me to purchase something.  The topics purported to discuss principles, but I ended up in a morass of industry jargon.  I learned how to write, automate, and justify my time, but I was no closer to actually finding bugs in a more efficient manner.  What's more, I was spammed by vendors for months afterward.  I found zero techniques that were universal.

Finally, the blogs showed a glimmer of hope.  Some people wrote about techniques.  Others wrote about test cases and how they managed their overhead.  Still others advocated that QA should be involved earlier in the development process.  Nowhere did anyone extol the virtues of their test management system, bug tracker, or automation tool in helping them find bugs.  This was a breath of fresh air, but it still felt stunted and directionless.  My closest analogue, software development, spawned a generation of people applying agile to everything from families to manufacturing, but there wasn't a similarly powerful framework for testers.  I started to feel that no one was excited about my new career outside of the context of getting paid for it.

This muddled world left me questioning: Shouldn't there be some "ethos of testing" to unify our efforts just like in agile software development, lean manufacturing, and so forth?  Why do vendors have a place at the table at industry conferences?  Why isn't anyone embarrassed that the main competitor to major test management software is a spreadsheet?  Who cares about my expertise and not just my dollars?

"Answers" and Answers

For a long time, I thought the answer was in certification.  Surely, if ever there was an organization that could be a cheerleader for quality, it would be the ISTQB.  However, the reality is much different.  Manuals are filled with process, conferences host vendors and not practice sessions, and training classes are about extracting fees and not learning techniques.  The certification exam guarantees your resume goes to the top of the pile, yet the organization that proctors it is a laughing stock.

The alternative came through an unlikely route: Twitter.  Long a tool of celebrity publicists and companies looking to engage directly with individuals, Twitter also has a reputation as the way to communicate within a subculture.  Have an idea?  Publish it in under 140 characters.  Want to learn the pulse of an industry?  Follow its leaders.  Computer security wonks, hackers, and now testers joined my follow list, and I was soon introduced to a new debate: is certification a waste of time?  I'd found my people.

The new school touted something called Context-driven Testing.  Instead of best practice, effective testing was supposed to be driven by context.  Instead of test cases, a set of tools were taught that could be used depending on the product (mainframes are tested differently than mobile devices).  Even among superficially similar products, the most effective testers would make judgement calls based on the needs of the customer and the time available.  Testing was not a set of rigid processes, but a scientific exploration of the software.  The knowledge gained by testing increases the confidence of the organization in the software.  In other words, testing challenges the assumptions made by the developers that the product works in the time we have available.  This sounded like the meat I was looking for, but the results were amazing too.

In an experiment, I had our work study group put down the ISTQB manual with its test cases, and I instead introduced them to exploratory techniques.  We first learned about the requirements and returned a bunch of questions to the developer.  Then we tested without scripts and tracked our coverage on a mind map.  It was the first time we had been prompted to field trial a technique in our study session.  The best part was that a person with very little previous experience in testing was able to pick up the technique almost organically.

This revelation about software testing was what we were all looking for, and it was delivered through experience instead of pronouncement.  James Marcus Bach, one of the proponents of Context-driven Testing, compared the ISTQB and certification organizations to the medieval medicine of Galen.  People wrote down what testing was and proceeded to bleed their employers without knowing why it wasn't finding bugs.  Testers were outsourced instead of valued, as their techniques were old or ineffective.  Yet in spite of all this, the consultants and conferences kept making money printing outdated works of questionable value.  Once context-driven techniques come to light, the old ways start dying.  We can only hope this continues so that meaningless certs are no longer valued by testers, managers, and HR alike.

Where Next?

After stumbling through the world of testing for a few years, I have abandoned certification as a path to expertise.  As with computer security, network administration, and technical support, certifications are a poor way of communicating true expertise.  This revelation places testers firmly in the camp of indispensable elements of the development organization.  They are not monkeys running scripts but knowledge workers with valuable investigative skills who challenge the product from all angles.  They cannot be outsourced if you hope to be successful, and they cannot be replaced by automation.

I am beginning a new training regimen with my testing colleagues based around Context-driven techniques.  We hope to learn the techniques and apply them to our current projects and continually grow our skills in this new framework.

Further Reading


  1. A Context-driven Testing manifesto of sorts
  2. Black Box Software Testing, Coursework in Context-driven Testing
  3. Rapid Software Testing, James Marcus Bach's courses on Context-driven approaches
  4. Exploratory Testing techniques explained
  5. Testing is not Automation.  Why Automation is another tool and not a cure-all.

Friday, October 31, 2014

Resources to Check for Dead Links on a Website

http://www.brokenlinkcheck.com/
This external service hits your site and follows each link.  It is intelligent enough to check for loops.  Since it is an external service, it may artificially drive up hit count.


Check My Links Chrome Extension
This very handy chrome extension checks link status and places the results in-line.  Fairly turn-key, it even keeps a list of links to not follow that you can add to as you go along.  It seems to have trouble with blogspot controls and extensions, though adding them to the blacklist might be the solution.

Xenu's Link Sleuth
Heard about this one on UTest.  I am eager to try it out.  Free, long history, and automatable: all the right pieces for success.
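
If none of these quite fit, a crude checker is easy to roll yourself.  Here's a minimal Python sketch, assuming the requests and beautifulsoup4 packages and only checking the links on a single page:

    from urllib.parse import urljoin

    import requests
    from bs4 import BeautifulSoup

    def check_links(page_url: str) -> None:
        """Fetch one page, try every <a href> on it, and report non-200 statuses."""
        html = requests.get(page_url, timeout=10).text
        for anchor in BeautifulSoup(html, "html.parser").find_all("a", href=True):
            link = urljoin(page_url, anchor["href"])
            if not link.startswith("http"):
                continue  # skip mailto:, javascript:, fragment-only links, etc.
            try:
                status = requests.head(link, allow_redirects=True, timeout=10).status_code
            except requests.RequestException as err:
                status = err.__class__.__name__
            if status != 200:
                print(status, link)

    check_links("https://example.com/")  # replace with your own site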

Add more to the comments as you find them.

Thursday, October 30, 2014

StarWest 2014

I viewed StarWest's Virtual Conference offering again.  This and the affiliated Better Software conference are run by Techwell.  A few observations.


  1. I loved the talk on Healthcare.gov given by Ben Simo (@QualityFrog).  He communicates how easy it would have been to predict, find and fix the problems that would plague that site for more than a year.  It was a good choice putting him on keynote.
  2. To attend one of these conferences will run you or your company into the thousands of dollars, and attending the tutorials costs even more.  This is in spite of being sponsored by some of the biggest software providers in the industry.  We are bombarded by ads for the latest ALM or bug-tracking tool, and they are called talks.  What is such sponsorship getting the attendees?  Who is benefiting from this other than the organizers?  If the conference organizers were a not-for-profit, would they charge the same amount?
  3. The online offering tries to simulate the networking opportunities for those who could not attend.  It tries to simulate the marketing side too by giving attendees contact info to vendors.  What about the testing opportunities?  With more than half the talks about web app testing, why aren't tutorial sites and learning apps available and promoted to virtual attendees?
Maybe DEF CON has spoiled me.  $200 for the most frenetic hands-on conference over twice the number of days?  A lot of that is a labor of love and volunteers, but then again, most of it is not sponsored by corporations either.  Maybe I need to bring DEF CON to testers, or testers to DEF CON.  See what shakes out.

Monday, September 29, 2014

Test Early, Test Often

Of late, I have been enamored of testing techniques that come earlier and earlier in the development cycle. It can be called static analysis, design auditing, prospective testing, shift-left, or the like, but the research is in: testing before you have working code in hand bears fruit in most organizations.  Here I present a few examples from my own experience.

At the start of a sprint, we leave Sprint Planning with the requirements.  The next interaction with developers is when we review their Developer Design Overview document, which spells out the development approach and helps QA scope their testing effort.  In one case, a developer had chosen to put an error message into a file usually reserved for configuration.  QA saw the DDO and raised concerns immediately: why was a message being added to this file when messages usually live in the language DB?  With this one question, asked before QA ever saw the code, we changed the trajectory of development.  The fix was in before we got our first build, and the story closed within the sprint instead of carrying over for a do-over.

An even earlier example came when we looked to implement secure communications between two servers.  While I couldn't code my own implementation, I was able to provide recommendations at design-time by staying educated and confirming my understanding with developers who had dealt with crypto.  By starting early, we were on surer footing when troubleshooting and confirming the implementation was sound.

As the examples above illustrate, QA often saves time for developers by defending standards and consistent implementation early in the cycle, but that is not the only savings that comes from shifting left.  Often, test environment issues can also be aided by an early understanding of requirements.  In one case, a story had carried over from a previous sprint, which meant we were already behind.  The roadblock was a production issue pulling the developer away from the story.  Instead of resting on our laurels, QA worked with the configuration manager to make sure our test environments were shipshape before the code was completed.  When the developer's changes passed build verification, we were off and running almost instantly.  Not only did our preparation help us get to the work of testing faster, but it also helped us close more stories, as environments were made ready before they could become an obstacle.  Not only was I able to test early, but it led to me testing more and in greater depth.

Most modern test engineers have their own war stories from early testing.  For every story where requirements changed and early notes became meaningless, there are ten stories where early questions lead to greater clarity, fewer bugs, and more time for digging in.  I consider projects that foster this early access for QA to be among the most fruitful and least volatile.

Friday, May 23, 2014

Behat Error: PHP Warning: array_combine():

When I attempted to run a Behat Feature with multiline arguments, I kept getting an error.  In the console, the Feature Description lines would print, but the following error would print 3 times thereafter.

PHP Warning: array_combine(): Both parameters should have an equal number of elements in /home/testadm/LbX/vendor/behat/gherkin/src/Behat/Gherkin/Node/TableNode.php on line 133

After hours of searching, I found that it was caused by my arguments and the PHP function array_combine().  This function requires the two arrays being combined to have the same number of elements; in a Gherkin table, that means every row must have the same number of cells as the header row.  I can't reproduce it now, but these are possible solutions:

  • The simple solution: make sure all your cells have values.
        | Good  | Data | Data |
        | Bad   |      |      | 
  • Don't put comments after a line in a multiline argument.
        | key   | value    |  # No comments after a cell 
  • Make sure the step lines introducing your multiline arguments end with a colon.
    Given you have a multiline argument:
        | Data | Value |
        | Data | Value |
    Then make sure you have a colon at the end of the line

I'll update this post when I figure out the true cause.