Apparently clickbait titles work, and I’ve always wanted to try one. And in keeping with the established etiquette, I’ve even lied a bit because there are only 7 reasons listed below.
But I do want to talk about something that’s causing a lot of confusion and frustration for testers around the world as they start their test automation journey: record and playback tools, and why you should most likely avoid them.
We’ve often been asked why we don’t have a recorder in Leaptest. And the truth is, it wouldn’t be hard to make one. In this article, I’ve tried to outline the 8 (ok, 7) most important reasons why we made the decision not to.
Record and playback tools look deceptively simple, and if you follow the instructions, you can probably get started in a couple of hours. But after a while, you will most likely hit a wall that is impossible to climb.
Problem Number 1: It Records Too Much
When you try to record a sequence of actions that you perform on a website or a desktop application, most recorders will record way too many actions and you’ll end up with something that looks like this (depending on the tool used, of course):
- Move mouse to position 153,429 (relative to window)
- Click on text field “txtEmail” (based on id)
- Wait 0.8 seconds
- Type “email@example.com”
- Move mouse to position 214,437 (relative to window)
- Wait 1.2 seconds
- Type [TAB]
- Type “password123”
- Move mouse to position 102,669 (relative to window)
- Click on button “btnLogin” (based on id)
Of course, this sequence of actions might be correct “on paper”, but most likely you’ll want to clean it up a bit. And to do that, you need to guess which actions should be deleted. Those mouse movements are probably not needed, right? Maybe also that 1.2-second wait.
When you run the case again (playback), it suddenly doesn’t work. Turns out the 1.2-second wait was indeed needed, because the application automatically updated its interface as soon as the username was entered. But waiting for a fixed period of time isn’t a very robust solution.
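The difference is easy to sketch in plain Python: instead of sleeping a fixed 1.2 seconds, a robust test polls for the condition it actually cares about and fails loudly if it never occurs. The `wait_until` helper below is a minimal illustration (the `page.element(...)` call in the comment is hypothetical); it is the same pattern that explicit waits in tools like Selenium implement.

```python
import time

def wait_until(condition, timeout=10.0, poll_interval=0.2):
    """Poll `condition` until it returns True or `timeout` seconds pass.

    A robust replacement for a fixed sleep: the test resumes as soon as
    the application is ready, and raises if it never becomes ready.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(poll_interval)
    raise TimeoutError(f"condition not met within {timeout:.1f} seconds")

# Hypothetical usage: instead of "Wait 1.2 seconds", wait for the UI
# update that the typed username actually triggers, e.g.:
#   wait_until(lambda: page.element("password").is_enabled())
```

The fixed wait breaks in both directions: it fails when the application is slower than the recording, and it wastes time when the application is faster.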
Problem Number 2: It Records Too Little
The reverse can also be true, depending on the tool or your configuration of the recorder. Maybe you have to hover an item in a table for something to change inside the application, and only when this has happened should you move the mouse to a new, dynamically generated item. But all the recorder got was:
- Move mouse to position 842,192 (relative to window)
- Click on button “item_438”
Maybe next time, this won’t work anymore. Maybe new items have been added to the table, so the position has changed.
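A rough illustration of why this breaks: the rows and invoice names below are made up, but the shift is exactly what happens when a new item lands above the one you recorded. A coordinate recorded yesterday now points at a different row, while a lookup keyed on the visible label still finds the right one.

```python
# Hypothetical table contents as (label, screen_position) pairs. A new
# row was inserted at the top between runs, shifting every position.
rows_before = [("Invoice 438", (842, 192)), ("Invoice 439", (842, 224))]
rows_after  = [("Invoice 437", (842, 192)), ("Invoice 438", (842, 224))]

def position_of(rows, label):
    """Locate a row by its visible label instead of a recorded coordinate."""
    for text, pos in rows:
        if text == label:
            return pos
    raise LookupError(f"no row labelled {label!r}")

# The recorded coordinate (842, 192) now hits "Invoice 437" by mistake,
# but a label-based lookup still finds "Invoice 438" at its new position.
```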
Problem Number 3: It Records The Wrong Thing
All recorders need a way to figure out what you meant when you performed a certain action, and they get it wrong surprisingly often.
Let’s take something simple. Imagine recording when you click on a button to save changes in an application. A recording of that might look like this:
- Click on button “btnSave” (based on id)
But what if the button didn’t have an id? Or even worse, what if it had an auto-generated id like “ab29c5984df79123de”?
Maybe the button has the text “Save changes” inside. That could work. But what if the application was really clever and the button instead had the text “Save 4 changes” inside?
This leads directly to problem number 4.
Problem Number 4: You Don’t Know How it Works, So You Can’t Fix It
If you don’t know how something works, you won’t be able to fix it later. In the case of the “Save 4 changes” button above, you would need to thoroughly understand the way the recorder works and how it employs its object locator strategies.
It’s not uncommon for a recorder to gather the following information about the user interface object you interacted with:
- The object type (a “button” in this example)
- The object id
- The object’s size and position
- The object’s place in the object hierarchy
- The text inside the object
This information is then typically compiled into a single xpath statement that will be used to find the object on subsequent runs. An xpath statement to find the “Save 4 changes” button might look something like this:
- //div[@id='loginPanel']//button[text()='Save 4 changes']
In most cases, you’re left with manually editing the xpath statement. For reference, this is what it might look like if you want to check that the button just contains the word “Save”:
- //div[@id='loginPanel']//button[contains(normalize-space(.), 'Save')]
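To see what that predicate buys you, here is the same check expressed in plain Python. XPath’s `normalize-space()` trims leading and trailing whitespace and collapses internal runs of whitespace before the substring test; the button labels below are invented variants of the kind an application might produce.

```python
def normalize_space(text):
    """Mimic XPath normalize-space(): trim and collapse runs of whitespace."""
    return " ".join(text.split())

labels = ["Save changes", "Save 4 changes", "  Save\nchanges  "]

# An exact-match locator breaks as soon as the label varies:
exact = [label == "Save 4 changes" for label in labels]

# The contains(normalize-space(.), 'Save') predicate matches every variant:
robust = ["Save" in normalize_space(label) for label in labels]
```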
Of course, there are situations where in-depth knowledge about object hierarchies and xpath are relevant — but in those scenarios you won’t be able to rely on record and playback at all.
Problem Number 5: Reusability and Parameterization Nightmares
Once you start recording test cases, you’ll quickly discover the need to reuse certain sequences of actions across multiple test cases. But extracting sequences into reusable modules can be a very technical and confusing process, particularly if the recorder really just generates code scripts behind the scenes.
You might also face the problem of parameterization: If you manage to extract a “login” process into a reusable module, the next logical step would be to drive that module with data from, e.g., an Excel sheet. In most cases, you’ll get a sinking feeling when you discover that the only way to do that is to dive into the scripts that were generated behind the scenes and implement the parameterization in code.
It can quickly turn into a nightmare.
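As a sketch of what hand-rolled parameterization ends up looking like, here is a minimal data-driven login loop using Python’s csv module. The credential data and the `login` stand-in are entirely hypothetical; in practice the data would come from the Excel sheet and `login` would be the extracted module, but this is roughly the code you end up writing by hand.

```python
import csv
import io

# Hypothetical test data; in practice this would be read from an
# Excel/CSV file maintained by the test team.
DATA = """email,password,expect_success
alice@example.com,correct-horse,true
bob@example.com,wrong-pass,false
"""

def login(email, password):
    """Stand-in for the recorded login module; here it simply simulates
    an application that accepts one known credential pair."""
    return (email, password) == ("alice@example.com", "correct-horse")

results = []
for row in csv.DictReader(io.StringIO(DATA)):
    outcome = login(row["email"], row["password"])
    expected = row["expect_success"] == "true"
    results.append(outcome == expected)  # True when the row behaves as expected
```

None of this is hard for a programmer, but it is exactly the kind of code a record-and-playback workflow promised you would never have to write.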
Problem Number 6: It’s Hard or Impossible to Create Logic Rules
Speaking of code nightmares, you will most likely face many situations where you would like to create logic rules for how the test should function.
Maybe you want to look up a value in a certain field and then, depending on whether or not it’s greater than a threshold, branch off in one direction or another. Or maybe your application contains a dynamically changing number of rows in a table, and you want to loop through them, performing certain sequences of actions for each one. Maybe you even need recursive logic (that is, logic that calls itself) in your test cases, which typically requires both branching and looping to work.
All of those things are entirely beyond the reach of record and playback mechanisms, and require you to dive into code (and stay there).
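For contrast, the branching-and-looping case is only a few lines in any programming language. The `fetch_rows` stub and the threshold below are invented, but this is the kind of logic a pure recorder cannot express, because a recording is a fixed list of actions with no concept of "for each row" or "if the value exceeds".

```python
def fetch_rows():
    """Stand-in for reading the table from the application under test;
    the row count changes between runs, so it must be discovered at runtime."""
    return [{"item": "A", "amount": 120},
            {"item": "B", "amount": 80},
            {"item": "C", "amount": 200}]

THRESHOLD = 100
flagged, passed = [], []

for row in fetch_rows():              # loop over however many rows exist
    if row["amount"] > THRESHOLD:     # branch on a per-row value
        flagged.append(row["item"])
    else:
        passed.append(row["item"])
```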
Problem Number 7: Churn Baby, Churn
Let’s say you manage to create a suite with a few hundred test cases using record and playback.
Then after a while, the product you are testing starts changing its shape a little bit. Maybe extra menu items are added, some visuals are changed or some features are reorganized. All those few hundred test cases used to be green, but now a growing number of them are starting to turn red whenever the suite is run.
To make matters worse, some of the tests had hand-modified xpath statements, while others had simply been converted to code that nobody really understood, because it involved snippets copied from Google.
You can’t release with those tests being red, right? So you end up being pragmatic, turning off the red tests and recording new ones. And you miss some minor — but important — actions or forget to fix those xpaths like you did the last time.
It’s an endless churn and a yuge waste of time and money.
The Bottom Line
The bottom line is, you might start using a record and playback tool and initially see good progress. But after a while, you are most likely going to hit a brick wall that is impossible to climb. This will be followed by lots of frustration, followed by rejection of the tool as a toy, followed by a budget fight before you can finally get a fresh start using a better approach.
We’ve designed Leaptest to not have a recorder — at least not in the traditional sense. Instead, we’ve tried to create an intuitive and very visual editor that lets you wire together easy-to-use building blocks that rely on image and text recognition to work (see video for a quick demonstration).
On top of this, we’re adding more and more features, some of which do require a more technical skillset in order to be used in advanced scenarios — not unlike some of the ones mentioned above. Our upcoming Selenium support is a good example of that. However, we are acutely aware of the challenges this presents to testers and we continue to work hard to make Leaptest as easy as possible to use.