February 4th, 2016 by inflectra
When test cases are written well, the automation follows simply from the steps. Writing a good test case isn’t difficult; it’s just tedious, boring, and sometimes downright painful. But when it’s done right, the results are worth it.
If you are building an automated system, it’s best to know the final goal from the start. Moreover, a well-designed framework is both independent of and focused on the test case. Whatever happens behind the scenes in the automation framework, test cases will determine the quality of the Application Under Test (AUT). If a framework isn’t designed to meet the needs of the automated test cases, then it fails to meet the most critical reason for its existence.
When test cases are done right, the QA team can be confident that it knows the software through and through. This knowledge is obtained through writing good test cases. Test cases are built on the idea that the QA team should always be able to predict the outcome of any action taken anywhere within the AUT. Wherever QA falls short of this goal is where the greatest number of bugs will lie hidden. Automation is manual testing made repeatable. It doesn’t reduce the amount of manual testing up front, because an automated test case is only as good as the manual test case it sprang from. And if you’ve ever written a test case, you know that you’ve looked at it from every angle by the time it is fully automated.
I consider this paragraph the big DUH! If writing test cases is anything but drudgery, you’re doing it wrong. There are as many definitions of a test case as there are people giving them, but I give mine for the sake of having a common starting point. What follows describes what I mean by my definition. Note to disbelievers: it’s fine to disagree vehemently with my definition; just follow along to understand my methods for writing automated test cases, and I’m sure you’ll be able to use your own definition to achieve the ends I describe here.
The implementation I give here, based on this definition, is the key to writing great automation. Test cases, both manual and automated, should differ in only one way: syntax. If computers could read English as it’s written, you’d only write a test case once.
I hate writing test cases for the simple reason that every single piece of information required to perform the test and determine the outcome – pass or fail – must be written out explicitly. If you were writing these test cases as code, and you will be, you couldn’t leave a piece of information to the discretion of the CPU. Think of the person executing the manual test case as a CPU, capable of carrying out instructions but requiring every detail in writing to do it. When writing test case automation, always remember the compiler is a harsh taskmaster! The need to describe in excruciating detail what is happening in the test is what makes writing test cases such a painful experience. But far more importantly, detail is the determining factor in getting the test case done right.
Now that I’ve whined about how painful writing test cases is, here’s the cheese. While writing great test cases challenges my patience, I have never been handed a test case written in such explicit detail that, when I looked at it, I was unhappy to have all that information. I have, however, on many, many occasions dreaded reading someone else’s test case because I knew I would have no idea what to do with the information I’d been given. If you can’t begin clicking and typing from the first line of a test case right through to the last, it needs to be rewritten. And who, you might ask, should make this call?
This is the most important question to answer. In government there are checks and balances for one important reason: everyone is human, and without checks and balances, individuals would mess things up for everyone. In QA terms: every person who writes test cases is human and will, at times, write bad test cases. We get tired, we get overworked, we get bored, and then we slip into shorthand where it’s inappropriate (I’ll get back to this “appropriate” bit). The person exercising a test case must not be the person who wrote it. Why? I know my shorthand. I know exactly what I mean (at least I think I do) by every step I’ve written out, so if I exercise my own test cases, they’ll be as fuzzy as they were when I wrote them. And the results of all my testing will be equally fuzzy and unreproducible. There is really only one good way to get people to write good test cases: send every test case that isn’t up to snuff back to the original writer to be fixed. No matter how trivial the change, consider the test case holy and inviolable. Never, ever change someone else’s test case. This is where good QA software will help. Good software will record every change made and every person who made the change. With the history being unchangeable, violating this rule will not go unnoticed. Then, if I have to request a nit-picky change to one step of a test case, I can, because there’s no alternative.
And why is there no alternative? Because if you don’t know exactly what the original writer intended for that test case, and you guess (remember, the CPU doesn’t guess), then you are responsible if you miss a bug because you didn’t test what the writer had in mind.
It is widely understood that not every test case can be automated. If you have a team writing automation, you have people who can determine whether or not a test case can be automated. The person sifting through test cases, marking them manual or automated, will determine who executes each test case.
If the test is manual, from here the work is straightforward. Follow the steps. If you can’t follow them, send them back to be rewritten. If you can, then pass the test, or fail it and write a bug.
If the test is marked for automation, then the engineer writing test case automation will be following the steps. Just as in the case of the manual tester, the person writing automation must stop if the test case’s steps aren’t absolutely crystal clear and return the test case to its author. Unlike a manual tester, the engineer cannot overlook ambiguity. Why? The compiler won’t let you call a method passing the parameter “?” (well, it might, but the odds of that being the correct parameter aren’t so good). So the test is returned for a rewrite until every parameter necessary for automating the test case is clearly defined. The automation engineer doesn’t get out of working through the test manually. That’s why automation is so valuable. To write the automated script, the engineer will almost certainly run the test a number of times to get the automation to match the test case step by step. The precision and attention to detail that writing automation demands will lead to far fewer bugs in the AUT, as long as these rules are applied, along with this last one.
Now for the guts of an automated test case. I want to demonstrate the principles I’ve mentioned using a simple example. The AUT for my example is Wikipedia. In every case, this automation begins at the home page https://www.wikipedia.org/. Imagine you receive the following test case:
Synopsis: This test goes to the English landing page from the central logo link and verifies the URL is correct
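Written out in full, the manual steps behind that synopsis might read something like this (my reconstruction for illustration, not the original test case):
1. Start a browser and navigate to https://www.wikipedia.org/.
2. Click the “English” link next to the central Wikipedia logo.
3. Read the URL the browser lands on.
4. Verify that the URL is exactly https://en.wikipedia.org/wiki/Main_Page. Pass if it matches; otherwise fail the test and record the actual URL.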
The automated test case can use the manual test case as comments. Most importantly, there should be one method call for each step in the written test case. Here’s an idealized example that exists in running code available on GitHub:
[TestMethod]
public void goesToEnglishLandingPageFromCentralLogoLink()
{
    // Click on the English link next to the Central Logo
    // that leads to the English landing page
    homePage.LinkCentralLogoEnglish.Click();

    // Get the actual URL of the English landing page
    string actualResult = homePage.getCurrentUrl();

    // Get the expected URL for the English landing page
    // https://en.wikipedia.org/wiki/Main_Page
    string expectedResult =
        HomePageTestResources.HomePageLinkToEnglishLandingPage;

    // Compare actual and expected URLs, noting details if the test fails
    Assert.IsTrue(actualResult.Equals(expectedResult),
        CommonMethods.FormatAssertMessage(expectedResult, actualResult));
}
The point here is that the code is readable with little or no understanding of the C# language. The people writing the automation are using methods created in the framework that model the steps in a test case. This is why automation is a harsh taskmaster. The code is written clearly, step by step from the test case. There is no question what is being tested. There is no question what the results are. If the test case fails, it will be abundantly clear what went wrong.
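For context, here is a minimal sketch of the kind of framework methods the test above relies on. This is my own illustration, not the code from the GitHub repository; it assumes a Selenium WebDriver page object, and the element locator is only a plausible guess:

using OpenQA.Selenium;

// Hypothetical page object for https://www.wikipedia.org/ (illustrative only)
public class HomePage
{
    private readonly IWebDriver driver;

    public HomePage(IWebDriver driver)
    {
        this.driver = driver;
    }

    // Models the "English" link next to the central Wikipedia logo;
    // the locator is an assumption, not taken from the real framework
    public IWebElement LinkCentralLogoEnglish
    {
        get { return driver.FindElement(By.Id("js-link-box-en")); }
    }

    // Returns the URL the browser is currently on
    public string getCurrentUrl()
    {
        return driver.Url;
    }
}

Whatever the real implementation looks like, the important property is the same: the test method itself only calls framework methods whose names map one-to-one onto the written steps.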
Here’s what I get back when this test fails. For the sake of the example, imagine that while writing the automated script, I mistakenly entered the URL as https://en.wikipedia.org/wiki/MainPage (this link doesn’t fail if you use it, but it fails in the automation).
Here is part of the message from the output of the failure:
Test Name: goesToEnglishLandingPageFromCentralLogoLink
Test Outcome: Failed
Test Duration: 0:00:06.9473255
Result Message: Assert.IsTrue failed.
Expected Value: https://en.wikipedia.org/wiki/MainPage
Actual Value: https://en.wikipedia.org/wiki/Main_Page
There is no question what the test was intended to do. There is no question that it failed or how it failed. How do you choose to fix this? That’s a question for another time.
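The “Expected Value” and “Actual Value” lines come from the FormatAssertMessage helper attached to the assert. The real helper lives in the framework; a minimal sketch that would produce output in that shape might look like this (my assumption, not the original code):

public static class CommonMethods
{
    // Builds the failure detail appended to Assert.IsTrue so a failed test
    // reports both the expected and the actual value (illustrative sketch)
    public static string FormatAssertMessage(string expectedResult, string actualResult)
    {
        return System.Environment.NewLine +
               "Expected Value: " + expectedResult + System.Environment.NewLine +
               "Actual Value: " + actualResult;
    }
}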
When I started in the software industry, the number-one word processor was WordStar. I got my first job there as a technical support specialist and had no formal training as a software engineer. It was at WordStar that I got my first introduction to writing an automated testing system: the hardware involved one machine executing the commands on a second machine, and the automation language was Fortran. I left WordStar to work at Borland as a software quality assurance engineer. By my third job, I was working as a software engineer, writing the user interface (an early wizard system) for Symantec’s LiveUpdate in C++, my first object-oriented language. I have worked for a wide range of companies during my career: everything from Internet startups to large software companies like Borland, Symantec, and EMC. My recent efforts have included work for companies like Wells Fargo and American Specialty Health. I currently work at QualiTest Group, where our team manages test automation for Associated Press.