Functional Testing Best Practices

The design and coding of an application may go smoothly on paper, but if the finished product fails to live up to expectations (or even function properly), the project as a whole will fail – and all that work will have been for nothing.

One way to guard against this is by subjecting the application to rigorous testing before it goes out on general release.

To Begin: User Stories

Before we start a project, we always set to work crafting the best user stories we can muster – i.e. short, simple descriptions of particular features in the app, as desired and described from a user’s perspective. Testing against these criteria at a later stage then becomes a much simpler and more streamlined exercise.

The best user stories are invariably the simplest, and normally take the form of a very straightforward formula:

As a <type of user>, I want <some goal> so that <some reason>

To give you an idea, here are a couple of examples:

  • As a student, I want reminder notifications so I don’t miss deadlines.
  • As a customer, I want to review my shopping cart before purchase so that I can buy with confidence.
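
Stories in this form translate almost directly into acceptance checks. As a minimal sketch in Python – the `Cart` class and its methods are hypothetical, invented purely to illustrate the second story:

```python
class Cart:
    """Hypothetical shopping cart, standing in for the real app."""
    def __init__(self):
        self.items = []

    def add(self, name, price_cents):
        self.items.append((name, price_cents))

    def review(self):
        # The story: the customer must see every item and the total
        # before committing to a purchase.
        total = sum(price for _, price in self.items)
        return {"items": list(self.items), "total": total}


def test_customer_can_review_cart_before_purchase():
    cart = Cart()
    cart.add("notebook", 450)   # prices in cents to avoid float surprises
    cart.add("pen", 120)
    summary = cart.review()
    assert summary["items"] == [("notebook", 450), ("pen", 120)]
    assert summary["total"] == 570

test_customer_can_review_cart_before_purchase()
```

Writing the check before the build begins keeps later rounds of functional testing anchored to the original story.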

From here, we begin the build, and then, when the time comes, run rounds of functional testing at each stage of development, always keeping those basic user stories in mind.

What Is Functional Testing?

Functional testing is a crucial element in ensuring the quality of a software product, and in confirming that it performs its stated functions in a way that users expect. It is used to verify that your application, website, software, or hardware executes its intended functions: responding properly to user commands, presenting a consistent user interface, integrating with other systems and business processes, and handling data and searches correctly.

The process involves a series of tests covering all aspects of an application's tool set: database, user interface, application programming interfaces (APIs), security features, network protocols, installation routines, etc. Tests are conducted on each feature of the software to determine its behaviour, using a combination of inputs simulating normal operating conditions, and deliberate anomalies and errors.

Usability issues are given prime consideration in the testing sequence, as it's the user experience (UX) that will ultimately determine the success or otherwise of a project. Testing of an application's GUI often reveals problems that simply don't show up in analysis of the underlying source code.

A Functional Test Structure

Functional testing is typically performed by a testing team, and requires some degree of co-ordination between the team members. These individuals need to be flexible to adapt to changing conditions and to adopt new roles within the team as the process demands.

The group has a responsibility not only to design and develop tests based on specific requirements, but also to perform those tests, and then analyse and report on any defects they throw up – all the while ensuring that the testing process remains in line with crucial timetables and business flows.

An optimised testing cycle might look like this:

  • Co-ordinate with the Development Team: This should be ongoing, as testing proceeds and feedback and issues are acted on.
  • Plan the Tests: This will be based on the software requirements already identified, and the features proposed to address them.
  • Perform the Tests: Giving critical elements (such as the user interface) and high-risk features a greater priority.
  • Manage the Defects Discovered: Here, a clear definition of what does or doesn't constitute a defect is essential to avoid confusion and save time.
  • Report on Test Results and Metrics: It's important to establish what should actually be measured, and to present the results in a clear and actionable manner.
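
For that final step, even a very simple roll-up is more actionable than raw pass/fail logs. Here is a sketch of a per-feature pass-rate metric – the `(feature, passed)` result format is an assumption made for illustration, not a standard:

```python
from collections import defaultdict

def pass_rates(results):
    """Roll raw test results into per-feature pass rates.
    `results` is a list of (feature, passed) pairs -- an assumed format."""
    totals = defaultdict(lambda: [0, 0])   # feature -> [passed, run]
    for feature, passed in results:
        totals[feature][1] += 1
        if passed:
            totals[feature][0] += 1
    return {feature: passed / run for feature, (passed, run) in totals.items()}

results = [("login", True), ("login", True), ("login", False),
           ("search", True), ("search", True)]
print(pass_rates(results))   # login ~0.67, search 1.0
```

A report like this points the team straight at the weakest feature rather than burying defects in a flat list.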

Other Kinds of Tests

While functional testing acts at the interface between an application and the system and users associated with it, there are other kinds of testing that contribute to software development and Quality Assurance (QA).

Unit testing is performed on the smallest elements of a system, such as individual classes within an application. Each component is tested to ensure that it properly handles input and output under normal operation, borderline use cases, and error conditions.
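
In Python, that typically looks like a `unittest` case with one method per condition – normal, borderline, and error. The `safe_divide` function here is invented purely as a component to test:

```python
import unittest

def safe_divide(a, b):
    """Tiny invented component under test."""
    if b == 0:
        raise ValueError("division by zero")
    return a / b

class SafeDivideTest(unittest.TestCase):
    def test_normal_operation(self):
        self.assertEqual(safe_divide(10, 2), 5)

    def test_borderline_case(self):
        self.assertEqual(safe_divide(0, 5), 0)   # zero numerator is valid

    def test_error_condition(self):
        with self.assertRaises(ValueError):
            safe_divide(1, 0)

# Run the case programmatically rather than via unittest.main()
suite = unittest.defaultTestLoader.loadTestsFromTestCase(SafeDivideTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```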

Integration testing looks at a sub-division of a system, to ensure that all processes within it are working together smoothly.

Regression testing is a two-part process, applied to fixed code. First, a confirmation test verifies the integrity of the fix itself; then a regression test on the application as a whole confirms whether or not the applied fix has broken any of the program's existing functionality.
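
The two parts can be sketched as a pair of checks around a bug fix. The `slugify` function and the bug report are invented for illustration: suppose `slugify("Hello  World")` used to produce `"hello--world"`, and the fix collapses repeated spaces.

```python
def slugify(text):
    # The fix: split on any run of whitespace, so repeated spaces
    # no longer produce doubled hyphens.
    return "-".join(text.lower().split())

# 1. Confirmation test: the reported defect itself is gone.
def test_fix_confirmed():
    assert slugify("Hello  World") == "hello-world"

# 2. Regression tests: behaviour that already worked still works.
def test_existing_behaviour_unbroken():
    assert slugify("Hello World") == "hello-world"
    assert slugify("one") == "one"

test_fix_confirmed()
test_existing_behaviour_unbroken()
```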

Smoke tests may be run as a final check, when the collaboration between developers and testers results in changes in code close to a finished product. The testing ensures that these changes have not destabilised the overall structure of the application or caused potentially fatal errors prior to release.
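
A smoke test is deliberately shallow – a handful of "does it even run?" checks, as in this sketch, where `FakeApp` is a stand-in for a real application object:

```python
def smoke_test(app):
    """Shallow pre-release checks: does the app start, respond,
    and shut down cleanly? `app` is a hypothetical application object."""
    checks = {
        "starts": app.start(),
        "responds": app.ping(),
        "stops": app.stop(),
    }
    # An empty list means the build is stable enough for deeper testing.
    return [name for name, ok in checks.items() if not ok]

class FakeApp:
    """Stand-in used only to demonstrate the harness."""
    def start(self): return True
    def ping(self): return True
    def stop(self): return True

print(smoke_test(FakeApp()))   # [] -> no smoke
```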

Usability testing validates each part of the software's GUI (buttons, text boxes, etc.) for visibility, interaction, ease of use, and compliance with relevant standards.

Browser compatibility tests are employed on Web and mobile applications to ensure the software's performance on various types and versions of browsers. The effects of changing server integration and links to third-party systems may also be tested.

As functional testing is often time-consuming, a hybrid approach combining the best fit of several relevant testing methods is wise.

Testing by Hand

Particularly where the software's user interface is concerned, a human tester is often preferred, as a real live person can easily spot both potential and actual problems in a program in use. By providing quick access to test results in a form that's easy for other testers and developers to understand, a human tester is the best option for applications receiving a lot of development input.

That being said, manual testing does of course take a long time, can't penetrate deep into the workings of many applications, and yields results that are subject to human error – though it remains essential nonetheless.

Indeed, at certain stages, the app must also be subjected to thorough rounds of scenario testing. Scenario testing is of great importance in manual testing, as its purpose is to anticipate and emulate the real-world situations users will face when using the app. It entails creating hypothetical situations a user might encounter, then manually running through each one to see if the app stands up to it. This helps testers evaluate the program's real-world adaptability, and exercises functions that are rarely used – or perhaps aren't tested thoroughly enough.

Scenario testing also involves recreating conditions where network connections are less than perfect, where a user has limited battery power, or where incoming calls, text messages and other alerts pop up.
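
One such scenario – a connection that drops and recovers – can also be emulated in code with a stand-in transport, rather than by physically degrading the link. A sketch under stated assumptions: the `sync` routine and the transport interface are both invented for illustration.

```python
class FlakyTransport:
    """Stand-in network transport that fails the first `failures` sends,
    emulating a connection that drops and then recovers."""
    def __init__(self, failures):
        self.failures = failures
        self.sent = []

    def send(self, payload):
        if self.failures > 0:
            self.failures -= 1
            raise ConnectionError("simulated network drop")
        self.sent.append(payload)

def sync(transport, payload, retries=3):
    """Invented app routine under test: retry the upload a few times."""
    for _ in range(retries):
        try:
            transport.send(payload)
            return True
        except ConnectionError:
            continue
    return False

# Scenario: two dropped connections, then success on the third attempt.
transport = FlakyTransport(failures=2)
assert sync(transport, b"notes.txt") is True
assert transport.sent == [b"notes.txt"]
```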

Indeed, testing expert Matt Heusser suggests that testers should purchase real devices, rather than relying on simulators and emulators that mimic scenarios. As sophisticated as these programs can be, he says, they are never as accurate as the real thing.

“If you are testing a mobile application that targets multiple devices, forget about emulators and simulators and get your hands on some real devices. You need not just one or two, and not just what you happen to have lying around (although that helps), but a real set of test devices. A good place to start: a modern Android phone, a current iOS device and a one-generation-back iOS device. If the test team includes more than two people, get two of each device, put them on the local wireless network and get testing.

“You can run to the local store, whip out your credit card and buy some devices. Don't tell me you can't. Oh, you say the problem is budget? That is different, and it's kind of the point: Find out if your organization is serious about mobile testing. If the organization isn't willing to invest a couple thousand dollars, that tells you a lot about the priority of mobile testing.”

Automated Testing

Automation is generally the preferred option for software testing. Tests can be re-used, and scripts written to perform repetitive tasks. The tests can cover a wider range of issues with greater accuracy, and provide formal processes for detecting and reporting on any defects found.

We use Code Climate for our automated code testing. We love it – it’s powered by open source analysis engines, and enables automated code review for test coverage, complexity, duplication and style for virtually any programming language, including Ruby, JavaScript, Go, PHP, CoffeeScript, and CSS.

Clear and Accurate Reporting

The communications linking the testing and development teams should be established at the outset of a project, with feedback and reporting in clearly defined terms which are agreed upon by all.

The testing team should also act as a mediator between the development team and the user community, as feedback and usability issues are reported back from beta tests, and ongoing version releases.

Reports should give a feature-by-feature view of the overall health and defects of an application that can be used as a template for its improvement.

For these purposes, it helps no end to have an issue and project tracking tool in your arsenal – and we can’t recommend anything more highly than JIRA. It’s perfect for planning, tracking, releasing and reporting, and integrates seamlessly with productivity tools you’re probably already using, such as HipChat, Bamboo and Bitbucket. Your whole team can see the app grow and evolve over time.

Want to know more, or need help building an app? Just drop me a line at krzysztof.marszalek@rst-it.com or visit our website.

Kris
Author

Krzysztof Marszałek

rst-it.com

I am the CEO of RST-IT.com - a software house specializing in building usable web and mobile apps for STARTUPS.
