DDT – Digital Driven Testing


Exploratory Testing, Continuous Integration and UI Automation in a Digital studio


It started when I took on a role with a predominantly .NET digital studio in Brisbane in late 2011. They had not had a testing team before, and it seemed I was the first full-time tester employed; there had, however, been a part-timer working there who later joined us full time.

They had some idea of where they wanted to be in terms of a testing strategy and were looking at hiring a team to help with plan implementation and provide the execution.

A few weeks in, and a ton of manual testing later, we got a third team member, a strong manual tester, which helped the cause. While we were trying to build an internal strategy, it seemed all the testing resource was being consumed by the workload, and it carried on this way until pretty much the end of the year.


Early in the new year, and overwhelmed by the amount of manual testing (more often than not it was actually “confirming” rather than testing), I put together an action plan of what I was going to tackle to cover at least a few of the immediate issues we were having.

1.     Implementation of some type of continuous integration (CI)
2.     Introduction of a context-driven testing approach, then inclusion of more exploratory testing
3.     Some kind of automation to cover the repetitive manual checks

The other team members were going to tackle issues such as documentation, processes and procedures.


Number 1: Continuous Integration


First off was to tackle the mess that was CI. I found out there had been previous attempts to get this running; they had instances of TeamCity, Hudson and CruiseControl.


I set out on a mission to find out what would be best for a studio that would potentially do builds for .NET, iOS, Android and Flash projects, although .NET builds were the priority.

I did some research and a significant amount of trial and error, threw together a business case and came up with a proposal to use Jenkins CI due to a number of factors:

1.     It was open source (free)
2.     It had a variety of plugins we could use out of the box
3.     Setup and configuration were fairly easy
4.     It would work in an environment that uses multiple languages

I got it running on a Windows 2008 box (ideally I would have put it on Linux, but I didn’t have that option available), configured a Mac mini to process the iOS builds (Xcode is required), and set up SSH between the two.
Then I set up a couple of Windows VMs: one to run an Android emulator for the functional Android tests, and another to share the functional browser UI tests (due to the overhead these created).

I created some reporting on the CI, such as NCover, Simian and FxCop, so that we got a bunch of code stats, as well as reporting for the BDD functional tests.

A year later this is still going strong: running between 40 and 50 different builds, pushing the iOS builds to the Mac mini, supplying informative reporting and, after some config transform manipulation, rolling the web apps straight out to a staging server upon a successful build.
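To give an idea of what the config transform manipulation looks like: with .NET web apps, an XDT transform file per build configuration rewrites Web.config at build time, so the staging deployment automatically picks up staging settings. A minimal sketch (the connection string name and server are purely illustrative):

```xml
<?xml version="1.0"?>
<!-- Web.Staging.config: applied over Web.config when building the Staging configuration.
     Names and values here are illustrative, not from the original projects. -->
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <connectionStrings>
    <!-- Swap the connection string to point at the staging database -->
    <add name="Main"
         connectionString="Server=staging-db;Database=App;Integrated Security=true"
         xdt:Transform="SetAttributes" xdt:Locator="Match(name)" />
  </connectionStrings>
  <system.web>
    <!-- Strip debug="true" for the staging build -->
    <compilation xdt:Transform="RemoveAttributes(debug)" />
  </system.web>
</configuration>
```

With something like this in place, the CI job only needs to build the Staging configuration and copy the output across; no hand-editing of config on the server.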

Lessons learned:

- Provisioning profiles for the different iOS projects building on the slave can be a nightmare; do your research and apply wildcard provisioning profiles
- Don’t update Jenkins until you have checked the release notes and the latest bugs raised; updates did on occasion introduce new issues
- Give Jenkins its own AD account
- Run functional UI tests as a separate job from the builds (they slow builds down beyond belief)

Number 2: Exploratory/CDT Testing

A lot of the projects we had were tight on timeframes and budgets, often had very little in the way of specifications and business rules, and the clients did little testing themselves (so we didn’t have to supply them with mounds of meaningless test cases). This, I thought, was the perfect environment for unleashing some ET.

I had come across a number of articles and discussions on context-driven testing (CDT) and ET, with some great inspiration coming from James Bach, Michael Bolton and Cem Kaner; check out their sites for further information on the subjects.

I managed to convince a number of project and account managers to give this a shot in place of just the standard suite of test cases we would usually deliver. Instead, we supplied a number of test session reports and a test case spreadsheet with a coverage matrix. UAT clients often required some direction, and we provided this with a series of high-level test cases that contained no direct step actions but instead gave ideas of what needed to be tested; we also defined the expected results. This allowed the testers to use their initiative and take their own approach to the tests supplied.

The plan was to define at least some of the charters (test session objectives) in the project test plan, but this never really took off: less than 15% of all projects had budget for a test plan, and by the time we got the specifications, the test plans had often already been drawn up.

Instead it was up to the project’s testers, PMs and developers to define the charters based on the documentation, information and knowledge they had of the project.

Session report example


The sessions allowed the testers to record observations, risks, bugs, expectations, questions, assumptions and outcomes. The idea was that the charter would be achieved within a set period of time; if any additional exploring needed to be done, new charters would be created and reviewed, and the outstanding ones would be reassessed and reordered based on the results of the previously executed sessions.

PMs and testers were happy with the approach and the results; so too were the clients, both with the information and documentation they received and with the transparency the session reports gave them.

It wasn’t long before this approach was adopted studio wide and all projects that went out after these initial ones included an amount of exploratory testing based on a CDT approach.

Lessons:
- Tailor the session reports to the project and your domain; there’s no best way to go about it
- Start implementing on some smaller projects to gather support
- While keeping the objectives in mind (applying the charter), allow the freedom to be dynamic
- Don’t be too strict on timings; allow some freedom either side


CDT/ET side effect (a good one!):

BDD (Behaviour Driven Development) entered the picture, thanks mainly to a book I picked up called “The Cucumber Book” and further reading of a number of articles by Dan North.

We actually ended up writing a lot of the tests around scenarios drawn up using a BDD approach. This made them easy for clients, project managers and other testers to read and accept, and it was a simple exercise to enter these acceptance tests into SpecFlow (a .NET port of Cucumber) and then run them from Visual Studio.
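For anyone unfamiliar with the style, a BDD scenario is plain Given/When/Then text that both clients and SpecFlow can read. A hypothetical example (the feature and its details are invented for illustration, not from the original projects):

```gherkin
Feature: Contact enquiry form
  As a site visitor
  I want to submit an enquiry
  So that the studio can get back to me

  Scenario: Successful enquiry submission
    Given I am on the contact page
    When I fill in the enquiry form with valid details
    And I submit the form
    Then I should see a confirmation message
```

The same file serves as the acceptance criteria the client signs off on and as the executable test SpecFlow binds to step definitions.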


Number 3: Automation implementation (Functional UI)


Of the three initiatives, this one took the most resource while probably achieving the least. A number of factors contributed to this; they are listed in the lessons learned.

While I’m not going to cover too much detail, I will endeavour to explain some of the ways we approached this and the tools chosen.

As with most things, you need backing from management and support from other members of the team; I have tried on a number of occasions to push tools and processes by myself, and have found these attempts fail more often than not.

I did a fair bit of research into test automation and looked at a number of different tools, weighing which could do the job most efficiently, were easy to learn, were maintainable, and came at a reasonable price. Most I found were way too expensive for a digital domain where only some of the jobs were likely to get any kind of automation, and a lot of the others were overly complex, so ongoing training would be a big factor.

After the research and a lot of trial and error, it was decided that we would use SpecFlow in Visual Studio: SpecFlow would handle the BDD-written tests and the step definitions, and we could use C# code to drive the browser functionality through Selenium WebDriver. There was no real outlay, as we already had these tools available or they were open source.
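As a rough sketch of how the pieces fit together (class names, selectors and the URL are illustrative, not from the original projects), a SpecFlow step definition binds a Gherkin step to C# code that drives the browser via Selenium WebDriver:

```csharp
using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Firefox;
using TechTalk.SpecFlow;

[Binding]
public class ContactFormSteps
{
    private IWebDriver _driver;

    [BeforeScenario]
    public void StartBrowser()
    {
        // Fires up a real browser before each scenario
        _driver = new FirefoxDriver();
    }

    [Given(@"I am on the contact page")]
    public void GivenIAmOnTheContactPage()
    {
        // Base URL is a placeholder for the staging site
        _driver.Navigate().GoToUrl("http://staging.example.com/contact");
    }

    [Then(@"I should see a confirmation message")]
    public void ThenIShouldSeeAConfirmationMessage()
    {
        // Selector is hypothetical; asserts the confirmation element is visible
        Assert.IsTrue(_driver.FindElement(By.CssSelector(".confirmation")).Displayed);
    }

    [AfterScenario]
    public void StopBrowser()
    {
        _driver.Quit();
    }
}
```

SpecFlow matches the regex in each attribute against the scenario text, so the plain-English feature files the testers write end up executing this code with no glue beyond the bindings.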

I initially tried to get the testers to write the features/scenarios, load these into Visual Studio (VS) and generate the step definitions, then work with one of the project developers to write the code.
This had only mild success: I often found the testers struggled with using an IDE (VS), and the overhead of having them run the app locally, add the data and then push it back up to version control (VCS) required a lot of training and resource, which slowed the automation implementation initially.

I found it was easier (on most occasions) to hand this off to the developers, but we then struggled to get a consistent resource to write the code for the framework and for the tests, and to maintain them when they broke. Over time, though, we were able to get more developers on board; this helped drive it forward, and they were able to champion a lot of the work.

So we ended up getting the resource and budget to run this on a number of the bigger website projects.
Tests were written and implemented in VS, and the CI had scripts to run the BDD scenarios and execute the tests using MSBuild.
These were set up as child jobs of the staging build (we used a three-tier approach: dev, staging, UAT) and would run after the staging build completed.
Tests were pretty much end to end (more of a site smoke test), used to confirm site stability and some of the functionality that was high risk.
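The child job itself doesn’t need to be elaborate; on a Windows Jenkins node it can boil down to a batch build step along these lines (project paths, assembly names and the use of the NUnit console runner are illustrative assumptions, not the exact setup described above):

```bat
REM Build the UI test project in Release, then run the SpecFlow-generated NUnit tests.
REM Paths and names are placeholders.
msbuild UiTests\UiTests.csproj /p:Configuration=Release
nunit-console.exe UiTests\bin\Release\UiTests.dll /xml=nunit-result.xml
```

Publishing the resulting XML via the Jenkins NUnit (or similar) report plugin then gives pass/fail trends per build with no extra effort.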

None of the projects really achieved an ideal level of coverage, as the projects often ran over budget and over time. I don’t believe this was due to the inclusion of automation; it was more likely caused by unrealistic expectations and poor management. Other projects were too small to warrant the automation overhead needed, so we often did not implement any.


Lessons learned:

- Keep it simple; aim for small tests that provide as much coverage as possible. This sounds pretty straightforward, but automated tests can become complicated very quickly.
- Pick projects that have enough resource to justify the extra overhead; don’t try to push it for all projects
- Get management and stakeholder support, and involve the developers to help build the framework and to have them available for test scripting if needed
- Work out test case storage and maintenance of the tests early on
- Try to offset some of the framework costs against client project work and bill some internally
- Start with high-level functional UI tests: simple, fast and easy to maintain (a series of smoke tests is a good start)
- Try to secure developer resource in advance
- Good test automation can be achieved with open source (I have also achieved this in Rails and PHP domains using the same or similar tools)

Reading ideas:

Lessons Learned in Software Testing
The Cucumber Book
Explore It!
The Buccaneer Scholar


Title info:

DDT was a wrestling move (back when WWF was a federation) created and made famous by Jake “The Snake” Roberts, believed to be named after the toxic insecticide described as a “persistent organic pollutant”.


Not surprisingly, it tends to have similarities to the practice of factory testing.
