Planit Index 2013
WRITTEN BY
G.VANDENBERGEN POSTED ON 1 Nov 2013

What can the testing industry learn from itself?

As a Test Analyst at Speedwell, I had the opportunity a couple of weeks ago, along with the rest of the QA team, to take time out of our busy schedules to attend the Planit Testing Index 2013 (AU & NZ), hosted at the Hilton in Brisbane. The seminar was presented by Chris Carter, Managing Director of Planit, a national and international software testing and business analysis consulting company.

 

Each year Planit runs the “Testing Index: The Benchmark in Software Testing and Systems Assurance” in major cities across Australia and New Zealand. The index surveys a range of companies from various sectors including Financial, Software Development & IT, Government, Telecoms, and other areas. This year the index attracted 303 respondents, with up to 40% of those coming from large organisations with 2000+ staff.

 

Respondents by industry segment

Respondents by organisation size

 

It’s important to note that Speedwell falls into the Software Development & IT industry segment and the 1-99 staff organisation size bracket, each representing 18% of respondents. This means the information presented in the Planit seminars is geared more towards corporate environments than the boutique, more personalised context in which Speedwell excels. In future it would be interesting to see statistics broken down by industry sector and business type, particularly for those more closely aligned with Speedwell’s specialties.

 

The index covers 5 broad topic categories:

  • Project budgets, allocations, and outcomes
  • Methodologies used and their level of success
  • Performance testing and test automation adoption
  • Popular tools for test management and technical testing
  • Project outlooks and job prospects for 2014

 

The seminars offer a wealth of information about the testing industry in Australia and New Zealand. This post focuses on a handful of takeaways relating to business cases and budgets for testing, testing activity and scheduling, and the impact requirements definition has on project outcomes.

 

One of the questions asked respondents what their primary business case justification was for investing in testing. At 67%, the top response was delivering reliable software to meet customer expectations and increase satisfaction. Additionally, 18% of primary business cases and 46% of secondary business cases were concerned with reducing costs, particularly software maintenance costs.

Primary business case justification for testing

 

These business cases were reinforced by the observed benefits of testing: 78% of respondents saw improved project delivery in line with expectations, and 63% saw reduced maintenance costs.

Benefits observed when increasing investment in testing

 

This recognition of the importance of testing is reflected in its project budget allocation, with testing receiving an average of 20% of total project spend, second only to development at 34% and followed closely by requirements definition at 19%. Chris Carter highlighted that investment in requirements definition has gradually increased as organisations recognise the correlation between excellent requirements and project success.

Budget allocation across project phases

 

The make-up and distribution of testing activity has been consistent across the last three Planit Testing Indexes, with test execution taking the largest share of allocated time at 39%.

 

Breakdown of testing activity

 

It’s interesting to note that requirements review has the smallest allocation at 11%. One of the sticking points for Chris Carter, recurring at each Testing Index, is that requirements definition is a consistent source of problems on surveyed projects. Referring back to the budget allocation chart, 19% of budget is dedicated to requirements definition, yet the testing task of requirements review is allocated only 11% of total testing time. This could indicate an opportunity to devote more testing resources to requirements review in earlier project phases, potentially saving testing and development time in later phases.

 

Further to this, Planit surveyed respondents on the actual vs. desired timing of when testing begins in the project life cycle. Fewer projects are beginning testing during the respondents’ preferred phase, requirements definition, which is now at its lowest level since 2009.

 

When testing started - actual vs. desired

 

Chris Carter referred to the space between the two major peaks of desired vs. actual start of testing as the “risk gap”, highlighting the impact that delaying testing involvement can have on a project’s outcome. The importance of excellent requirements definition is further supported by respondents’ answers on the primary causes of project failure, in which “poor or changing business requirements” ranked significantly higher than any other result at 70%, followed at a distant second by “reduced budget” at 12%.

Primary causes of project failure

 

Despite the corporate-focused context of the seminar, I believe the important takeaways are universal across industry sectors and company types: upfront investment in requirements definition, across both development and testing disciplines, can save time and money later in the project life cycle. It will be interesting to see how the art and science of requirements engineering has progressed by the next Planit Testing Index in 2014. Will the fields of testing and software development heed Chris Carter’s words and address these challenges, or continue to repeat the same mistakes?

 

I would like to thank Planit and Chris Carter for sharing this information with the local industry (with food and drinks provided, all for free!). For more information about Planit and their Testing Index please see their site at http://www.planit.net.au/.

CATEGORIES
Future Trends
