Predict Testing

Using Data to Predict Testing

Whether you are using an Agile or Waterfall methodology for your testing, and whether you work in two-week sprints or three months of intense functional testing, our testing process remains the same.

For each sprint or weekly deployment from development, we:
  • identify what functionality is being delivered,
  • determine which use cases need to be validated,
  • create test cases, manual or automated, to validate that functionality (a minimal example follows this list),
  • execute those test cases,
  • deploy the code into production or elsewhere.
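
To make the "create and execute test cases" steps concrete, here is a minimal automated example written with pytest. The checkout_total function and its expected behaviour are invented purely for illustration; they do not come from any particular product.

    # test_checkout.py -- run with: pytest test_checkout.py

    # A hypothetical function under test, standing in for a piece of
    # functionality delivered by development in a given sprint.
    def checkout_total(prices, discount=0.0):
        """Sum a basket of prices and apply a fractional discount."""
        return round(sum(prices) * (1 - discount), 2)

    # Automated test cases validating that functionality.
    def test_checkout_total_without_discount():
        assert checkout_total([10.00, 5.50]) == 15.50

    def test_checkout_total_with_discount():
        assert checkout_total([10.00, 5.50], discount=0.10) == 13.95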


We put a tick in the box to say we have validated the functionality, but have we really?

For example, suppose the project team have identified functionalities 5, 6, 7 and 15 as the highest priorities for testing; the majority of the testing effort is therefore placed on those areas.

However, by analysing the customer's data upfront, a different picture appears. We can see where actual customers and end users use the functionality the most and, therefore, where we should prioritise our efforts.
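
One lightweight way to build that picture, as a minimal sketch, is to aggregate usage events per functional area from whatever analytics or log export is available. The usage_events.csv file and its functionality column below are assumptions for illustration, not a real export format.

    import csv
    from collections import Counter

    def usage_by_functionality(path):
        """Count how often each functional area appears in a usage-event export.

        Assumes a CSV with one row per recorded user action and a
        'functionality' column naming the area that was used.
        """
        counts = Counter()
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                counts[row["functionality"]] += 1
        return counts

    if __name__ == "__main__":
        usage = usage_by_functionality("usage_events.csv")  # hypothetical export
        for area, hits in usage.most_common():
            print(f"{area}: {hits} recorded uses")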

When we overlay the areas of functionality that are used the most against the priority with which that functionality was tested, the customer's data shows that in some areas (functionalities 1, 2, 5 and 6) we have over-tested; that effort has been wasted, as these areas are not used as much by the customer.

Conversely, for the functionality we gave a low testing priority (functionalities 8, 9, 10, 11, 12 and 13), the customer's data indicates high levels of usage.
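
A simple way to make this comparison explicit is to normalise test effort and usage into shares and flag the largest gaps. The sketch below assumes both figures are available as plain dictionaries; the numbers are placeholders for illustration, not the figures from the example above.

    def effort_vs_usage(test_effort, usage):
        """Compare each functionality's share of test effort with its share of usage.

        A positive gap means the area received more test effort than its usage
        warrants (over-tested); a negative gap means the opposite (under-tested).
        """
        total_effort = sum(test_effort.values())
        total_usage = sum(usage.values())
        gaps = {}
        for area in test_effort.keys() | usage.keys():
            effort_share = test_effort.get(area, 0) / total_effort
            usage_share = usage.get(area, 0) / total_usage
            gaps[area] = effort_share - usage_share
        return sorted(gaps.items(), key=lambda kv: kv[1], reverse=True)

    if __name__ == "__main__":
        # Placeholder figures, purely for illustration.
        test_effort = {"func_5": 40, "func_6": 30, "func_9": 5, "func_12": 5}
        usage = {"func_5": 100, "func_6": 80, "func_9": 900, "func_12": 700}
        for area, gap in effort_vs_usage(test_effort, usage):
            label = "over-tested" if gap > 0 else "under-tested"
            print(f"{area}: {gap:+.2f} ({label})")

Areas with the largest negative gaps are the strongest candidates for extra test effort in the next sprint.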

Your customer's data is invaluable: it gives you the true picture of functional priority and of how an application is actually used. By analysing this data, you can add real value to a project's quality and testing, and ensure customers and end users get the best from the end product.