Intelligent Testing

Intelligent Testing In An AI World

AI is here! Whilst it’s not quite Skynet, there is no denying that robots and automation are going to be key to innovation over the next five years, and it’s up to us to adapt to and embrace this change.

The focus on Quality & Testing in an AI universe is huge.

Let’s not stick our heads in the sand and hope that John Connor comes back in time to stop this revolution from happening. Instead, let’s stand up and embrace the challenges that are coming our way and champion the fact that the greatest database and IT system is still the one that sits within our skulls.

At their heart, AI systems break down into the following areas:

Business Requirements

As per any standard project lifecycle: what are the business’s requirements? What does the business wish to achieve?

Business Rules

What attributes need to be met for a situation to be accepted by the business? Alternatively, what happens when those attributes are not met as expected? In essence, to open a bank account there are certain industry rules that MUST be satisfied before an account can be opened (anywhere in NZ or globally); or, in online shopping, if a customer selects an item to purchase, the first business rule is to check whether stock is available, and so forth. A rule like the stock check can be written and validated in isolation, as sketched below.
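As an illustration only (the function, field names and values below are hypothetical), a rule of this kind can be expressed as a small, self-contained check that is validated on its own:

    # A minimal sketch of the "is stock available?" rule described above.
    # Names and data are illustrative, not taken from any real system.
    def stock_available(item_id: str, quantity: int, stock_levels: dict) -> bool:
        """Return True only if the requested quantity can be fulfilled."""
        return stock_levels.get(item_id, 0) >= quantity

    # Each rule sits in isolation, so it can be tested in isolation:
    assert stock_available("SKU-123", 2, {"SKU-123": 5}) is True
    assert stock_available("SKU-123", 2, {"SKU-123": 1}) is False
    assert stock_available("SKU-999", 1, {}) is False  # unknown item, no stock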

AI Core

The core of any AI system consists of two things: machine learning algorithms and good-quality data.

Machine Learning Algorithms, capable of both supervised and unsupervised learning:

Supervised Learning – Using a given set of input variables and known outcomes, a function is learned that maps the inputs to the desired output(s). The model is trained until it achieves a level of accuracy defined by the business.

Unsupervised Learning – There is no target or outcome variable to predict. Instead, the algorithm finds structure in the data, for example clustering a population into groups; this is widely used to segment customers so that each group can be targeted with specific interventions.
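A minimal sketch of the two styles of learning, assuming scikit-learn and NumPy are available; the data below is synthetic and purely illustrative:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))            # 200 examples, 2 input variables

    # Supervised: the desired output (a label) is known for every example,
    # so a function is learned that maps the inputs to that output.
    y = (X[:, 0] + X[:, 1] > 0).astype(int)  # illustrative target variable
    clf = LogisticRegression().fit(X, y)
    print("supervised accuracy:", clf.score(X, y))

    # Unsupervised: no target variable to predict; the algorithm simply
    # groups similar examples, e.g. clustering customers into segments.
    segments = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
    print("examples per cluster:", np.bincount(segments))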

Good Quality Data:

This enables the system(s) to learn how we, as end users, interact with an application, and eventually allows the AI system to predict what information we are going to need before we ask for it.
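Good-quality data can be checked before it ever reaches the model. A minimal sketch, assuming pandas and a purely illustrative customer table (the column names and values are assumptions, not from any real dataset):

    import pandas as pd

    # Illustrative customer data; in practice this would be the training data.
    customers = pd.DataFrame({
        "customer_id": [1, 2, 2, 4],
        "age": [34, -1, 29, 51],          # -1 is clearly an invalid value
        "country": ["NZ", "NZ", None, "AU"],
    })

    # Simple quality gates: completeness, uniqueness and valid ranges.
    problems = {
        "missing values": int(customers.isna().sum().sum()),
        "duplicate ids": int(customers["customer_id"].duplicated().sum()),
        "ages out of range": int((~customers["age"].between(0, 120)).sum()),
    }
    print(problems)  # any non-zero count is a data-quality defect to chase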

Application(s)

These don’t change much. The application still performs the expected functionality as before, but it now sits within an artificial intelligence framework where the “brains” of the application sit outside rather than inside the application, as sketched below. This means that applications become much easier to maintain and test.
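A minimal sketch of that separation, with the application delegating each decision to an external “brain” rather than hard-coding the logic inside itself; the interface and names here are hypothetical:

    from typing import Protocol

    class DecisionService(Protocol):
        """The 'brains' that live outside the application."""
        def decide(self, request: dict) -> str: ...

    class ShopApplication:
        """The application itself stays thin, maintainable and testable."""
        def __init__(self, brain: DecisionService):
            self.brain = brain

        def checkout(self, order: dict) -> str:
            return self.brain.decide(order)

    # In a test, the external brain can be swapped for a simple stub.
    class AlwaysApprove:
        def decide(self, request: dict) -> str:
            return "approved"

    app = ShopApplication(AlwaysApprove())
    print(app.checkout({"item": "SKU-123", "quantity": 2}))  # approved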

Testing the above

Treat the algorithm as a black box: we don’t need to validate it. What we need to test is the quality of the data being used, what the business rules are, that they are being applied correctly, and what the expected outcome should be (see the sketch below).
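In practice that means driving the system with known data and asserting on the rule outcomes, never on the algorithm’s internals. A minimal sketch, where decide_account_opening is a hypothetical stand-in for whatever system is under test:

    # Black-box style check: known input data in, expected rule outcome out.
    # decide_account_opening stands in for the real system under test.
    def decide_account_opening(applicant: dict) -> str:
        rules_met = applicant["age"] >= 18 and applicant["id_verified"]
        return "open" if rules_met else "declined"

    def test_account_rules_applied_correctly():
        assert decide_account_opening({"age": 25, "id_verified": True}) == "open"
        assert decide_account_opening({"age": 16, "id_verified": True}) == "declined"
        assert decide_account_opening({"age": 40, "id_verified": False}) == "declined"

    test_account_rules_applied_correctly()
    print("all rule checks passed")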

Finally, testing ensures the accuracy of the end-user experience, how the interface learns over time, and the consistency of the user experience across hardware and software.

To Sum It All Up
  • Accurate customer data is key for any artificial intelligence testing, and if you can’t access the customers’ data then you must create synthetic data (although this introduces risk).
  • Ensure that the business rules are fully understood and validated. The beauty of business rules is that, unless a new rule relies on the output of an existing rule, there is NO need for regression testing: each rule sits in isolation.
  • Don’t waste time validating the algorithm(s). They should never change, as the controls are kept within the business rules engine(s).
  • Predict the outcome based on the information you know from the rules and the data, then validate that prediction: know the amount of error you and your users are willing to accept.
  • Test with new data. Once you’ve trained the network and frozen the architecture and variables, use fresh inputs and outputs to verify its accuracy (see the sketch after this list).
  • Don’t count on all results being accurate. That’s just the nature of the beast. Whilst algebraic equations aren’t usually complex, combining many of them in a network can occasionally produce head-scratching results.
  • You can’t always explain a result by tracing the logic, so you have to test and accept an occasional bad result alongside each good result.
  • If it’s not good enough you may have to recommend throwing out the entire network architecture and starting over again.
  • Understand the architecture of the network as part of the testing process. Understanding how the network is constructed will help testers determine if another architecture might produce better results.
  • Focus on the consistency and accuracy of the end user experience/end-user decision making.
  • Realise that a true AI system will learn with each cycle of testing; therefore, the boundaries of your tests will expand with each new action learnt.
  • Communicate the level of confidence you have in the results to management and users. Machine learning systems offer you the unique opportunity to describe confidence in statistical terms, so use them.
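Several of the points above (testing with fresh data, agreeing an acceptable error rate, and reporting confidence statistically) can be combined into one evaluation step. A minimal sketch, assuming a trained model with a scikit-learn-style predict method and an agreed maximum error rate; both are illustrative assumptions:

    import numpy as np

    def evaluate_on_fresh_data(model, X_new, y_new, max_error_rate=0.10):
        """Score a frozen model on data it has never seen and report confidence."""
        predictions = model.predict(X_new)
        error_rate = float(np.mean(predictions != y_new))

        # 95% confidence interval for the error rate (normal approximation).
        n = len(y_new)
        margin = 1.96 * np.sqrt(error_rate * (1 - error_rate) / n)
        low, high = max(0.0, error_rate - margin), min(1.0, error_rate + margin)

        print(f"error rate: {error_rate:.3f} (95% CI: {low:.3f} to {high:.3f})")
        return error_rate <= max_error_rate  # pass/fail against the agreed limit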
Some other important considerations
  • Be realistic in your test scenarios. Three may well be sufficient to represent the best-case, average-case and worst-case scenarios.
  • You may not reach mathematical optimisation. We are, after all, dealing with algorithms that produce approximations (best guesses) and not exact results. Determine what level of outcomes are acceptable for each scenario.
  • Defects will be reflected in the inability of the model to achieve the goals of an application.