So what's this all about?
Many approaches to software testing exist, but few acknowledge the small window of time a software tester is given during any build cycle to do their job.
If you're a software tester you will know the pain of testing time being squeezed while the same amount of work is still expected to be delivered.
The Adaptive Software Testing Approach is a way of applying software testing that makes the best use of the time allocated to testing on any given project.
The origin of the Adaptive Software Testing Approach
As testers we have a responsibility to check the quality of software. We have a responsibility to make the best use of our time and to report what we find to the parties invested in the quality of the product (developers, product owners etc.) in a way which is clear and concise (bug ticket format). We also have a responsibility to report test metrics to management, giving them an easy-to-interpret overall view of the quality of the product.
In my many years as a tester I have used various approaches, worked within many software delivery methodologies, and observed and tried what works and what doesn't when it comes to being a tester in a delivery team. These observations and experiences are what I have used to formulate the Adaptive Software Testing Approach.
The purpose of this website is to share this approach with you and hopefully help you build the best software testing process you possibly can. Whether you are a fully fledged tester, an engineering manager looking to build a testing function, or a junior tester taking their first steps into the software industry, I hope that the clear and concise approach to testing tasks presented by the Adaptive Software Testing Approach will serve you well.
Giving a tester autonomy & making the best use of a tester's time
A tester should be given the autonomy to decide how they will test a feature. We are employed to make decisions and trusted to execute them properly.
The Adaptive Software Testing Approach means each testing task is approached individually, based on risk and time constraints.
For example: the testing of a small one-line change can be acknowledged with a comment on the ticket and a short list of the scenarios covered, but a large new feature will require the tester to put more consideration into how to test it, and to estimate the time it will take to complete testing.
If test cases are required ahead of time for a testing task and it feels comfortable to write them, write them (usually where complex functionality will exist).
If time constraints mean full test cases can't be written, it may make more sense to write 'throw away' scripts which cover the main functionality and guide you to open up testing around it.
If the nature of the project means the AUD needs to be explored because documentation or understanding of the expected behaviour is limited (proofs of concept, prototypes), exploratory testing should be applied.
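The decision procedure above can be sketched in code. This is a minimal, hypothetical illustration: the field names, thresholds, and approach labels below are my own assumptions for the example, not part of the Adaptive Software Testing Approach itself.

```python
from dataclasses import dataclass

@dataclass
class TestingTask:
    change_size: str            # illustrative: "small" or "large"
    behaviour_documented: bool  # is expected behaviour well understood?
    time_for_full_cases: bool   # is there time to write full test cases?

def choose_approach(task: TestingTask) -> str:
    """Pick a testing style for one task, based on risk and time constraints."""
    if not task.behaviour_documented:
        # Prototypes / proofs of concept: explore the application first.
        return "exploratory testing"
    if task.change_size == "small":
        # A one-line change: a ticket comment plus a short scenario list.
        return "ticket comment with scenario list"
    if task.time_for_full_cases:
        # Complex functionality with time available: write cases up front.
        return "full test cases"
    # Otherwise fall back to throw-away scripts covering main functionality.
    return "throw-away scripts"
```

The point of the sketch is that the choice is made per task, not fixed per project: a small documented change and a large undocumented prototype flow through the same function and come out with different approaches.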
The central repository
Alongside these testing activities, a central repository of test cases covering all functionality must be maintained. The test cases within it can then be automated (based on priority), used to guide a manual regression test cycle, used to report coverage to stakeholders, and used to help new team members learn the product. The central repository can also serve as documentation of the application's expected behaviour, expressed as test cases - this is especially handy in an environment where prototyping is happening.
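As one possible shape for such a repository, here is a minimal sketch using plain data records. The field names, priorities, and sample cases are illustrative assumptions only; a real repository would usually live in a test management tool or version control.

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    case_id: str
    feature: str
    expected_behaviour: str
    priority: int         # 1 = highest; drives automation order
    automated: bool = False

# Hypothetical sample repository for illustration.
repository = [
    TestCase("TC-001", "login", "valid credentials reach the dashboard", 1),
    TestCase("TC-002", "login", "invalid password shows an error", 1),
    TestCase("TC-003", "settings", "theme choice persists after reload", 3),
]

def automation_candidates(repo, max_priority=1):
    """High-priority cases not yet automated: the next automation targets."""
    return [tc for tc in repo if tc.priority <= max_priority and not tc.automated]

def coverage_by_feature(repo):
    """Simple per-feature case count, usable as a coverage report to stakeholders."""
    summary = {}
    for tc in repo:
        summary[tc.feature] = summary.get(tc.feature, 0) + 1
    return summary
```

Even this small structure supports the uses described above: filtering by priority gives the automation backlog, the per-feature summary gives a stakeholder report, and the expected-behaviour text doubles as documentation for new team members.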