Testing methodology

GadgetGuy reviews hundreds of products each year, so a robust, well-defined testing methodology is vital.

We develop typical ‘use’ paradigms first: what are the top ten things this product must do, and to what standard, to meet consumer expectations?

We also look at the manufacturer’s performance claims. Too often these are written by over-enthusiastic marketers who have never used the product. We abhor marketing BS and will call it out.

Then we apply replicable testing tools and regimens that reflect real-world use.

Where do the review products come from?

Most items are provided by manufacturers at launch. After two decades of reviews, we are well known and well respected by them. All items must be available for at least 90 days from launch and ideally have an Australian website, pricing, local distribution and support. Reviews are current until the next version is available.

How we test

With smartphones, we use the device for at least a week, making sure to set up email accounts, browse the web, take pictures and, of course, make phone calls. Then we run benchmarking software such as Geekbench to produce performance ratings.

With computers, we use them to write the review. We measure the speed of everyday tasks like web browsing, email, writing and the odd game, although these days it’s either Windows or macOS, so there is little operational difference. We also run software including UserBenchmark, CrystalDiskMark and various stress tests.

With audio, we have a consistent playlist to check headphones and powered speakers. We also use a frequency response meter, decibel meter and tone generator.

And sometimes we have to invent tests and equipment to measure things like environmental humidity and temperature, actuation force, weight, and wear and tear.

Ratings are out of five

Each device is rated out of five points for Features, Value for Money, Performance, Ease of Use and Design. To a degree these ratings are subjective, but they generally reflect real-world use. If in doubt, we ask other reviewers for their opinion.

Our ratings reflect performance within a class. For example, a mass-market smartphone under $200 should never be compared to a flagship phone over $1,000. That is why a low-cost phone may score 5 out of 5 within its class where it might score 1 out of 5 against a more expensive one. We call it the fit-for-purpose test.

If a product fails to meet reasonable standards (a rating above three out of five), we generally don’t publish the review. Instead, we ask the manufacturer to address the issues we have found.

Can I write for you?

Sorry, no. Our testers have over one hundred years of cumulative experience and know what to look for. But we encourage you to comment on reviews if we have missed anything you are interested in.