From the moment I decided to start my own media outlet, I knew I would have to cover many categories alone, because I still don’t have the resources (aka budget) to hire several editors to take over different product categories. Thankfully, I am a jack-of-all-trades with a strong interest in everything that requires electricity to operate. Still, my time is limited, so I have to work smart: I already hit 12-14 hours per day, without weekend breaks, and I cannot push beyond that.
In most categories, I have developed methodologies that require minimal babysitting. In other words, you press a button, all tests are automatically executed, and all results are automatically exported and processed.
I already do that in storage reviews.
In power supplies, I have cut testing time tremendously thanks to a custom application I have been developing for more than 14 years.
For coolers and chassis thermal load testing, I have made a special app that runs all tests and gathers all results.
I am also preparing something for monitors; it still needs work to finish, and it will be tough.
GPU Automatic Benchmarking
For an IT site or a YouTube channel, and Hardware Busters is both, two product categories are among the most important: GPUs and CPUs. Thankfully, most tests are the same between these two categories, but GPUs in particular require lots of time to finish all benchmarks, gather the results, and present them nicely and intuitively. After several GPU reviews, I realized that doing all game benchmarking manually is not the way to go. I lose precious time, it is super boring, and on top of that, an automatic mode can produce more accurate results, since a human cannot play the same scene in several games the same way over and over again.
Yes, there are benchmarks like 3DMark and several games with embedded benchmarks, but you still need a way to run all of them in a series, change resolutions (and other game settings if required), and, why not, have Powenetics logging power (and FPS, once I find a way to synchronize PresentMon) simultaneously.
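The orchestration part of this idea, running every benchmark at every resolution in series, with a logger started alongside each run, can be sketched in a few lines. This is only a Python sketch of the logic (the real tool will be a script in something like AutoIt), and all game names, executables, and flags below are placeholders, not real game or Powenetics interfaces:

```python
import itertools
import subprocess

# Hypothetical benchmark matrix: each game is run at every resolution.
# Executables and flags are placeholders, not real game CLIs.
GAMES = {
    "game_a": ["game_a.exe", "-benchmark"],
    "game_b": ["game_b.exe", "--bench"],
}
RESOLUTIONS = ["1920x1080", "2560x1440", "3840x2160"]

def build_run_plan(games, resolutions):
    """Expand the matrix into an ordered list of (game, resolution) runs."""
    return list(itertools.product(games, resolutions))

def run_all(games, resolutions, launch=subprocess.run):
    """Run every benchmark in series, starting a (hypothetical) power
    logger alongside each run. `launch` is injectable for testing."""
    for game, res in build_run_plan(games, resolutions):
        logger = ["powenetics_logger.exe", "--tag", f"{game}_{res}"]  # placeholder
        launch(logger)                       # start power logging for this run
        launch(games[game] + ["-res", res])  # blocks until the benchmark exits
```

The point of the injectable `launch` parameter is that the scheduling logic can be tested without actually starting any games.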
As far as I know, only TechPowerUp’s owner and GPU editor has managed to build a fully automatic GPU testing suite so far, which is why he can produce so many GPU reviews in a limited period. Everyone else, including me, must do most of the work manually, losing precious time. Several long-time GPU reviewers have created some sort of automation, but nothing close to what TPU does. My aim is to build something that offers full automation in GPU benchmarking, which I could easily modify to run CPU benchmarks as well.
First Problems and a Possible Solution
There are some problems I have to deal with first, and I will explain them in the following lines. First of all, you need to start all games in series and also be able to start logging while the benchmark or gameplay is in progress. This is not hard if you use a scripting language like AutoIt.
I just started messing around with AutoIt, and it looks like a great asset for achieving my goal. Its ability to automate procedures is invaluable; among its features, the simulation of keystrokes and mouse movements will be crucial. So starting the games automatically, changing their settings to select different resolutions, and starting the frame-capture and power-analysis tools all look feasible through AutoIt.
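In AutoIt this boils down to timed `Send()` and `MouseMove()` calls recorded per game. As a language-neutral sketch, the same idea is a per-game input script replayed against an injectable key-sending backend (the keys and delays below are made-up examples, since a real script must be recorded for each game’s menus):

```python
import time

# A per-game input script: (delay_seconds, key) pairs, e.g. to navigate
# a settings menu and start the embedded benchmark. These keys are
# hypothetical; a real script would be recorded per game.
BENCH_SCRIPT = [
    (2.0, "ESC"),    # open the in-game menu
    (0.5, "DOWN"),   # move to the settings entry
    (0.5, "ENTER"),  # confirm
    (1.0, "F9"),     # start the embedded benchmark (placeholder hotkey)
]

def replay(script, send_key, sleep=time.sleep):
    """Replay a keystroke script. `send_key` is the injection backend:
    AutoIt's Send(), a Python input library, or a recorder in tests."""
    for delay, key in script:
        sleep(delay)
        send_key(key)
```

Keeping the script as data rather than hard-coded calls makes it easy to maintain one recording per game and per resolution.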
Who will play the games? Bots?
And we reach the main problem, which is none other than how the game will be played or, to put it better, who the game’s player will be. This is the million-dollar question: how do you pull this off? I need a generic solution that works in every game, because I will frequently have to add new games to my automation routines to keep up with developments in this field.
I could have a dummy player by only sending keystrokes and mouse moves according to each game scene’s requirements. This is not an elegant solution, but it can work, since I only have to play a scene for a limited period, usually around 30 seconds, to extract the required frame and power information. It would be good to have a trainer running in the background to protect the character from dying and ending the game session earlier than expected. Another problem is that if something goes wrong, I won’t get any notification: the scripting language will still believe the game session is running, and the results won’t be correct. I need a way to constantly monitor the whole process, with some fail-safes in case something breaks. Otherwise, an entire benchmark session running for many hours can go wrong, and I will only find out once I look at the results, since I won’t be watching the procedure. The whole point is to have it in a “start testing and forget” mode.
The difficult way is explained in this video. Briefly, the creator uses a capture card to see what is happening on the screen and, with the help of an AI Python library, controls a bot to which he sends commands through a hacked Logitech dongle. I liked his approach very much, and it doesn’t seem too hard to do, given my coding background, but I am unsure whether I need such an advanced bot just to measure GPU performance.
What Lies Ahead
We are currently moving to the new building we bought in Cyprus, so for the next two weeks I won’t be able to do much in this area, but soon enough, I will start coding with AutoIt, which seems to be the best tool for the job. My priority is to automate the testing procedures in games with embedded benchmarks; then I will move on to the other games. Moreover, since I want power consumption data along with FPS, I will have to fix an FPS reporting issue I face in the Powenetics application. I could also add a CLI to it for easier control. None of the above will be easy, since at the same time I have to deal with pending reviews and my work at Cybenetics. But I am excited to start this new venture; hopefully, I will have results soon!
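A CLI on the power-logging side would let the benchmark script start and stop logging without touching the GUI. Here is one possible shape for such an interface, sketched with Python’s argparse; every flag name here is an assumption for illustration, not the real Powenetics interface:

```python
import argparse

def build_parser():
    """A possible command-line interface for driving a power-logging app
    from a benchmark script. All flag names are hypothetical."""
    p = argparse.ArgumentParser(prog="powenetics-cli")
    sub = p.add_subparsers(dest="command", required=True)

    start = sub.add_parser("start", help="begin logging to a CSV file")
    start.add_argument("--output", required=True, help="CSV output path")
    start.add_argument("--interval-ms", type=int, default=100,
                       help="sampling interval in milliseconds")

    sub.add_parser("stop", help="stop logging and flush the file")
    return p
```

With something like this, the automation script would simply shell out to `powenetics-cli start --output run1.csv` before a benchmark and `powenetics-cli stop` after it.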