Possible Ways to Automate In-game Movements Using AI
The biggest challenge when trying to automate in-game movements in a real gameplay environment is the dynamic nature of the game, which makes predictions difficult. Random in-game events can happen at unexpected times and interfere with the testing process. That is why we need some way to analyze what is happening on the screen and respond to each event accordingly.
In this case, using AI makes things a lot easier because we can add custom movement logic that is executed only when a particular event is detected. For instance, if there is an obstacle in the way of our playable character, we can tell it to either jump over the obstacle or move around it. This still involves defining the actions to be taken sequentially, but it makes testing more organic because some unpredictable events can now be handled. Using AI to recognize what is happening on the screen therefore makes it much easier to add automated in-game movements, as unexpected situations can be detected and acted upon.
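To illustrate the idea, here is a minimal sketch of such conditional movement logic. It assumes a hypothetical detection result with a `can_jump_over` attribute and a typical WASD/space control scheme; the key presses are sent with PyDirectInput, which is discussed later in this article.

```python
import time
import pydirectinput

def handle_obstacle(obstacle):
    """React to whatever the detector reported for the current frame."""
    if obstacle is None:
        pydirectinput.keyDown('w')        # nothing in the way, keep moving forward
    elif obstacle.can_jump_over:
        pydirectinput.press('space')      # small obstacle, jump over it
    else:
        pydirectinput.keyDown('a')        # larger obstacle, strafe around it briefly
        time.sleep(0.5)
        pydirectinput.keyUp('a')
```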
That said, it is important to note that even with AI, some events might be difficult to handle depending on the in-game situation. For example, fast-paced games or poorly lit areas can make object detection difficult for the AI. Even for slower-paced games, it is important to turn off some post-processing effects, such as motion blur, which otherwise might interfere with the recognition process.
How can in-game movements be automated?
To automate in-game movements, we first need to describe the sequence of actions to be taken in an algorithmic manner. To make the logic easier to follow, we can draw a flowchart that visualizes this algorithm.
Afterward, we need to choose a programming language. Python is an easy choice because it offers a sizable ecosystem of libraries that make achieving our goal much easier. Do note that while these libraries let us automate movement sequentially, they offer no built-in AI integration; we need to create that integration ourselves.
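As a sketch of what such a scripted sequence can look like before any input library is wired in, the steps can be kept as plain data and executed one after another. The step list below is purely illustrative; in a real test the print call would be replaced by the keyboard and mouse commands discussed in the following sections.

```python
import time

# Hypothetical scripted route: (description, duration in seconds)
STEPS = [
    ("walk forward", 2.0),
    ("jump", 0.2),
    ("strafe right", 1.0),
    ("interact with door", 0.5),
]

def run(steps):
    for description, duration in steps:
        print(f"executing: {description}")   # placeholder for real input commands
        time.sleep(duration)

if __name__ == "__main__":
    run(STEPS)
```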
Challenges with Python libraries
For automating keyboard actions, we can use the Python library PyAutoGUI. Even though it works flawlessly for simple use cases, such as entering text into a browser or a simple windowed application, it has trouble with in-game character controls. This is related to how keyboard events are processed when a game window is in focus. To circumvent this issue we can use another library called PyDirectInput, which uses the native Windows API to simulate an actual keyboard event. That addresses the keyboard event registration issue, but note that this is a Windows-specific solution and alternatives must be used if GNU/Linux or macOS is your target operating system.
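A minimal sketch of holding a movement key with PyDirectInput might look like the following; the WASD binding and the timing are assumptions made for the example.

```python
import time
import pydirectinput

def walk_forward(seconds):
    pydirectinput.keyDown('w')    # registered by the game, unlike the same call in PyAutoGUI
    time.sleep(seconds)
    pydirectinput.keyUp('w')

walk_forward(2.0)
```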
Similarly, for mouse actions, PyAutoGUI works for simple use cases and even lets us interact with the game UI, but unfortunately it cannot control playable characters: those events are simply ignored because of how they are processed. At the time of writing this article, PyDirectInput does not solve this either, and mouse move actions are ignored just as they are with PyAutoGUI. There is a workaround for the specific set of games that expose a “Raw input” mouse setting, but relying on it limits our ability to test a broad range of games. Unfortunately, there is no easy-to-use library that controls in-game mouse movement the way PyAutoGUI controls the desktop cursor, so we are limited to using the native Windows API. To access the Win32 API, we can use a library called PyWin32.
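As a rough sketch, sending a relative mouse movement and a left click through the Win32 API via PyWin32 can be done as shown below; the movement values are illustrative only.

```python
import win32api
import win32con

def move_relative(dx, dy):
    # MOUSEEVENTF_MOVE sends a raw relative movement, which games typically
    # read even when absolute cursor positioning is ignored.
    win32api.mouse_event(win32con.MOUSEEVENTF_MOVE, dx, dy, 0, 0)

def left_click():
    win32api.mouse_event(win32con.MOUSEEVENTF_LEFTDOWN, 0, 0, 0, 0)
    win32api.mouse_event(win32con.MOUSEEVENTF_LEFTUP, 0, 0, 0, 0)

move_relative(120, 0)   # turn the camera slightly to the right
left_click()
```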
Utilizing the Windows Native APIs with Python
When using native APIs directly, most of the movement logic has to be implemented by us. Overall, the implementation is relatively straightforward, but some tricky details can complicate things. For example, if mouse smoothing is necessary, we also need to implement an algorithm that does it for us. Otherwise, mouse movements will be instantaneous, since the native APIs do not provide a smoothing feature. In this particular case, mouse click and scroll commands will be forwarded directly to the Win32 API, while mouse move commands will get some smoothing so that the movements are not instantaneous.
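A simple form of smoothing is to split the total relative movement into small steps with a short delay between them. The sketch below takes that approach; the step count and delay are assumptions and would need tuning per game.

```python
import time
import win32api
import win32con

def smooth_move(dx, dy, steps=40, delay=0.005):
    step_x, step_y = dx // steps, dy // steps
    for _ in range(steps):
        win32api.mouse_event(win32con.MOUSEEVENTF_MOVE, step_x, step_y, 0, 0)
        time.sleep(delay)
    # send whatever remainder the integer division left over
    win32api.mouse_event(win32con.MOUSEEVENTF_MOVE,
                         dx - step_x * steps, dy - step_y * steps, 0, 0)

smooth_move(400, 0)     # sweep the camera to the right over roughly 0.2 seconds
```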
The final step is to decide which library is responsible for which commands. For the most part, PyDirectInput and PyWin32 do the bulk of the work by executing keyboard and mouse commands respectively, while PyAutoGUI acts as a fallback for other operating systems and for cases where a command could not be implemented with PyDirectInput or PyWin32, as sketched below.
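One possible shape for that delegation layer is a thin dispatcher that picks a backend per platform. The imports are guarded so the script still loads on GNU/Linux or macOS, where only the PyAutoGUI fallback is available; this is a sketch rather than a complete abstraction.

```python
import sys
import pyautogui

ON_WINDOWS = sys.platform == "win32"
if ON_WINDOWS:
    import pydirectinput
    import win32api
    import win32con

def press_key(key):
    if ON_WINDOWS:
        pydirectinput.press(key)
    else:
        pyautogui.press(key)        # fallback; may not register in-game

def move_mouse(dx, dy):
    if ON_WINDOWS:
        win32api.mouse_event(win32con.MOUSEEVENTF_MOVE, dx, dy, 0, 0)
    else:
        pyautogui.moveRel(dx, dy)   # fallback; usually ignored by games
```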
What are some caveats of automated game testing?
As exciting as automated game testing with AI sounds, there are some caveats to take into account before integrating it into your workflow. The main problem stems from inconsistencies in object detection, which may cause a test to end abruptly if an object expected in the sequence is not detected. We can make the system handle a decent number of random in-game events, but there is still a chance that something will block the sequence from progressing.
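One way to soften this failure mode is to wrap the detection step in a retry guard, so a single missed frame does not abort the whole test. In the sketch below, `detect_target` stands for a hypothetical wrapper around the detection step, and the timeout value is an assumption.

```python
import time

def wait_for_object(detect_target, timeout=10.0, interval=0.5):
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = detect_target()
        if result is not None:
            return result
        time.sleep(interval)    # give the scene a moment to change, then retry
    raise TimeoutError("expected object was not detected in time")
```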
Also worth noting is that lighting, screen clarity, and game pace all play a huge role in determining whether AI is a viable option. If the game takes place in a very dimly lit environment, has post-processing effects enabled such as motion blur, chromatic aberration, or lens flares, or if the game viewport moves very quickly, the object recognition process can falter significantly.
While a single model may not work for all games, some implementations, such as YOLOv7, work relatively well in certain cases. However, some games require a purpose-trained model in order to recognize their more unusual objects.
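For a rough idea of how a YOLO-family detector can be wired to a live screen capture, the sketch below uses YOLOv5 via torch.hub purely because its loading interface is compact; the equivalent YOLOv7 setup follows that repository's own instructions. The use of mss for screen capture, the monitor index, and the confidence threshold are assumptions made for the example.

```python
import mss
import numpy as np
import torch

model = torch.hub.load('ultralytics/yolov5', 'yolov5s')
model.conf = 0.5                      # confidence threshold for reported detections

with mss.mss() as sct:
    raw = np.array(sct.grab(sct.monitors[1]))                # BGRA screen capture
    frame = np.ascontiguousarray(raw[:, :, :3][:, :, ::-1])  # convert to RGB for the model
    results = model(frame)
    print(results.pandas().xyxy[0])   # detected objects with boxes and labels
```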
Implementing in-game movements ourselves is a tricky endeavor, as there are many caveats and details to take into account. At the time of writing this article, there is no universal all-in-one solution for controlling all possible movement types on every operating system, so multiple libraries have to be combined. As game testing in this manner is still in its very early phase, there are bound to be challenges. If similar testing methodologies see wider industry adoption, we can expect more sophisticated tools to become available.
Check out our Artificial Intelligence testing services page for more information on how we can help you improve your AI product, or contact us for more details.