A sample video shows how computer vision (running on an external computer) detects the enemy and calculates how far the mouse needs to move to target that enemy.
When it comes to the cat-and-mouse game of stopping cheaters in online games, anti-cheat efforts often rely in part on technology that ensures the wider system running the game itself isn’t compromised. On the PC, that can mean so-called “kernel-level drivers” which monitor system memory for modifications that could affect the game’s intended operation. On consoles, that can mean relying on system-level security that prevents unsigned code from being run at all (until and unless the system is effectively hacked, that is).
But there’s a growing category of cheating methods that can now effectively get around these forms of detection in many first-person shooters. By using external tools like capture cards and “emulated input” devices, along with machine learning-powered computer vision software running on a separate computer, these cheating engines totally circumvent the secure environments set up by PC and console game makers. This is forcing the developers behind these games to look to alternate methods to detect and stop these cheaters in their tracks.
How it works
The basic toolchain used for these external emulated-input cheating methods is relatively simple. The first step is using an external video capture card to record a game’s live output and instantly send it to a separate computer. Those display frames are then run through a computer vision-based object detection algorithm like You Only Look Once (YOLO) that has been trained to find human-shaped enemies in the image (or at least in a small central portion of the image near the targeting reticle).
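To make that pipeline concrete, here is a minimal sketch of the detection-and-aim-offset step, assuming the capture card appears as an ordinary video device and using the open-source Ultralytics YOLO package. The model weights, device index, and thresholds below are illustrative placeholders, not details from any actual cheat, and generic COCO-pretrained weights stand in for the custom-trained models these tools use.

```python
# Sketch: read captured frames, detect human-shaped targets with YOLO, and
# compute how far the target sits from the on-screen reticle (frame center).
import cv2
from ultralytics import YOLO

CAPTURE_DEVICE = 0          # capture card exposed as a video device (assumption)
PERSON_CLASS_ID = 0         # "person" class in COCO-pretrained YOLO models
CONFIDENCE_THRESHOLD = 0.5  # illustrative cutoff

model = YOLO("yolov8n.pt")  # generic pretrained weights as a stand-in
cap = cv2.VideoCapture(CAPTURE_DEVICE)

while True:
    ok, frame = cap.read()
    if not ok:
        break

    height, width = frame.shape[:2]
    reticle = (width // 2, height // 2)  # targeting reticle at screen center

    # Run object detection on the captured frame.
    results = model(frame, verbose=False)[0]

    # Find the human-shaped detection closest to the reticle and compute the
    # pixel offset an emulated mouse would need to cover to center it.
    best_offset = None
    for box in results.boxes:
        if int(box.cls) != PERSON_CLASS_ID or float(box.conf) < CONFIDENCE_THRESHOLD:
            continue
        x1, y1, x2, y2 = box.xyxy[0].tolist()
        target = ((x1 + x2) / 2, (y1 + y2) / 2)  # center of the bounding box
        offset = (target[0] - reticle[0], target[1] - reticle[1])
        if best_offset is None or abs(offset[0]) + abs(offset[1]) < abs(best_offset[0]) + abs(best_offset[1]):
            best_offset = offset

    if best_offset is not None:
        print(f"Nearest target offset from reticle: {best_offset}")
```

In a real cheat, that pixel offset is what gets passed along to the next stage of the chain, the "emulated input" device that moves the mouse, rather than being printed out.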