AI-Powered Detection

VANGUARD++

Catch cheaters with AI. Local processing, zero data collection, full privacy.

What It Does

Drop in gameplay footage, train a model on your clips, and let it figure out who's being a bit too accurate.

🎬

Video Analysis

Samples 16 frames per clip and runs them through a Vision Transformer. Supports most common formats - MP4, AVI, MOV, MKV.
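Since each clip gets boiled down to 16 frames before the ViT sees it, the sampling has to cover the whole clip, not just the start. A minimal sketch of evenly spaced frame index selection (the function name and exact strategy are assumptions, not the app's actual code):

```python
def sample_frame_indices(total_frames, num_frames=16):
    """Pick num_frames evenly spaced frame indices from a clip.

    If the clip is shorter than num_frames, just use every frame.
    """
    if total_frames <= num_frames:
        return list(range(total_frames))
    step = total_frames / num_frames
    return [int(i * step) for i in range(num_frames)]
```

Even spacing means a 10-second clip and a 2-second clip both contribute 16 frames, so the model sees the same input shape regardless of clip length.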

🔊

Audio Detection

Uses librosa to detect kill sounds and automatically snips 2-second clips around each one. Pretty neat actually.
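Once a kill sound is located, the 2-second snip around it still has to stay inside the clip's bounds - a kill at 0.3s can't have a window starting at -0.7s. A sketch of that window math (names illustrative; the app's actual implementation may differ):

```python
def clip_window(kill_time, duration, window=2.0):
    """Return (start, end) of a `window`-second span centered on kill_time,
    clamped so it never runs past either edge of the clip."""
    half = window / 2
    start = max(0.0, kill_time - half)
    end = min(duration, start + window)
    start = max(0.0, end - window)  # re-clamp if we hit the clip's end
    return start, end
```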

🤖

Train Your Own Model

Label clips as cheating or legit, fine-tune a ViT model on them. The more data you feed it, the better it gets.
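The real model here is a fine-tuned ViT, but the training loop has the same shape as any binary classifier: forward pass, compare to the label, nudge the weights. A toy pure-Python stand-in using logistic regression on a single made-up feature (everything below is illustrative, not the project's code):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(samples, labels, epochs=200, lr=0.5):
    """Toy binary classifier: logistic regression on one feature,
    trained with per-sample gradient descent."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            p = sigmoid(w * x + b)
            grad = p - y          # gradient of cross-entropy w.r.t. the logit
            w -= lr * grad * x
            b -= lr * grad
    return w, b

def predict(w, b, x):
    """Confidence that a sample is cheating (0.0 clean, 1.0 sus)."""
    return sigmoid(w * x + b)
```

Swap the single feature for ViT frame embeddings and the hand-rolled gradient step for a proper optimizer and you have the real loop - which is also why "more data = better model" holds.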

📊

Confidence Scores

Get scores from 0.0 (definitely clean) to 1.0 (yeah that's sus). Frame-by-frame breakdown included.
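One way the per-frame scores might roll up into a single clip verdict - mean for overall suspicion, peak for the single worst moment (a reasonable aggregation, not necessarily what the app does):

```python
def summarize(frame_scores):
    """Collapse per-frame confidences (0.0-1.0) into clip-level stats."""
    return {
        "mean": sum(frame_scores) / len(frame_scores),  # overall suspicion
        "peak": max(frame_scores),                      # worst single frame
    }
```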

🔐

Runs Locally

Everything stays on your machine. No cloud uploads, no tracking, no sketchy data collection. Just you and your GPU.

💾

Export Results

Save everything as JSON for later. Build reports, track patterns, or just keep records for yourself.
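Round-tripping results through JSON is a few lines of stdlib. A sketch with a made-up schema (clip name mapped to its stats - the app's real output format may differ):

```python
import json
from pathlib import Path

def export_results(results, out_path):
    """Dump analysis results to a JSON file for later reports."""
    Path(out_path).write_text(json.dumps(results, indent=2))

def load_results(in_path):
    """Read a previously exported results file back in."""
    return json.loads(Path(in_path).read_text())
```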

Why I Made This

Got tired of getting beamed by players with aim that was just a bit too perfect, you know? Started wondering if AI could pick up on the same patterns I was noticing.

Turns out Vision Transformers are pretty good at spotting unnatural aim patterns. This project is the result of that curiosity - a local tool that analyzes gameplay and gives you confidence scores.

It's not perfect, and it's definitely not Riot's Vanguard, but it's a fun experiment in applying ML to gaming problems.

Vyapari-Dev

AI/ML Developer & Gaming Enthusiast

📍 Mumbai, India

@Vyapari-Dev on GitLab →

Installation

Should take about 5 minutes. You'll need Python and ideally a CUDA GPU.

Python 3.10+
4GB+ RAM
GPU (optional but recommended)
~20GB disk space
# clone the repo
git clone https://gitlab.com/Vyapari-Dev/vanguard.git
cd vanguard

# set up a virtual environment
python -m venv venv
venv\Scripts\activate
# (on macOS/Linux: source venv/bin/activate)

# install dependencies (first run downloads models ~2-3GB)
pip install -r requirements.txt

# run the app
python main.py

How to Use

Pretty straightforward once you get it running.

1

Process Your Videos

Load up your Valorant gameplay. The app will analyze the audio track and auto-extract moments where kills happen.

2

Label Your Clips

Sort clips into folders - cheating/ for sus plays, legitimate/ for normal gameplay. More clips = better model.
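A quick sanity check before training: count how many clips landed in each label folder, so you know whether the dataset is badly lopsided. A sketch using the folder names above (the `.mp4` glob is an assumption):

```python
from pathlib import Path

def count_labeled_clips(dataset_dir):
    """Count clips per label folder: cheating/ and legitimate/."""
    root = Path(dataset_dir)
    return {label: len(list((root / label).glob("*.mp4")))
            for label in ("cheating", "legitimate")}
```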

3

Train the Model

Hit the train button, grab a coffee. 10-20 epochs usually does the trick. Model saves automatically.

4

Analyze New Clips

Feed it new gameplay and get confidence scores. Anything above 0.7 is worth a closer look.
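Applying that 0.7 cutoff is a one-liner worth automating - here's a sketch that surfaces the flagged clips, most suspicious first (names and score format are illustrative):

```python
def flag_for_review(scores, threshold=0.7):
    """Return (clip, score) pairs above the review threshold,
    sorted highest score first."""
    flagged = [(name, s) for name, s in scores.items() if s > threshold]
    return sorted(flagged, key=lambda item: item[1], reverse=True)
```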