
VALORANT Round Outcome Prediction

Collaborated with NAIST to build a transformer-based approach for predicting VALORANT round outcomes from minimap video analysis.

Prediction Accuracy: 80.55%
Round Videos: 21,228
Published At: IEEE CoG

Overview

This project focuses on predicting the outcome of rounds in the tactical shooter VALORANT using deep learning. By analyzing minimap video data, the model forecasts which team will win a round with high accuracy.

The system treats the minimap as a dense summary of player movement and team coordination. That made it possible to build a robust computer-vision pipeline without needing privileged in-game telemetry.

Key Features

Transformer Architecture

Used TimeSformer to capture temporal and spatial dependencies in minimap video sequences.
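
TimeSformer's core idea is divided space-time attention: each block first lets every patch position attend across frames (temporal), then lets each frame's patches attend to one another (spatial). The sketch below is a minimal, self-contained PyTorch version of one such block, not the project's actual model; the dimensions and layer sizes are illustrative.

```python
import torch
import torch.nn as nn

class DividedSpaceTimeBlock(nn.Module):
    """One TimeSformer-style block: temporal attention, then spatial attention."""
    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.temporal = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.spatial = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, x):
        # x: (batch, frames, patches, dim) — patch embeddings of a minimap clip
        b, t, p, d = x.shape
        # Temporal attention: each patch position attends across frames.
        xt = x.permute(0, 2, 1, 3).reshape(b * p, t, d)
        h = self.norm1(xt)
        xt = xt + self.temporal(h, h, h)[0]
        x = xt.reshape(b, p, t, d).permute(0, 2, 1, 3)
        # Spatial attention: patches within each frame attend to one another.
        xs = x.reshape(b * t, p, d)
        h = self.norm2(xs)
        xs = xs + self.spatial(h, h, h)[0]
        return xs.reshape(b, t, p, d)

block = DividedSpaceTimeBlock()
clip = torch.randn(2, 8, 49, 128)  # 2 clips, 8 frames, 7x7 patch grid
out = block(clip)
print(out.shape)  # torch.Size([2, 8, 49, 128])
```

Factoring attention this way keeps the cost linear in frames times patches, rather than attending jointly over every frame-patch pair.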

Large-Scale Dataset

Built a dataset of 21,228 round clips extracted from 1,376 tournament videos to train and validate the model.

Reliable Predictions

Reached 80.55% accuracy in forecasting round outcomes from live match state.

Video-First Pipeline

Processed minimap footage to infer player positions and team momentum in real time.
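
A live pipeline along these lines can keep a sliding window of recent minimap frames and re-run the classifier as each new frame arrives. The sketch below shows that buffering pattern with a placeholder model; the window length and the `TinyClassifier` stand-in are illustrative assumptions, not the project's implementation.

```python
from collections import deque
import torch

class LiveRoundPredictor:
    """Buffer preprocessed minimap frames and emit win probabilities."""
    def __init__(self, model, window=8):
        self.model = model.eval()
        self.buffer = deque(maxlen=window)

    def update(self, frame_tensor):
        # frame_tensor: (C, H, W) preprocessed minimap frame
        self.buffer.append(frame_tensor)
        if len(self.buffer) < self.buffer.maxlen:
            return None  # not enough context yet
        clip = torch.stack(list(self.buffer)).unsqueeze(0)  # (1, T, C, H, W)
        with torch.no_grad():
            logits = self.model(clip)
        return torch.softmax(logits, dim=-1)[0]

class TinyClassifier(torch.nn.Module):
    """Placeholder standing in for the real video model."""
    def forward(self, clip):
        pooled = clip.mean()
        return torch.stack([pooled, -pooled]).unsqueeze(0)  # (1, 2) logits

predictor = LiveRoundPredictor(TinyClassifier(), window=4)
for _ in range(4):
    probs = predictor.update(torch.zeros(3, 64, 64))
print(probs)  # equal probabilities for an all-zero clip
```

Returning `None` until the window fills avoids emitting predictions from too little context at the start of a round.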

Technologies Used

Python PyTorch TimeSformer OpenCV NumPy CUDA

Outcomes & Impact

  • Published at IEEE Conference on Games 2025.
  • Demonstrated a novel use of video transformers in esports analytics.
  • Created a reusable dataset for follow-on research.

Collaboration

Conducted in collaboration with the Nara Institute of Science and Technology (NAIST).