Python · Deep Learning · Computer Vision · TimeSformer · OpenCV

VALORANT Round Outcome Prediction

Collaborated with the Nara Institute of Science and Technology (NAIST) to develop a transformer-based deep learning approach for predicting round outcomes from minimap video. Achieved 80.55% prediction accuracy on a dataset of 21,228 round videos constructed from 1,376 tournament videos.

Prediction Accuracy: 80.55%
Round Videos: 21,228
Published At: IEEE CoG

Overview

This project focuses on predicting the outcome of rounds in the popular tactical shooter game VALORANT using deep learning techniques. By analyzing minimap video data, we developed a model capable of forecasting which team will win a round with high accuracy. The system processes video feeds of the in-game minimap, extracting player positions and movements to recognize patterns associated with winning strategies.
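
As a rough illustration of that pipeline, the sketch below uses OpenCV to uniformly sample frames from a round clip and pack them into the tensor layout a video classifier consumes. The frame count, resolution, and file name are assumptions for illustration, not values from the paper.

    # Illustrative preprocessing: sample minimap frames from a round clip
    # and stack them into a (batch, frames, channels, height, width) tensor.
    import cv2
    import numpy as np
    import torch

    def sample_frames(path, num_frames=8, size=224):
        """Uniformly sample RGB frames from a video clip and resize them."""
        cap = cv2.VideoCapture(path)
        total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
        frames = []
        for i in np.linspace(0, total - 1, num_frames).astype(int):
            cap.set(cv2.CAP_PROP_POS_FRAMES, int(i))
            ok, frame = cap.read()
            if not ok:
                break
            frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            frames.append(cv2.resize(frame, (size, size)))
        cap.release()
        return np.stack(frames)  # (frames, height, width, 3)

    clip = sample_frames("round_clip.mp4")  # hypothetical round clip
    video = torch.from_numpy(clip).permute(0, 3, 1, 2).float() / 255.0
    video = video.unsqueeze(0)  # (1, frames, 3, height, width), ready for the classifier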

Key Features

Transformer Architecture

Utilized TimeSformer to capture temporal and spatial dependencies in video data for accurate predictions.
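
A minimal sketch of such a classifier, assuming the open-source timesformer-pytorch package; the hyperparameters below are illustrative defaults, not the configuration reported in the paper.

    # Hypothetical TimeSformer configured as a binary round-outcome classifier.
    import torch
    from timesformer_pytorch import TimeSformer

    model = TimeSformer(
        dim=512,
        image_size=224,   # resized minimap frames (assumption)
        patch_size=16,
        num_frames=8,     # frames sampled per round clip (assumption)
        num_classes=2,    # e.g. attacker win vs. defender win
        depth=12,
        heads=8,
        dim_head=64,
        attn_dropout=0.1,
        ff_dropout=0.1,
    )

    video = torch.randn(1, 8, 3, 224, 224)  # (batch, frames, channels, height, width)
    logits = model(video)                   # (1, 2) round-outcome logits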

Large-Scale Dataset

Constructed a dataset of 21,228 round videos from 1,376 tournament videos to train and validate the model.

High Accuracy

Achieved 80.55% accuracy in predicting round outcomes, demonstrating the model's effectiveness.

Video Analysis

Processes minimap feeds to extract player positions and movement patterns in real time.
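
A hedged sketch of that extraction step, cropping a fixed minimap region out of each broadcast frame with OpenCV; the crop coordinates and file name are placeholders, since overlay layouts differ between tournaments.

    # Illustrative minimap cropping; the ROI below is a placeholder.
    import cv2

    MINIMAP_ROI = (35, 35, 450, 450)  # (x, y, width, height) - assumed layout

    def extract_minimap(frame, size=224):
        """Crop the minimap region from a broadcast frame and resize it."""
        x, y, w, h = MINIMAP_ROI
        return cv2.resize(frame[y:y + h, x:x + w], (size, size))

    cap = cv2.VideoCapture("tournament_vod.mp4")  # hypothetical VOD file
    ok, frame = cap.read()
    if ok:
        minimap = extract_minimap(frame)
    cap.release()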

Technologies Used

Python · PyTorch · TimeSformer · OpenCV · NumPy · CUDA

Outcomes & Impact

  • Published at IEEE Conference on Games 2025
  • Demonstrated a novel application of video transformers in esports analytics
  • Created a reusable dataset for future research

Collaboration

This research was conducted in collaboration with the Nara Institute of Science and Technology (NAIST).