Hey folks,
Today I’ll be talking about time-to-first-action: what it is, why it’s important, and how to leverage the concept for some fun comparative analysis.
Time to First Action
Time to First Action (TTFA) is a way of organizing player actions in order to measure and evaluate player skill. It represents the time delta between a player moving their camera a significant amount – for instance, hitting a location key to return to their main base – and taking an action – say, injecting their hatchery. This idea was first introduced to me as Perception-Action Cycles (PACs) in a study on video game skill development. (thank you to Lalush for showing this to me)
Time to First Action was first conceived of, as far as I know, in Brood War. It was hypothesized as a more accurate predictor of player skill than actions-per-minute (APM). It measures how quickly a player responds to new information rather than simply measuring how many actions a player is executing.
APM is limited in its utility in measuring skill because it doesn’t differentiate between effective and ineffective actions – it’s hard to determine whether a change in APM correlates with a change in a player’s skill level. Time to First Action tries to sidestep this limitation by assuming that a player’s first action upon absorbing new information is likely to be useful – if this is true, executing it faster is better than executing it slower. TTFA can therefore be used to approximate a player’s mechanical ability.
What Are We Measuring?
Each action measured by TTFA is composed of two distinct time segments:
- Cognitive segment – processing visual information, understanding what’s been seen, assembling a set of responses, deciding on the best response
- Mechanical segment – executing the best response
One of the consistent patterns I’ve observed across different types of games is that substantial increases in skill are associated with reducing or eliminating the cognitive segment associated with an action. Once this is achieved, the player only focuses on efficient mechanical execution.
It’s easy to see why this would be associated with a skill increase. The correct response is chosen almost immediately. Regardless of the mechanical difficulty associated with its execution, the player will quickly improve because they’re repeating the exact same thing over and over. The question of which response to choose – and how quickly to choose it – has become a non-factor.
Time to First Action approximates, on average, how many of a game’s interactions a player has successfully transitioned from cognitive + mechanical challenges into pure mechanical challenges. The more a player practices and the more often they see the same situations, the faster they’ll react on average.
In theory, it also measures a player’s physical speed in how quickly they can react, but this has a fairly hard (and low) ceiling on it. The first action is usually very basic, like moving a group of units, so it’s easy to execute quickly. The rate that a game’s clock ticks – 16 times per second in StarCraft II’s case – also puts a ceiling on incremental mechanical improvement. Small, 10 to 20 millisecond improvements in reaction time won’t even register in the game engine.
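To put a number on that ceiling: at 16 ticks per second, a single tick spans 62.5 milliseconds, so a sub-tick improvement never changes which tick an action lands on. A quick back-of-the-envelope check:

```python
# One clock tick at 16 ticks per second spans 62.5 ms, so a 10-20 ms
# improvement in reaction time can fall entirely inside a single tick.
MS_PER_TICK = 1000 / 16
print(MS_PER_TICK)  # 62.5
```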
Before we go on to our comparative analysis, I wanted to mention that this can be observed in traditional sports, too. Many professional athletes report that their games seem to occur in slow-motion; the better they get, the slower the game becomes. John McEnroe had this to say about professional tennis:
“Things slow down, the ball seems a lot bigger and you feel like you have more time. Everything computes – you have options, but you always take the right one.”
At Mr. McEnroe’s level of play, he had repeated the same kinds of tennis interactions – the ball is coming from this location to this location at this speed and this angle – so many times that in many cases he no longer even thought about them. He already knew the right response. When your mind isn’t thinking about what to do – when it’s purely focused on just doing it – time seems to move slower.
What Aren’t We Measuring?
Time to First Action measures how quickly a player reacts to new information. It cannot assess whether their reaction was correct. Its saving grace is the assumption that the first action players take upon absorbing new information will generally be a good one, if not the best one, particularly in competitive play.
Comparative Analysis
I conducted some replay analysis and compared three different players – ByuN (World Champion), MCanning (GM), and brownbear (high Diamond / low Masters).
Game events, player data, and other information were retrieved from replays using Blizzard’s s2protocol.
Each game event in a replay is associated with a type (like a camera update, a command, etc), a timestamp (measured in clock ticks), and a bunch of other data (such as coordinates for camera events).
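For the curious, here’s a minimal sketch of what that decoding looks like with s2protocol (the replay filename is a placeholder, and error handling is omitted):

```python
from mpyq import MPQArchive      # s2protocol reads replays as MPQ archives
from s2protocol import versions

archive = MPQArchive('example.SC2Replay')  # placeholder path

# The replay header tells us which protocol build to decode with.
header = versions.latest().decode_replay_header(
    archive.header['user_data_header']['content'])
protocol = versions.build(header['m_version']['m_baseBuild'])

# Each decoded event is a dict: an '_event' type string, a '_gameloop'
# timestamp in clock ticks, and event-specific fields - camera updates,
# for example, carry an 'm_target' dict with x/y coordinates.
for event in protocol.decode_replay_game_events(
        archive.read_file('replay.game.events')):
    print(event['_event'], event['_gameloop'])
```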
I defined a “significant” camera movement as a camera update greater than 5000 units from the camera’s last location. For reference, a typical StarCraft II map is between 36,000 to 45,000 units across its X-axis and 36,000 to 45,000 units across its Y-axis.
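A simplified sketch of that check – I’m using straight-line distance between consecutive camera targets:

```python
import math

SIGNIFICANT_DISTANCE = 5000  # camera units

def is_significant_move(prev_target, new_target):
    # Straight-line distance between the previous and new camera targets.
    dx = new_target['x'] - prev_target['x']
    dy = new_target['y'] - prev_target['y']
    return math.hypot(dx, dy) > SIGNIFICANT_DISTANCE
```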
Once a significant camera movement was observed, I calculated the latency until the next action. After processing a whole bunch of replays in this fashion, I generated some basic statistical data.
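Here’s a simplified sketch of that per-replay pass, reusing is_significant_move from above. It only counts command events as “actions,” skips per-player filtering for brevity, and converts clock ticks to milliseconds at the 16-ticks-per-second rate mentioned earlier:

```python
MS_PER_TICK = 1000.0 / 16  # clock ticks to milliseconds

def first_action_latencies(events):
    """Collect time-to-first-action samples (in milliseconds) from one
    player's decoded game events, assumed sorted by gameloop."""
    latencies = []
    last_camera = None
    pending_since = None  # gameloop of the last unanswered significant camera move
    for event in events:
        if event['_event'] == 'NNet.Game.SCameraUpdateEvent' and event.get('m_target'):
            if last_camera and is_significant_move(last_camera, event['m_target']):
                pending_since = event['_gameloop']
            last_camera = event['m_target']
        elif event['_event'] == 'NNet.Game.SCmdEvent' and pending_since is not None:
            latencies.append((event['_gameloop'] - pending_since) * MS_PER_TICK)
            pending_since = None
    return latencies
```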
I used ByuN’s replays from Blizzcon, MCanning’s replays from his latest subscriber replay pack, and my own personal replays as the data sets for each player.
To start with, here’s a comparison of our time-to-first-action. The Y-Axis is the time delta, in milliseconds, and the X-Axis is the percentile. For instance, if you see that ByuN has a time-to-first-action of 1000 milliseconds at the 70th percentile mark, that means that about 30% of his time-to-first-actions are slower than 1000 milliseconds and about 70% are faster. The 50th percentile mark is the median, which typically represents a player’s “expected” time-to-first-action.
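(If you’re curious how these curves are built: pool a player’s latency samples across replays and read off each percentile – something like the sketch below, with made-up numbers standing in for real samples.)

```python
import numpy as np

# Pooled TTFA samples for one player, in milliseconds (made-up example values).
latencies = [430, 510, 620, 745, 980, 1210, 1800, 2400]

percentiles = np.arange(0, 101)
curve = np.percentile(latencies, percentiles)

print('median (expected) TTFA:', curve[50], 'ms')
print('90th percentile TTFA:', curve[90], 'ms')
```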
One of the first things we notice is the enormous difference between the Diamond player (me) and the two GM players. In the expected (median) case, I respond 250 milliseconds slower than they do. And this happens a lot – about 8 times a minute (half as often as the other two).
The other interesting piece is that the professional player (ByuN) and the GM player (MCanning) have similar time-to-first-action until the 60th percentile, where they begin to significantly diverge. This makes sense to me. By the time a player has reached GM, they’ve seen all of the most common interactions and have figured out the correct response. Only the professional player will bring down their reaction time on the long-tail situations that rarely occur. At the 90th percentile, ByuN’s reaction time is faster than MCanning’s to the same degree that MCanning’s reaction time is faster than mine at the 50th percentile. That may not seem like a lot, but because of how often these situations arise in a game, they add up really quickly.
While we’re on the subject of frequency, here’s a graph showing how often we each move our cameras per minute. The Y-Axis is camera movements per minute, and the X-Axis is percentile.
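The per-minute rate is just the number of significant camera moves divided by game length – roughly the sketch below, again reusing is_significant_move from above:

```python
def camera_moves_per_minute(events, game_length_seconds):
    """Count significant camera moves in one game and normalize by game length."""
    moves = 0
    last_camera = None
    for event in events:
        if event['_event'] == 'NNet.Game.SCameraUpdateEvent' and event.get('m_target'):
            if last_camera and is_significant_move(last_camera, event['m_target']):
                moves += 1
            last_camera = event['m_target']
    return moves / (game_length_seconds / 60.0)
```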
We once again see an enormous difference between me and the two GM players – I move my camera almost half as often as they do. If we think about this in terms of Day9’s mental checklist, I’m doing almost half as many tasks as they are in my average game and typically responding a quarter of a second slower each time. That really adds up!
Finally, I ran the numbers for every player at Blizzcon. If you know the results, then you might be able to guess the fastest foreigner – Elazer! The guy is really fast.
Conclusion
Time to First Action is an interesting way of organizing player actions in order to approximate player skill, and a useful starting point for thinking about skill development and how we get better at things. It’s by no means perfect – reacting to new information is only one of many components that compose mechanical skill, and even the way we measure it rests on an assumption of “smart execution” that is not always correct.
The comparative analysis provided here is just for fun. Even if we did want to draw serious conclusions from it, we would need a much larger sample size – the above is based on ninety Blizzcon replays, thirty MCanning replays, and a couple hundred brownbear ladder games. And much like APM, no player should go out of their way to artificially improve their TTFA.
That’s everything I had for today. If you enjoyed this post, please consider following me on Twitter or Facebook to receive regular content updates, or checking out my game-design videos on YouTube and Twitch. Thanks for reading and see you next time.
Special thanks to Blizzard for releasing Blizzcon replays (and, more generally, for StarCraft) and MCanning (Twitter / Twitch) for releasing replays to subscribers. Also, thank you to Olimoley for allowing me to do this analysis on Olimoleague replays; I ended up not using the data for this post, but it helped me build the analysis tools.