Number Crunching Warzone Atlanta 2017

I haven’t been doing much in the way of tournament statistics this year – first I wanted to give 8th Edition some time to stabilize, and once it had, my life got busy. But the release of the results for Warzone: Atlanta gives me a chance to take a look at a relatively major tournament and see how the new edition is faring.

TL;DR: There’s some good, and some bad.

Attendance

Attendance was strong, with a little over 100 players, and nearly everyone finishing the tournament, which is one of the strong points of WZA – there are some pretty strong incentives to show up for day two. Forty-two of the results had no ranking for last year’s Warzone, which means there’s some fresh blood coming in (sadly, I couldn’t make it this year due to travel for work).

Pleasantly, there’s a good spread of returning players, including a lot of representation at the “bottom” of the pack in terms of Battle Points. That’s great to see – that’s normally where I hang out, and WZA was a blast to play in even if you weren’t at the top tables. Below is a figure of the distribution of scores for new and returning players.

All in all, I think that’s a good sign of a healthy event.

It’s also not particularly stagnant in terms of who was winning. While there were lots of familiar names in the top ranks, and most people did stay roughly where they were last year, there were plenty of people who moved up or down a double-digit number of places between 2016 and 2017.

That’s good for someone like Adam Abramowicz, whose whole goal for the next year depends on there being some upward mobility in events.

Faction Representation

There are now a lot of factions in 40K. Really a lot. And it’s nice to see most of them showing up in tournaments. Chaos Space Marines are definitely over-represented, and overall Chaos has a very healthy representation, while everyone else is a smear of either fairly common armies (Eldar, IG, AdMech, etc.) or rare armies (Blood Angels, Grey Knights, Necrons, and one Adeptus Titanicus player). The notable difference between this year and last is the outright decline of Tau and Eldar armies from their previous highs, as they fall out of fashion due to their place in the current meta.

Surprisingly, there were very few Ynnari armies. Given they have a reputation as being slightly better than the Craftworlds, and that the Eldar codex was released right on the Oct. 28th deadline for armies, I would have expected there to be more.

Army Performance

Probably the thing people care about the most – how did those armies do? We begin with my usual violin plot of the battle points distribution, dropping the armies that only had one representative (because a single data-point distribution is a nonsensical concept).
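
For anyone curious how a plot like that comes together, here is a minimal sketch in Python with seaborn. The file name and column names ("faction", "battle_points") are assumptions for illustration, not the actual WZA data set.

```python
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

# Hypothetical results file; the real column names may differ.
results = pd.read_csv("wza2017_results.csv")  # columns: faction, battle_points

# Drop factions with only one representative - a single data point
# doesn't make for a meaningful distribution.
counts = results["faction"].value_counts()
multi = results[results["faction"].isin(counts[counts > 1].index)]

ax = sns.violinplot(data=multi, x="faction", y="battle_points", cut=0)
ax.set_xlabel("Faction")
ax.set_ylabel("Battle Points")
plt.xticks(rotation=90)
plt.tight_layout()
plt.show()
```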

Remember what I said about the Ynnari being good? The few that were there consistently delivered slightly above-median performance. The unsorted “Chaos” faction similarly delivers pretty stirring results, with the Dark Eldar and the Harlequins consistently underperforming – but also only present in small numbers. The Imperial Guard, as one might predict, are strong, with the Eldar doing surprisingly middling for all the rumors of their decline.

Let’s compare that to last year’s event:

I’m not fully convinced that the current tournament meta is any “healthier” – many of the same descriptions would apply – a few dominant factions, followed by a large number of middling-acceptable forces, and a few laggard armies. I certainly think it’s hard to argue, from this data at least, that 8th Edition has done much to fix faction imbalance.

But there is yet time.

Toward the end of 7th, I started taking a regression modeling approach to tournament results, starting with WZA, and we’ll do it again here. The battle point scores are roughly normally distributed, which is handy. Last year I wasn’t really happy with how I handled missing data (like the absence of an ITC score for people for whom WZA is their only ITC event). Many people just ignore it, which means those players drop out of the data entirely, but there are lots of reasons not to do this – including that it absolutely demolishes your sample size. Last year I assumed everyone was “average”, and gave them the median ITC score and median 2016 placing if they didn’t have one.

This year, I’m using a slightly more sophisticated approach called multiple imputation, which basically builds a model to predict the missing values, simulates them many times, and combines those simulations to make the estimates. It sounds more complicated than it is.
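
To make that concrete, here is a minimal sketch of the idea using scikit-learn’s IterativeImputer with posterior sampling to build several completed datasets, fitting a model to each, and pooling the results. It’s an illustration of the general technique rather than my actual pipeline, and the file and column names are placeholders.

```python
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
import statsmodels.api as sm

# Hypothetical data with missing ITC scores and 2016 placings.
df = pd.read_csv("wza2017_results.csv")
cols = ["itc_points", "wza_2016_place", "battle_points"]

estimates = []
for seed in range(20):  # 20 imputed datasets
    imputer = IterativeImputer(sample_posterior=True, random_state=seed)
    completed = pd.DataFrame(imputer.fit_transform(df[cols]), columns=cols)
    X = sm.add_constant(completed[["itc_points", "wza_2016_place"]])
    estimates.append(sm.OLS(completed["battle_points"], X).fit().params)

# Pool by averaging coefficients across imputations (a full treatment
# would also combine the variances via Rubin's rules).
print(pd.concat(estimates, axis=1).mean(axis=1))
```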

This year, we’ve got additive scores rather than multipliers (because the near-normal distribution of the battle points means I can use linear regression); a rough sketch of the sort of model involved follows the list. Each army’s “Score” is the number it can expect to add to the average result due to its faction. Higher numbers are better (95% confidence intervals are in parens):

  • Blood Angels: 71.4 (94.7, 48.1) (It helps when your one representative places really well)
  • Chaos Daemons: 49.8 (74.9, 24.8)
  • Chaos: 56.3 (81.41, 31.2)
  • Chaos Space Marines: 31.3 (52.9, 9.65)
  • Custodes: 28.0 (77.19, -21.18)
  • Dark Angels: 33.4 (59.7, 7.19)
  • Dark Eldar: 22.1 (47.7, -3.5)
  • Death Guard: 22.0 (46.4, -2.4)
  • Eldar: 24.3 (48.3, 0.32)
  • Grey Knights: -1.7 (19.3, -22.7) (ouch)
  • Harlequins: 39.9 (70.9, 8.9)
  • Imperial Guard: 26.4 (50.8, 2.0)
  • Adeptus Mechanicus: 20.0 (42.5, -2.6)
  • Necrons: 9.2 (32.3, -14)
  • Tyranids: 33.8 (61.8, 5.6)
  • Orks: 30.4 (62.8, -2.1)
  • Space Marines: 22.0 (49.9, 0.07)
  • Sisters of Battle: 43.5 (72.8, 14.2) (*cheering*)
  • Space Wolves: 39.8 (65.5, 14.0) (take that Dark Angels #fenrislives)
  • Tau: 35.6 (61.7, 9.5)
  • Adeptus Titanicus: 30.3 (54.0, 6.6)
  • Ynnari: 48.4 (78.0, 18.9)
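
For a sense of where numbers like these come from, here is the rough sketch promised above: an ordinary least squares fit with a dummy variable per faction, pulling out each coefficient alongside its 95% confidence interval. The formula and column names are assumptions for illustration, not my exact specification.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("wza2017_results.csv")  # hypothetical file and columns

# The "0 +" drops the intercept, so every faction gets its own additive score.
model = smf.ols("battle_points ~ 0 + C(faction)", data=df).fit()

scores = pd.concat([model.params, model.conf_int()], axis=1)
scores.columns = ["score", "ci_low", "ci_high"]
print(scores.sort_values("score", ascending=False))
```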

Now many of those are quite unstable, as they’re based on a small number of games. But there are definitely some high performers – the Blood Angels, Chaos and Ynnari are all armies that, all other things being equal, seem to do well. As, it appears, do the Sisters of Battle.

But these don’t make any sense. Where are my Guardsmen at?

You’ll note the performance of the Guard is…sort of middling. Which is surprising, given Andrew Whittaker won with the Guard (we’re ignoring the kerfuffle about the artifact he took for the moment), and they seem to be performing quite well.

The key is that I said all other things being equal. I also controlled for three additional potential factors – your current ITC points, last year’s WZA finish, and whether or not your army had a codex – to try to pick apart some of the influence of codex vs. index.

Andrew, as an example, had a pretty strong showing in WZA 2016 and the IG have a codex (his ITC points are pretty low), so that lessens the influence of his placing on the performance of the Guard. Basically, in the hands of a good player, the Guard are potentially good. In the hands of…well…me…they might not carry the day.
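
In model terms, “controlling” for those factors just means adding them alongside the faction dummies, something like the extension of the earlier sketch below, again with assumed column names.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("wza2017_results.csv")  # same hypothetical columns as before

# Faction scores now reflect what's left over after accounting for player-skill
# proxies (ITC points, 2016 WZA placing) and codex status. In practice codex
# status can be collinear with faction, so the exact coding matters.
adjusted = smf.ols(
    "battle_points ~ 0 + C(faction) + itc_points + wza_2016_place + has_codex",
    data=df,
).fit()
print(adjusted.summary())
```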

So how about those additional factors? They too have a score (a quick worked example of combining them follows the list):

  • ITC points: 0.03 (0.06, -0.01): Essentially, each ITC point is predicted to raise your battle points by about 0.03. So someone rocking a mighty 450 ITC points should get about 13.5 more battle points than someone with zero.
  • WZA 2016 Placing: -0.60 (-0.47, -0.73): Keep in mind this one is in reverse – the placing going “up” means going from 1st to 2nd, 10th to 11th, etc. So for each place further from 1st you finished in 2016, you can expect to lose about 0.60 predicted battle points. Not enough that there’s a one-to-one match, but it’s fairly influential – good players, by and large, remain good. And given WZA features a lot of people without particularly impressive ITC points, it’s probably the more important of the two.
  • Codex: 17.9 (30.3, 5.6): This one is a simple “Yes/No”. If your army has a codex, it should earn 17.9 more Battle Points than an army without one. Anyone playing with an index is, at this point, playing at a disadvantage, even if some of the codex armies (Grey Knights…) didn’t do particularly well.
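
And here is the quick worked example promised above, plugging the reported point estimates into the additive model for a hypothetical player. The player’s numbers are invented; only the coefficients come from the fit.

```python
# Coefficients reported above (point estimates only).
per_itc_point = 0.03
per_2016_place = -0.60
codex_bonus = 17.9
ig_faction_score = 26.4  # Imperial Guard score from the earlier list

# Hypothetical player: 450 ITC points, finished 20th at WZA 2016, codex army.
predicted = (ig_faction_score
             + per_itc_point * 450    # +13.5
             + per_2016_place * 20    # -12.0
             + codex_bonus * 1)       # +17.9
print(predicted)  # roughly 45.8 expected battle points from these terms
```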

Quality of Prediction

But how well does this model actually describe what happened at the tournament? I’ve been chasing this for a while with…limited…success, but it appears this year the fit is pretty much as good as I really have the right to expect.

The scatter plot of predicted vs. actual scores is a noisy cloud around the ideal, with very few outliers. I think, based on this and the Las Vegas Open results from last year, that the answer is that faction analysis can only get you so far, and a useful prediction model needs some expression of player skill as well to capture the top and bottom of the rankings with any degree of reliability.
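
For anyone wanting to reproduce that kind of check, here is a short sketch that plots predicted against actual battle points and reports R-squared, using the same assumed columns as the earlier sketches (and assuming missing values have already been imputed).

```python
import pandas as pd
import matplotlib.pyplot as plt
import statsmodels.formula.api as smf

df = pd.read_csv("wza2017_results.csv")  # hypothetical, already-imputed data
model = smf.ols(
    "battle_points ~ 0 + C(faction) + itc_points + wza_2016_place + has_codex",
    data=df,
).fit()

plt.scatter(model.fittedvalues, df["battle_points"], alpha=0.6)
lims = [df["battle_points"].min(), df["battle_points"].max()]
plt.plot(lims, lims, linestyle="--")  # the "ideal": predicted == actual
plt.xlabel("Predicted battle points")
plt.ylabel("Actual battle points")
plt.title(f"R-squared: {model.rsquared:.2f}")
plt.show()
```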

Closing Thoughts

8th Edition was heralded as the great bringer of balance – or at least that’s what it was marketed as. To be frank, I don’t think that’s worked. Indeed, I think the overall picture looks a little less healthy, not more healthy. I think there’s a wider span of viability, but there are clearly still some armies that are dominating, though there are the occasional surprise showings. I think people need to become comfortable with the idea that there will be optimal solutions for tournaments, and while GW can do some things to disrupt that, they disrupt it to a new optimum, not a lack of optimum. There is a huge difference between those two.

But this does justify some of GW’s brutal release schedule – armies with a codex simply are better. Not by a tremendous amount, but the difference is the span between 1st and 3rd, or 10th and 26th.

I’ll be trying to pull in more data from other tournaments to see if we can shore up some of these numbers as we head to the close of the ITC season, and I’ve got some extended per-game analysis planned, time and data permitting, for Warzone Atlanta. Stay tuned.

Enjoy what you read? Enjoyed that it was ad free? Both of those things are courtesy of our generous Patreon supporters. If you’d like more quantitatively driven thoughts on 40K and miniatures wargaming, and a hand in deciding what we cover, please consider joining them.

5 Comments


  1. Great analysis. Thanks for your concise commentary on each of the factors including placing and score, as well as how close the modelling is to reality. I look forward to further revisions as we get more tournaments completed, and new Codices released.

  2. What was the kerfuffle about the artefact?

    1. Andrew gave a Cadia-only relic to a non-Cadian character.

  3. This was my first WZA, and I had a blast. So much fun. So many great people. Everyone I met and played against was friendly.

    I finished 28th in battle points with my Eldar Craftworlds army. And if my final opponent hadn’t gotten in several excellent rolls at a critical juncture, I could have done even better.

    I also think I could have turned in a better performance if I’d had time to practice with my list beforehand. There just wasn’t enough time between the codex release and the tournament.

    Craftworlds is strong. I’m interested to see how well they do in tournaments as we go along.
