Gauging the Community: A Cautionary Note on Surveys

A number of surveys have been making the rounds, including one from Games Workshop. In the coming days, I expect these to be trotted out to add an air of quantitative authority to debates online – this is, after all, what surveys are for.

I’d like to urge you to approach those with caution.

First, a disclaimer: I am not a survey methodologist.

The fact that this disclaimer exists should probably cause you to pause for a moment. I have a doctorate. I conduct studies for a living, and crunch numbers about wargames as a hobby. And the moment I think something needs a population-level survey, my first two responses are “Can we check and make sure someone hasn’t done this before?” and “Are we sure we need to?”

Why?

Because surveys are hard, and getting useful information out of them is even harder.

The reason for this is that a survey isn’t really about the people who answered it. The entire point of a survey is to be able to talk about the population as a whole – be that people who are going to vote in the next presidential election, dog owners in the Northwest, or players of Warhammer 40K. The people you survey have to be a random sample of the people you want to talk about – every 40K player needs to have been equally likely to be selected for the survey. If that isn’t possible (and it rarely is), there are statistical methods to address this. And those methods are…you guessed it…complicated. And they involve knowing things about the population already.
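
To make that concrete, here’s a minimal sketch of one common approach, post-stratification weighting, where respondents from under-represented groups get counted more heavily. Everything in it is made up for illustration: the age groups, the population shares, and the survey answers are all hypothetical, and real analyses use more careful techniques (and real population data).

```python
# Post-stratification weighting: a toy sketch with entirely hypothetical numbers.
# Each respondent is (age_group, plays_tournaments).
sample = [
    ("18-22", True), ("18-22", False),
    ("23-35", True), ("23-35", True), ("23-35", False),
    ("36+", False), ("36+", False), ("36+", True), ("36+", False), ("36+", False),
]

# Assumed population shares for each age group -- this is the part you have to
# already know about the population, and usually don't.
population_share = {"18-22": 0.40, "23-35": 0.35, "36+": 0.25}

# How each group is actually represented in the sample
n = len(sample)
sample_share = {g: sum(1 for a, _ in sample if a == g) / n for g in population_share}

# Weight each respondent by how under- or over-represented their group is
weight = {g: population_share[g] / sample_share[g] for g in population_share}

# Naive vs. weighted estimate of "share of players who play tournaments"
naive = sum(t for _, t in sample) / n
weighted = sum(weight[g] * t for g, t in sample) / sum(weight[g] for g, _ in sample)

print(f"naive estimate:    {naive:.2f}")     # 0.40
print(f"weighted estimate: {weighted:.2f}")  # 0.48, once the 18-22s are up-weighted
```

Notice that the correction is only as good as those assumed population shares – and for something like 40K players, nobody actually knows them.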

For example (because I’ve done a little bit of research work on this problem), let’s pretend you do a survey of people in your home town by calling their phones, and you notice only 10% of the people in your sample are between 18 and 22…but you live in a college town. What the hell just happened? Well, when you picked “your home town”, what the phone company gave you was a series of exchanges that cover Boulder, Ann Arbor, Blacksburg, or wherever. But if all the college students kept their old cell phone numbers from their hometowns, they’re not on those exchanges, so you can’t reach them.

Coming up with examples of how this affects 40K/30K and the answers you get is pretty easy. Are people who answer online surveys likely to be younger than the average player? Are they more or less likely to be part of a club, and thus more likely to be able to play niche games like SW:A or Horus Heresy? Are they more likely to be tournament players? To own multiple armies? I could keep going, but you should be getting the idea. Just having a survey isn’t enough – nor is posting it in lots of places. Designing the sampling strategy of even a simple survey is a very complex topic.

Then we get to question design.

If I ask you “What is your primary army?” what do I mean? Your biggest? Your current army? The one you play the most? The one you play the best? Your truest or first love?

For example, my brother’s primary army, when it comes down to it, is probably a form of Imperial Soup. He lovingly refers to it as the ‘Grand Armée of the Imperium’ because he fancies himself this guy, except with more Forge World models.

Except the army he’s working on right now is Death Guard, the army he’s going to field at the Las Vegas Open is Eldar…you get the idea. How you choose to answer it will change the results. How the people analyzing the survey choose to interpret your answer will change the results. How the people analyzing it choose to handle what happens if you don’t answer will change the results…

So surveys are useless?

No. But they’re not inherently useful. They need context, and they need careful analysis. Universities, think tanks, and large companies have survey centers staffed with PhDs who specialize in these things and charge really rather staggering consulting rates. If a random guy with a Survey Monkey account would do the trick, they wouldn’t exist. What I’m saying is that you should take these results with a grain of salt, consider how their questions were framed, and ask whether you think their sample is representative – or whether the people running the survey show even a flicker of recognition that that’s something they need to worry about.

This sounds very strange coming from this blog, I know, but just because it has numbers doesn’t mean it’s right.

Enjoy what you read? Enjoyed that it was ad free? Both of those things are courtesy of our generous Patreon supporters. If you’d like more quantitatively driven thoughts on 40K and miniatures wargaming, and a hand in deciding what we cover, please consider joining them.

5 Comments


  1. Great article. Thanks for the moment of sanity before we enter the asylum :)

  2. Truth. While I never went past undergrad Stat, that taught me enough to know that good surveys and studies are much, much harder to do than they seem at first glance, and can provide some very misleading information when done badly.

  3. There were a lot of issues with that survey. I remember thinking there weren’t enough identification questions, it was an online survey with all that entails, and there were a lot of “fill in this text field with whatever” questions as well.

    Nevertheless, I applauded them for doing something, anything to hear from the playerbase. As long as they’re not deciding long-term corporate strategy based on the results (sorry, all of you who put in “Plastic Sisters” in those text fields), it was a step in the right direction.

  4. Great article. I very much enjoyed the breakdown of the why and how of conducting a survey!
