WordCamp Speaker Diversity, Blind Reviews, & Selection Criteria

  1. Generating a WordCamp Application Review Form in Google Forms
  2. WordCamp Speaker Diversity, Blind Reviews, & Selection Criteria
  3. It’s Time to Ditch WordCamp Tracks (and What to Do Instead)

2023 Update: The WordCamp Organizer handbook just announced a page for “WordCamp Speaker Selection” that prioritizes diverse content and diverse speakers. This page is excellent, and I’d encourage you to defer to the recommendations in it. I’m still proud of the work we did in 2017, but other people have done so much more since.

The title of this post was always a bit misleading, since we did consider non-anonymous speaker information after the review. In fact, the approach we took is quite similar to what is recommended today.

I’ll leave you with this very important quote from the handbook:

With a fully blind selection process, diversity is not prioritized at all. When diversity is prioritized, organizers make specific choices to ensure that a lineup contains a diverse group of individuals. Blind selection removes that ability.


Early in the year, my WordCamp Seattle 2017 speaker team co-lead and I planned out our responsibilities for the six months leading up to the conference. Among the early decisions was to start with a blind reviewing step as part of our speaker selection process. This post dives into what we did, the advantages of that approach, and some transparent details about our selection criteria.

Before I go any further, a huge thanks to my speaker team co-lead Nichole Betterley from N Powered Websites who did tons of work alongside me and also reviewed this post before publication.

Important Influences & the Folks Before Us

As part of this process, I leaned heavily on the past work and experiences of other people—notably all women to the best of my recollection—who shared these goals. Courtney Stanton’s article “How I Got 50% Women Speakers at My Tech Conference” has stuck with me in the years since I first read it, and Jill Binder, Morgan Kay, and others’ work on the WordCamp Speaker Training Workshops was fabulous.

A lot of these things seem obvious once you’ve read them, but plenty (most?) of technology conferences don’t use these types of practices. The people who cared about this early on and shared their experiences have clearly improved events that tens of thousands of people have attended.

Why A Blind Review?

A blind review isn’t the only way to run speaker selection, but given our goal of a diverse speaker lineup, we felt quite strongly about using one.

Heading into the blind review, I was primarily interested in fairness and in collecting application feedback without consideration of an applicant’s connection to the local community. I knew from experience that seeing a person’s name while judging a talk heavily influenced my rating of it.

I feel confident that we succeeded in increasing fairness and reducing bias, but I also found that the blind vetting process was fabulous for helping us focus on the details of the talk that mattered most for being a successful speaker. (See “What Made a Successful Application” below.)

Recruitment Required

A blind review can only produce the results you want if the applicant pool reflects the diversity you seek. To get that, you have to hit the pavement and the Twitters. We took this seriously.

  • We promoted a form for recommending speakers and personally invited every recommended person to apply. Every organizer suggested folks to invite as well.
  • We ran five workshops in total throughout the Puget Sound area to encourage new speakers and help them develop their pitches, including one at Seattle WordPress Meetup’s “Study Group for Women”. We know that at least five selected presenters (not just applicants) attended a workshop. ((I led one workshop, but my co-lead Nichole deserves most of the credit for making the rest of these workshops happen! Thanks to Kelli Wise who ran one in Olympia and Eric Amundsen who ran one in Gig Harbor!))
  • We got outside the WordPress world. Our speaker lineup included a job coach, a usability professional, and a web accessibility consultant, among others. These people brought important outside knowledge into our community!

This was time-intensive, but it mostly worked! We recruited many women to speak; got the word out in Seattle, the Puget Sound Region, the Pacific Northwest, and across the US (that “bullseye” generally reflected our resulting speaker geography); and pulled in a wide variety of skillsets and job types, which helped us put on a conference that appealed to the mind-boggling range of interests and skills among WordCamp attendees.

Room to Improve: Racial Diversity

Notably absent above is racial diversity. We didn’t ask people how they identify, but our speaker slate was about 90% white. We had a few superb speakers who happened to be people of color, and in that sense the process “worked”: the racial breakdown of the selected speakers roughly matched the applicant pool. But that just means we didn’t recruit enough. I really hope that next year’s speaker team is able to improve on this, starting with the following:

  1. Including a person of color on the speaker team (though not putting them in charge of racial diversity)
  2. Doing in-person outreach to racially diverse local groups that include potential speakers

A racially diverse speaker pool is absolutely out there, and we didn’t do enough to invite and welcome them into our community in the run-up to WordCamp.

The Review Process

We collected over 170 applications from nearly 90 applicants, which we sent to a panel of 20 reviewers selected from the community. ((Those applications ended up in nine Google Forms. If you missed the first post in this series, here’s how I took the WordCamp speaker submissions and automated the form-building process.)) As with our applicant pool, we did our best to select reviewers who were experienced community members spanning a range of experiences, skillsets, and identities.
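If you’re curious what the data side of that might look like, here’s a minimal Apps Script sketch of the batching step, assuming the applications sit in a spreadsheet. The spreadsheet ID, sheet name, column names, and batch size are all placeholders for illustration, not the actual setup from the first post in this series:

    // Hypothetical sketch: split anonymized applications into review batches.
    // Sheet ID, sheet name, column names, and batch size are assumptions.
    function buildReviewBatches() {
      var sheet = SpreadsheetApp.openById('YOUR_SHEET_ID').getSheetByName('Applications');
      var rows = sheet.getDataRange().getValues();
      var header = rows.shift();

      // Keep only the fields reviewers should see (no names or emails).
      var keep = ['Title', 'Audience', 'Format', 'Description'].map(function (name) {
        return header.indexOf(name);
      });
      var anonymized = rows.map(function (row) {
        return keep.map(function (i) { return row[i]; });
      });

      // Chunk the talks into groups so each review form stays a manageable length.
      var batchSize = 20;
      var batches = [];
      for (var i = 0; i < anonymized.length; i += batchSize) {
        batches.push(anonymized.slice(i, i + batchSize));
      }
      return batches; // each batch becomes one review form
    }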

We learned in previous years that people reviewing talks quickly fall into rating based on personal interest level. There’s no way to completely avoid this—it definitely happened—which is why we gave our reviewers very explicit criteria for judging applications.

Our review forms included the talk title, intended audience, intended format, and an anonymized talk description. Given this information, we asked our reviewers to rate each talk on a 6-point scale from “Low Quality” to “Must Have” based on the following:

  • Use the full 6-point scale for your ratings. It’s easy to use lots of 3s and 4s, but please use at least some 1s, 2s, 5s, and 6s!
  • Does the description demonstrate knowledge of the topic and show attention to detail?
    • We know some descriptions are short; just do your best.
    • If you’re unfamiliar with the technical topic, you can skip the rating or review based on attention to detail.
  • Will the talk appeal to a significant group of people and contribute to having a rich spectrum of topics throughout the weekend?
  • Does the topic cater to the intended audience?
  • Is the stated format appropriate for the topic?
  • As much as possible, DO NOT rate talks based on your personal interest level in the topic

Feel free to steal that. We were happy with the results. 😁
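If you’d like to steal it in script form too, here’s a minimal Apps Script sketch of one review form, assuming each talk is an object with title, audience, format, and description fields. The function name, data shape, and condensed criteria text are my own shorthand for illustration, not the exact forms we built:

    // Hypothetical sketch: build one blind-review form with a 6-point scale per talk.
    // The guidance text condenses the list above; everything else is an assumption.
    var RATING_GUIDANCE =
      'Use the full 6-point scale. Rate on topic knowledge, attention to detail, ' +
      'audience appeal, fit for the intended audience, and fit of the format. ' +
      'As much as possible, do NOT rate based on your personal interest in the topic.';

    function buildReviewForm(talks, batchNumber) {
      var form = FormApp.create('Speaker Application Review - Batch ' + batchNumber);
      form.setDescription(RATING_GUIDANCE);

      talks.forEach(function (talk) {
        // Show only the anonymized details: no speaker names anywhere on the form.
        form.addSectionHeaderItem()
          .setTitle(talk.title)
          .setHelpText(
            'Audience: ' + talk.audience + '\n' +
            'Format: ' + talk.format + '\n\n' +
            talk.description
          );

        // One 1-6 scale question per talk, labeled to match the rating scale above.
        form.addScaleItem()
          .setTitle('Rating: ' + talk.title)
          .setBounds(1, 6)
          .setLabels('Low Quality', 'Must Have')
          .setRequired(false); // reviewers may skip topics outside their expertise
      });

      return form.getPublishedUrl();
    }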

The Results & How We Used Them

This process worked extremely well in producing a ranked list of speakers that was not influenced by the applicant’s identity. In fact, the top of the list included more women than men. Our final results: ~60% of speakers were women and just over 50% of sessions included one or more women. ((The difference in percentages comes from the fact that we had two panels made up entirely of women, two presentations by two women, and lightning talk sessions with 5 presentations each including multiple women. If we had 80% women but only 25% of our sessions included women, that would have been less successful in my mind.)) We also had women speaking on technical topics, not just about community, design, or writing, which are more common.
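For concreteness, turning the raw reviewer responses into a ranked list can be as simple as averaging each talk’s 1–6 scores. A minimal sketch that pairs with the hypothetical form-building code above; it illustrates the idea rather than the exact tally we ran:

    // Hypothetical sketch: average each talk's 1-6 ratings across all reviewers.
    // Assumes rating items are titled 'Rating: <talk title>' as in the sketch above.
    function rankTalks(formIds) {
      var totals = {}; // talk title -> { sum, count }

      formIds.forEach(function (id) {
        FormApp.openById(id).getResponses().forEach(function (response) {
          response.getItemResponses().forEach(function (itemResponse) {
            var title = itemResponse.getItem().getTitle();
            if (title.indexOf('Rating: ') !== 0) return; // skip non-rating items
            var score = Number(itemResponse.getResponse());
            if (!score) return; // reviewer skipped this talk
            var talk = title.slice('Rating: '.length);
            totals[talk] = totals[talk] || { sum: 0, count: 0 };
            totals[talk].sum += score;
            totals[talk].count += 1;
          });
        });
      });

      // Highest average rating first.
      return Object.keys(totals)
        .map(function (talk) {
          return { talk: talk, average: totals[talk].sum / totals[talk].count };
        })
        .sort(function (a, b) { return b.average - a.average; });
    }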

Putting together an engaging speaker lineup can’t rely fully on a blind review. We had multiple highly rated topics that were too similar to include on the same schedule. We also had highly rated talks that we felt had been given at one too many WordCamps. ((On that note, we got compliments for the range of topics and number of unique topics and “takes” we put together for the final schedule.)) Each speaker we tentatively selected was then vetted to confirm they were knowledgeable on their subject and could give a good presentation.

But I can honestly say that the rankings anchored our entire process. Nearly every highly ranked speaker who was available spoke at our conference. The blind review scores weighed heavily in every decision, and the only times we went outside this process were to recruit the keynote, our WordPress 101 and JavaScript workshops, and talks that filled a few small gaps in content not addressed by the applicant pool.

What Made a Successful Application

One final reason I liked the blind review process so much is that I’m comfortable being extremely transparent about what we did and the ranking criteria we used. When people asked why they were turned down, we were able to share the rating criteria. To that, I added a list of common threads I observed among low-rated or otherwise unaccepted talks:

  • Topics that seemed too large to easily fit in 25-30 minutes
  • Descriptions that were unclear about the key take-home points
  • Use of jargon that assumed too much knowledge on the part of the stated audience and/or made it hard for us to judge what exactly the topic was
  • Descriptions that were too vague or too short
  • Topics with too narrow an audience
  • Topics that didn’t have a clear connection to WordPress or seemed out of touch with the WordPress community
  • Talks we couldn’t accommodate because other similar submissions (same topic or audience) were already selected

In retrospect, I realized that our speaker application form rewarded preparation, attention to detail, and taking the application process seriously, because we intentionally made it longer and more onerous as an initial screening device. Folks who put together a highly rated application put thought into their presentations, practiced them before the conference, and came ready to share their expertise.

Go Read & Recruit!

I’m serious in saying I relied on the work and generous sharing of knowledge from people who did this type of work before me. They deserve all the credit; we mostly just repackaged what they had already done. For instance, the people who started the Seattle WordPress Meetup’s Study Group for Women made the space for us to recruit within that existing community.

If you thought this article was useful, seriously go read “How I Got 50% Women Speakers at My Tech Conference” and check out the WordCamp Speaker Training workshops. Then go work to improve your own conferences by making sure you include more women, people of color, and other underrepresented folks as organizers, attendees, and speakers!


By sharing my experiences, I hope I can help other people (hello, men!) understand the benefits of thoughtful speaker recruitment and show how these ideas really do work! I’m happy to answer any questions folks have in the comments, via email, or on WordPress.org Slack (I’m @MRWweb).
