Choosing iGEMers — and fighting implicit bias

Third in an occasional series on iGEM.

iGEM recruiting season has ended. At MIT, we advertise and hold info sessions, then ask that applicants send us an up-to-date resume and answer three questions with a paragraph each:

  • Why do you want to be on the MIT iGEM team?
  • What project do you think would be appropriate and exciting for the MIT iGEM team?
  • What other non-science skills can you contribute to the team?

Along with the resume, these questions try to get at a student’s enthusiasm, creativity, maturity, and diversity. (Pre-existing lab skills are a bonus; domain-specific knowledge we can (and do!) teach.) In past years, I have read the applications and taken notes, then looked back over the notes and picked the 10-12 applicants that I liked the best.

And then, over the last few months, I went to two really eye-opening talks.  The first was an event at Boston HUBweek 2016 about designing inclusive organizations. The second, sponsored by the MIT Institute Community and Equity Office, was a fascinating workshop on probing one's hidden biases, run by Prof. Mahzarin Banaji, who studies implicit bias at Harvard. Both events discussed recent studies in social psychology, whose upshot is that we like people who look like us, who think like us, who share our values. Well, that’s no surprise — but it’s a problem when you’re hiring.

Or, say, choosing iGEMers. I want a diverse, inclusive team, not a team that looks like me and thinks like me.  Diverse groups do better science and have better educational outcomes — and the teams of ours that have done best had students from different majors, different class years, different backgrounds, and with different interests. If I only pick the students that I “like”, I’m likely to end up with a team that … looks and thinks a lot like me. Even if that’s not the intent.

(Side note — this is how we ended up with a tech industry full of white cis guys. Companies “hire for fit” and the people that get hired are the people that look like everyone else and think like everyone else.)

So what to do? There are a couple of things. Blind evaluations are a great start — this is why, for example, many top orchestras have candidates audition from behind a screen. Unfortunately, that’s a pain in this case — I’d have to get someone else to receive the applications and then edit the resumes to remove identifying information like gender, race, etc. I’m a bit of a one-man show at the moment. (Hearteningly, there is evidence that being aware of your own biases can help you account for them.)

Another way to fight implicit bias is to make your review structured.  If you’re interviewing job candidates, decide up front what you need to learn from them, then ask every candidate the same questions. Not only does this make evaluations of different candidates more comparable, it also forces you to check that what you’re asking is actually relevant to the job you’re hiring for.

I applied this strategy to the problem of choosing applicants for this year’s iGEM team by coming up with a rubric before I started reading applications. I scored each candidate on enthusiasm, experience, creativity, diversity, maturity, and availability. Each category was scored 1 to 3, with exemplars as follows:

  • Enthusiasm
    1. What’s this “synbio” thing? It sounds cool, but I don’t know enough about it to say.
    2. I know what synbio is and I’m pretty stoked. Maybe I even took a class on it or have done some independent research.
    3. I’ve been interested in iGEM since forever. Maybe I participated in a team in high school, or I tried to start one. I think synbio is AMAZING and would REEEEALLY like to join this year’s team.
  • Experience
    1. This will be my first research experience.
    2. I’ve had some research experience elsewhere.
    3. I’ve had extensive research experience, or some synbio research experience.
  • Creativity
    1. Ideas are poorly thought out or wildly impractical. E.g., “Let’s terraform Mars!”
    2. Ideas are practical but a little ho-hum. E.g., “Let’s cure cancer!”
    3. Ideas are creative and impactful. E.g., “Let’s make a tunable timer for time-release drugs.”
  • Diversity
    1. Biology or bioengineering major; few or no “extra skills.”
    2. Electrical engineering or computer science major; extra skills include organizing teams and planning events.
    3. Artists, musicians, architects, mathematicians, and other engineering majors; other skills including web dev, design, etc.
  • Maturity (yes, I know, but it really does make a difference)
    1. Freshman
    2. Sophomore
    3. Junior
  • Availability
    1. I can only give you the summer.
    2. I can give you the summer, but I have regular conflicts in the spring or fall (or I’m gone for IAP).
    3. I have no current conflicts.

I was pleased to see a diversity of scores in each of these categories, indicating that they’re … measuring something, maybe? And when I summed them together, I got a nice range of “total” scores. All in all, I thought it worked well, and it took a lot of the “I’m the only person reading these things, what if I screw it up??” anxiety out of the process.
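If you’re curious what the bookkeeping looks like, here is a minimal sketch in Python of how scores like these could be recorded and tallied. The category names match the rubric above, but the applicants, their scores, and the equal weighting of all six categories are hypothetical; this is just one way to script it, not a tool the team actually uses.

    # Minimal sketch: tally 1-3 rubric scores per applicant and rank by total.
    CATEGORIES = ["enthusiasm", "experience", "creativity",
                  "diversity", "maturity", "availability"]

    def total_score(scores):
        """Sum one applicant's scores, checking each is in the 1-3 range."""
        for cat in CATEGORIES:
            if not 1 <= scores[cat] <= 3:
                raise ValueError(f"{cat} score must be 1-3, got {scores[cat]}")
        return sum(scores[cat] for cat in CATEGORIES)

    # Hypothetical applicants, scored while reading their applications.
    applicants = {
        "Applicant A": {"enthusiasm": 3, "experience": 2, "creativity": 3,
                        "diversity": 2, "maturity": 1, "availability": 3},
        "Applicant B": {"enthusiasm": 2, "experience": 1, "creativity": 2,
                        "diversity": 3, "maturity": 2, "availability": 2},
    }

    # Rank by total score, highest first.
    for name, scores in sorted(applicants.items(),
                               key=lambda kv: total_score(kv[1]), reverse=True):
        print(f"{name}: {total_score(scores)} / {3 * len(CATEGORIES)}")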

Also … I am really excited about our team this year.  I think it’s the most diverse, creative, interesting team I’ve ever been involved in helping choose, and I think it portends a great year for MIT iGEM!

One last thing.  I’m not sure that there’s even anyone reading this. If you are, and you are an iGEM mentor involved in choosing your team, drop a note in the comments about what you do, or what you do differently. (Or just to say hi (-; ).