Racism And Meritocracy

Editor’s note: Guest contributor Eric Ries is the author of The Lean Startup. Follow him @ericries.

Unless you’ve been living under a rock, you can’t have missed the recent dust-up over race and Silicon Valley. Like almost every discussion of diversity and meritocracy in this town, it turned ugly fast. One side says: “All I see is white men. Therefore, people like Michael Arrington must be racist.” The other responds, “Silicon Valley is a colorblind meritocracy. If there were qualified women or minority candidates, we’d welcome them.”

I’d like to say a few words about this, but I want to do so under special ground rules.

I want to make an argument, step by step, that I hope will convince you to care about this issue, but that doesn’t presuppose that you already agree that diversity is important. And it will explain how it is possible for both sides to be mostly correct – and that we still have a problem.

So the rules are:

  1. No political correctness. Let’s speak the truth no matter where it leads.
  2. You don’t have to believe that diversity is an end in itself. In fact, I will argue that it is important as a means to an end.
  3. Meritocracy is a good thing. Whenever possible, people should be judged on their work and results, not on superficial qualities.
  4. We should use science, whenever possible, rather than anecdotal evidence.
  5. No hand-wringing. There’s no point discussing this problem if we can’t do anything about it.

So – no hippies, no whiners, no name-calling, and no BS. If you want to make Silicon Valley – and startup hubs like it – as awesome as possible, pay attention.

What accounts for the decidedly non-diverse results in places like Silicon Valley? We have two competing theories. One is that deliberate racism keeps people out. The other is that white men are simply the ones who show up, because of some combination of aptitude and effort (which of the two depends on who you ask), and that admissions to, say, Y Combinator, simply reflect the lack of diversity in the applicant pool, nothing more.

The problem with both of these theories is that the math just doesn’t work.

It’s a fact that the applicant pool to most Silicon Valley startup schools and VCs is skewed. Could this be the result of innate differences between white men and other groups? The math simply doesn’t hold up to support this view. Think about two overlapping populations of people, like men and women. They would naturally be normally distributed in a bell curve around a mean aptitude. So picture those two bell curves. Here in Silicon Valley, we’re looking for the absolute best and brightest, the people far out on the tail end of aptitude. So imagine that region of the curve. How far apart would the two populations have to be to explain YC’s historical admission rate of 4% women? It would have to be really extreme.
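How extreme? Here’s a back-of-the-envelope calculation – my own sketch, not from any study, and both the 1% selectivity bar and the 50/50 applicant pool are assumptions. It asks: given two equally sized groups with normally distributed aptitude and identical standard deviations, how far apart would the means have to be for the lower-mean group to make up only 4% of the people above a highly selective bar?

```python
# A rough tail-math sketch. Assumptions (mine, for illustration): two equally
# sized groups, normally distributed aptitude with SD = 1, and a bar that
# only the top 1% of the combined pool clears.
from scipy.stats import norm
from scipy.optimize import brentq

SELECTIVITY = 0.01   # fraction of the combined pool above the bar (assumed)
TARGET_SHARE = 0.04  # observed share of the lower-mean group among admits

def share_above_bar(gap: float) -> float:
    """Share of the lower-mean group above the bar, for a mean gap in SD units."""
    combined_sf = lambda t: 0.5 * norm.sf(t) + 0.5 * norm.sf(t + gap)
    # Find the bar t such that exactly SELECTIVITY of the combined pool clears it.
    bar = brentq(lambda t: combined_sf(t) - SELECTIVITY, -10, 10)
    return 0.5 * norm.sf(bar + gap) / combined_sf(bar)

# Solve for the mean gap that reproduces a 4% share among those selected.
gap = brentq(lambda d: share_above_bar(d) - TARGET_SHARE, 0.0, 5.0)
print(f"required mean gap: {gap:.2f} standard deviations")
```

The answer comes out to more than a full standard deviation – an order of magnitude larger than any aptitude difference the research has actually measured.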

There is some research on the differences between men and women, and it has shown some differences in both average aptitude and the standard deviation of aptitude (i.e. that men have more extreme outcomes in both the positive and negative direction). But these differences are extremely small, nowhere near large enough to suggest a region on this curve with all men and no women on it. If you’d like to examine the math involved, check out this excellent slide deck courtesy of Terri Oda.

What is true for aptitude is also true for interest. Some populations are more interested in science, in math, in business, and in taking risks than others. But all of the research I am aware of suggests that these differences are extremely small – not nearly big enough to explain what we’re observing in places like Y Combinator.

This is why I personally care about diversity: it’s the canary in the coal mine for meritocracy. When we see extremely skewed demographics, we have very good reason to suspect that something is wrong with our selection process, that it’s not actually as meritocratic as it could be. And I believe that is exactly what is happening in Silicon Valley.

There’s plenty of good research on the subject of team performance that shows that diverse teams outperform homogeneous teams on many different kinds of tasks. The problem is that this research doesn’t argue for demographic diversity, but rather for a diversity of perspectives. So, again, racial or gender diversity is not an end in itself. But we have to ask ourselves: if teams are consistently being put together with homogeneous demographics, what are the odds that they will also contain a diversity of perspectives? Shouldn’t we be worried that the same selection process that produces homogeneous results in one area might be accidentally doing the same in the area that we care about (but that is harder to measure)?

Does that mean that the racism theory is necessarily correct? I don’t think so. I’ve certainly heard my share of sexist and racist jokes in Silicon Valley, but hardly enough to believe that people like Michael Arrington or Paul Graham are lying when they say that they are colorblind. I think that – in the absence of any counterevidence – we should take them at their word. Besides, we don’t need racism to explain these results. Now that we’ve clarified the question to be “how do we build a meritocratic selection process?” we can look at a wealth of research that has been done in this area.

And there’s good news here. Wherever selection processes have been studied scientifically, errors have been found. These errors are called “implicit bias” in the research literature, which causes a lot of confusion, because the word “bias” connotes malevolence. But let’s leave that connotation behind – we’re entrepreneurs, scientists and engineers, for goodness’ sake. We can talk about bias like grownups.

And what the grownups have discovered, through painstaking research, is that it is extremely easy for systems to become biased, even if none of the individual people in those systems intends to be biased. This is partly a cognitive problem, that people harbor unconscious bias, and partly an organizational problem, that even a collection of unbiased actors can work together to accidentally create a biased system. And when those systems are examined scientifically, they can be reformed to reduce their bias.

The most famous example of this comes from the world of musical orchestras. Until the 1970s, almost every professional orchestra in the world was all-male. All experts in the musical world agreed on the reason: male performers had superior aptitude to female performers. They gave all kinds of explanations for why, that had to do with men’s allegedly superior skill, hand-eye coordination, interest in music, and their willingness to sacrifice so much to become a professional musician. And yet, by the 1990s, these ratios had changed dramatically. No conductors went to political correctness anti-bias training camps. No hand-wringing was needed. They hit upon a solution – by accident – that practically changed orchestra selection overnight: they had performers audition behind a physical screen, so that the judges could not see their race or gender while they played. When rating performers anonymously, it turned out that men and women played equally well, on average.

If you’ve seen the movie Moneyball recently (or read the book), this should sound familiar. The whole premise of Moneyball was the triumph of science, data, and reason over the gut feelings and beauty contests of baseball scouts. Think of the famous scene in which the scouts are sitting around a table debating which prospects had “the right look” – and Brad Pitt and Jonah Hill are calling BS. Which side of the table sounds more like the admissions process to a Silicon Valley startup school, where they are often “looking for people like us”?

According to the research on implicit bias, our selection processes are making some huge, obvious mistakes. The Y Combinator partners conduct short ten-minute interviews where they make snap decisions about candidates on the spot – sometimes in as little as sixty seconds. This process, while efficient, is the exact opposite of an audition behind a screen. They are even moving toward video interviews – which would bring this visual bias even earlier into the process.

Now think about the countless VC pitch meetings and “get to know you” mixers and coffees and lunches. These are all opportunities for VCs to use their vaunted pattern recognition to try to spot promising entrepreneurs and companies early. But pattern recognition is just a fancy name for bias. And if you look at the research on implicit bias, you will find that bias is a necessary consequence of using pattern recognition; it’s part of how the brain works. We literally think faster when we see something that matches the pattern, and have to slow down to process something that doesn’t. I think Michael Arrington provided a fascinating first-hand account of this cognitive process in action when he described his experience struggling to name a single African-American entrepreneur. He couldn’t come up with one on the spot, but not because he’s a racist.

None of this is meant as a criticism of Y Combinator, VCs, or anyone else. It’s meant to point out that even though our current selection process is pretty good, and pretty meritocratic, it still contains bias. We can do better. And, if we do, we will make all of Silicon Valley more successful.

So how can we do better? I believe there are several relatively simple changes we could make right away.

I previously described on my blog one simple change I made to the hiring process at my last company. I asked all of our recruiters to give me all resumes of prospective employees with their name, gender, place of origin, and age blacked out. This simple change shocked me, because I found myself interviewing different-looking candidates – even though I was 100% convinced that I was not being biased in my resume selection process. If you’re screening resumes, or evaluating applicants to a startup school, I challenge you to adopt this procedure immediately, and report on the results.
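For the mechanically inclined, here is a minimal sketch of that redaction step – my own construction, assuming resumes arrive as structured records; real resumes would of course need actual parsing, and every field name here is hypothetical:

```python
# A minimal blind-screening sketch. The Resume fields are hypothetical.
from dataclasses import dataclass, replace

REDACTED = "[REDACTED]"

@dataclass(frozen=True)
class Resume:
    name: str
    gender: str
    place_of_origin: str
    age: int
    experience: str  # the part the screener should actually judge

def blind(resume: Resume) -> Resume:
    """Black out every identity field before the resume reaches a screener."""
    return replace(resume, name=REDACTED, gender=REDACTED,
                   place_of_origin=REDACTED, age=-1)  # -1 = age withheld

candidate = Resume("Jane Doe", "F", "Lagos", 29,
                   "8 years building distributed systems; led a team of five")
print(blind(candidate))  # only merit-relevant fields survive
```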

Startup schools are an exceptionally good laboratory for testing these ideas. In fact, if anyone out there wants to put this idea to the test, I suggest the following experiment: for your next batch of admissions, have half of your reviewers use a blind screening technique and the other half use your standard technique, on your first screen (before you’ve met any applicants). Compare the outputs of both selection processes. I predict they will show different demographics.
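If you run that experiment, a standard chi-squared test of independence is enough to tell whether the two screens really do select different demographics. A minimal sketch, with entirely hypothetical counts:

```python
# Compare admits by demographic group under each screening arm.
# The counts below are made up; substitute your real results.
from scipy.stats import chi2_contingency

admits = [
    [14, 9, 7],  # blind screen:    group A, group B, group C
    [22, 4, 2],  # standard screen: group A, group B, group C
]
chi2, p_value, dof, _ = chi2_contingency(admits)
print(f"chi2 = {chi2:.2f}, p = {p_value:.3f}")
# A small p-value means the two screens select different demographics -
# i.e. the standard screen is adding bias that the blind screen removes.
```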

Of course, this doesn’t address the whole problem. Remember, part of the defense against the racism theory is that the applicants are already skewed before any selection is done. Once again, this sounds like something you can only throw your hands up about: if it’s not a problem with innate differences, it must be a problem with our education system or some other “pipeline” problem.

So let’s take a look at this problem, too.

I once spent time with a promising entrepreneur who was not a white man. Because their startup sold a product that a lot of tech entrepreneurs buy, many of their customers were graduates of Y Combinator. So I asked if they were planning to apply. Their response: “Oh, no, it’s a waste of time. Y Combinator doesn’t accept people like me.” Where did they get that idea? Surely not from YC’s partners, who as far as I can tell are scrupulously fair in their dealings with entrepreneurs. Rather, they got that impression by inferring that there is probably implicit bias in YC’s admissions process, and that they’d be better off spending their time on something other than applying to YC.

We all know there is a huge gender gap in computer science: women receive only about 30% of degrees in CS. But 30% is a lot larger than 4% – and that’s a big math problem for advocates of the pipeline theory.
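How big? Assume, generously to the pipeline theory, that the applicant pool mirrors the degree pipeline at 30% women and that selection is unbiased. A quick binomial calculation (the 200-admit cohort size is my hypothetical) shows the odds of ending up at 4%:

```python
# Back-of-the-envelope check on the pipeline-only explanation.
from scipy.stats import binom

n_admits = 200                     # hypothetical cohort size
p_women = 0.30                     # women's share of the CS degree pipeline
observed = round(0.04 * n_admits)  # 4% of admits = 8 women

p = binom.cdf(observed, n_admits, p_women)
print(f"P(at most {observed} women out of {n_admits}) = {p:.2e}")
# Effectively zero: the pipeline alone cannot explain a 30% -> 4% drop.
```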

Imagine that you were a professional musician thinking about which orchestra to audition for. You have a choice between an all-male orchestra that conducts interviews out in the open, and a mixed-gender orchestra that conducts auditions behind a screen. Which would you choose to apply to? Wouldn’t your answer be different if you were a man or a woman?

I think thought experiments like this are helpful for suggesting an alternate hypothesis to the pipeline problem: that there are qualified minority applicants who are choosing – rationally – to invest their time and energy elsewhere. I am not aware of any scientific study that proves this hypothesis is correct. But I have seen enough existence proofs to believe it is likely.

For example, I have been a mentor for several years in the Founder Labs program, which was originally created by Women 2.0. It’s a pre-incubator program that helps potential founders figure out whether they should become entrepreneurs. They created it as a way of encouraging women to apply to startup schools and create companies. But they took a novel approach to this problem. They did not advertise the program as being about diversity. Instead, they adopted a minimal rule: each founding team had to have at least one woman, and they privately reached out to talented women in their networks and encouraged them to join.

I remember the first time I spoke to the Founder Labs teams. I kept asking: who are you, and where have you been? It was unlike any audience I’ve seen at any other startup school: 50/50 men and women, with a surprising amount of diversity. The participants included chip designers and hard-core engineers, the kind of people who have the aptitude but don’t apply to most startup school programs or pitch most VCs. I believe the reason they came to this program was that they believed its selection process would be more meritocratic.

Groups that make a conscious effort to become more meritocratic are able to make meaningful changes in the diversity of their participants. One of my favorite examples is the San Francisco Ruby Meetup, which spent a year working to increase the number of women who participate. The steps they took required effort, but not rocket science. They didn’t have to get sixth-grade girls interested in programming. You can read more about it here.

There’s one last piece to this puzzle that science can help us with. It goes by the rather unfortunate academic name of stereotype threat. But a confusing name doesn’t make it any less real. It turns out that when people are in a situation that defies stereotypes, reminding them of the stereotype diminishes their performance. In one study from NYU, students were given a math test. Asking men and women questions about their gender beforehand increased the performance gap substantially. “Priming” students with questions about other aspects of their identity did not. This result has been replicated in many, many studies.

I think this helps explain why asking more minorities to apply to these programs doesn’t work. Consciously thinking about proving a stereotype wrong impairs performance. So it’s entirely possible that a completely objective assessment of the performance of candidates in an application process will show minority candidates doing worse, because they are literally cognitively impaired.

And this brings me back to the no hand-wringing rule. Most people interpret this finding as bad news, but I think they have it backwards. It’s actually really good news. If you look at the studies, what they show is that the performance gap between groups can be mostly erased if candidates are primed in a merit-focused way. Explicit diversity programs have the solution exactly backwards. What we need to do is to build meritocratic selection processes, and then go out of our way to tell people about them. We should emphasize the objectivity of the selection process and our efforts to weed out all forms of bias. I believe this is why certain programs, like Founder Labs and 500 Startups, that boast of their meritocratic “moneyball” approach to admissions have more diverse applicants – and participants.

When it comes to meritocracy and diversity, the symbolic is real. And that means that simple actions that reduce bias, such as blind resume or application screening, are a double win: they reduce implicit bias and they help communicate our commitment to meritocracy. As a startup ecosystem we are in the meritocracy business. This is the path towards making Silicon Valley – and every other startup hub – even more awesome.

Photo credit: Daviniodus
