Words matter. We are committed to creating an anti-racist culture by using language that champions equity and inclusion. Here are guidelines to help you spot and remove hurtful language.
People come first.
Our decisions about racist language shouldn’t be purely intellectual. That detachment is itself a white privilege. In any rationale, harm to people outweighs harm to content. Our content should not hurt people.
We take the lived experiences of Black, Indigenous, and people of color into account.
When deciding whether to use a word, we empathize with communities that have experienced disparate harm so we can communicate better for everyone. We follow our Intuit value of Stronger Together.
If it’s harmful to one group, it’s harmful to all groups.
People sit at the intersection of overlapping social categories. Some groups experience oppression in ways that others don’t. If any one group is harmed by a term or phrase, we don’t use it. We believe everyone is better off that way.
We distinguish intent from impact.
We don’t defend choices based on our intended use of language. Well-intentioned choices can still cause harm. It’s not up to us to judge if or how much a word harms, but to believe people who tell us it does. We choose the most inclusive language for positive impact.
We strive for content that’s clear, concise, and accurate.
Many harmful terms are rooted in racist and anti-Black metaphors. They also don’t clearly convey meaning. We look for clearer words that are not only more inclusive, but also easier to understand.
We don’t use black, white, dark, or light as metaphors.
Language that puts a positive connotation on white/light and a negative or mysterious one on black/dark reinforces anti-Black and colorist stereotypes. We choose more direct language to get our point across. We only use these words as literal visual descriptors (such as dark mode), not value judgments.
We’re inclusive of other cultures, but we don’t appropriate them.
Intuit’s content doesn’t use language appropriated from groups that experience oppression. Black Vernacular English (BVE) is one example. We use language that speaks to everyone without taking away from underrepresented cultures.
We’re building an evolving toolkit to remove harmful language.
Language is nuanced. To foster an anti-racist culture, we need to model the environment we hope to create. That means using language that uplifts others and moving away from words and phrases that exploit or shame people.
There are no shortcuts to building an anti-racist culture.
Unlearning racist language requires the work of everyone. We have a word list that guides us in removing harmful language, but this list isn’t prescriptive, and we don’t rely on it as an exhaustive resource. Instead, we commit to challenging our thinking to make sustainable, long-term changes.
Determining if a word is harmful
How do you know if a word is racist?
Part of engaging with anti-racist language is using your own judgment. But the decision-making process can be difficult to navigate on your own. Start by asking yourself these questions:
- Is the language working metaphorically?
- If so, what are the implications behind the metaphor? Does it place a positive connotation on whiteness and a negative one on something else (usually blackness)?
- Does the language imply “otherness” or exclusion?
- Can it be replaced with something clearer or more literal? (The answer is often yes.) Think about what the term actually means and describe that.
- Are there any groups of people who could be harmed by this? Who, and in what way? Thinking about who is affected deepens your understanding of anti-racism.
- Does the language make you uncomfortable, even if you can’t quite articulate the reason?
These are terms with racist roots that we don’t use at Intuit. This list is evolving and by no means exhaustive:
- black hat (hacking)
- black box
- dark UX
- redline, redlining
- white glove
- white hat (hacking)