There’s a lot of conversation right now about AI.

How to use it. How to regulate it. How to keep up.

In many organizations, the focus has been on building AI literacy:

  • Understanding the tools.
  • Learning the language.
  • Developing guidelines for use.

All of that matters.

This isn’t an argument against AI.

It’s an argument for a different kind of critical awareness.

Because the question isn’t just how we use these tools. It’s how we understand the systems they are part of.

Shifting the Frame

From an equity perspective, AI literacy isn’t just about functionality.

It opens questions about power.

  • Who designs these systems.
  • Whose data is used to train them.
  • Whose knowledge is prioritized.
  • And whose interests are advanced.

AI systems are not neutral.

They are shaped by the same social, political, and institutional dynamics we see everywhere else.

Why the Idea of Neutrality Holds

There’s a strong pull to believe that AI can be neutral.

That with better data, better models, and better safeguards, we can remove bias.

That belief reflects something deeper in how we understand systems.

We often think of bias as:

  • individual
  • intentional
  • correctable

So it makes sense to assume that if we fix the inputs, we can fix the outcomes.

But much of what AI learns from is not just data. It’s history. And history is structured by inequality.

Which means bias is not simply an error in the system. It is part of the system.

This is where AI mirrors a broader gap in how we understand bias itself.

If we see bias only as something individuals hold, we miss how it is embedded in:

  • institutions
  • policies
  • decision-making patterns
  • and what gets recognized as legitimate knowledge

AI doesn’t introduce these dynamics. It reveals and, in some cases, amplifies them.

What This Looks Like in Practice

Inside the work, this is where the conversation starts to shift.

In a recent consultation, I was asked to think about how EDI factors into the way we approach AI literacy. And what became clear very quickly is that these dynamics aren’t abstract. They show up in very real, very immediate ways.

  • In academic contexts, generative AI is often framed through academic integrity. But for multilingual students, AI tools can function as a form of linguistic access. The same use of a tool might be interpreted as misconduct in one case and as support in another. Without an equity lens, we risk applying policies in ways that overlook differences in access, language, and participation.
  • In workplace contexts, AI tools are often used to increase efficiency in hiring or performance evaluation. But if those systems are trained on historical data that reflect existing patterns of exclusion, they can quietly reproduce those same patterns. What appears as a neutral, data-informed decision can carry forward deeply non-neutral outcomes, as the short simulation after this list illustrates.
  • In knowledge production, generative AI can flatten voice. It often draws more heavily from dominant language patterns and widely available sources. Over time, this can narrow how ideas are expressed and which forms of knowledge are reinforced. For those already underrepresented, this is not just about style. It’s about visibility.
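
A minimal sketch of that hiring dynamic, in Python with NumPy and scikit-learn. The data here is entirely simulated and every number is hypothetical: two groups with identical skill, a historical record that held one group to a higher bar, and a model that learns the record rather than the skill. Real systems rarely see group membership directly; it arrives through proxies like postal codes, employment gaps, or school names.

    # Simulated data only: two groups with identical skill distributions,
    # but group B was historically held to a higher bar.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000
    group = rng.integers(0, 2, n)        # 0 = group A, 1 = group B
    skill = rng.normal(0, 1, n)          # same distribution in both groups

    # "Historical" decisions: equal skill, unequal threshold for group B.
    hired = (skill - 0.8 * group + rng.normal(0, 0.5, n)) > 0

    # A "neutral" model trained on those decisions.
    model = LogisticRegression().fit(np.column_stack([skill, group]), hired)

    # Score a fresh applicant pool: identical skills, only the group differs.
    new_skill = rng.normal(0, 1, 5000)
    for g, name in ((0, "A"), (1, "B")):
        pool = np.column_stack([new_skill, np.full(5000, g)])
        print(f"group {name}: predicted hire rate = {model.predict(pool).mean():.2f}")

    # Typical output: roughly 0.50 for group A and 0.20 for group B.
    # The historical gap, reproduced by a "data-driven" model.

Nothing in the pipeline is malicious, and no step is mislabeled. The inequity lives in the training data, and the model faithfully carries it forward.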

What EDI Asks Us to Consider

The more we’ve been working to “teach AI,” the more it’s become clear that the tools are only one part of the conversation.

An EDI lens doesn’t just add considerations. It changes the questions.

Because once you start looking through that lens, the scope widens.

Not as an add-on, but as a more complete picture.

EDI asks us to consider:

  • Who benefits from these systems, and who is burdened by them. Not just in immediate use, but in how impacts are distributed across different communities.
  • What histories the data carries. Whose knowledge has been included, excluded, or extracted, and what it means to build systems on top of that.
  • How decisions are being shaped. Not only by what AI produces, but by how much authority we give it and in which contexts.
  • What kinds of labour are being displaced, reshaped, or made more precarious. And for whom those shifts carry the most risk.
  • What environmental and resource costs are made invisible. And how those costs are unevenly experienced across regions and populations.
  • And critically: Who is accountable when harm occurs. Not in theory, but in practice.

These are not separate from AI literacy. They are part of it.

Because without this awareness, we risk narrowing the conversation to how to use the tool rather than how the tool participates in larger systems.

The Institutional Question

AI is often framed as a tool individuals use.

But in practice, it is something institutions adopt, integrate, and normalize.

Which means responsibility cannot sit only with individual users.

The questions become:

  • How are decisions about AI being made?
  • What values are shaping those decisions?
  • And how are impacts being assessed over time?

AI is not just a technological shift.

It is a shift in how decisions are made, how knowledge is produced, and how systems operate.

Approaching it with awareness is not about slowing progress.

It’s about ensuring that what we build, and how we use it, aligns with the values we say we hold.

Because without that awareness, we don’t just risk using AI poorly.

We risk embedding existing inequities into new systems, at scale.

-sd