Transgender Man: We Noticed You Viewed This Item Last Week

Alibaba has been emailing me with the subject line “Transgender Man” for weeks.

The first time I saw it, I was confused but paid it little mind. Over the next few months, every few days, unfailingly, I got another Alibaba email. Same tag. Same subject line. “Transgender Man.” Eventually I decided to snoop. What exactly is this promotional email about?

The Basketball Gear

A few months ago, I bought a bunch of wholesale basketball gear from Alibaba for a youth basketball camp I run in Nigeria through Ready Leaders Foundation. Basketballs, backpacks, socks, etc.

These emails with the transgender tag were recommendations for basketball items: shorts, t-shirts, socks, and the like, pulled from my searches for “basketball backpack” and “basketball socks.” Somewhere along the line, Alibaba’s recommendation algorithm identified my profile as female-identifying and flagged the items I was purchasing as “male”. The output in their promotional email? “Transgender Man:”

In one of their many, many emails: “Transgender Man: we noticed that you viewed this item last week.”

It couldn’t possibly be a cisgender woman playing sports and purchasing basketball gear. No way. That story is not embedded in the algorithm.

The logic behind the curtain: female profile + “male items” = transgender man. No nuance. No context. Just classification.
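
To make the failure mode concrete, here is a minimal, hypothetical sketch in Python of how a rule that crude could end up as an email segment. None of this is Alibaba’s actual code; the Shopper class, the MALE_TAGGED set, and the email_segment function are invented purely for illustration.

```python
# Hypothetical sketch (not Alibaba's actual code) of a crude segmentation rule.
from dataclasses import dataclass, field


@dataclass
class Shopper:
    inferred_gender: str                          # guessed from profile signals, never asked
    viewed_categories: list[str] = field(default_factory=list)


# Categories the catalog happens to tag as "male".
MALE_TAGGED = {"basketball shorts", "basketball socks", "basketball backpack"}


def email_segment(shopper: Shopper) -> str:
    """Assign an email segment with no nuance and no context."""
    views_male_items = any(c in MALE_TAGGED for c in shopper.viewed_categories)
    if shopper.inferred_gender == "female" and views_male_items:
        return "Transgender Man"                  # a label standing in for a person's story
    return "General"


# A cisgender woman buying gear for a youth basketball camp
# gets the same label as anyone else who matches the rule.
me = Shopper("female", ["basketball socks", "basketball backpack"])
print(email_segment(me))                          # -> Transgender Man
```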

Race, Gender, Class, and Computing

Almost five years ago, I sat in a class at Duke taught by Dr. Nicki Washington called Race, Gender, Class, and Computing. I took it the first year it was introduced, and I remember a lot of discourse about whether it should even be categorized as a second-year course, since it was a humanities-style class with no coding. I’m glad I had the opportunity to take it, though. I’ve recommended some of the literature from that class to friends even all these years later.

Fast forward a few years, and I’d make that class a mandatory requirement for all computer science graduates; that’s how important I believe it is to the curriculum.

Dr. Washington pushed us to look at the real-world damage software was doing, not hypothetical risks but products that were already shipped, already in people’s hands, and already causing harm.

We studied Apple’s Memories feature on iPhone, how it would surface photos from traumatic moments of people’s lives, overlay happy music, and present them to users as a feel-good highlight reel. Nobody asked users whether they wanted to relive those moments, and the algorithm had no way of knowing what it was surfacing. It just decided what should make you smile. We also covered the misuse of computing systems as a source of truth by people who may not understand that technology is still just a set of tools created by people, people who have inherent biases and can embed them into these systems.

The Alibaba problem could have been easily identified if there were people in the room who understood the dangers that come with embedding biases into computing systems and the issues with building products irresponsibly.

That class sowed a seed.

The Repo I Deleted

When ChatGPT first broke out as a consumer-facing product, I was really excited about its potential. I had almost pursued a graduate degree in Artificial Intelligence, so when it became mainstream, I was thrilled. AI and machine learning had always existed behind the scenes, in recommendation algorithms, in search, in places most people never thought about. But this was the first time AI was being used at a direct consumer scale, where everyday users were interacting with it and starting to imagine what else was possible.

I started thinking about all the ways generative AI could be used for social good. One of the first projects I pursued was a hackathon project focused on using generative AI in policing: generating images of victims or suspects to support investigations.

After a few days of building, I started having concerns. The training data carried assumptions and the outputs carried biases, and the more I looked at what I was creating, the more I saw the same patterns Dr. Washington’s class had warned about. Decisions were being baked into the system that would impact real people, and there was no mechanism to catch them.

I deleted the entire repo.

Not because AI was bad, but because I’d been taught, in a class that people once called trivial, to stop and question before shipping.

The Problem Isn’t New, But It’s Scaling

This problem has always existed in computing. But in the age of consumer-facing AI, it has soared.

A few days ago, Facebook introduced a feature that animates your profile picture. Suddenly I was staring at a three-second video of myself, with what Meta thinks my teeth look like overlaid on my profile picture. A few years ago this would have been flagged as a deepfake. Now it’s a fun feature because we live in an AI-native world.

The line around data privacy keeps getting blurrier. Every day, applications release new terms of service and privacy policies that push the boundaries of what they can access. All of this happens with very little education for consumers on what it means for them or what using the tool actually does, while still expecting them to be responsible for their own protection.

Problem to Prototype in a Day

Don’t get me wrong. I am a huge proponent of computing and the boom of consumer-facing AI products. I have loved how empowering it is for the average person to build solutions, myself included. I’ve lost count of the ideas I’ve been able to spin up from a thought in mere hours.

Furthermore, I’ve spent the last three years at Microsoft building AI tools, championing secure AI practices, and even publishing an open-source security sweep tool for AI agents. The barrier for building has never been lower. Problem to prototype in hours.

But now more than ever, responsible computing and awareness of its impact are critical.

Believing in the tool doesn’t mean trusting it blindly. The AI that lets me build faster is the same kind of AI that tagged me incorrectly in Alibaba’s system, the same category of algorithm that made Apple think happy music belongs over your worst memory, the same technology that lets Meta animate your face without asking. The capabilities have scaled and the problems have scaled with them.

The Question That’s Gotten Louder

This past week, I identified a security vulnerability in an internal tool that exposed PII and allowed me to impersonate another user. It was met relatively casually by the team because everyone wants to build fast and innovate without answering the tough questions about what their product is doing and what its unintended impacts might be.

It’s a new era. New ways of building, new ways of using the internet, new security problems we’re still writing the standards for. And the question Dr. Washington’s class raised five years ago hasn’t gone away; it’s gotten louder.

What happens when we hand off decision-making to tools that have access to the facts but not reliable judgment?

The Alibaba email is still pinging my inbox every few days, tagged with a label I never chose, generated by an algorithm that had just enough data to be dangerous and not enough context to be right. It’s a reminder of how important it is for builders to be cognizant of what is going into these tools, and for users to be aware of what we are signing up for.

My challenge to you is this: in the next few weeks, before building something, ask yourself what unintended impact it may have and what responsible computing techniques you can put in place to build with protections in mind. If you’re a consumer, take a step back and ask how you’re using the tool, and whether the things it tells you can be questioned.