AI’s first contact with humanity, through social media, brought people and organizations together. The flipside, however, was that our social identity was hijacked. Today, teens without Snapchat or Instagram accounts barely exist in their circles of friends, and media houses without a social media presence are no longer relevant.
AI’s contact with humanity now is about creating applications that help us eliminate tasks, speed up our work, write faster, solve problems faster and make more money. In this second contact, however, AI comes with a warning label that was missing the first time around: transformers are pretrained.
OpenAI, a key player in the field and the inventor of ChatGPT, states the following about its own flagship product:
- ChatGPT is not free from biases and stereotypes, so users should carefully review its content. It is important to critically assess any content that could teach or reinforce biases or stereotypes.
- The model's conversational nature can reinforce a user's biases over the course of an interaction.
And this is the good news: AI, handled with care and good sense, can paradoxically enough help us become more human than we ever were.
The warning that AI is pretrained shines a light on the fact that we all are.
“It is important to critically assess any content that could teach or reinforce biases or stereotypes.”
When did we ever do that? As a routine? Very few of us look at texts or people with a healthy balance of distrust and compassion. On the contrary, to save time, and perhaps to save ourselves from social embarrassment, we usually align our behavior with ‘what everybody else is doing’ and seldom object when individuals are systematically marginalized, interrupted or interpreted by others.
The inventor of ChatGPT reminded us that AI is neither transparent nor unbiased in its judgements. Like most of us. As adults in the workplace, we are all ‘pretrained’, yet we act as if we have all been through the same training, i.e. the same socialization process.
The socialization process teaches us from day one to fit in, to be appropriate. Social norms hand individuals different scripts to go by, which reproduces the social power structure. As we walk into a room, a meeting or a discussion, we enter that space with an (often unconscious) preunderstanding of what the deal is and what our role and position are in the system. We act according to the expectations of the current social structure rather than true to our talents, our ambitions and our care for others. Our default approach is rarely to reflect critically on our daily doings and sayings; we are often so stress-ridden and focused on just getting the job done that we do not allow ourselves to question the social assumptions around which our workplace is organized and constructed.
Workplaces are fundamentally biased because we organize work according to socially acknowledged, stereotyped schemes that are so widespread that we no longer even notice them. This is why people with certain identity features (e.g. sex, age, race) end up in certain roles relatively more often than others.
AI researchers now prompt us to coordinate our behaviors so that we can avoid a human tragedy. While we wait for lawmakers to write new codes of conduct into law, we can all self-regulate and care for each other. Let’s bank on humanity and take OpenAI’s advice at face value: reflect critically on any content that could teach or reinforce biases or stereotypes.
To deepen that ability to reflect and act inclusively, we have designed a five-session interactive training with AI and D&I experts.
Camilla Rundberg, PhD in Organization, Leadership and Gender, Senior Consultant, Influence People