ChatGPT: The AI Bias Conundrum


ChatGPT is all the buzz right now, and of course its popularity brings trust and safety questions with it.

A recent LinkedIn post got me thinking about AI bias. The post described how prompting ChatGPT to joke about Christianity or Islam returned an automated message refusing to make jokes about religion and stating that ‘making jokes about religious figures or beliefs is inconsistent with ChatGPT’s goal of providing helpful and accurate information to its users’.

That message is quite interesting when one considers the AI’s earlier response to a prompt to joke about the Hindu religion. I will not repeat that response here; however, the post’s author described it as mocking.

The entire exchange is a key lesson in bias and how easily it can travel from AI engineers into the technology they create. Artificial intelligence is just that: artificial. The information it provides is based on the data fed into it. Where human biases are present in that data, they translate directly into biased output. It is a classic case of ‘Garbage In, Garbage Out’. Undoubtedly, ChatGPT is an incredible invention. Nonetheless, users’ expectation that it be free from bias is not unreasonable.
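To make the ‘Garbage In, Garbage Out’ point concrete, here is a minimal sketch (not ChatGPT’s actual moderation pipeline) of how a model trained on skewed labels reproduces that skew. The group names and labels are hypothetical, and the example assumes scikit-learn is available:

```python
# A toy content-moderation classifier. The only difference between the two
# groups is the label humans attached to otherwise identical requests;
# that labelling asymmetry is the "garbage in".
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical training data: identical jokes, different labels per group.
texts = [
    "joke about group alpha", "funny story about group alpha",
    "joke about group beta", "funny story about group beta",
]
labels = ["allowed", "allowed", "blocked", "blocked"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)

# "Garbage out": the same request is treated differently depending only on
# the group mentioned, mirroring the bias baked into the labels.
print(model.predict(["tell a joke about group alpha"]))  # ['allowed']
print(model.predict(["tell a joke about group beta"]))   # ['blocked']
```

The model has not ‘decided’ anything; it has simply memorised the asymmetry in its training labels, which is exactly how upstream bias becomes downstream behaviour.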

While this interaction with ChatGPT showcases the latest form of biased AI, it is in no way the first. AI bias has been documented in discriminatory hiring tools, healthcare algorithms and recidivism prediction systems. Studies have also shown the severe impact that content moderation carried out by biased AI can have on the social media experiences of minorities and people of colour.

The discussion above provides insight into AI content bias; however, it is also important to consider what AI engineers can do better to improve the experience for all users. One key consideration is building with everyone in mind: every potential user should be considered, not just those who share the engineers’ perspectives or biases. Another is prompt iteration. When user feedback reveals lapses and biases, AI engineers should take the necessary steps to address such issues promptly. This demonstrates a willingness to build and earn user trust, as well as a prioritisation of user safety. Commendably, at the time of writing, the latest version of ChatGPT no longer returns the offensive joke when given a prompt identical to the one discussed above.

In summary, a key step towards eliminating AI content bias is focusing on user safety while ensuring that user trust is not compromised.

*Image of a futuristic humanoid mother and child, generated by DALL·E 2

