When people are inspired, or called to political action (including going to the polls), by an AI-generated speech by a beloved leader, is that a good thing? One can admire the technological prowess of a particular political group, especially under difficult circumstances, and recognise the speed of innovation, but just because something is possible through new technology does not automatically mean that it is a good thing. There are many issues to consider: for example, the ownership of the message and its legal ramifications. What if the speech contains something deeply inflammatory, prejudiced or defamatory? What if it provokes a group of people to act unlawfully? Can the AI speech be taken to court? Who is responsible? Can it be proven that the person to whom it was attributed actually composed it? What if that person is not even around (e.g. incarcerated) and has nothing to do with it? Of course, there are other issues at play here as well, including empathy and emotions.

Recently, there has been a debate in religious circles in the US and Europe about an AI-generated sermon. This has generated a bigger debate about religion, spirituality, a higher power and AI. Greg Epstein, a chaplain at Harvard who has a forthcoming book on the subject, is troubled. “There’s a danger in projecting divine goodness, or some transcendent intentions, onto what is ultimately an extraordinarily large economic force that wants to become ever larger and ever more influential,” he said in a recent interview with the Harvard Gazette. Further, noting that the creators of these products have little to do with spirituality or human emotions, he said, “It wants to sell more products; it wants to dominate more markets; and there aren’t necessarily benign intentions behind that.” What makes us think that this is only going to be an issue in some specific churches in the West? What if we were to face similar questions about religious or spiritual leadership and chatbots in our midst?

The question, of course, is not simply about politics and religion. It is fundamentally about what makes us human, including recognising and embracing all our imperfections. Undergraduate institutions across the US announced their admission decisions in the last few weeks. High school students interested in pursuing education in the US, not just in this country but from all over the world, waited eagerly as they saw admission rates plummet further. Many of them had worked hard for months on their applications, and had polished their personal essays (an integral part of the application) over and over again. But what if this entire effort was in vain? What if a small subset of students relied on AI-based tools, and not their own creativity, to write a perfect essay, thereby undermining all those who wanted to play the game by the rules? Recognising this, Duke University changed its policy and announced that it would no longer score the essays (as it had done for decades). The admissions office noted, “Essays are very much part of our understanding of the applicant, we’re just no longer assuming that the essay is an accurate reflection of the student’s actual writing ability.” But what about other universities? Can someone who relies on AI to present an inaccurate but compelling picture of themselves get ahead of someone whose essay is true and honest, but just not as polished as what a large language model would generate? How do we know that the AI-generated essay contains any truth? Or is it a made-up story, written by a bot, to convince the admissions officers? The AI-generated essay may indeed be better and faster to produce, but is the student who used it any more socially and ethically conscious than the one who wrote their own?

There is an assumption that technology will create perfection. Maybe it will in certain areas — but that perfection is unlikely to lead to justice, equity, kindness or satisfaction.

Published in The Express Tribune, April 2nd, 2024.
