Opinion | We must decolonise AI to overcome cultural bias in the classroom
At Harvard University, for example, the introductory computer science course incorporates AI-based platforms to guide students in learning programming. A university pedagogy project guides educators to critically engage with AI in teaching, while workshops on harnessing the power of AI are offered to faculty and teaching fellows.
ChatGPT instantly gave us an outline divided by historical periods, running from prehistory and the ancient world through the medieval, early modern, modern and contemporary periods. It listed major ancient civilisations, including Mesopotamia, ancient China and the Indus Valley, under the ancient period module.
However, when ChatGPT moved to the classical and early modern periods, the coverage of non-European history gradually diminished. ChatGPT’s outline of world history in the early modern period, dating roughly from the 16th to 19th centuries, was almost exclusively European.
Other cultures began to re-emerge in the modern and contemporary periods, but the events listed primarily revolved around engagements with the West. Despite our efforts to ask ChatGPT to generate different versions of the outline, it consistently focused on European history.
Equally impactful historical events in China, Japan, India and other parts of the world were absent from ChatGPT’s response: China’s Song economic revolution, which preceded the European industrial revolution; the expansion of literacy during Japan’s Edo period; and the adoption of Persian language and heritage by the Delhi Sultanate and the Mughal Empire in India.
The omission of Asian and other non-Western history during the premodern period in ChatGPT’s outline is not coincidental. Eurocentrism emerged during the Renaissance and was further entrenched during European expansion and colonisation. This exclusively Eurocentric lens historically led Europeans to assess and judge other groups by their own world view.
Non-European histories, cultures and customs that were irrelevant to European affairs were often ignored, while those that did not align with European standards were deemed uncivilised and inferior without critical examination. Domination over other cultures was readily justified because other ways of life went unseen and unheard, and were knowingly or unknowingly labelled unimportant, irrelevant or wrong.
Such Western-centric educational technology deeply concerns us. The spirit of humanities study and critical thinking lies in diversity and contradiction. Yet ChatGPT defaults to a single Anglo-American value system, universalising Western knowledge that is, in fact, local.
If we used ChatGPT in the classroom, our students would naturally absorb whatever information it fed them and evaluate the world through a Western-centric lens. Since their peers would be educated by the same kind of chatbot, they might become less likely to challenge the status quo, given the prevalence of the same ideas and the weight of peer pressure.
One might say that ChatGPT does know East Asian and other non-Western history, and that all we need is “the right prompt”. But crafting the right prompt requires expertise. How can we expect students without sufficient knowledge of world history to write a prompt that reaches beyond what they already know?
We are able to identify that this outline is strongly skewed towards Western history because we are academically trained historians.
If such technology becomes the default mode of history education, we risk raising generations who are indifferent to Asian and other non-Western history. Efforts to promote diversity, equality and global history would become futile.
This would pave the way for moral indifference to global inequality and justify violence and suppression against people who have “no history”. Such a scenario mirrors the mode of thinking prevalent during colonialism.
AI has great potential to reduce inequality in education and provide better access to knowledge. But significant flaws and limitations in current AI models can lead to serious negative consequences. A collective effort between technologists and humanities scholars is essential to build responsible AI for education.
Queenie Luo is a PhD candidate at the Department of East Asian Languages and Civilizations at Harvard University and holds a master’s degree in data science from Harvard
Michael Puett is the Walter C. Klein Professor of Chinese History and Anthropology at Harvard