This op-ed is written by Caspar Bos, a third-year Spatial Planning and Design student.
The rapid adoption of AI within the Faculty of Spatial Sciences and in society as a whole makes me question whether we have properly considered the implications of this rapid “progression” of humanity. Do not get me wrong; AI, LLMs (Large Language Models, such as ChatGPT) and their capabilities are nothing short of extraordinarily impressive. If you had told me eight years ago that robots would be able to do my homework, I would have spent the rest of my time in high school frolicking in fields. I would not have been able to resist the urge to try it out and see what happened, let alone have had the discipline to keep doing my own homework. This fear of losing my work discipline is something I think about often, and it makes me fundamentally opposed to the “just ask ChatGPT” reflex so many people have nowadays. Something about that phrase irks me. Not the words themselves, but the underlying thoughts and implications are something I believe we as a society should be concerned about. I want to elaborate on two significant concerns raised by the use of ChatGPT to streamline one’s work or socializing, and to critically evaluate the place of ChatGPT within the ecosystem of our faculty and the field of social geography and spatial planning.
AI in the educational context
The idea of “just ask ChatGPT” can stem from a variety of motivations; perhaps you hate writing boring emails, summarizing texts or coding in RStudio. Maybe you find it novel to “converse” with a “robot” about all kinds of topics. LLMs can be a quick solution for any of these activities. I have personally seen the use of LLMs skyrocket over the past two years, from the perspective of both student and teacher. One particularly nasty instance was during a group project, where a team member’s irresponsible use of AI almost landed me in serious trouble.
Failure to disclose the use of AI has become a noticeable pattern, particularly in my time grading submissions for the web classes of Spatial Planning and Design. A multitude of students submitted work written by an LLM and elected not to disclose this, as the FSS guidelines require. When confronted, most of them sheepishly admitted to using LLMs, reportedly to “check grammar”. I do not hold it against the high schoolers personally that they used AI; I am sure they had their reasons. However, the lack of communication about needing more time or assistance is symptomatic of larger problems in the education system, and LLMs help sweep these problems under the rug by letting students surrender both their agency and their understanding of the topics at hand. Another problem that is often swept under the rug is the dire state of young people’s mental health. More and more young adults and teenagers have started to use ChatGPT as a therapist, a choice strongly discouraged by professionals.
You cannot reasonably expect teenagers to be disciplined enough to resist the urge to have their work done in a matter of minutes instead of hours. Rather than taking a far more conservative approach while awaiting research on the impact of AI on young people, the education system has been eager to let students use it for “brainstorming” and “grammar checks”. The belief that usage stays limited to these purposes is naïve, and fails to account for the fact that most teenagers seek to spend as little time on schoolwork as possible. More importantly, preliminary investigations into sustained LLM use show underperformance at the neural, linguistic and behavioural levels. The paper in question suggests that the use of LLMs could harm learning.
When students stop completing assignments as originally intended, a gap in knowledge and skills will grow between the younger, future generations of academia and the workforce on the one hand and the older, more experienced generations on the other. The high schoolers and students of today will be your scientists, lawyers and doctors ten or twenty years from now, using AI in contexts they themselves are not experienced in. Despite AI’s incredible leaps in the field of medicine, I would not want to grow dependent on a piece of software that can fail us at any time, for any reason.
Vulnerability of AI
By maintaining the widespread adoption and normalization of AI, we become incredibly vulnerable to the whims and wants of the Goliathan corporations that control these LLMs and their servers. Having a faculty, a university, or even a society functionally operate on AI seems unwise in times of rising political tension, when dependence becomes a weakness. Nor are AI, and LLMs by extension, infallible and objectively correct. Quite the opposite, in fact, as shown by the phenomenon of “AI hallucinations” and by censorship of the AI by its controlling entity. Examples of the latter include ChatGLM and DeepSeek, both of which are subject to censorship by the authoritarian Chinese government.
By normalizing the idea of “asking ChatGPT” for advice or factual information, we slowly let the idea of a societal truth slip through our hands, to be caught in the bucket of the next megalomaniac tech mogul and manipulated in their favour, whether economic or political in nature. Rampant misinformation campaigns are already a widespread problem on the internet, an issue only exacerbated by the rapid “humanization” of LLMs. Venezuelan state media ran an extensive AI-based propaganda campaign on national television, presenting it as independent, Western-based journalism. Nor is this the only example: similar AI-based propaganda has been used in Burkina Faso in support of a coup. Those who are not well-versed in the digital realm will fall, and already have fallen, for misinformation, scams and lies. So, too, have those who are well-versed.
AI as a tool in social geography and spatial planning
The Faculty of Spatial Sciences holds essentially the same stance as the RUG: disclose the use of AI, and use it only for select purposes. Plagiarism remains strictly forbidden. I call for a re-evaluation of this stance, however. Artificial intelligence is, by its very nature, non-human. It cannot listen to stories. It cannot make a place better, for it is in no place and does not have the human experience of a place. It does not share in our goals, serving merely the profitability of its makers’ shareholders. It fundamentally cannot understand human culture, only mimic its training data. If it could do any of these things, the entire point of social geography would be lost.
Social geography (and spatial planning, by extension) has made significant efforts over the past decades to be viewed as a less distant, abstract science and a far more interpretative one. We take pride in accounting for “the human factor” and in listening to those overlooked by society. Simply put, we do our job to give a voice to those who cannot be heard, and we prioritize accessibility and inclusivity as cornerstones of society. I find it impossible to take this sentiment seriously when considering our stance on AI. This new tool’s perspective on what is important is completely out of our hands and cannot meaningfully carry the field forward.
Shortcomings in the education system and in society as a whole drive people to LLMs, which in turn makes those shortcomings easier to ignore. Uncritically adopting AI at the current rate seems to be slowly eroding the interpersonal connections and individual ideas we should foster at the FSS.
The application of LLMs in social geography is morally questionable at best and, at worst, undermines everything we stand for. Terms like “asking”, “conversing with” or “understanding” AI are misnomers that do not reflect the inner workings of these highly complex, often misunderstood tools. The immense capabilities of LLMs are precisely why a far more cautious, careful evaluation of this toolkit is necessary. We as a faculty should ask ourselves whether we really want to grow dependent on the capricious nature of corporations like OpenAI, or whether we should change course and find alternatives. I believe we should not wait too long to make that decision.