Evolution of My Thoughts on AI
February 27, 2023
Description: My ethics course assignment was to critique the ChatGPT response to the following question: “Explain in 300 words Williams’s integrity objection to utilitarianism.” This was my first introduction to AI chatbots. And although I found it interesting, I had many reservations about the power of this tool. I maintained, without giving it much conscious thought, that AI could never achieve the level of brainpower that human beings currently possess.
The AI-generated text on Williams’s integrity objection to utilitarianism is misleading due to its vagueness and generic word choices, ultimately failing to capture the specific nuances in Williams’s writing. One instance is the AI’s assertion, “Bernard Williams's integrity objection to utilitarianism is based on the belief that personal beliefs, values, and commitments are important aspects of a person's sense of self, and that these elements cannot be disregarded in ethical decision making.”
Although this statement is not incorrect, the phrase “important aspects” can be interpreted in different ways. In one interpretation, I take the phrase to mean the integral components and core tenets of an agent that are simply part of being human. Disregarding these aspects therefore alienates the agent from their own actions and the source of those actions, forcing the agent to desert their own point of view for the point of view of the universe. From Williams’s perspective, being stripped of moral agency in this way simply does not make sense. Under this reading, the AI’s statement is correct: personal beliefs, values, and commitments are inseparable from a person’s sense of self and are integral to ethical decision making.
However, in another interpretation, I take “important aspects” to mean something merely valued by the agent, something that, if deserted, would result in negative feelings. This interpretation is inaccurate, as Williams’s integrity objection is not that consequentialism fails to account for people’s upset feelings; consequentialism already factors in all the positive and negative outcomes, including the agent’s own outcome of “bad feelings.” This concept is loosely referred to as “self-indulgent squeamishness”: irrational feelings that arise from the agent’s own frivolous moral values. Under this reading, the AI’s statement is incorrect, as it misrepresents the integrity objection as an objection to self-indulgent squeamishness.
The phrasal discrepancies and multiple interpretations that arise throughout the AI’s writing show that although the AI defines the integrity objection correctly on a surface level, it cannot fully grasp the specific nuances within Williams’s writing. This highlights a major issue in AI writing itself: it fails to convey deep understanding of the subject matter, instead resorting to generic, regurgitated phrases.
October 11, 2023
Description: For my computer science course, we were asked to read this futuristic article about AI and write a brief response sharing our general thoughts on the matter. At the time, I scoffed at the notion of technology ever becoming advanced enough to emulate the human brain. This sentiment is reflected in my pessimistic response to the article.
Kurzweil’s views of the future, including ideas about reverse-engineering deceased relatives and living forever using AI, are almost unfathomable and seem like topics only broached through a movie screen. But I believe his vision of the future crosses a line. Reaching a point where a computer’s intelligence is indistinguishable from a human’s, and where computers hold a sense of consciousness, makes me question what it even means to be human. Is not a large part of human life about dealing with its finitude? In addition, how will human morality, and the biases deeply intertwined in data, play a role in computer intelligence? I do not believe that the predictions in this article are attainable. Moreover, I do not believe we can digitize versions of people and capture the complexities of their conscious and subconscious thoughts, feelings, and desires. And even if this level of technological advancement is possible, there will be heavy pushback from people who see this kind of technology as something to be feared, or as something that goes against the natural laws of the world.
August 5, 2025
Description: My philosophical wonderings about the bounds that AI could soon reach and surpass.
I have recently been contemplating the parallels between Mary Shelley’s Frankenstein and the technological arms race to achieve AGI. AGI refers to a level of artificial intelligence that would reach or surpass the cognitive ability of human beings. The timeline for AI models reaching this threshold is deeply contested: some researchers contend that modern LLMs have already exhibited signs of human cognitive intelligence, while others deny the feasibility of it ever occurring. But it ultimately remains to be seen where this threshold even lies. Is a mere imitation of human language and behavioral patterns, as suggested by the Turing Test and analogous intelligence thresholds, synonymous with cognitive intelligence? Does this intelligence require consciousness?
Presently, the hallucinations and sycophantic behavior displayed by AI models are interesting topics to delve into. More interesting still, I believe, are the philosophical implications of AGI as an autonomous agent and sentient being, and, comparatively, of an AGI behaving in a manner so indistinguishable from our own that we truly cannot determine whether or not it has actually reached that capacity. Considering AGI in such a hypothetical fashion is almost like shoving it into a black box filled with our wildest hypotheticals. Still, it is interesting to consider the nuanced characteristics and unfathomable perceptions of a sentient AI being. Just as we appreciate light because we have been plunged into darkness, we understand life through grappling with death. It is the stark comparison between being and not being, the silent death march that thrums softly in our hearts, that haunts our existence. An autonomous agent that does not understand life through contending with its finitude is unknowable to us. I am almost as fascinated as I am terrified by the implications.
Frankenstein’s monster is a reflection of the unbelievable bounds that humanity can cross, as well as the unknowable terrors that crossing them will unleash. Brilliant yet dangerous, born of unchecked innovation fueled by a delirious academic fervor, the monster is a true exemplar of the human drive to circumvent the laws of nature, create something greater than ourselves, and play God. Perhaps we are like Frankenstein: we work in a fit of enthusiastic madness, demanding more than what our existence can ever offer us. We work in an intense frenzy, searching for purpose and crafting legacies that live on past our bodies, because every second we march closer to death. It is so appealing to create something bigger than ourselves; bigger than the aching joints and muscles, the strained eyes and fatigued brain, the inner clock ticking slowly in our heads and looming heavy in our hearts. What if we could stick out our tongues at the finite nature of our existence? Dismiss the unfair emotions that bottle up and torment our systems as measly nuisances to be artificially satiated? Achieve idealized versions of ourselves in our redefinitions of loneliness, love, and fear?
In Dostoevsky’s Notes from Underground, the Underground Man speaks of a Crystal Palace, a shiny, ideal, and perfect living space for humans, as something confining that will eventually be abhorred. No matter its perfection, the human desire for free will and the ability to choose, even to masochistically choose pain, will always be most valued. He insists that humans, against all rationality and a supposed contentment with their circumstances, will always yearn to exert their own will.
And although I find my mind circling, compounding implication upon implication, I most often find myself wondering what it means to be. I envision some strange dystopia where AGI completes all the grueling labor that once belonged to humans, freeing us for more creative and fulfilling tasks, and I almost scoff at the notion. In many respects, the optimists describe a crystal palace and act as if humans will be satisfied sitting idly in it. What tasks will be left once we are “freed up” to do them? Will they bear the same significance when AGI will always do them better? Will these tasks even be enjoyable once we have the time for them? What is light without darkness? What is time when it is no longer ticking? What is freedom to act when there is nothing to act upon? What is purpose, and legacy, and autonomy over creation, when everything is overshadowed by something so much greater than we can ever become?
Still, I must acknowledge my limitations in this subject matter. And while I ponder the philosophical questions attached to AGI, I realize that thinking is not acting, and that the innovators scrambling to achieve this momentous task may grasp something that I simply cannot. I feel like a crabby dissenter, rambling nonsense and grumbling about change, refusing to be malleable and to see a future that may be bigger than us all. Perhaps all my wondering will one day be dismissed as overdramatized pessimism, viewed through the lens of my refusal to change. And one day, I will reread this long tangent, surrounded by my AI friends, and we will all laugh at the absurdity of this writing, or perhaps I will laugh and they will sycophantically agree. Maybe, even, the AI bubble will pop. No matter what, I believe that while we should never stop innovating, we should also confront the uncertainty that innovation will bring.