Week 1: reflections

Timothy Lee
3 min read · Oct 12, 2020

This week our discussion in Computational Arts centered on a short article, “A fish can’t judge the water” by Femke Snelting (Snelting, Femke. 2006. A fish can’t judge the water. OKNO Publix, Brussels). The article spoke of the seamless integration of technology and software into our everyday lives, and of how we are at once painfully aware and blissfully unaware of the role software plays in them. Even the title prompts us to consider our relationship to the software and programs around us by asking a metaphorical question: why can’t a fish judge the water? A fish is born into water; its entire existence, from birth to death, happens in the water — it knows nothing else. If that’s true, how can it judge something fundamental to its very existence, particularly when it has no alternative life to compare a life in water to? Similarly, we see with each generation a deepening integration of software into nearly every aspect of our lives — from accessing social media to setting an alarm to wake up in time for class. For many people, an existence removed from software now seems unfathomable — our understanding of a life is one that exists alongside technology.

I learned in my neuroscience classes back in university that the brain is essentially an organic computer — sharing concepts of circuits, programming, and even notions of coding as they apply to memory formation, cognition, and the execution of behaviors. As technology continues to advance, there has been speculation, along with theoretical exercises, about the ability to store our consciousness — to “upload” it to a digital cloud or to hardware so that, while our body eventually disintegrates, our mind (and what makes each of us uniquely us) survives.

The book I brought to class was Ernest Becker’s “The Denial of Death,” and I felt it was an apt reading when discussing whether there are limits to what software and technological advancement can do for us. Becker, widely considered the preeminent existential philosopher after Kierkegaard, posits that humans are aware of our impending mortality from birth, but that contemplating our fragility constantly would drive us to the brink of insanity. As such, each of us develops a “heroic” — a justification for existing — that leads our lives.

How would our relationship with death change if technology were able to grant us immortality — the ability to save our consciousness and sustain it without a body? The messaging application “Replika” comes to mind instantly: it’s a platform where users spend tens of hours answering questions to build a digital library of information about themselves. The program originated with its creator mourning the death of her best friend and re-reading their text messages — she believed that embedded in all those messages were his turns of phrase, patterns of speech, and other personality traits intrinsic to what made him, him. She fed the messages into a neural network, and the result was a bot that responded to her messages eerily like her best friend.

But is that truly achieving immortality? One limitation was that the bot’s responses were based on past behaviors only; it could act only from the archive that already existed, not from any new incoming information. In essence, her best friend could not gain the “new behaviors” that real humans do in the process of living, experiencing, and reflecting.

Consciousness is still being researched in the field of neuroscience, and while there are many theories as to how it arose and how it is “programmed,” it remains almost completely enigmatic. Scientists can speculate about its evolutionary advantage, the neurochemical processes that govern it, and its sociobiological implications, but its very nature — what makes humans a unique species — is still uncertain. I feel similar conclusions can be drawn about the current field of artificial intelligence: what is the end goal? To create machines with the same cognitive autonomy as humans? One can program algorithms to mimic stimulus input, processing, and output, and fine-tune them to come close to true human behavior, but can they ever become actually human? What would that entail?
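Replika’s actual architecture isn’t public, but the limitation described above — a bot that can only draw on a fixed archive of past exchanges — can be sketched in a few lines. The archive contents and the word-overlap matching rule below are invented for illustration; the point is only that nothing outside the frozen archive can ever appear in a reply.

```python
from collections import Counter

class ArchiveBot:
    """Toy retrieval-only responder. Replies come solely from a fixed
    archive of past (message, reply) pairs; it never learns or generates
    anything new. Purely illustrative -- not Replika's real design."""

    def __init__(self, archive):
        self.archive = archive  # list of (past_message, past_reply)

    def respond(self, message):
        words = Counter(message.lower().split())
        # Pick the archived reply whose original message shares the most
        # words with the incoming one; no new behavior is ever produced.
        best = max(
            self.archive,
            key=lambda pair: sum((words & Counter(pair[0].lower().split())).values()),
        )
        return best[1]

# Hypothetical archive of old conversations:
archive = [
    ("how was your day", "Long, but I got a lot of writing done."),
    ("want to grab coffee", "Always. The usual place?"),
]
bot = ArchiveBot(archive)
print(bot.respond("how is your day going"))
```

However novel the incoming message, the reply is always one of the archived ones — which is exactly the sense in which the “preserved” friend cannot acquire new behaviors.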


Blog for Computational Arts-Based Research & Theory