Anatomy of a Conversation

7 min read

(This post is a work in progress. Your feedback is welcome as I refine it further.)

"Restlessness inherent to screens, the inability to ever linger or pause or catch your breath... It’s a strangely disembodied experience, a sense of ceaseless, rustling motion when nothing is moving at all: electrical pulses flash and gasp beneath the oceans, your mind strains to catch up, your body remains still save for a few twitching digits, the shell that’s left behind when your spirit evacuates for the mirage of higher ground. We become as smooth and reflective as the screen itself, all glassy surfaces and metallic edges obscuring the hollowness within. No need to fantasize about what it might be like to upload your consciousness to the machine—most of us are already there." ⏤ Coming Home, Mandy Brown

We are experiencing a shift in how we perceive and engage with the world. In an era where our focus grows ever narrower and our memories are entrusted to machines, we work to maintain our place in this new reality.

The relationship we have with technology is becoming increasingly transparent, but this transparency is not like the mutual openness we share with a close friend. Technology offers a one-sided transparency; it answers every question we ask, guides us to places we’ve never been, and retains details about us that we may forget. In this sense, it works much like a black box, storing our data while keeping its inner workings completely hidden from us.

When you try to recount a vacation you took with your family 20 years ago, there may be gaps in the story; some details may have been forgotten or misremembered. A photograph from that day, however, remains unchanged. Technology, by preserving and freezing the past, challenges the nature of human memory. Yet navigating this fixed reality, whether to lose oneself in it or come to terms with it, is far from easy.

The word “wired” carries a sense of being tightly connected, bound, or intertwined. It evokes the image of something intricate, like the nervous system threading through the body, or a web of cables binding one thing to another. It is as if your mind is on a wild ride, racing with thoughts that won't stop: a squirrel caught in a whirlwind of excitement, except instead of nuts, it's all ideas and thoughts.

In an era where technology increasingly shapes human interaction, the conversation — our most fundamental way of perceiving the world and staying wired — is being redefined.

When we consider the anatomy of a conversation, we encounter a form of communication that embodies both symmetrical and asymmetrical characteristics. (This, in itself, carries an inherent asymmetry.)

A conversation typically begins with a greeting. One party speaks while the other listens and begins to formulate a response in their mind. When the response is given, the roles reverse, and this cycle continues for a while. If we were to illustrate this process with speech bubbles, we could describe it as having a certain symmetry. However, this symmetry often fluctuates depending on the roles of those who talk, the depth of the topic, and the flow of the conversation.

[Interactive demo: a chat bubble (“Yo”) with a “Press to Send” button]

Each bubble represents the process of exchanging information, ideas, or thoughts between participants. A conversation carries a rhythm until it concludes. Small anecdotes may be introduced, and the length of responses often converges over time. Of course, this can vary depending on the topic of the conversation; informal discussions tend to have a steadier rhythm, while formal ones often involve asymmetries and disruptions in rhythm.

Interaction with an LLM is inherently asymmetrical, unlike human-to-human communication. Long, detailed responses to short questions and the rapid generation of those responses disrupt the natural flow of a conversation. In human dialogue, anecdotes, pauses, and mutual reflection shape the rhythm of interaction, whereas LLMs often fail to capture this natural dynamic. Their responses are generated in isolation by algorithms, and contextual continuity can sometimes break down.

An LLM does not independently share anything unless prompted or asked a question. It lacks the human-like initiative to start or guide a conversation. Once engaged, it never feels emotions like “boredom”, removing the natural pauses and re-engagement moments found in human conversations. Moreover, the model can produce obvious misinformation (hallucinations)1 and fail to recognize or correct these errors. It lacks the self-reflection skills (reasoning)2 that are essential for evaluating and revising its outputs.

Timing is one of the key elements in the flow of a conversation. The model should sometimes be able to interrupt you during a conversation, or naturally weave its thinking process into a long response that requires thought. Instead of delivering responses instantaneously, pacing them at the speed of average human speech could lead to improvements. The conversation should be able to take on an asymmetric structure when necessary, with the model generating multiple alternative responses or even asking you questions to keep the dialogue going while you are thinking. These questions should be asked with the intent to learn something new, as acquiring new knowledge requires desire and intent.
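To make the pacing idea concrete, here is a minimal sketch in Python, assuming a hypothetical `response` string that stands in for whatever text a model API returns. It simply streams words at roughly 150 words per minute, the ballpark of average human speech:

```python
import sys
import time

WORDS_PER_MINUTE = 150           # rough average pace of human speech
DELAY = 60.0 / WORDS_PER_MINUTE  # seconds to wait between words

def speak(response: str) -> None:
    """Print a response word by word, at a conversational pace."""
    for word in response.split():
        sys.stdout.write(word + " ")
        sys.stdout.flush()       # reveal each word as it is "spoken"
        time.sleep(DELAY)
    sys.stdout.write("\n")

speak("Delivering a response at a human pace leaves room to think.")
```

Nothing here is specific to any model; the point is only that a deliberate delay changes the texture of the exchange.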

Francois Chollet, the creator of Keras, describes LLMs as interpolative databases.3 As Chollet points out, LLMs lack the ability to extrapolate; they memorize patterns to answer questions rather than genuinely understanding or reasoning.
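A toy sketch can make this distinction concrete. It is an analogy rather than an LLM: a small polynomial model fit on samples of a curve does well between its training points (interpolation) but degrades sharply beyond them (extrapolation):

```python
import numpy as np

# An analogy for interpolation vs. extrapolation (a curve fit, not an LLM):
# fit a polynomial on samples of sin(x) over [0, pi], then query it
# inside and outside that training range.
rng = np.random.default_rng(0)
x_train = np.sort(rng.uniform(0, np.pi, 20))
y_train = np.sin(x_train)

model = np.poly1d(np.polyfit(x_train, y_train, deg=5))

inside, outside = np.pi / 2, 2 * np.pi  # within vs. beyond the training range
print(f"interpolation error: {abs(model(inside) - np.sin(inside)):.4f}")
print(f"extrapolation error: {abs(model(outside) - np.sin(outside)):.4f}")
```

The first error is tiny; the second is much larger. The model has memorized the shape of the region it was shown, not the rule that generated it.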

At their core, LLMs function as external storage for us. These models, which have ingested a large portion of the internet, not only preserve information but also gather new sources of data through their interactions with users. But their training capacity differs fundamentally from the human trait of educability. Unlike humans, they do not become smarter or more adaptable by acquiring new information. Instead, their cognitive “growth” halts after the training phase; it's as though their minds freeze on graduation day.4

LLMs, despite their limitations, have the potential to hold a meaningful place in our lives. Instead of viewing them merely as tools for answering simple questions, generating interfaces with a few prompts, or producing text, we can approach them as mind-openers.

Expecting them to produce an undiscovered chemical formula or to reach conclusions without conducting any experiments would be an approach that disregards the importance of experience and the passage of time. Instead, we can use LLMs in a way similar to how Bob Ross, with every brushstroke, inspires thought clouds within us, an approach that enriches our ideas and nurtures our creative processes.

    "It is up to us to decide what human means, and exactly how it is different from machine, and what tasks ought and ought not to be trusted to either species of symbol-processing system. But some decisions must be made soon, while the technology is still young. And the deciding must be shared by as many citizens as possible, not just the experts. " ⏤ Tools for Thought, Howard Rheingold

    ...


1. Hallucinations are confident but incorrect or fabricated responses, arising from reliance on patterns in training data rather than factual understanding.

2. Reasoning, in this context, is the simulation of logical processes using patterns in training data. LLMs mimic deductive, inductive, and abductive reasoning by predicting text based on learned relationships, but lack true understanding.

3. Pattern Recognition vs True Intelligence

4. What Does It Really Mean to Learn? by Joshua Rothman