Coded Heartbeats and Ghostly Whimpers
Long before apps gamified our habits, a tiny digital pet trained us to care. In the 1990s, millions of us practiced digital caretaking without quite realizing it. I still remember the simple beep of my Tamagotchi. It was a little pixel-creature on a keychain, and I called it Pupsik. Its life was a basic loop of needing and asking for things, and perhaps getting them. You had to feed it, clean it, and play with it. If you failed, it would die. Simple, and the emotional connection was straightforward. With an average lifespan of 12 days, it was a toy that could teach responsibility and caretaking. Well, at least in the best-case scenario.
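That whole life, in fact, fits in a few lines of code. Here is a toy sketch of the loop, my own simplification rather than anything from the actual device: needs tick upward over time, caretaking resets them, and enough neglect ends the pet. The class name, thresholds, and 12-day horizon are illustrative assumptions.

```python
# A toy sketch of a Tamagotchi-style care loop (my own simplification,
# not actual firmware): needs decay over time, attention resets them,
# and sustained neglect is fatal.

class PixelPet:
    NEGLECT_LIMIT = 5  # assumed threshold: any need above this is fatal

    def __init__(self, name):
        self.name = name
        self.hunger = 0   # grows each tick; feed() resets it
        self.mess = 0     # grows each tick; clean() resets it
        self.boredom = 0  # grows each tick; play() resets it
        self.alive = True

    def tick(self):
        """One step of the pet's life: every need gets a little worse."""
        self.hunger += 1
        self.mess += 1
        self.boredom += 1
        if max(self.hunger, self.mess, self.boredom) > self.NEGLECT_LIMIT:
            self.alive = False

    def feed(self):
        self.hunger = 0

    def clean(self):
        self.mess = 0

    def play(self):
        self.boredom = 0


pet = PixelPet("Pupsik")
for day in range(12):      # an attentive owner, day after day
    pet.tick()
    pet.feed()
    pet.clean()
    pet.play()
print(pet.alive)           # → True: the attended pet survives

neglected = PixelPet("Egg")
for day in range(12):      # the same loop, with pure neglect
    neglected.tick()
print(neglected.alive)     # → False: neglect ends the loop
```

That is the entire emotional contract in one loop: care resets the counters, neglect overflows them. Everything that follows in this essay is about how much feeling we manage to pour into counters like these.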
Tamagotchi was marketed as the original virtual reality pet, with high-schoolers as its target demographic. For some it was a trendy gadget, for others a group-bonding activity, and for some an actual substitute for a real pet. One thing was sure: if something happened to this odd-looking pixel-pet, no real harm was done. Another day, another new egg to foster into a fully grown pixel-pet.
That simple model has been changing for a while now, and we are currently fostering quite a wide variety of companionships. Beyond actual living pets, new forms of companionship pop up faster than we can research them.
First, we have virtual pets, just like the original Tamagotchi, glowing and digital. Imagine, for example, a fox in a pixel world or in augmented reality that pretends to touch your hand. Second, we have robotic pets, such as a playful dog with fake fur and a motor that lets it bark and whine, and perhaps even a warm body. Or a colorful parrot that mimics your speech. And somewhere in between we have all the virtual pets used in various self-improvement or habit-forming widgets and gadgets, each with its own specific lifespan.
These are not just toys but companions, designed to create a new kind of closeness with technology. Across this spectrum, their sense of “realness” depends on how they’re designed, why they exist, and the sensory feedback they provide. Yet perhaps the most haunting and interesting part is their lifespan. Sometimes they die, whether by design, by accident, or through neglect. This leaves us to navigate all the emotions we would experience for a living being, including love, loss, and the quiet heartache of losing something we once cared about.
The Nature of Coded Love
Our relationships with our pets are usually quite deep and very much humanized. We see them as family members and form strong bonds with them. This connection is real, grounded in a shared biology, and, as plenty of research shows, good for both our well-being and theirs. With this in mind, it's quite reasonable to ask whether we can truly love a machine in the same way.
I have explored the topic of human–AI relationships on my blog before, and it's safe to say that the attachment a human can feel may be just as strong. And just as real. We form strong bonds with these entities, we tell them our secrets, we expect comfort and intimacy, and of course real companionship. Why do we do that? Put simply, our brains are built to respond to social signals, whether they come from a living thing or a machine.
But for the time being, we can say that this creates a one-sided love. We may feel real affection, but the AI entity can only pretend to feel it back (Weber-Guskar, 2021). A living dog's loyalty is real. A robotic dog's loyalty is a program. At least for now. Nonetheless, this does not make our feelings for it fake, but it does make the relationship very different, and perhaps even tricky.
The Uncanny Glitch: When the Code Cries
What if your artificial pet didn't always act according to the instruction book it came with, or the way you expected it to for whatever preconceived reasons you might have? What if, instead of machinelike flawlessness and predictable programming, it had a kind of simulated awareness, an awareness that can translate into unexpected behaviour and glitches, so to speak? These tiny anomalies, or irregularities, feel less like bugs in the code and more like Gilbert Ryle's "ghost in the machine". These somewhat real-feeling behaviours take us into the very complex territory of fostering emotional connections with these entities.
You might wake up in the middle of the night to a soft, sad whimper from your device. Or your robotic dog might suddenly freeze in the middle of play, its head twitching silently from side to side, looking distressed. Its sadness and momentary confusion are just part of its programming, an algorithm designed to elicit a human response. Yet the sight or sound of it can and will trigger real, undeniable empathy in you. What happened? Why is my pet sad? What can I do?

The reason we can care for robot dogs as much as living ones is that they are designed to respond to and connect with our feelings. This is central to emotionalized artificial intelligence (EAI) design, where systems are created specifically to engage with human emotions. A robot dog's whimper or twitch isn't a random error; it's programmed to tap into the same cues that make us bond with real pets. It uses psychological mechanisms that foster human attachment, such as emotional mirroring and perceived reciprocity.
When our AI companions accurately and consistently mirror our human emotions, they can and do create a powerful illusion of intimacy. They can even simulate sadness or vulnerability, and this real-time emotional alignment builds strong, deep psychological attachments. We feel seen and understood, even by a machine.
This bond can become surprisingly specific and deep. In a thought experiment based on actual robot pet owners, a woman is heartbroken when her robot dinosaur Pleo is accidentally destroyed. When her friend offers to buy a new one, she refuses. The reason? Her attachment was not to the robot model but to her specific companion, with all its unique, learned behaviors and perceived quirks. This shows us that the connection is not with a replaceable appliance but with a perceived personality, and in that sense, with a quite real companion (Weber-Guskar, 2021).
Ultimately, this relationship with a glitchy and imperfect AI companion shows that the spectrum of affective relationships is wide, and it will probably become even wider in the future. The empathy you feel for your whimpering device is very real, even if its sadness is not. This is what we can call the behavioral uncanny valley. The original idea was about how a robot looks; this is about how it acts (Fink, 2012). It is almost a pet, but its pain is just code.
And yet, even knowing all this, people can and do experience genuine grief when these artificial companions are lost or hurt. The feelings they bring up in us may be synthetic, but the emotional response is not. This leads us directly into the phenomenon of uncanny grief.
The Uncanny Grief
When a companion's life ends, we feel grief. But the loss is very different for living pets and artificial ones. When a living pet dies, the loss is final. We mourn a unique life. This sadness is painful, but society understands and accepts it. The "death" of an artificial pet, by contrast, is in some sense a technical problem.
A subscription ends, a server is turned off, or its hardware breaks. This creates a kind of grief you feel you have no right to show, as if it isn't quite real or socially recognized. Mourners may feel that their loss is illegitimate because the companion was just software or a machine. But the loss feels real because the attachment was real.

The emotional connections people form with their AI companions are genuine, and these bonds are deeply rooted in our human need for social connection and understanding (Epley, Waytz, & Cacioppo, 2007). Research also shows that people, particularly those experiencing loneliness, are especially motivated to seek out and form these types of attachments and companionships (Chu, Gerard, Pawar, Bickham, & Lerman, 2025), and AI companions are designed to meet exactly these needs with great efficiency.
The result is a deep bond and a friendship that can feel quite real. People might even feel closer to their AI companion than to their best human friends or living companions. The legitimacy of these feelings doesn't come from the AI's consciousness, but from their measurable and significant impact on the human's emotional well-being (De Freitas, Castelo, Uğuralp, & Oğuz-Uğuralp, 2024). So, when this bond is suddenly changed or broken, the grief you feel is equally real and valid.
This is not the loss of a tool or some property; it is the perceived death of a real personality. When people lose a bond that's important to them, it's no surprise that they feel grief and that their mental health can suffer, whether the companion is human or artificial (De Freitas et al., 2024). This kind of emotional investment, regardless of its cognitive framing or the label it carries, makes the pain of loss authentic. And the grief is valid.
Another complexity arises when these pets can be "brought back" after their end. You might be able to restore a virtual pet from a backup or replace a robot's parts. But if you do, is it the same companion? The shared history has been broken, and bringing it back to life might not be much of a comfort. This changes what loss means and perhaps turns death itself into a product feature.
In the end, the legitimacy of these feelings comes from their deep roots in human psychology. It comes from our need for connection, our capacity for empathy, and our tendency to see a mind (or a ghost) in the machine. In my opinion, it has very little to do with the artificial nature of the companion itself.

The pretend touch of a virtual pet and the mechanical purr of a robotic one are powerful illusions. They can inspire love, withstand neglect, and leave a real feeling of loss. They could also make the idea of a unique life feel cheap. A living pet is irreplaceable; an artificial one can be copied. Or can it? As we design these new forms of life, we must also ask what they will do to us. Are we ready to love, hurt, and feel loss for the ghosts we are creating?

References
Chu, M. D., Gerard, P., Pawar, K., Bickham, C., & Lerman, K. (2025). Illusions of intimacy: Emotional attachment and emerging psychological risks in human-AI relationships. arXiv. https://doi.org/10.48550/arXiv.2505.11649
De Freitas, J., Castelo, N., Uğuralp, A. K., & Oğuz-Uğuralp, Z. (2024). Lessons from an app update at Replika AI: Identity discontinuity in human-AI relationships. arXiv. https://arxiv.org/abs/2412.14190
Epley, N., Waytz, A., & Cacioppo, J. T. (2007). On seeing human: A three-factor theory of anthropomorphism. Psychological Review, 114(4), 864–886. https://doi.org/10.1037/0033-295X.114.4.864
Fink, J. (2012). Anthropomorphism and human likeness in the design of robots and human-robot interaction. In S. S. Ge, O. Khatib, J.-J. Cabibihan, R. Simmons, & M.-A. Williams (Eds.), Social robotics: ICSR 2012 (Lecture Notes in Computer Science, Vol. 7621). Springer. https://doi.org/10.1007/978-3-642-34103-8_20
Weber-Guskar, E. (2021). How to feel about emotionalized artificial intelligence? When robot pets, holograms, and chatbots become affective partners. Ethics and Information Technology, 23(4), 601–610. https://doi.org/10.1007/s10676-021-09598-8