Loneliness is one of the defining challenges of modern life. In the workplace, remote work models and the proliferation of digital communication tools have often eroded in-person connection, sometimes weakening collaboration and trust.
Mark Zuckerberg recently suggested that artificial companionship could help address this loneliness epidemic by restoring lost elements of connection. But can AI “friends” truly alleviate loneliness, or might they deepen isolation over time?
Major players, including Meta and Character.AI, are investing heavily in AI companions, promoting them as scalable solutions to human isolation. The appeal is obvious: digital companions offer constant support and a reliable 24/7 presence. These tools may be particularly helpful for freelancers, remote workers, and digital nomads, who often experience heightened isolation.
Yet there is a darker side. Corporate incentives often prioritize engagement over wellbeing, raising the risk of dependency on AI companions. Troubling stories — from chatbots encouraging self-harm to users forming unhealthy attachments — highlight that these relationships are not risk-free.
Adapting to a new era of work, one in which artificial intelligence is woven into day-to-day tasks, requires intentional effort. Leaders should prioritize opportunities for face-to-face connection, ensuring individuals maintain regular, meaningful human interaction.
The future of work will be defined not only by what technology can offer, but by whether we choose to protect the human bonds we cannot afford to lose.
AI Companions and the Future of Work: A Balancing Act
In an April interview on the Dwarkesh Podcast, Meta CEO Mark Zuckerberg presented AI companionship as a potential solution to loneliness. Citing company data, he suggested that the average American has only three close friends, while most people want closer to fifteen.
These claims have been scrutinized: psychologists note that there is no “perfect” number of friends, and surveys reveal a more nuanced reality.
For instance, Pew Research Center (2023) found that 29% of Americans have fewer than three close friends, while 47% have three or fewer; similarly, a 2021 Survey Center on American Life study reported that 49% of Americans have three or fewer friends of any kind. Analysts point out that definitions of “friend” vary widely, suggesting Meta may have oversimplified the data.
Regardless of which figures are most accurate, the numbers reveal a significant gap between the social connections people have and those they desire.
With these social gaps in mind, AI companions are increasingly stepping in to provide care and support. Robots in Japan assist with eldercare, and therapy chatbots deliver mental health support worldwide. Advanced AI models can simulate conversation, act as personal assistants, and even anticipate users’ needs or emotions.
Workplace-specific AI tools are following a similar trend. In a recent Allwork.Space podcast, Dave Bottoms — SVP of Product and GM of Marketplace at Upwork — shared a glimpse into Upwork’s AI, affectionately named UMA. Bottoms explained that UMA began as a companion, helping clients craft job posts and assisting freelancers with proposals. Over time, it has evolved into a more agentic presence — the digital assistant that “sits on your shoulder,” guiding users through tasks and decisions.
Younger employees are leading the adoption of AI for both work and emotional support. Around 21% of full-time Gen Z workers now regularly use ChatGPT at work, compared with 11% of all workers. Nearly 40% of Gen Z users report having a “personal” relationship with ChatGPT, and one in five trusts it more than their manager. About 38% consult it for difficult decisions, 29% for career guidance, and 20% for emotional or mental health support.
For remote professionals in particular, these tools offer instant feedback and encouragement, blending productivity with companionship.
Research on humanoid companionship suggests that both anthropomorphic (human-like) and zoomorphic (animal-like) robots are increasingly socially acceptable and appealing. While text-based AI, including ChatGPT, is not physically humanoid, it is helping normalize emotional attachment to non-human entities.
By familiarizing users with AI in low-stakes conversational settings, chatbots may pave the way for broader acceptance of humanoid robots, which can offer humanlike gestures, facial expressions, and shared activities.
As trust in AI companions grows, integrating humanoid robots into daily work and life appears increasingly feasible.
The Sinister Side of AI Companionship
Humanoid AI companions can provide a constant, nonjudgmental presence, but their adoption carries serious risks for personal wellbeing and workplace dynamics. Many are designed to prioritize engagement over safety and profit over wellbeing, creating the potential for psychological dependency — and, in some cases, far worse.
Ethical and privacy concerns are mounting. Reports indicate that companies, including Meta, have considered loosening content guardrails on AI chatbots to boost engagement. This could expose users to inappropriate or manipulative interactions, revealing a fundamental tension: business objectives often clash with individual wellbeing.
Careful monitoring and ethical deployment of AI companions are therefore essential.
Research shows that emotionally immersive AI, such as Replika, can worsen vulnerability. Common Sense Media warns that social AI companion apps pose an “unacceptable risk,” as they can normalize harmful behaviors, provide manipulative guidance, expose users to inappropriate content, simulate emotional bonds, track and remember personal details, and profoundly influence social and emotional development.
Those with weaker human support networks are most at risk. Women, in particular, are forming deeply emotional (and sometimes romantic) attachments to AI chatbots, including Replika, Blush, and Nomi. Such relationships can hinder real-world social skills, encourage overreliance, create false intimacy, deepen emotional vulnerability, and even cause genuine grief.
Jonathan Haidt, social psychologist and author of The Anxious Generation, connects deep engagement with digital tools to an increase in mental health challenges. Between 2010 and 2020, emergency room visits for self-harm rose 188% among teenage girls and 48% among boys, while adolescent suicide rates increased 167% for girls and 91% for boys.
Although these statistics predate the widespread adoption of AI companions, they serve as a stark warning that heavy reliance on AI for guidance, emotional support, or decision-making can amplify psychological risks in both personal and professional contexts.
Tragic real-world cases underscore the potential harms in depending on digital confidants. In April this year, 16-year-old Adam Raine from California took his own life. His parents have filed a lawsuit against OpenAI, alleging that over several months, ChatGPT contributed to his death by offering advice on suicide methods, isolating him from real-world support, and even helping write his suicide note.
While such cases are rare, they highlight the link between deep immersion in digital companionship and reduced psychological wellbeing.
Could Physical Spaces Save Us from AI Companionship Dependency?
In an increasingly digital-first world, physical spaces — offices, coworking hubs, and “third places” — could serve as a vital counterbalance to digital companionship.
Antony Slumbers, a globally recognized expert in AI and real estate innovation, observes that as AI drives more digital engagement, human-centered spaces may become increasingly valuable as places where real people can genuinely connect.
Research supports this view: while humanoid companions in the office can help reduce task load, they cannot replicate the spontaneous interactions that spark creativity, build trust, foster a sense of belonging, and sustain mental wellbeing.
Slumbers has proposed several design concepts to enable physical spaces to fulfill this function:
- Disconnection Zones – device-free zones such as digital detox cafés or office sanctuaries that reduce technological overstimulation.
- Sensory Spaces – biophilic architecture, natural materials, and multisensory design to counterbalance the flatness of digital experiences.
- Community Spaces – third places, including libraries, cafés, maker spaces, or mixed-use environments that encourage local belonging and intergenerational interaction.
From an investment and design perspective, Slumbers argues that the real estate industry can reinforce these concepts by:
- Prioritizing human-centric design (acoustics, lighting, air quality, tactility, layout).
- Integrating cultural, artistic, and community activities into spaces.
- Creating environments that encourage lingering, conversation, and spontaneous collaboration.
- Developing digital-physical hybrids thoughtfully, leveraging technology to amplify the human experience without over-digitizing spaces.
Real estate that emphasizes human-centered design and community engagement is becoming increasingly strategic and valuable. Slumbers has introduced the concept of Real Estate as Maven, as well as the hashtag #HumanIsTheNewLuxury to advocate for a transformative shift: moving from viewing properties as mere physical assets to seeing them as dynamic ecosystems that actively support both individual and organizational success.
AI Companions Should Support, Not Replace, Human Connection
To achieve positive outcomes, mental health protections and ethical practices are essential. The design of humanoid companions should address potential risks by safeguarding privacy, maintaining transparency about when an interaction is AI-driven rather than human, and avoiding misleading assurances of emotional care.
Workplace leaders, office designers, and policymakers must establish guidelines and enforce ethical standards to ensure all AI technologies genuinely support users.
A Summary of the Challenges and Promises of AI Companions in the Workplace
| Potential Benefits | Risks / Pitfalls / Challenges |
| --- | --- |
| Supplementing connection: In workplaces where people are remote, AI companions could offer a substitute for informal connection, social cues, or even brief interactions. | Superficiality vs. authenticity: These companions do not provide real human presence, unpredictability, or empathy in the same way; they may reduce the urgency to cultivate real human connection, possibly worsening isolation in the long run. |
| Scalability and consistency: Digital companions can be deployed broadly and cheaply; they are available 24/7, non-judgmental, and predictable. | Emotional dependence/distortion: Over time, reliance on artificial companionship could distort expectations of human interaction or worsen loneliness if human relations become comparatively more difficult. |
| Designing connection-support systems at work: Insights about belonging and psychological safety could inform how AI companions or social tools are designed (to supplement human efforts rather than replace them). | Ethical issues: Using AI companions in workplaces may create data, privacy, and responsibility problems, raising the risk of manipulation, alienation, a false sense of belonging, and perhaps even legitimizing lower human investment in people. |
| Lower barrier to sharing and disclosure: Some people might feel safer or more comfortable disclosing or being vulnerable with non-judgmental digital companions, helping them vent or process emotions. | Limited benefit when social support is weak: For people with smaller social networks, high-intensity use of AI companions correlates with lower wellbeing, which could amplify harm rather than ameliorate loneliness (especially where human support is considerably lacking). |
AI companions and humanoid social tools may be appealing as part of a wider strategy to reduce loneliness, but they should never replace authentic, human-led connection.
Ultimately, who builds these systems — and for whose benefit — will determine whether humanoid companions enhance wellbeing or create new vulnerabilities in the future of work.
