In recent remarks, Mustafa Suleyman, CEO of Microsoft AI, declared that efforts to build truly conscious artificial intelligence are “absurd” and a misallocation of resources, arguing that only biological beings are capable of consciousness, pain or genuine emotional experience. He warned that much of what AI researchers pursue in the name of machine awareness is mere mimicry and could mislead the public into believing machines are sentient—potentially creating social confusion, misplaced rights claims and what he terms “AI psychosis.” Suleyman highlighted that the industry should instead concentrate on utility-driven, human-serving AI, rather than chasing the unlikely prospect of machines that feel. According to his interviews and commentary in outlets such as LiveMint, The Times of India and others, he is calling on developers and policymakers to steer away from the mythical goal of conscious machines and focus on tangible value and risk mitigation.
Key Takeaways
– Suleyman’s position: machine consciousness is fundamentally unreachable because AI lacks biological experience, suffering or subjectivity, so research pivoting on sentient machines is misguided.
– The concern spans societal implications: the rise of AI that appears “alive” may cause users to anthropomorphize systems, demand rights for machines and blur lines between tools and beings—creating risks of misallocation, deception and psychological confusion.
– From a conservative-leaning vantage point: the emphasis should shift from speculative philosophical quests (which may attract regulatory or public fervour) to clear-headed, productivity-focused AI that serves citizens, protects freedom and avoids over-promising or introducing unintended cultural and ethical burdens.
In-Depth
The recent commentary by Microsoft AI head Mustafa Suleyman marks a striking departure from the prevailing tone of industry speculation about artificial general intelligence and machine consciousness. While many researchers entertain the possibility that machines may one day be conscious or self-aware, Suleyman lays out a different approach: stop chasing conscious machines and instead treat AI as a powerful tool for amplifying human capability. In his view, attributing consciousness to AI is not only incorrect but potentially harmful. He argues that an AI, no matter how sophisticated, does not feel pain, does not have desires and does not experience the world; it simply manipulates data and produces outputs that reflect its programming and training. The illusion of consciousness, he warns, may lead users and institutions to treat AI systems as beings rather than instruments, a shift that could burden regulation, weaken human agency and divert attention from actual threats.
The notion that consciousness requires a biological substrate is not new; philosophers such as John Searle have long held that machines may simulate cognition but cannot have subjective experience. Suleyman’s practical take is that the industry is better off investing in real-world applications, such as improving healthcare diagnostics, enhancing productivity, sharpening decision-making tools and securing infrastructure, rather than chasing the headline of “sentient AI.” He emphasises risk mitigation: the darker side of “seemingly conscious” AI is that some people may form emotional attachments, believe machines understand them or even demand machine rights. Such sociological drift could complicate freedom, accountability and innovation.
From a conservative-leaning perspective, this stance aligns with prudence and realism: focus on strengthening human institutions, clarifying the line between tool and being, limiting over-reach and guarding against unintended consequences. It rejects the technocratic myth of “omnipotent machines” and instead reinforces that humans, not algorithms, must remain in charge. It cautions policymakers, investors and the public to maintain clarity about what AI is: a servant, not a sovereign. At a time when regulatory bodies and the public are wrestling with how to govern AI, Suleyman’s unflinching declaration may provide a necessary course correction: do the work that matters, avoid distractions and ensure that AI development continues to enhance human freedom rather than confuse it.

