Tech News

Stay updated with the latest tech news that shapes our world. Discover trends, innovations, and insights in the tech industry.

Columbia University has begun testing Sway, an AI-powered debate facilitator developed by philosophy and psychology researchers, which matches students with opposing viewpoints—on topics like Israel-Palestine, abortion, and racism—for respectful one-on-one conversations. The tool guides discussions by suggesting rephrasings and probing questions, and even gauges shifts in participants’ openness, with nearly half of users reportedly changing their views. This move follows years of escalating campus tensions, administrative crackdowns on protests, federal scrutiny, and a $200 million settlement mandating structured dialogue. While Sway’s pilot at Teachers College is generating interest across the university for expansion by fall 2026, critics argue it risks flattening nuanced, politically charged debates into overly clinical exchanges. Concerns also linger over ties to the intelligence community via postdoctoral funding.
Sources: The Verge, The Guardian
Key Takeaways
– New AI for Dialogue—but Does It Sustain Depth?: Sway’s structured pairing and guided phrasing may reduce tensions—but risk stripping context from complex issues.
– Campus Climate Remains Fractured: The AI initiative emerges against a backdrop of harsh disciplinary action, deportation threats, and student activism over Palestine.
– Debate Over Technology and Free Expression: While AI may offer a novel conflict-mitigation tool, concerns persist over academic integrity, political oversight, and who controls the narrative.
In-Depth
Columbia University’s decision to pilot Sway, an AI-powered debate mediator, underscores both innovation and controversy in academic conflict resolution. At its core, Sway connects students holding opposing viewpoints—whether on Israel-Palestine, abortion, or racial justice—and guides them through respectful discourse by offering rephrasing suggestions and probing questions. Encouragingly, nearly half of those participating reported adjusted views—though the measure of success is debated, especially when ideological shifts may drift toward inaccuracy rather than understanding.
This initiative doesn’t exist in isolation. Columbia is still reeling from a fraught period marked by pro-Palestinian protests, disciplinary purges, and high-profile deportation cases. For instance, student activist Mohsen Mahdawi—detained and nearly deported over his advocacy—now has returned, asserting: “They have failed to silence me.”
At the same time, a public letter lambasting the university’s punitive response to peaceful Gaza solidarity protests paints a broader picture of distrust and disillusionment with administrative tactics.
Academics and critics voice concern that Sway commodifies dialogue, reducing historical and political nuance into sanitized interaction—and could be used to deflect deeper systemic critiques. Some argue this mirrors a “crisis-response” management style rather than a genuine recommitment to critical scholarship. With Columbia eyeing wider rollout by fall 2026, the broader question arises: Can AI mediate real understanding in spaces fraught with power dynamics, policy pressures, and lived trauma? The answer may hinge on whether the university remains committed to cultivating discourse—or merely quelling dissent.

The rapid adoption of generative AI is raising concerns that workers are becoming overly dependent on tools like ChatGPT, at the cost of eroding their own expertise and cognitive skills. Experts warn that this mirrors worries in education, where students who rely heavily on AI may lose critical thinking and problem-solving ability. A 2024 paper argues that AI assistants might accelerate skill decay by reducing opportunities for practice and growth. Another study found that increased AI use is tied to measurable declines in critical thinking performance among young users. Meanwhile, business and economics research highlights that AI’s effects on skill demand will favor human–AI complements (e.g. digital literacy, ethics) even as more routine, substitution-prone skills shrink in value.
Sources: Live Science, NIH.gov
Key Takeaways
– Widespread use of AI tools can lead to skill atrophy, as humans delegate tasks and reduce hands-on practice.
– Overreliance on AI is linked to declines in critical thinking, decision making, and independent reasoning, especially among younger or less experienced users.
– The future of work demands complementary skills (e.g. digital literacy, oversight, ethics) rather than skills for tasks AI can easily substitute.
In-Depth
In the evolving landscape where AI is no longer a novelty but a daily companion in many workplaces, the danger of overreliance is becoming clearer. The Epoch Times reports that generative AI is causing “skills decay” in workers who lean too heavily on tools like ChatGPT in service of efficiency. As routine tasks and basic composition are offloaded to AI, humans may stop exercising those muscles of reasoning, analysis, or domain-specific practice.
A theoretical paper from Macnamara et al. argues that when AI assistants take over functions humans once performed, people have fewer opportunities to practice and refine their skills, and over time that leads to erosion; the authors warn that dependency can accelerate this decline. Empirical work reinforces the concern: a Microsoft/Carnegie Mellon survey found that AI users who trust the tool’s conclusions tend to engage less critically with them, and thus display reduced critical thinking. In short: if you stop questioning AI, you might stop thinking.
But the picture is not entirely bleak. Research in economics and labor suggests a more nuanced dynamic. A working paper analyzing AI’s effects on skill demand finds that AI doesn’t just substitute for human tasks—it also complements certain human capabilities. Skills in ethics, interpretation, judgment, adaptability, teamwork, and digital fluency are rising in value even as more mechanistic tasks decline. In that sense, the right response is not to resist AI, but to recalibrate what we cultivate in ourselves.
The challenge for organizations, educators, and individuals is to avoid passive reliance. Business leaders must design AI deployments that encourage human oversight, skill-refresher periods, and fallback practice. Educators should integrate assignments that resist AI shortcuts and reward original thinking. Workers themselves should carve out “AI-free” time to exercise the muscles the machine would otherwise let atrophy: solving a problem without leaning on AI, reviewing AI’s output critically, and seeking tasks that require reasoning from first principles.
In sum, AI doesn’t necessarily doom human capability—but it does demand more intentional stewardship of our own skills. That way, we stay the architects of progress, not passive passengers.

Gartner projects that AI‑capable PCs will make up about 31% of the global PC market by the end of 2025, with shipments exceeding 77 million units, and could reach 55% market share in 2026 before becoming the norm by 2029, driven by enterprise and consumer demand, hardware refresh cycles, and stronger edge‑AI integration—although adoption is slightly tempered by trade‑induced uncertainty and tariffs. Meanwhile, HP reports that AI‑PCs already make up over 25% of its product mix, boosting its revenue, aided by partnerships with software players like Adobe and Zoom that expand on‑device AI capabilities and point to a growing ecosystem around these machines. Finally, Intel highlights that the biggest barrier to wider AI‑PC adoption is not hardware but a lack of understanding and training: while many decision‑makers see their value, only 35% of employees understand AI’s benefits, and concerns around security and insufficient training remain obstacles.
Sources: IT Pro, Windows Central, TechRadar
Key Takeaways
– AI‑PCs are rapidly moving from novelty to mainstream, with expectations they’ll dominate the market by 2029, led by refreshed enterprise fleets and consumer upgrades.
– Manufacturers like HP are already benefitting, with AI‑PCs driving significant portions of revenue and buoyed by AI software ecosystems.
– Awareness and training lag behind hardware advances—even with adoption growing, many users don’t fully grasp AI’s value, posing an adoption barrier.
In-Depth
The rise of AI-capable PCs marks a significant turning point in personal computing, and the forecasts are striking.
Analysts at Gartner anticipate that nearly a third of PC units shipped globally in 2025 will feature built-in AI capabilities—surpassing the 77 million mark—and expect this trend to accelerate even further in 2026, capturing roughly 55 percent of the total market. The firm asserts that by around 2029, AI PCs will become standard in both enterprise environments and consumer households.
The momentum is fueled by scheduled hardware refresh cycles, the appeal of local “edge AI” processing, and broadening demand from businesses and individuals alike. Yet, some headwinds remain—specifically, tariffs and cautious buying behavior amid economic uncertainties have caused minor delays in adoption. Still, businesses seem resolute, investing now to ensure they’re ready for an AI-integrated future.
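The Gartner figures above imply a useful sanity check: if 77 million AI-capable units represent roughly 31% of 2025 shipments, the total PC market size follows directly. A minimal sketch of that back-of-the-envelope arithmetic (the two input figures come from the article; the implied total is derived, not reported by Gartner):

```python
# Inputs cited in the article
ai_pc_units_2025 = 77_000_000   # projected AI-capable PC shipments in 2025
ai_pc_share_2025 = 0.31         # projected share of all PC shipments

# Derived: implied total global PC shipments for 2025
implied_total_shipments = ai_pc_units_2025 / ai_pc_share_2025
print(f"Implied total 2025 PC shipments: {implied_total_shipments / 1e6:.0f}M")
# → Implied total 2025 PC shipments: 248M
```

A total of roughly 248 million units is consistent with recent annual global PC shipment volumes, which suggests the two projected figures are internally coherent.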
HP’s recent financials underscore the growing strength of AI-PC adoption. The company reports that AI PCs now account for more than a quarter of its product mix, contributing significantly to a year-over-year revenue uptick. This success is buoyed by partnerships with software providers like Adobe and Zoom that are embedding AI capabilities directly on the device—making local processing not just a novelty but a driver of productivity.
At the same time, Intel’s own research reveals a crucial, often-overlooked barrier: the human side of adoption. While most IT leaders understand the strategic value of AI PCs, just about a third of employees truly grasp their benefits—leaving adoption potential constrained by gaps in awareness, training, and security concerns. As AI-enabled devices proliferate, organizations that proactively educate and secure their users will likely gain the greatest payoff.