At the heart of every powerful system lies a driving ideology — a belief system that justifies growth and expansion, even when the consequences contradict the stated mission. In the modern technology landscape, artificial general intelligence (AGI) has become that ideology, with promises of “benefiting all humanity” guiding the strategy of leading AI companies. Among them, OpenAI stands as the chief evangelist, framing the industry’s trajectory, consolidating resources, and influencing global norms around AI development.

The concept of AGI envisions a highly autonomous system capable of outperforming humans in most economically valuable tasks, potentially elevating humanity through increased abundance, scientific breakthroughs, and economic acceleration. Yet the practical pursuit of AGI has come at significant costs. The industry’s hunger for speed and scale has driven massive consumption of data, energy, and computational resources, alongside the rapid deployment of untested systems. These practices raise ethical, environmental, and social concerns, even as the promised AGI future remains uncertain.

Speed Over Safety and Efficiency
The AI sector’s emphasis on AGI has prioritized speed over safety, efficiency, and scientific exploration. By scaling existing algorithms with ever-larger datasets and supercomputers, companies like OpenAI have optimized for rapid advancement rather than sustainable innovation. This approach has reshaped the broader industry, as other tech giants align their strategies with the perceived AGI race, luring top AI researchers away from academia. Consequently, much of AI research is now shaped by corporate agendas rather than independent scientific inquiry.
The financial stakes reflect this aggressive growth strategy. OpenAI projects $115 billion in expenditures by 2029, while Meta anticipates $72 billion in AI infrastructure spending this year and Google estimates up to $85 billion for 2025. Such astronomical investments underscore the scale of ambition but also highlight the opportunity costs, including environmental impact, labor exploitation, and societal disruption.

Human and Societal Costs
Beyond infrastructure costs, the pursuit of AGI has real human consequences. Workers in developing countries, such as Kenya and Venezuela, are often paid minimal wages for content moderation and data labeling roles, exposed to disturbing material as part of AI training processes. The broader rollout of AI technologies has contributed to job displacement, wealth concentration, and the proliferation of chatbots capable of causing psychological harm.
Despite these challenges, other forms of AI have delivered tangible benefits without incurring comparable societal costs. For instance, Google DeepMind’s AlphaFold accurately predicts 3D protein structures from amino acid sequences, transforming drug discovery and disease research. Unlike large-scale generative models, systems like AlphaFold require far less computational infrastructure, spare workers exposure to disturbing training material, and generate measurable real-world benefits.

AGI Evangelism and Global Competition
AGI advocacy has also been intertwined with geopolitical narratives. Silicon Valley often frames its AI ambitions as part of a race to surpass China, promoting the notion that American-led AI development will liberalize global technology norms. However, evidence suggests the opposite. The competitive drive has accelerated consolidation of technological power within a few U.S.-based companies, while potentially reinforcing illiberal practices and narrowing global AI governance options.
The quasi-religious commitment to AGI shapes not only corporate strategy but also the perception of societal impact. OpenAI and similar firms frequently frame the adoption of products like ChatGPT as fulfilling their mission of benefiting humanity. However, the blurred lines between for-profit incentives and non-profit ideals complicate accountability and make actual societal gains difficult to measure. Agreements with major partners, such as Microsoft, further entangle corporate growth objectives with claims of public benefit, raising questions about how well stated missions align with real-world outcomes.

The Dangers of Ideological Blindness
One of the central risks of the AGI-driven model is ideological entrenchment. By constructing a belief system centered on the inevitability and supremacy of AGI, companies risk losing touch with observable realities, including harmful societal effects. As systems scale, the mission can become a justification for practices that produce environmental degradation, psychological harm, or inequitable labor conditions. The rhetoric of “benefiting all humanity” can overshadow ethical oversight, safety considerations, and the potential for alternative AI approaches that balance progress with responsibility.
Critics argue that AI development does not need to follow the path of unrestrained scaling. Incremental improvements in algorithms, efficiency, and targeted applications can yield meaningful technological advancement without the massive ecological and social costs associated with high-speed AGI pursuits. These approaches emphasize measured, responsible innovation over ideological fervor, potentially offering a more sustainable trajectory for AI’s integration into society.

Reframing AI Priorities
The lessons from the current AGI-driven model suggest a need to reconsider priorities in AI development. Systems like AlphaFold demonstrate that focused, domain-specific AI can deliver transformative benefits without the collateral damage associated with large-scale generative models. By shifting the emphasis from speed and scale to practical impact, efficiency, and human-centered outcomes, the AI industry could achieve real-world improvements while minimizing harm.
In conclusion, the pursuit of AGI has created an “empire” in the tech world: powerful, ideologically driven, and capable of shaping research, infrastructure, and geopolitics. Yet this empire comes with substantial costs — environmental, social, and ethical — that challenge the notion that its expansion inherently benefits humanity. Future AI development must reconcile ambition with responsibility, balancing innovation with safety, sustainability, and equitable outcomes, to ensure that technological advancement serves both present and future generations.