ASI won't just fall out of the sky one day
Let's unpack the market forces and feedback loops that will shape the trajectory of AI, AGI, and ultimately ASI. Feedback from consumers, enterprises, government, the military, and science will all steer it!
![](https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffc94e5fa-8f31-4b28-aeba-e1d9d61089ff_1456x816.png)
Introduction
Even the most diehard AI safety advocates concede that we have at least 5 years before ASI (artificial superintelligence) arrives. Personally, I’m putting it at 10 to 20 years as we lead up to the singularity. We’re presently seeing an exponential rise in the cost of training frontier models, with next-generation models like GPT-5 and GPT-6 expected to cost around $100B.
![](https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd3c97d3c-b6c1-4236-971f-62758082c68d_1329x764.png)
Given this timeframe, plus the economic headwinds Big Tech is facing, it’s safe to assume that we won’t have ASI any time soon. This means that we have some runway left. Philip, over at AI Explained, released a fantastic breakdown of the economics and news yesterday.
In reality, the future of AI isn’t solely in the hands of developers and researchers toiling away in labs. Instead, it’s a dynamic interplay between innovation and the demands of various sectors, each exerting its own pressures and preferences on the direction of AI development. From the individual consumer deciding whether to adopt a new AI-powered app, to governments grappling with regulatory frameworks, every stakeholder plays a part in this technological symphony.
There are five primary domains that will influence AI’s developmental trajectory (aside from Big Tech itself!):
Consumer Market - Driven by individual preferences, usability, and perceived value.
Enterprise Sector - Characterized by cautious adoption and stringent business case requirements.
Government Bodies - Shaping AI through legislation, regulation, and public sector adoption.
Military Applications - Demanding unparalleled reliability, control, and predictability.
Scientific Community - Fostering innovation through research, peer review, and collaboration.
Each of these domains uniquely influences AI development in specific ways. We’ll explore how consumer choices can rapidly shift the focus of AI applications, how enterprise skepticism can drive improvements in reliability and efficiency, and how government regulations can set the boundaries within which AI must operate. We’ll also examine the military’s stringent requirements for AI systems and how the scientific community’s commitment to open inquiry and peer review helps ensure the integrity and safety of AI advancements.
Understanding these market forces is crucial not just for developers and policymakers, but for anyone seeking to grasp the full picture of AI’s future.
Consumer Influence
The power of consumer choice in shaping AI development cannot be overstated. Each download, subscription, or purchase sends a clear signal to developers and companies about what works and what doesn’t in the real world. This constant feedback loop drives innovation, pushing AI technologies to become more user-friendly, efficient, and aligned with consumer expectations.
These choices matter particularly for safety, alignment, and corrigibility: the multifaceted and often nuanced criteria by which consumers select and retain AI products will significantly influence the direction of AI development. As consumers demand certain features and reject others, they effectively steer the evolution of AI technologies, potentially pushing for safer, more aligned, and more corrigible systems. The key selection criteria:
Usefulness - AI must save time, money, or help make money (or be very entertaining)
Value - This includes entertainment value, emotional resonance, and other intangible benefits
User Experience - Good interface design and friendly interactions are crucial
Values - Consumers tend to reject products that conflict with their moral or ethical beliefs
Reliability - Consistent performance and clear communication of AI capabilities build trust
These selection criteria form a complex matrix that consumers, often subconsciously, use to evaluate AI products. Successful AI applications in the consumer market must navigate this matrix, balancing functionality with user experience, ethical considerations with practical benefits.
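To make that matrix concrete, here’s a minimal sketch of consumer evaluation as a weighted scoring model. The criteria mirror the list above, but the weights and product scores are invented purely for illustration; real consumers weigh these factors implicitly and inconsistently.

```python
# Toy model of the consumer selection matrix described above.
# Weights and scores are hypothetical, purely for illustration.

CRITERIA_WEIGHTS = {
    "usefulness": 0.30,        # saves time/money or entertains
    "value": 0.20,             # intangible benefits, emotional resonance
    "user_experience": 0.20,   # interface design, friendly interactions
    "values_alignment": 0.15,  # fit with the user's ethics
    "reliability": 0.15,       # consistent, trustworthy performance
}

def overall_score(scores: dict) -> float:
    """Weighted sum of 0-10 scores across the five criteria."""
    return sum(CRITERIA_WEIGHTS[c] * scores[c] for c in CRITERIA_WEIGHTS)

product_a = {"usefulness": 9, "value": 6, "user_experience": 8,
             "values_alignment": 5, "reliability": 7}
product_b = {"usefulness": 7, "value": 7, "user_experience": 9,
             "values_alignment": 9, "reliability": 9}

print(f"Product A: {overall_score(product_a):.2f}")  # 7.30: strong utility, weak ethics fit
print(f"Product B: {overall_score(product_b):.2f}")  # 8.00: balanced across criteria
```

In this toy example, the more values-aligned and reliable Product B edges out the more narrowly useful Product A, which is exactly the dynamic that could push developers toward safer, more aligned products.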
![](https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F49f9793d-af0a-444b-9f69-6702ec62d8bd_1456x816.png)
Moreover, these criteria are not static. As AI technology evolves and becomes more integrated into daily life, consumer expectations shift. What was once considered a groundbreaking feature may become a basic expectation. This constant evolution keeps developers on their toes, driving continuous improvement and innovation.
The consumer market’s influence on AI development extends far beyond individual products, potentially reshaping entire trajectories of AI research and application. As AI systems become more sophisticated, invasive, and omnipresent in daily life, consumer preferences will dramatically impact how Big Tech develops and deploys these technologies. This influence will manifest through various channels: price signals, customer reviews, public relations, and the court of public opinion.
Consider, for example, how Facebook’s reputation was severely damaged by revelations about its algorithmic manipulation of user feeds. Such public backlash serves as a powerful corrective force, compelling tech giants to recalibrate their AI strategies. As AI continues to permeate various aspects of our lives, the collective voice of consumers will play an increasingly crucial role in steering its development. This could push the field towards more transparent, ethical, and human-centric applications, not merely for market success, but as a fundamental requirement for societal acceptance and trust. In essence, consumer preferences may become a key driver of AI safety, alignment, and corrigibility.
Enterprise Adoption
The adoption of AI in the enterprise world follows a well-established pattern known as the technology adoption lifecycle. At the forefront are innovative early adopters, eager to gain a competitive edge through cutting-edge solutions. However, a significant challenge lies in bridging the gap—often referred to as the “chasm”—between these early adopters and the early majority of more conservative businesses. This chasm represents a critical juncture in AI’s journey into the enterprise world, where it must prove its worth beyond novelty and demonstrate tangible, sustainable benefits to skeptical business leaders.
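For reference, here’s a short sketch of where that chasm sits in the standard technology adoption lifecycle. The segment shares are the textbook Rogers figures, not AI-specific data.

```python
# The classic Rogers adoption segments and their textbook population shares.
# The "chasm" sits at roughly 16% cumulative adoption, between the early
# adopters and the early majority.

SEGMENTS = [
    ("Innovators", 0.025),
    ("Early adopters", 0.135),
    ("Early majority", 0.34),
    ("Late majority", 0.34),
    ("Laggards", 0.16),
]

cumulative = 0.0
for name, share in SEGMENTS:
    cumulative += share
    print(f"{name:<15} {share:6.1%} of market ({cumulative:6.1%} cumulative)")
```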
Enterprise risk aversion is a significant factor in AI adoption. Unlike startups or tech-focused companies, traditional enterprises often have complex, established systems and processes. The potential disruption caused by integrating new AI technologies can be seen as a substantial risk. This cautious approach manifests in demands for case studies, proven track records, extensive pilot programs, and rigorous security assessments. While sometimes criticized for slowing innovation, this risk-averse culture serves an important purpose. It pushes AI developers to create more stable, secure, and reliable solutions, ultimately benefiting the entire AI ecosystem.
When CEOs read news articles such as this entry in Futurism, which states that “AI wildly underperforms humans,” they mentally note “okay, AI isn’t ready yet” and move on with their day.
![](https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffc17f428-c62e-4ffd-9893-bc533c113360_700x394.png)
For AI to gain traction in the enterprise sector, it must present a compelling business case. This goes beyond mere technological impressiveness; AI must demonstrate clear, quantifiable benefits that align with business objectives. The need for a robust business case drives AI development towards practical, results-oriented solutions rather than merely impressive technological showcases.
The impact and adoption rate of AI vary significantly across different industry sectors. Some industries, such as finance, healthcare, and e-commerce, have been quick to explore and implement AI solutions. Others, like traditional manufacturing or public-sector organizations, may be more hesitant, or may have little need for AI at all. This variance is influenced by factors such as regulatory environment, data availability, existing technological infrastructure, and industry-specific challenges.
Here are the key factors influencing enterprise adoption of AI:
Chasm - A significant gap exists between early adopters and the early majority in businesses
Risk Aversion - Enterprises are highly cautious, especially with unproven technologies
ROI Focus - Clear financial benefits through cost savings or revenue generation are crucial (see the payback sketch after this list)
Scalability - Ability to grow and adapt as the business evolves
Compliance and Security - Meeting industry regulations and robust security measures
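To see what that ROI requirement means in practice, here’s a back-of-the-envelope payback calculation of the sort an enterprise buyer might run before approving a pilot. Every figure below is hypothetical.

```python
# Hypothetical ROI screen for an enterprise AI deployment.

annual_license_cost = 120_000    # vendor subscription, USD/year
integration_cost = 200_000      # one-time deployment, security review, training
hours_saved_per_week = 300      # across all affected teams
loaded_hourly_rate = 65         # fully loaded cost per employee-hour, USD

annual_savings = hours_saved_per_week * 52 * loaded_hourly_rate  # $1,014,000
annual_net = annual_savings - annual_license_cost                # $894,000
payback_years = integration_cost / annual_net                    # ~0.22 years

print(f"Annual savings:  ${annual_savings:,}")
print(f"Annual net gain: ${annual_net:,}")
print(f"Payback period:  {payback_years:.2f} years")
```

If the hours-saved estimate can’t survive a pilot, or the payback period stretches past a year or two, the deal dies. That discipline is what forces AI vendors toward measurable, reliable products.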
This cautious yet influential approach of the enterprise sector plays a vital role in shaping AI development. It drives the creation of more mature, reliable, and business-oriented AI solutions, ensuring that as AI continues to evolve, it does so in a direction that brings tangible value to businesses across various sectors.
Government’s Role
State and federal lawmakers will have a lot to say in the shaping of AI policy. These legislators are tasked with the challenging responsibility of crafting policies that can keep pace with rapidly evolving AI technologies. Their decisions lay the groundwork for how AI can be developed and utilized across various sectors of society. The policies they create must strike a delicate balance between fostering innovation and ensuring public safety, privacy, and ethical use of AI.
Alongside policy-making, the government itself emerges as a major customer and adopter of AI technologies. However, this adoption comes with stringent requirements and standards that significantly influence AI development. Government agencies often demand high levels of security, reliability, and transparency in AI systems, pushing developers to create more robust and accountable technologies. Not to mention the byzantine vendor onboarding processes that exist in government!
Regulation forms another critical aspect of governmental influence on AI. A mix of existing regulatory bodies and newly created entities will be tasked with overseeing the implementation of AI across various domains. These regulators must grapple with complex issues such as data privacy, algorithmic bias, and the societal impacts of AI-driven automation. From the FDA to OSHA, every governmental agency will have its own say in how AI must be used, secured, and adopted.
The role of lobbyists and special interest groups in shaping AI policy cannot be overlooked. These entities work tirelessly to influence lawmakers and regulators, representing the interests of various stakeholders including tech companies, civil liberties organizations, and industry associations. Their efforts can significantly impact the direction of AI policy and adoption decisions. Sam Altman’s closed-door meetings with Congress might have been the beginning, but he certainly won’t be the only (or the biggest) voice anymore.
As AI becomes an increasingly prominent issue in public discourse, the voice of voters gains more weight in shaping AI-related policies. Public opinion on issues such as AI ethics, job displacement due to automation, and the use of AI in public services can sway political decisions and drive policy changes. We’re already seeing some voters sour on AI. A slim majority (52%) are more anxious than excited about AI’s influence on their lives.
Here are the key factors through which government influences AI development:
Legislators - Federal and state lawmakers are instrumental in creating AI policies
Adoption - Governments adopt AI but require strict compliance with their standards
Regulation - A mix of new and existing regulatory bodies govern AI implementation
Lobbyists - Interest groups work to influence AI policy and adoption decisions
Voters - Public opinion becomes increasingly influential as AI becomes a key issue
Given the dog-and-pony show with Sam Altman and Gary Marcus being hauled before the Senate in 2023, it’s safe to say that the government is aware of the situation. I don’t see any evidence of anyone being asleep at the wheel.
Military Specifications
The military’s approach to adopting any new technology is characterized by a set of stringent requirements that will significantly influence the development and deployment of AI systems. Unlike other sectors, the military’s primary concerns revolve around control, reliability, and predictability, reflecting the critical nature of its operations and the potential consequences of technological failures.
At the core of military AI adoption is the concept of absolute control. The military insists on having complete command over any AI system it deploys, with no room for autonomous decision-making that could potentially contradict or override human directives. This requirement for total control is embodied in the demand for reliable killswitches and failsafes. Every AI system integrated into military operations must have dependable mechanisms to halt its functions immediately when required. This non-negotiable feature ensures that human operators can maintain ultimate authority over AI-driven systems, especially in high-stakes scenarios where unforeseen AI behavior could have severe consequences.
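In software terms, the pattern the military is asking for resembles a supervisory wrapper that checks a human-controlled halt signal before and after every action. Here’s a minimal sketch of the concept; it is not a description of any real military system, and every name and interface in it is hypothetical.

```python
import threading

class KillSwitchEngaged(RuntimeError):
    """Raised once the halt signal is set; the system must stop immediately."""

class SupervisedAgent:
    """Wraps an AI system so a human operator retains ultimate authority."""

    def __init__(self, agent):
        self._agent = agent             # the underlying AI system (hypothetical interface)
        self._halt = threading.Event()  # set only by a human operator

    def halt(self) -> None:
        """The operator pulls the killswitch."""
        self._halt.set()

    def act(self, observation):
        if self._halt.is_set():
            raise KillSwitchEngaged("halt signal set; refusing to act")
        action = self._agent.act(observation)
        # Re-check after inference: the operator may have halted mid-step.
        if self._halt.is_set():
            raise KillSwitchEngaged("halt signal set; discarding action")
        return action
```

The hard part, and the reason military standards are so exacting, is guaranteeing that the wrapped system cannot learn to route around or disable the wrapper: the off button has to stay reliable under adversarial conditions, not just in the lab.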
Predictability is another crucial factor in the military’s AI requirements. The unpredictable nature of combat and crisis situations demands that AI systems behave in consistently foreseeable ways. Military standards for AI behavior will be extremely strict, requiring extensive testing and validation to ensure that AI responses remain within expected parameters across a wide range of scenarios. This emphasis on predictability will force AI developers to create more robust and thoroughly tested systems. Consider that OpenAI has never earned a military contract. They don’t have the Military Industrial Complex chops that Raytheon and Boeing do, and if they want to sell their services to the military, they’ll have to mature significantly.
“The military likes off buttons!” ~ a USAF Colonel, in a documentary about AI I watched years ago
The military’s demand for reliability in AI systems goes beyond mere functional consistency. Given the often harsh and challenging environments in which military equipment operates, AI systems will have to demonstrate exceptional durability and consistent performance under extreme conditions and numerous failure modes. This requirement for ruggedness and unwavering reliability sets a high bar for AI developers, encouraging innovations in hardware integration, error handling, and system resilience. Hardline AI safety advocates’ fears of AI “making a mistake” that could wipe out humanity are a bit absurd in this context.
The process of integrating AI into military systems is further shaped by the stringent nature of military contracts. These contracts are subject to intense scrutiny and oversight, often involving Congressional review, budgetary hearings, and of course the court of public opinion. This rigorous procurement process ensures that only the most thoroughly vetted and reliable technologies make it into military applications, AI included. While this can slow the adoption of cutting-edge AI, it also serves as a powerful quality control mechanism, pushing developers to meet exceptionally high standards. If you want to suckle at the teat of the military industrial complex, you gotta play by their rules.
Here are the key factors shaping military adoption of AI:
Killswitch - All military tech must have dependable failsafes and off-switches
Predictability - AI systems must meet strict military standards for predictable behavior
Reliability - Durability and consistent performance are key military requirements
Contracts - Military contracts follow stringent processes with congressional oversight
Command - AI must integrate into and respect the rigid military command hierarchy
The military’s exacting standards for tech adoption will play a crucial role in shaping the development of advanced AI systems. While these requirements may seem restrictive, they drive innovations in AI reliability, control mechanisms, and predictability that often find applications beyond the military sector. As AI continues to evolve, the military’s influence will likely remain a significant factor in pushing for more controllable, dependable, and predictable AI systems.
Scientific Community’s Influence
The scientific community plays a pivotal role in shaping the development, safety, and research directions of artificial intelligence. Unlike other sectors influenced by market demands or political considerations, the scientific approach to AI is characterized by a commitment to objectivity, rigorous methodology, and the pursuit of knowledge for its own sake. This unique perspective brings invaluable contributions to the field of AI, ensuring its growth is grounded in solid theoretical foundations and empirical evidence.
Central to the scientific community’s influence on AI is the process of building consensus. Scientific progress in AI, as in other fields, relies on the synthesis of diverse viewpoints and the critical evaluation of ideas. This consensus-building approach ensures that advancements in AI are not driven by singular perspectives or isolated breakthroughs, but rather by a collective understanding that has withstood scrutiny from various angles. The robust debates and discussions within the scientific community serve to refine AI concepts, identify potential pitfalls, and chart promising research directions.
The incentive structure within the scientific community differs significantly from that of the corporate or political world. While businesses are driven by profit, and politicians by voter approval, scientists are motivated primarily by the pursuit of knowledge and the recognition of their peers via publications in prestigious journals. This focus on objectivity and the advancement of the field as a whole leads to research that may not have immediate commercial applications but could be crucial for the long-term development and safety of AI systems. The scientific community’s emphasis on foundational research and theoretical understanding provides a necessary counterbalance to market-driven development.
Universities are key stakeholders in the scientific community. These institutions serve as hubs of innovation, bringing together diverse expertise and fostering an environment of intellectual exploration. The academic setting allows for the pursuit of research directions that may be too speculative or long-term for corporate R&D departments. Moreover, universities play a crucial role in training the next generation of AI researchers and practitioners, shaping the future of the field through education and mentorship.
The peer review process is a cornerstone of scientific rigor in AI research. This system of evaluation ensures that published research meets high standards of quality, methodology, and reproducibility. Through peer review, the scientific community collectively validates new findings, identifies potential flaws or oversights, and builds upon established knowledge. This process is essential for maintaining the integrity of AI research and providing a solid foundation for further advancements.
Here are the key factors through which the scientific community influences AI development:
Consensus - Scientific progress relies on building consensus from different viewpoints
Incentives - Scientists prioritize objectivity and peer recognition, with incentives that differ from those of businesses and politicians
Universities - These institutions will be major contributors to AI research and development
Peer Review - Essential for reproducibility and validation, peer review ensures rigorous standards
Collaboration - International and institutional cooperation will promote AI safety and accessibility
The scientific community’s approach to AI development stands in stark contrast to that of for-profit enterprises or governmental bodies, creating fundamentally different perspectives and outcomes. While businesses are driven by market demands and profit margins, and governments by policy objectives, the scientific community is propelled by the pursuit of knowledge, rigorous peer review, and open inquiry. This divergence becomes increasingly significant as the field of AI expands at an unprecedented rate, with the number of machine learning and artificial intelligence research papers growing exponentially. It’s not as though the Ivory Tower isn’t paying attention! The scientific process, with its commitment to transparency, replicability, and ethical consideration, provides a crucial counterbalance to other sectors’ approaches. As AI continues to advance, this ever-expanding scientific foundation serves as both a catalyst for innovation and a safeguard, ensuring that progress in AI is not solely driven by commercial interests or political agendas, but also by a deep, shared understanding of the technology’s potential and pitfalls.
Conclusion
As we’ve explored the various forces shaping the trajectory of AI, a compelling picture emerges—one that should give us confidence in the future of this transformative technology. The interplay of market forces, from corporate boardrooms to military war rooms, from consumer apps to scientific labs, creates a powerful ecosystem of checks and balances that will guide AI development toward safety, reliability, and alignment with human values.
Let’s be clear: corporations and militaries, two of the most influential adopters of advanced technologies, will never embrace AI systems that are unpredictable or uncontrollable. It’s not just about preference; it’s about necessity. These organizations operate in high-stakes environments where the margin for error is razor-thin. This reality alone creates a powerful incentive for AI developers to tackle the thorny issues of safety, alignment, and corrigibility head-on. To put it in more visceral terms: the military ain’t gonna use a loose cannon. This demand for reliability and control will ripple through the entire AI industry, pushing for solutions that are not just powerful, but also trustworthy and manageable.
![](https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcbb6c119-b4da-4fbd-bdcf-33449fde74d6_1456x816.png)
But it’s not just about the big players. Everyday consumers are already wielding significant influence through their wallets. Each subscription, each cancellation, each choice to use or abandon an AI tool sends a clear signal about what we value and what we expect from these technologies. We’re seeing people vote with their feet, ditching tools that don’t meet their needs or align with their ethics. This constant feedback loop of consumer choices is a powerful force, pushing AI companies to create products that are not just useful, but also ethically sound and aligned with human values.
Meanwhile, the government and scientific community are providing crucial oversight and direction. Regulators are working to establish guidelines that protect public interests. Voters are making their voices heard on AI-related issues. Universities are pushing the boundaries of what’s possible while also critically examining the implications of these advancements. This combination of governmental, scientific, and public input ensures that multiple perspectives are constantly being considered in the development and deployment of AI systems.
When you step back and look at the big picture, it becomes clear just how many fingers are in this pie. And that’s a good thing. The development of AI isn’t happening in a vacuum or under the control of a single entity. Instead, it’s a collective endeavor, shaped by a diverse array of stakeholders, each bringing their own priorities, concerns, and expertise to the table.
In essence, solving the alignment problem—ensuring that AI systems behave in ways that are beneficial and aligned with human values—intrinsically involves everyone. It’s not just a technical challenge to be solved in a lab; it’s a societal challenge that we’re all participating in, whether we realize it or not. Through the forces of free market economics, the court of public opinion, and the democratic process, we’re all playing a role in steering AI toward a future that works for humanity.
This collective approach to AI alignment gives us reason for optimism. Yes, the challenges are significant, and the stakes are high. But with so many eyes on AI, so many diverse interests invested in its success, and so many feedback mechanisms in place, we’re well-positioned to create AI systems that are not just powerful, but also safe, beneficial, and aligned with our collective values.
The path forward for AI is not a lone journey of isolated geniuses or faceless corporations. It’s a shared endeavor, guided by the collective wisdom, values, and interests of society as a whole. And that, more than anything else, is why we can look to the future of AI with hope and excitement, knowing that we’re all playing a part in shaping a technology that has the potential to profoundly better our world.