
Allowing big tech to monopolize AI is risky business

December 5, 2023 | By Rande Price, Research VP – DCN

Artificial Intelligence (AI) is a groundbreaking yet potentially problematic technology. Despite its many promising applications, serious concerns remain about its potential threats, from the spread of misinformation to surveillance and democratic disruption. Compounding these risks are concerns that innovation will be stifled, and AI’s development distorted, if just a handful of big tech companies dominate the playing field.

Open Markets Institute and the Center for Journalism and Liberty’s new report, AI in the Public Interest: Confronting the Monopoly Threat, looks at some of the major concerns around the development and applications of AI. It also examines the potential monopolistic influence of the tech giants (Google, Amazon, Microsoft, Meta, and Apple) on the evolution of AI. As the authors posit, “How AI is developed and the impact it has on our democracies and societies will depend on who is allowed to manage, develop, and deploy these technologies, and how exactly they put them to use.”

Authors Barry Lynn, Max von Thun, and Karina Montoya highlight government responses to concerns in early-stage regulations. Actions in Europe include the EU’s Artificial Intelligence Act, while the UK’s competition authority delves into the competition landscape of foundation models. In the US, the Biden Administration outlined a Blueprint for an AI Bill of Rights and issued a comprehensive Executive Order targeting AI-related harms.

The dangers of monopolist AI development

The report examines the tech giants’ corporate structures and the behaviors through which they control foundational AI technologies. These corporations’ influence extends across the entire Internet tech stack, allowing them to broadly control the direction, speed, and nature of innovation. The authors suggest that these companies’ stronghold over “upstream” infrastructure empowers them, for example, to identify and suppress potential rivals through various means, directing the entire “downstream” ecosystem to serve their interests.

The authors call out several harms that can result from this dominant role in the evolution of AI:

  1. Suppression of trustworthy information: Restructuring communication and commercial systems can hamper individuals’ ability to access, report, verify, and share reliable information.
  2. Spread of propaganda and misinformation: AI can enable personalized manipulation of propaganda and misinformation (at scale), intensifying their political, social, and psychological impact. The reach and power of tech giants, combined with generative AI capabilities, elevate the effectiveness of state-level and private actors in manipulating public opinion.
  3. Addiction to online services: The rise of social media, gaming, and other online services has been linked to addiction and mental health issues, particularly among minors. Monopolistic platforms, prioritizing screen time and viral content, can exploit generative AI’s ability to customize and target content, intensifying harmful effects.
  4. Employee surveillance: Tech corporations may utilize surveillance and AI to monitor employees, which would impact privacy and fair employment practices.
  5. Monopolistic extortion: Through control of ecommerce platforms, app stores, and other gateways, corporations can extract fees from sellers and dictate business terms.
  6. Reduced security and resilience: Concentration in core infrastructure poses security risks as businesses and governments increasingly incorporate AI.
  7. Degrading essential services: Generative AI can reduce quality by producing large volumes of inaccurate content.

Applying competitive legal measures

History reveals that competition laws, antitrust measures, and regulations are vital to prevent powerful corporations from exploiting groundbreaking technologies. The authors advocate for effective oversight and control mechanisms. Applying tools to regulate corporate behavior and industry governance empowers the public, ensuring consumers benefit from these technological advances. This approach facilitates the protection of individual and public interests through regulatory practices.

Recommendations for immediate action:

  • Stop large tech companies from controlling AI: Make big tech companies change their plans when they try to control the development of AI through deals and partnerships.
  • Share large tech company data with everyone: Agree that the information big companies collect should be shared with everyone and make rules about who can use this data to benefit the public.
  • Protect artists’ and writers’ work: Make sure the big companies can’t steal or misuse the work of artists, writers, and other creative people.
  • Check if large tech companies are a security risk: Look closely at how big companies’ concentrated control might endanger the country’s safety, and ensure no single company can dominate critical systems.
  • Protect people from digital tricks: Make strong rules to stop big tech companies from tricking and exploiting workers and contractors online.
  • Stop unfair treatment by large tech companies: Make it illegal for powerful tech companies to treat people and businesses unfairly when providing important services.
  • Acknowledge the importance of cloud computing: Make sure the big tech companies don’t have too much control over it by treating it like a regulated utility.
  • Make laws work together: Make sure the people enforcing laws about fair competition and privacy work together closely.

Fair market

The authors suggest market structures that ensure AI serves the public interest and remains subject to democratic control by citizens, not corporations. The Biden White House is adopting a “whole-of-government” strategy, spanning privacy, consumer protection, corporate governance, copyright law, trade policy, labor law, and industrial policy, to shape the trajectory of AI.

The report concludes that the more seamlessly these regulatory frameworks are integrated, in the United States and globally, the more effective they will be. By leveraging the collective power of diverse regulatory mechanisms, AI can become a force for the common good, guided by democratic principles and serving the welfare of the people.
