Organizations and individuals around the world are becoming increasingly reliant upon AI. However, adoption of AI methods by organizations is outpacing risk mitigation, and consumers are increasingly apprehensive about its use. While geographic regions have different approaches to regulating AI use, research indicates that increased data security measures and regulation positively influence customer confidence.
User concerns grow globally
Recent research found low trust in AI combined with high support for increased regulation. Trust, Attitudes, and Use of Artificial Intelligence: A Global Study 2025, led by the University of Melbourne in collaboration with KPMG, surveyed over 48,000 people across 47 countries. The study found that most people report using AI regularly, and even more support AI regulation:
- 66% of people use AI regularly, and 83% believe AI will bring significant benefits.
- Despite this, only 46% of people globally report trust in AI systems.
- 70% of respondents support national and international AI regulation.
As consumers worldwide rely more heavily on AI, they are also expressing growing trepidation over issues of trust and transparency. Tech engagement, while strong across markets, is especially robust in emerging markets such as India, Brazil, and China, according to Kantar Media’s Global Digital Media and Tech Trends Report, which analyzed data from 80,000 respondents in 37 countries. In India, 86% of media users answered yes to “I try to keep up with developments in technology,” as did 76% of those in China and 73% of those in Brazil – compared to 62% of U.S. respondents. A large majority of media users in India (78%) agreed with the statement: “Artificial intelligence has had a significant impact on my daily life.” More than half of Brazilians agreed (52%), compared to less than half (46%) of U.S. respondents.
According to the Adobe 2025 Digital Trends Report, key issues around AI adoption include balance, transparency, and data security. The balance between innovation and trust is an ongoing challenge: both privacy concerns and governance complexities remain significant hurdles. Of the 8,301 consumers surveyed by Adobe, close to half (45%) say they prioritize visibility and control over their data, while a third (33%) demand clarity on how AI is used to generate recommendations. As organizations move beyond pilot programs to scale AI initiatives, they must focus on clear disclosure and ethical AI practices to maintain credibility.
Trust erosion and AI adoption
A recent study by Thales reveals that consumers’ trust in organizations to use their data responsibly in the age of AI is rapidly waning. However, the study also found that increased regulation alleviates some of that distrust.
- In 2024, 47% of consumers questioned whether companies used AI responsibly. By 2025, this concern rose to 57% – a marked leap in just one year.
- The 2025 global trust index found trust rates stagnating or declining worldwide, with no sector achieving more than 50% “high trust” ratings. However, industries (such as banking and healthcare) and geographies with the most regulation had higher trust rates.
- Trust in news media hit a new low, with news organizations trusted by only 3% of consumers in 2025, a decline from 6% in 2024. Some of this drop was attributed to slackening oversight (particularly from social platforms).
- In contrast, government services saw trust increase from 37% in 2024 to 42% in 2025, a gain possibly driven by enhanced regulatory frameworks like the EU’s Digital Operational Resilience Act (DORA).
These studies indicate growing consumer awareness of the risks inherent in AI technology, underscoring the need for media executives to demonstrate strong governance and proactive leadership.
AI data risk found to be almost universal
Consumer trepidation is far from unfounded. The 2025 State of Data Security Report by Varonis quantifies the data risk entailed by AI usage based on data obtained from 1,000 companies. Findings confirm that AI adoption is leaping ahead of risk mitigation. Among the findings: 99% of organizations have had sensitive data exposed to AI tools.
The report also indicates that 88% of organizations evaluated had old but still-enabled user accounts, which are potential entry points for attackers. In addition, 90% of the organizations studied have exposed sensitive cloud data, and 98% have employees using unsanctioned apps, including shadow AI. The study underscores an urgent need for stronger data governance frameworks in the age of AI.
As previously reported by DCN, a plan that includes transparency, balance, and education can offset some AI concerns. However, a conundrum remains: while most users say they want transparency around the use of AI, disclosing that content is AI-generated can itself undermine trust. In addition to the very real risks of data exposure, organizational AI use risks alienating consumers who perceive a lack of human connection and oversight. Media Pulse points out that content from human creators, even when flawed, feels authentic and drives stronger engagement. Thus, AI tools must always be used in concert with human creators to maintain community trust.
Global governments differ on AI regulation
The impact of AI adoption and data security concerns will likely spur increased regulation in many locales, so companies would be wise to get ahead of future requirements. The EU Artificial Intelligence Act (AI Act) – the world’s first comprehensive AI regulation – officially entered into force in August 2024. Designed to ensure safe, ethical, and transparent AI development and deployment across the European Union, it requires AI-generated content and deepfakes to be clearly labeled by 2026. Failure to disclose can lead to legal penalties.
Other countries are likely to follow suit as the impacts of AI-related data risks become increasingly apparent. Brazil and Peru are currently working on AI governance frameworks based on the EU model. Canada has established the Artificial Intelligence and Data Act (AIDA), which focuses on transparency, accountability, and risk management for AI systems. China’s strict AI regulations include content moderation laws and licensing requirements for AI models. Meanwhile, India is developing a techno-legal approach to AI regulation. The United Kingdom leans towards a more pro-innovation approach, relying on existing regulators rather than creating new AI-specific laws. Australia boasts a comprehensive “AI assurance framework” at federal, state and territory levels.
Meanwhile, AI governance in the U.S. has been fragmented so far, with state-level regulations and sector-specific guidelines. Until 2025, a more unified national approach to AI governance seemed likely, but recent executive orders have aimed at repealing AI regulations. As of this writing, the House of Representatives has passed the Budget Reconciliation Bill, which includes a 10-year moratorium on state and local laws regulating the use of AI technologies. This rollback of AI regulation, however, may be the opposite of what the majority of the public wants: more than half of U.S. adults (58%) say they are concerned that government regulation won’t go far enough in managing AI risks, while only 21% fear it will go too far, according to a recent Pew Research poll.
Different attitudes towards AI risks mean that a policy acceptable in one region might not be acceptable in another. Media companies operating internationally may have to tailor AI-driven strategies to align with local regulatory expectations. The Global AI Regulation Tracker offers an interactive, real-time comparison of how various countries are responding to the explosion of AI use with regulations, laws, and policies.
Given the research linking increased AI governance with customer confidence, being proactive about AI policy is wise from a customer service perspective. Whether or not their region requires it, media leaders will want to establish clear guidelines for AI use within their organizations to ensure practices that align with user expectations. Regulations aren’t just about compliance; they are about setting standards that align with public trust. Media leaders who responsibly integrate AI can gain a strategic advantage, with ethical AI use as a key differentiator among market competitors.