From Present to Future: Exploring the Impact of AI Regulation on Trust & Safety Teams

In discussions about AI regulation, there has been relatively little of direct relevance to trust & safety. Most of the focus is on use by regulated organizations rather than misuse by platform users. However, as large platforms play increasingly significant roles in providing access to AI models, we might expect future regulatory efforts to more directly impact T&S.

Tim Bernard

A fairly small proportion of the discourse about safety and AI is squarely relevant to the established discipline of Trust and Safety. As I’ve discussed elsewhere, the more widely discussed topics are the long-term or theoretical risks of AI and associated concerns, such as alignment (ensuring that AI systems’ goals properly align with human interests), and immediate socio-ethical concerns, largely focussing on inequities that can be propagated and perpetuated by AI systems. There are, however, Trust and Safety teams at companies like OpenAI and Anthropic that do important front-line and behind-the-scenes work, setting policies and developing systems to prevent harmful misuse of their products by users.

What laws do generative AI T&S teams have to worry about?

Although the content policies for generative AI products like ChatGPT or Midjourney are much like those on any site or app that hosts user-generated content, there may be a significant difference in the legal underpinnings. Most user-generated content (UGC) hosting is protected by intermediary liability immunity: the famous Section 230 in the US and similar frameworks in the EU and elsewhere. This means that content moderation is generally at the discretion of each platform and that, with some exceptions for narrow categories of illegal material (and generally only once the platform has been notified of its existence), the tech company hosting the content bears no liability.

It is an open question, however, whether content generated by AI in response to a user prompt should be similarly covered. Expert opinion varies, though it is worth noting that Ron Wyden and Chris Cox, the lawmakers responsible for writing Section 230, do not believe that it should protect generative AI products. This raises a very much non-theoretical concern: ensuring non-violative outputs carries far higher risk for generative AI companies than for social media and other online services that host UGC.

What are the current trends in regulating AI?

Reflecting the broader discourse about the dangers of AI, legislative and regulatory efforts have focussed largely on long-term and equity-related problems. Some examples:

  • The OECD principles point to obligations to keep AI in accordance with human values, highlight the importance of transparency and accountability, and encourage risk-awareness.
  • The UK’s White Paper on regulating AI incorporates most of these principles, adding the important aspect of enabling individuals impacted by automated decisions to seek and receive redress where appropriate. The UK’s approach attempts to regulate each use of AI on its own merits, rather than drawing up broad categories based on technology or sector.
  • The White House Blueprint for an AI Bill of Rights (technically only applicable to the use of AI by the US federal government itself), includes several of the focusses discussed above and introduces data privacy concerns and the mandate to present human alternatives to AI decision-making.
  • The draft EU AI Act, which is likely to set standards as the first major piece of legislation to be enacted, defines use-case categories that are prohibited (such as predictive policing) and those which are high-risk, and therefore subject to significant requirements.

What about multipurpose AI systems?

Very broadly, one could say that these laws deal with uses of AI, whereas Trust and Safety is about preventing misuse of platforms, as defined by the platform’s management. Recent drafts of the EU’s legislation first introduced clauses about “general purpose” systems (Title IA in a draft from late last year), and the most recent version adds more comprehensive rules (especially Article 28b) for “foundation models” that are intended for a wide range of uses, such as large language models. This is where significant overlap with the Trust and Safety field emerges.

Michael Veale, Kira Matus and Robert Gorwa, in their forthcoming “AI and Global Governance: Modalities, Rationales, Tensions”, describe multipurpose AI products as “dual-use technologies”, a term usually applied in the context of arms control to refer to “goods, software and technology that can be used for both civilian and military applications”. Governments apply special regulations to these products, and the same may well be done for AI systems, likely at least for foundation AI models in the EU.

Determining responsibility or accountability for harmful uses of AI can be incredibly difficult, due to the contemporary industry models of Software-as-a-Service, cloud computing, and modular systems. This complexity is examined in a new paper, “Understanding accountability in algorithmic supply chains” by Jennifer Cobbe, Michael Veale and Jatinder Singh. Those creating certain algorithms may have limited visibility into how they may be used, making suitable risk assessments very challenging. Similarly, those applying the outputs of AI decision-making may be unaware of who created them, what their limitations and dangers are, and how they evolve.

The new clauses for foundation models in the EU’s AI Act do attempt to “expand the accountability” by placing significant responsibilities on the providers of these models to inform users further along the supply chain about the features of their products. Beyond this, determining how to divide accountability amongst the various actors in the supply chain (who are typically located in differing jurisdictions) is no simple task. As Cobbe, Veale and Singh point out, GDPR takes on similar challenges in regulating data flows through comparably complex networks of companies.

How might this expand the role of T&S?

Cobbe, Veale and Singh identify an increasing centralization whereby many AI supply chains include Microsoft, Google, or Amazon through their control of leading models and their dominance in cloud services. (This is examined in greater depth in Cobbe and Veale’s “Artificial Intelligence as a Service: Legal Responsibilities, Liabilities, and Policy Challenges”). Veale, Matus and Gorwa identify such platforms as chokepoints that could be an effective locus of regulatory efforts. “In the future,” they write, “global AI governance seems likely to become highly enmeshed with platform governance.” Enforcement of platform governance, then, often becomes a task for Trust and Safety teams.

Much of Trust and Safety work focuses on the platform-user relationship, where the user is a private individual or, on marketplace platforms, a (usually) small company; trust relationships between businesses are typically overseen by core legal departments through contracts and, if necessary, lawsuits. Trust and Safety does exist in business-to-business contexts, though, mostly at providers of cloud services like Dropbox, AWS, or Salesforce, as well as at internet infrastructure services like Cloudflare. In a comparable model, investigating policy breaches and enforcing restrictions on uses of an AI model by API subscribers or cloud computing customers may well become an emerging Trust and Safety function, irrespective of whether the policies originate in law or from the platform itself.
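As a rough illustration of what enforcement in this business-to-business model could look like, the sketch below flags API subscribers or cloud tenants whose share of policy-flagged requests exceeds a threshold, so that a Trust and Safety team can investigate. The log format, threshold, and function names are hypothetical assumptions, not any particular platform’s tooling.

```python
# A minimal sketch, under assumed data shapes, of flagging business customers
# (API subscribers or cloud tenants) whose usage appears to breach an
# acceptable-use policy. Thresholds, field names, and record format are hypothetical.
from collections import Counter

POLICY_FLAG_THRESHOLD = 0.05  # fraction of a customer's requests flagged before escalation

def accounts_to_investigate(request_log: list[dict]) -> list[str]:
    """Return customer IDs whose share of policy-flagged requests exceeds the threshold."""
    totals, flagged = Counter(), Counter()
    for record in request_log:  # e.g. {"customer": "acme", "flagged": True}
        totals[record["customer"]] += 1
        if record.get("flagged"):
            flagged[record["customer"]] += 1
    return [
        customer for customer, n in totals.items()
        if flagged[customer] / n > POLICY_FLAG_THRESHOLD
    ]
```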

How might future laws about AI impact the role of T&S?

Generative AI and Liability: Jurisdictions around the world will have to clarify whether or not existing intermediary liability regimes cover generative AI. If any determine that less permissive rules apply, services that wish to continue operating in those jurisdictions may have to greatly enhance their Trust and Safety operations to ensure that potentially illegal content is not produced.
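To make this concrete, here is a minimal, hypothetical sketch of a pre-release output check for such a service: a generation call is wrapped so that completions flagged by a policy classifier are withheld and routed to Trust and Safety review. The classifier, category names, and function signatures are illustrative assumptions, not any particular provider’s API.

```python
# A minimal sketch of screening generated outputs before they are returned,
# assuming the service maintains its own policy classifier. All names here
# are placeholders for illustration.
from dataclasses import dataclass

BLOCKED_CATEGORIES = {"csam", "credible_threats", "illegal_weapons_instructions"}

@dataclass
class ModerationResult:
    flagged: bool
    categories: set[str]

def classify_output(text: str) -> ModerationResult:
    """Placeholder for a real policy classifier (an ML model or vendor service)."""
    hits = {c for c in BLOCKED_CATEGORIES if c in text.lower()}
    return ModerationResult(flagged=bool(hits), categories=hits)

def safe_complete(generate, prompt: str) -> str:
    """Wrap a generation function so violative outputs are withheld."""
    output = generate(prompt)
    verdict = classify_output(output)
    if verdict.flagged:
        # Withhold the content and queue it for Trust & Safety review.
        return "[output withheld pending review]"
    return output
```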

KYC: Cobbe, Veale and Singh, with their focus on the “accountability horizon,” suggest implementing Know Your Customer regulations for AI systems. Common in financial services, these could require customers to disclose to their suppliers how they intend to use the system, as well as who they may in turn provide services to. A Trust and Safety specialist with expertise in permissible AI usage could then approve or reject those uses, and investigating suspicious responses would also fall within the purview of Trust and Safety teams.
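As a rough sketch of how such a KYC intake might be triaged before a human reviewer gets involved, the example below assumes a simple customer declaration record and a first-pass decision rule; the field names, prohibited-use categories, and review logic are all hypothetical.

```python
# A hypothetical Know-Your-Customer intake for an AI platform, assuming
# customers must declare intended uses and downstream recipients before
# API access is approved. Not a real compliance schema.
from dataclasses import dataclass, field
from enum import Enum

class Decision(Enum):
    APPROVED = "approved"
    REJECTED = "rejected"
    NEEDS_REVIEW = "needs_review"

PROHIBITED_USES = {"predictive_policing", "biometric_surveillance"}

@dataclass
class CustomerDeclaration:
    customer_id: str
    intended_uses: set[str]
    downstream_recipients: list[str] = field(default_factory=list)

def triage_declaration(decl: CustomerDeclaration) -> Decision:
    """First-pass triage; ambiguous cases go to a Trust & Safety specialist."""
    if decl.intended_uses & PROHIBITED_USES:
        return Decision.REJECTED
    if decl.downstream_recipients:
        # Resale or integration further down the supply chain warrants
        # manual review of the customer's "accountability horizon".
        return Decision.NEEDS_REVIEW
    return Decision.APPROVED
```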

Comprehensive Rules for Platforms: The conventional wisdom is that AI will play an increasingly central role in multiple aspects of business and government in the near future. If platforms are to fulfill a central role in restricting the use of AI, as suggested above, how they do so may well require regulation. With regard to content moderation, significant bodies of legislation like the EU’s Digital Services Act and the UK’s draft Online Safety Bill are dedicated to laying out detailed substantive and procedural requirements for these platforms.

Last year’s draft of the EU AI Act exempted providers of General Purpose AI systems from the high-risk requirements if they merely tell customers not to use the systems in high-risk contexts “in good faith” and, following instances of misuse, “take all necessary and proportionate measures to prevent such further misuse.” Although the newer draft goes much further, it appears to apply only to providers of foundation models rather than to platforms involved with the use of a variety of model types at any stage of the supply chain. As the industry matures, we can expect regulators to start taking a much closer look at when and how platforms allow access to AI systems, with Trust and Safety teams assuming responsibility for enforcing platform policies regarding acceptable AI usage.

For more deep dives on Trust & Safety and AI, read about users’ perceptions of AI vs. human content moderation.