One of the Biggest Problems in Regulating AI Is Agreeing on a Definition


In 2017, spurred by advocacy from civil society groups, the New York City Council created a task force to address the city’s growing use of artificial intelligence. But the task force quickly ran aground attempting to come to a consensus on the scope of “automated decision systems.” In one hearing, a city agency argued that the task force’s definition was so expansive that it might include simple calculations such as formulas in spreadsheets. By the end of its eighteen-month term, the task force’s ambitions had narrowed from addressing how the city uses automated decision systems to simply defining the types of systems that should be subject to oversight.

New York City isn’t alone in this struggle. As policymakers around the world have attempted to create guidance and regulation for AI’s use in settings ranging from school admissions and home loan approvals to military weapon targeting systems, they all face the same problem: AI is really challenging to define.

Matt O’Shaughnessy

Matt O’Shaughnessy is a visiting fellow in the Technology and International Affairs Program at the Carnegie Endowment for International Peace, where he applies his technical background in machine learning to research on the geopolitics and global governance of technology.

Subtle differences in definition—as well as the overlapping and loaded terminology different actors use to describe similar techniques—can have major impacts on some of the most important problems facing policymakers. Researchers typically refer to techniques that infer patterns from large sets of data as “machine learning,” yet the same concept is often labeled “AI” in policy—conjuring the specter of systems with superhuman capabilities rather than narrow and fallible algorithms. And some technologies commercially marketed as AI are so straightforward that their own engineers would describe them as “classic statistical methods.”
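To make that blurriness concrete, consider how little separates a "classic statistical method" from something a vendor could market as AI. The sketch below is a minimal illustration with made-up numbers, not drawn from any particular product: it fits a least-squares line, a technique that predates modern machine learning, yet it plainly infers a pattern from data and produces predictions.

    # A "classic statistical method": ordinary least-squares regression in a few
    # lines of NumPy. The data here are invented purely for illustration.
    import numpy as np

    # Hypothetical historical records: years of experience vs. salary (in $1,000s).
    experience = np.array([1, 2, 3, 5, 8, 10], dtype=float)
    salary = np.array([45, 50, 58, 66, 80, 92], dtype=float)

    # Fit a straight line by least squares -- a textbook statistical technique.
    slope, intercept = np.polyfit(experience, salary, deg=1)

    # The fitted line "infers a pattern from data" and "produces a prediction,"
    # which is enough to bring it within many policy definitions of AI.
    predicted_salary = slope * 6 + intercept
    print(f"Predicted salary at 6 years of experience: ${predicted_salary:.0f}k")

Whether a regulation should reach a system like this one is exactly the scoping question policymakers have to answer.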

In attempting to better define AI for legislation or regulation, policymakers face two challenging trade-offs: whether to ground the definition in analogies to human intelligence or in specific technical capabilities, and how broad a scope to adopt. Despite the difficulty of these trade-offs, there is often a way for policymakers to craft an AI definition well suited to the specific application in question.

The first trade-off pits definitions based on humans against ones based on specific technical traits. Human-based definitions describe AI with analogies to human intelligence. For example, a U.S. Department of Defense strategy defines AI as “the ability of machines to perform tasks that normally require human intelligence.” By contrast, capability-based definitions describe AI through specific technical competencies. One influential definition describes a “machine-based system” that produces “predictions, recommendations, or decisions.”

Human-based definitions naturally accommodate advances in technology. Take the AI research community, which has little need for legal precision: its vague definitions of AI have attracted funding to a broad set of problems and maintained a coherent research community, even as notions of which approaches are most promising have evolved dramatically. And by de-emphasizing specific technical traits, human-based definitions can better focus on the sociotechnical contexts that AI systems operate in. Considering this broader context—rather than just particular technical aspects—is necessary for regulators to understand how AI systems impact people and communities.

By contrast, the specificity of capability-based definitions can better support legal precision, an important consideration for incentivizing innovation and supporting the rule of law. However, these definitions will quickly become outmoded if not carefully targeted to very specific policy problems.

Consider the rapidly advancing field of generative machine learning, which has been used to produce AI-created artwork and artificial but realistic-seeming media known as “deepfakes.” The definition of AI used in a recent EU policy draft explicitly includes systems that generate “content,” in addition to “predictions, recommendations, or decisions.” But the slightly older OECD definition that the legislation was based on mentions only systems that make “predictions, recommendations, or decisions,” arguably excluding content-generation systems. Though lacking precision, human-based definitions can more easily accommodate these kinds of developments in technological capabilities and impacts.

The second trade-off centers on whether AI definitions should be tailored to complex modern systems, or whether they should also include classical algorithms. A key distinction of modern AI systems, such as the deep learning techniques that have driven recent advances, is their complexity. At the extreme, recent language models like OpenAI’s GPT-3 contain billions of parameters and require millions of dollars’ worth of computation to train. Policymakers must consider the unique risks and harms posed by these systems, but the impacts of classical algorithms and statistical techniques should not be exempt from regulatory attention as a result.

Complex AI systems raise significant concerns because of the way they derive outputs from tangles of data. They can fail unexpectedly when operating in settings not reflected in their training data—think autonomous vehicles skidding to a halt, or worse, when faced with an unrecognized object by the side of the road. The exact logic that complex AI systems use to draw conclusions from these large datasets is convoluted and opaque, and it’s often impossible to condense into straightforward explanations that would allow users to understand their operation and limitations.

Restricting the scope of an AI definition to only cover these complex systems—by excluding, for example, the kinds of straightforward computations that mired New York City’s task force in debate—can make regulation easier to enforce and comply with. Indeed, many definitions of AI used in policy documents seem to be written in an attempt to specifically describe complex deep learning systems. But focusing exclusively on AI systems that evoke sci-fi use cases risks ignoring the real harms that result from the blind use of historical data in both complex and classical algorithms.

For instance, complex deep learning systems asked to generate images of corporate executives might return mostly white men, a product of discriminatory historical patterns reflected in the systems’ training data. But classical algorithms that blindly use historical data to make decisions and predictions can produce exactly the same discriminatory patterns. In 2020, UK regulators created a fracas by introducing an algorithm to derive grades from teacher assessments and historical data after the pandemic-related cancellation of secondary school exams. The algorithm relied on decades-old statistical methods rather than sophisticated deep learning systems, yet it exhibited the same kinds of bias issues often associated with complex algorithms. Students and parents raised transparency and robustness concerns, and high-level civil servants ultimately resigned over the episode. The biased outputs produced by complex and classical algorithms alike can in turn reinforce deeply embedded inequalities in our society, creating vicious cycles of discrimination.
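To see how little machinery is needed to reproduce such a pattern, consider the toy sketch below. It is not the method UK regulators actually used, and the data are entirely invented, but it shows how an ordinary least-squares model fit to historically biased records carries that bias straight into new predictions.

    # Toy illustration (invented data; not the actual UK grading algorithm):
    # a decades-old regression technique reproduces the bias in its training data.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 1000

    # Hypothetical historical records: students with identical teacher assessments,
    # but those at "group B" schools were historically awarded lower final grades.
    group_b = rng.integers(0, 2, size=n)        # 0 = group A school, 1 = group B school
    assessment = rng.normal(70, 10, size=n)     # teacher-assessed score
    historical_grade = assessment - 5 * group_b + rng.normal(0, 3, size=n)

    # Fit a least-squares regression on the assessment score and school group.
    X = np.column_stack([np.ones(n), assessment, group_b])
    coef, *_ = np.linalg.lstsq(X, historical_grade, rcond=None)

    # Two new students with the same teacher assessment receive different predicted
    # grades purely because of which group of schools they attend.
    same_assessment = 75.0
    for g, label in [(0.0, "group A student"), (1.0, "group B student")]:
        prediction = coef @ np.array([1.0, same_assessment, g])
        print(f"{label}: predicted grade {prediction:.1f}")

No deep learning is involved, yet the disparity embedded in the historical data reappears in every prediction the model makes.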

It is not possible to create a single, universal definition of AI, but with careful thought, policymakers can spell out parameters that achieve their policy goals.

When precision is not essential, policymakers can sidestep the use of a definition altogether. An influential UNESCO document forgoes a precise definition in favor of a focus on the impacts of AI systems, leading to a more future-proof instrument that is less likely to need to be updated as technology evolves. Imprecise notions of AI can also be supported by common law, where definitions have some flexibility to evolve over time. Legislators can support this gentle evolution process by including language describing the objectives of AI-related policy. In some settings, liability-based regulatory schemes that directly target anticipated harms can also avoid the need for a precise definition.

In other contexts, legislators can create broad legislation by using an AI definition that is human-based and encompasses both classical and complex AI systems, then allowing more nimble regulatory agencies to create precise capability-based definitions. These regulators can engage in nuanced dialogue with regulated parties, more rapidly adapt rules as technology evolves, and narrowly target specific policy problems. When done properly, this approach can reduce regulatory compliance costs without sacrificing the ability to evolve with technological progress.

Several emerging regulatory approaches take this tack, but to be successful, they need to ensure that they can be easily updated as technology evolves. For example, the EU AI Act defines a suite of regulatory tools—codes of conduct, transparency requirements, conformity assessments, and outright bans—then applies them to specific AI applications based on whether their risk level is deemed “minimal,” “limited,” “high,” or “unacceptable.” If the list of applications in each risk category can be easily updated, this approach preserves both the flexibility of broad legislative definitions of AI and the precision of narrow, capability-based definitions.

To balance the trade-off of restricting attention to complex AI systems or also including classical algorithms, regulators should default to a broad scope, narrowing only when necessary to make enforcement feasible or when targeting harms uniquely introduced by specific complex algorithms. Harms caused by simple algorithms can easily pass unrecognized, disguised behind a veneer of mathematical objectivity, and regulators ignoring classical algorithms risk overlooking major policy problems. Policymakers’ thoughtful attention to AI—starting with the form and scope of its definition—is critical to mitigating the dangers of AI while ensuring its benefits are widely distributed.
