I’m on the Meta Oversight Board. We need AI oversight now.

by Suzanne Nossel


The speed at which AI is changing our lives is astonishing. Unlike in previous technological revolutions – radio, nuclear fission, the Internet – governments are not leading the way. We know AI can be dangerous: chatbots counsel teenagers about suicide and may soon be able to provide instructions for making biological weapons. Still, there is no equivalent of the Food and Drug Administration testing new models for safety before public release. Unlike in the nuclear industry, companies are not required to disclose hazardous violations or accidents. The lobbying power of the tech industry, the crippling polarization of Washington, and the sheer complexity of such powerful, fast-moving technology have kept federal regulation at bay. European officials face pushback against rules that some claim will hurt the continent’s competitiveness. Although several US states are piloting AI laws, these operate piecemeal, and Donald Trump has sought to invalidate them.

The leaders of AI platforms like OpenAI’s ChatGPT and Google’s Gemini say they care about safety. But capturing the future of AI means pouring billions of dollars into models that even their creators don’t fully understand, and making choices that heighten risk – such as adding advertising, or supplying capabilities to the Pentagon, as is now being demanded of Anthropic. Anthropic, which bills itself as the most safety-minded frontier AI company, instructs its model to imagine how “a thoughtful senior Anthropic staff” member would weigh helpfulness against potential harm. The directive echoes criticisms made years ago of Silicon Valley companies that shape the lives of users around the world from insular boardrooms. Consumers do not feel confident that they are in good hands: in a survey last year, fully 77% of Americans said AI could pose a threat to humanity.

Yet we are not trapped between the absence of strong government regulation and the forlorn hope that the most powerful companies in history will police themselves. At least until legislators act, independent oversight offers a way to adjudicate between AI’s potential and its dangers. By adopting independent oversight, AI companies can demonstrate that they value the public’s trust enough to fight for it.

The logic behind independent oversight is simple. No matter how good the intentions of corporate executives, their duties to shareholders and investors dictate how they strike compromises between safety on one side and costs, revenues and profits on the other. While long-term considerations of corporate reputation, customer loyalty and ethics can act as speed bumps, winning the AI race requires an appetite for risk. The belated reckoning over how social media can fuel killings, interfere with elections and degrade the mental health of youth shows how the intoxicating power of technology can obscure flashing warning signs.

Independent oversight of AI provides a way to uncover, analyze, and address its risks, giving advocates and communities a measure of control over how these technologies reshape society. Social media provides an example. In 2020, hit by allegations that it had helped fuel the Rohingya crisis in Myanmar, Meta (then Facebook) created an Oversight Board, which was expected to take the company out of the hot seat. Early the following year, the company adopted a policy committing it to comply with international human rights law. While the board, now five years old, has fallen short of some people’s expectations that it could serve as “the Supreme Court of Facebook,” its record offers important lessons about the prospects for effective independent oversight of AI, and why it matters.

Oversight demands diverse perspectives. Like other leading AI companies, Meta has users on every populated continent. Deciding what they could and could not post from the security of the Menlo Park campus left blind spots and sparked outrage. The 21 members of the Oversight Board bring broad cultural and professional expertise to adjudicating sensitive questions of content moderation (such as whether a violent video should be shared as news or removed as an affront to the dignity of the victim). The board, whose members have lived in more than 27 countries, includes conservatives and liberals, journalists, legal scholars, a former prime minister of Denmark and a Nobel Peace Prize laureate.

The Oversight Board uses Meta’s own “Community Standards” to assess whether posts violate its rules, including bans on bullying or support of terrorism. The board also holds Meta to its pledge to uphold international human rights law, including Article 19 of the International Covenant on Civil and Political Rights, which guarantees freedom of expression. AI companies should make the same commitment and establish oversight to ensure they stick to it. Unlike the First Amendment or the EU’s online “right to be forgotten,” human rights law provides a common currency across borders. Its criteria offer ways of reasoning through decisions about AI, such as whether a bot’s refusal to answer a question unreasonably denies a user’s right to information, or whether reusing user data violates privacy rights.

Access, consultation and transparency are key. The Oversight Board accepts appeals from the public, announces the cases it selects for review, invites public comments, and convenes sessions with experts and affected communities. It has issued more than 200 decisions in detailed written opinions that have been cited by courts around the world.

A voluntary oversight body is only as strong as the powers vested in it by its parent company. While the Oversight Board has sought broader powers, it has given Meta credit for going far beyond the lightweight advisory councils that other tech players have periodically convened and disbanded. Meta’s Oversight Board has the jurisdiction to decide whether a specific piece of content stays up or comes down, though exercising that authority on individual posts can feel like fighting a wildfire by dousing the embers. Its more consequential impact lies in selecting emblematic cases of erroneous content moderation, presenting public rationales for its decisions, and issuing recommendations to which Meta must respond. According to a report in December, Meta has implemented 75% of the board’s more than 300 recommendations, leading to significant changes for billions of users.

These include notifying users of which policy they are accused of violating when content is removed, ensuring that rhetorical taunts and satire are not taken down as threats, and committing the company to increase resources during crises such as natural disasters and armed conflicts. The board also issues detailed advisory opinions on larger policy issues, such as whether Meta should extend leniency for policy violations by high-profile posters, or how much Covid-related misinformation should be removed after the pandemic’s end. Although the board acts independently in making its decisions and recommendations, it relies on Meta for critical information, such as whether specific content was moderated by humans or by automation, and what exactly went wrong when content was mistakenly removed. AI companies must provide at least that much visibility for oversight to have any meaning.

As always, money matters. Meta places the Oversight Board’s funding into a multiyear trust so that it cannot be cut off overnight. But more diverse and assured resources would enhance the board’s independence. Overseeing state-of-the-art technology costs money: it requires funding for specialist staff and consultants who bring cultural and linguistic expertise to analysis and decision making. Yet measured against the hundreds of billions of dollars being invested in AI, the cost of even robust oversight is negligible.

AI is taking over our classrooms, colleges, and corporations. The least that AI companies can do is submit to independent oversight to ensure that they do not, knowingly or unknowingly, encroach on our rights.
