Meta allegedly replacing humans with AI to assess product risks
According to new internal documents reviewed by NPR, Meta is allegedly planning to replace human risk assessors with AI, as the company edges closer to complete automation.
Historically, Meta has relied on human analysts to evaluate the potential harms posed by new technologies across its platforms, including updates to its algorithms and safety features, as part of a process known as privacy and integrity reviews.
But in the near future, these essential assessments may be taken over by bots, as the company looks to automate 90 percent of this work using artificial intelligence.
Despite previously stating that AI would only be used to assess "low-risk" releases, Meta is now rolling out use of the tech in decisions on AI safety, youth risk, and integrity, which includes misinformation and violent content moderation, reported NPR. Under the new system, product teams submit questionnaires and receive instant risk decisions and recommendations, with engineers taking on greater decision-making powers.
While the automation may speed up app updates and developer releases in line with Meta's efficiency goals, insiders say it may also pose a greater risk to billions of users, including unnecessary threats to data privacy.
In April, Meta's oversight board published a series of decisions that simultaneously validated the company's stance on allowing "controversial" speech and rebuked the tech giant for its content moderation policies.
"As these changes are being rolled out globally, the Board emphasizes it is now essential that Meta identifies and addresses adverse impacts on human rights that may result from them," the decision reads. "This should include assessing whether reducing its reliance on automated detection of policy violations could have uneven consequences globally, especially in countries experiencing current or recent crises, such as armed conflicts."
Earlier that month, Meta shuttered its human fact-checking program, replacing it with crowd-sourced Community Notes and relying more heavily on its content-moderating algorithm — internal tech that is known to miss and incorrectly flag misinformation and other posts that violate the company's recently overhauled content policies.