Facebook trained its AI to block live streams of violence

In the wake of the Christchurch terrorist attack, Facebook used "police/military" camera footage and other violent material to train its AI systems to detect and block any future attempts to broadcast live shooting sprees.
The emergency exercise, details of which were revealed in company documents leaked by whistleblower Frances Haugen, was internally described as a "watershed moment" for Facebook's live video service.
The attacker was able to broadcast the attack on the two mosques live for 17 minutes without being detected by the company's systems, allowing the footage to be quickly copied and reposted online. In the following 24 hours, Facebook deleted 1.5 million uploads of the video.
At the time, Facebook admitted that its artificial intelligence systems had failed to stop the video, which was removed only after New Zealand police alerted the company. “It was clear that Live was a vulnerable surface which can be repurposed by bad actors to cause societal harm,” the leaked review stated. “Since this event, we’ve faced international media pressure and have seen regulatory and legal risks on Facebook increase considerably.”
The review also details how Facebook set about tackling the problem. One key element was retraining the company's AI video-detection systems on a data set of harmful content, so they could learn which videos to flag and block.
"The training data set includes police/military body camera footage, entertainment shots, and simulations," as well as "military videos" obtained from the company's law enforcement outreach team, the internal document said. It also includes video clips from first-person shooters as examples of unblocked content.
As a result of these and other efforts, the documents show, Facebook believes it has cut detection times from five minutes to 12 seconds. The Christchurch video now scores 0.96 on the company's internal classifier for violent imagery, well above the threshold at which the system intervenes.
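The documents do not say how that intervention threshold is set, but the figures quoted, a score of 0.96 sitting well above the cut-off, imply a simple comparison at serving time. A minimal sketch of that decision, with an assumed threshold value:

```python
# Hypothetical sketch of a score-vs-threshold blocking decision.
# The 0.96 score comes from the leaked documents; the threshold is assumed.

INTERVENTION_THRESHOLD = 0.8  # assumed value; the real cut-off is not public

def should_block(violence_score: float) -> bool:
    """Block the stream when the classifier's violence score
    meets or exceeds the intervention threshold."""
    return violence_score >= INTERVENTION_THRESHOLD

# The Christchurch video reportedly scores 0.96 under the retrained model,
# well above the assumed threshold, so it would now be blocked.
print(should_block(0.96))  # True
```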
Elsewhere, the leaked documents show how eager Facebook was to repair its tarnished image. The company acknowledged that it had previously applied only "minimal restrictions" to offending accounts. In May 2019, it announced a "one strike" policy, banning accounts from using Live for 30 days after a single terrorism-related violation.
The change was announced to coincide with the Christchurch Call summit in Paris, which aimed to rid the web of terrorist content. New Zealand's prime minister, Jacinda Ardern, "used Facebook Live to update her fans after the announcement," which Facebook called "a major public relations win."