Facebook relies on a technique called “demoting” to limit problematic content on its platform — but what happens when that system fails?
According to The Verge, the company’s engineers first noticed the problem in October 2021, when misinformation suddenly surged in the News Feed. Instead of suppressing articles from repeat fake-news publishers, the News Feed amplified them, boosting their views by as much as 30% globally. Unable to find the root cause, engineers watched the surge recur until the problem was finally fixed on March 11.
Beyond posts flagged by third-party fact-checkers, Facebook’s internal investigation found that during the failure its systems were also unable to demote violent content and nudity. Facebook internally designated the incident a level-one SEV, a label reserved for high-priority technical crises, such as Russia’s recent blocking of Facebook and Instagram.
Facebook spokesman Joe Osborne confirmed the issue to The Verge. According to the internal document, the technical flaw was introduced in 2019 but had no noticeable impact until October 2021. “We’ve been able to track down the software bug and apply the necessary patch,” Osborne said, adding that the bug had no meaningful long-term effect on the company’s metrics.
For years, Facebook has used demotion to improve News Feed quality, applying it in situations such as wars and contentious political debates. However, the company has never been transparent about how demotion affects what users see in their News Feeds.
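To illustrate the idea behind demotion, here is a minimal sketch of score-based ranking in which flagged posts have their distribution shrunk rather than being removed outright. All names and numbers here are illustrative assumptions, not Facebook’s actual system; the point is that if the demotion multiplier silently stops being applied, flagged content outranks everything else.

```python
# Hypothetical sketch of demotion in a ranked feed.
# The scores, field names, and 0.1 multiplier are illustrative
# assumptions, not Facebook's real ranking system.

def rank_feed(posts, demotion_factor=0.1):
    """Sort posts by score, scaling down any post marked as flagged."""
    def effective_score(post):
        score = post["score"]
        if post.get("flagged"):        # e.g. flagged by third-party fact-checkers
            score *= demotion_factor   # demote: reduce distribution, don't remove
        return score
    return sorted(posts, key=effective_score, reverse=True)

posts = [
    {"id": "flagged_story", "score": 95, "flagged": True},
    {"id": "normal_story", "score": 60, "flagged": False},
]

# With the multiplier applied, the normal story ranks first (60 > 9.5).
# A bug that skips the multiplier would instead surface the flagged story.
ranked = rank_feed(posts)
```

A failure like the one The Verge describes corresponds to the `demotion_factor` branch never firing: the raw score wins and flagged content spreads widely.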
Facebook leaders regularly tout how their AI systems have improved at proactively detecting content such as hate speech, emphasizing the importance of technology in moderating content at scale. In 2021, Facebook said it would begin demoting all political content in the News Feed, part of CEO Mark Zuckerberg’s plan to return the Facebook app to its roots.
Source: genk.vn, via Blogtuan.info