Irwin said Musk told the team not to worry too much about how their actions might affect user growth or revenue, and that safety is the company’s top priority. “He stresses that every day, multiple times a day,” she said.
According to a former employee familiar with Twitter’s work, Irwin’s approach to safety was, at least in part, an acceleration of changes that had already been planned since last year around Twitter’s handling of hateful conduct and other policy violations. It reflects an industry slogan, “free speech, not free reach”: certain tweets that violate company policies are left up, but prevented from appearing in places like the home timeline or search.
Twitter had long deployed such “visibility filtering” tools against misinformation, and had already incorporated them into its official hateful conduct policy prior to the Musk acquisition. The approach allows for more free speech while reducing the potential harm associated with viral abusive content.
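The “free speech, not free reach” idea described above can be sketched as a simple surface-gating rule. This is an illustrative model only, not Twitter’s actual implementation; the surface names and the `violates_policy` flag are assumptions for the sake of the example.

```python
from dataclasses import dataclass

@dataclass
class Tweet:
    text: str
    violates_policy: bool  # e.g. flagged under the hateful conduct policy


def visible_surfaces(tweet: Tweet) -> set:
    """Return the surfaces where a tweet may appear.

    A policy-violating tweet is not deleted ("free speech"), but it is
    excluded from amplifying surfaces such as the home timeline and
    search ("not free reach"). Compliant tweets appear everywhere.
    """
    if tweet.violates_policy:
        # Still reachable via a direct link or the author's profile
        return {"profile"}
    return {"profile", "home_timeline", "search"}


ok = Tweet("hello world", violates_policy=False)
bad = Tweet("policy-violating text", violates_policy=True)
print(visible_surfaces(ok))   # all surfaces
print(visible_surfaces(bad))  # profile only
```

The key design point is that moderation here is a visibility decision, not a removal decision: the content object itself is untouched, and only its distribution changes.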
According to the Center for Countering Digital Hate, the number of tweets containing hateful content spiked on Twitter in the week before Musk tweeted on Nov. 23 that impressions, or views, of hate speech were declining. Against the prevalence of such content, Musk has emphasized its reduced visibility.
That week, tweets containing anti-Black slurs were triple the number seen in the month before Musk took over, and tweets containing gay slurs increased by 31%.
“More risks, move faster”
Irwin, who joined the company in June and previously held security roles at companies including Amazon.com and Google, pushed back on suggestions that Twitter had neither the resources nor the will to secure its platform.
She said the layoffs did not significantly affect the full-time employees and contractors in what the company calls its “Health” division, including “key areas” like child safety and content moderation.
More than 50% of the Health engineering division has been laid off, according to two sources familiar with the layoffs. Irwin did not immediately respond to a request for comment on those claims, but had previously denied that the Health team was seriously affected by the layoffs.
She added that the number of people working on child safety has not changed since the acquisition, and that the team’s product manager is still there. But she declined to provide specific figures on the extent of the turnover.
She said Musk is focused on leveraging more automation, arguing that the company has erred in the past on the side of time-consuming and labor-intensive human reviews of harmful content.
“He encouraged the team to take more risks, act fast, and secure the platform,” she said.
Regarding child safety, for example, Irwin said Twitter has moved toward automatically removing tweets reported by trusted individuals with a track record of accurately flagging harmful posts.
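The trusted-reporter flow Irwin describes could look roughly like the gate below. The trust threshold, the track-record metric, and the function names are all assumptions made for illustration; the article does not describe Twitter’s actual criteria.

```python
def should_auto_remove(reporter_accuracy: float,
                       reports_filed: int,
                       min_reports: int = 50,
                       min_accuracy: float = 0.95) -> bool:
    """Decide whether a report triggers automatic removal.

    Only reporters with a long and accurate track record (hypothetical
    thresholds: at least `min_reports` prior reports at `min_accuracy`
    precision) bypass human review; everyone else's reports are queued
    for a moderator instead.
    """
    return reports_filed >= min_reports and reporter_accuracy >= min_accuracy


# A reporter with 200 reports at 98% accuracy triggers automatic removal:
print(should_auto_remove(0.98, 200))
# A new or inaccurate reporter falls back to human review:
print(should_auto_remove(0.80, 10))
```

The trade-off this design encodes is the one the article describes: automation removes confirmed-harmful content within seconds, while the track-record requirement limits the damage a single bad-faith or mistaken reporter can do.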
Carolina Christofoletti, a threat intelligence researcher at TRM Labs who specializes in child sexual abuse material, said she recently noticed Twitter removing some reported content as quickly as 30 seconds after she filed a report, without acknowledging receipt of the report or confirming its decision.
In an interview on Thursday, Irwin said Twitter worked with cybersecurity group Ghost Data to remove about 44,000 accounts implicated in child safety violations.
Twitter also restricts hashtags and search results that are frequently associated with abuse, such as those aimed at looking up “teen” pornography. Past concerns about the impact of such restrictions on permitted uses of those terms are gone, she said.
Regarding the use of “trusted reporters,” Irwin said, “We’ve discussed it on Twitter in the past, but we were a little hesitant and frankly just a little late.”
“I think we have the ability to really go ahead with that sort of thing now,” she said.
Twitter moves to automated moderation as hate speech surges