Twitter has removed more than 1,100 tweets from the platform for containing misleading and potentially harmful content, following its latest expansion of the definition of harm to include content that directly contradicts guidance from authoritative global and local public health sources amid the Covid-19 pandemic.
Social networks have seen their numbers of active users grow in recent weeks, driven by confinement measures and the need to maintain social distancing in response to the new coronavirus outbreak, which is affecting countries worldwide.
In this context, the global conversation about Covid-19, together with product improvements, has boosted Twitter's monetizable daily active users (mDAU), which reached a quarterly average of 164 million: 23 percent more than the 134 million of the first quarter of 2019, and 8 percent more than the 152 million of the fourth quarter of 2019, according to the company's official blog.
The technology company has also seen a 45 percent increase in the use of its page with authorized, reliable information, and a 30 percent increase in the use of Direct Messages since March 6.
In mid-March, the company updated its safety policies to allow the removal of content that could put users at risk of contracting Covid-19, the disease caused by the coronavirus.
Under the updated policies, users must delete content that denies guidelines provided by experts, encourages others to follow false or ineffective diagnostic techniques or treatments, or shares misleading content falsely attributed to experts or authorities.
Since the policy took effect on March 18, more than 1,100 posts containing misleading and potentially harmful content have been removed from the platform. In addition, automated systems have challenged more than 1.5 million accounts that discussed Covid-19 with manipulative or spam behavior.
These automated systems allow reports to be reviewed more efficiently by surfacing the content most likely to cause harm so that it is reviewed first. They also help proactively identify content that violates the platform's rules before it is reported, although the human team intervenes when content requires additional context, as in the case of misleading information related to Covid-19.