The European Commission has opened formal proceedings to assess whether the Twitter (X) platform complies with the new EU Digital Services Act, which began to apply to the largest online platforms in 2023. This was reported by European Truth, citing the European Commission's announcement of December 18.
Twitter (X) is suspected of breaching rules on risk management, content moderation, dark patterns (deceptive design), advertising transparency, and data access for researchers.
Based on a preliminary investigation, including the information the company submitted this autumn in response to a formal request from the European Commission, the EC decided to open an official investigation.
The investigation will focus on:
— compliance with the Digital Services Act's obligations to counter the dissemination of illegal content in EU countries, in the context of the risk assessments carried out and the safeguards applied;
— the effectiveness of the measures the company uses to combat disinformation, in particular the so-called 'Community Notes' system, and of related measures intended to mitigate risks to electoral processes;
— the transparency of the platform's policies, in particular regarding data access for researchers;
— suspected deceptive interface design connected with the “blue checkmarks”.
The company could be found in breach of eight articles of the Digital Services Act.
This is the first formal investigation the European Commission has opened under the new legislation.
At the next stages, the Commission may impose interim measures or adopt a non-compliance decision. It may also accept commitments from the platform to remedy some of the identified problems.
Earlier reports indicated that Elon Musk was considering blocking access to the service in Europe in response to the EU's new regulation of internet platforms.
The European Union's Digital Services Act, which took effect for major social networks on August 25, requires large platforms to assess disinformation risks, prevent their algorithms from amplifying harmful content, and submit their operations to audit.