Twitter has begun labelling tweets that spread misinformation about Covid-19, including some from Mark Steele, who posts about links between 5G and coronavirus.
But tweets from other 5G conspiracy theorists, such as David Icke, remain unchallenged.
MPs have asked Google, Twitter and Facebook to return to Parliament to answer their questions about content.
Only Facebook's head of global policy, Monika Bickert, has agreed to attend.
Julian Knight, chairman of the DCMS (Digital, Culture, Media and Sport) select committee, said MPs had been "very disappointed" by the standard of evidence given by the three firms about coronavirus misinformation at their last meeting.
"The failure by Twitter, Facebook and Google to give adequate answers in writing to our outstanding questions have left me with no alternative but to recall them to Parliament," he said in a statement.
Image caption: The Twitter alert links direct users to notices debunking conspiracy theories
The committee has specifically requested the attendance of Google's Ronan Harris, managing director of UK & Ireland, and Dara Nasr, managing director of Twitter UK.
The committee wants clarity on a range of issues.
In an interview with the BBC, Facebook founder Mark Zuckerberg said the social network had removed a post from Brazilian president Jair Bolsonaro that claimed there was a coronavirus cure, as well as content from groups claiming the rollout of the 5G network was a cause of the spread of the virus. It has also removed posts from the conspiracy theorist David Icke, something Twitter has not done.
Twitter recently introduced a labelling system on tweets that could potentially cause harm and promised that world leaders such as President Trump would not be above the rules.
But its enforcement appears patchy, with some tweets from 5G conspiracy theorists going unchallenged.
It has sent the DCMS committee a document outlining what it is doing to fight misinformation. It states: "Our aim with harmful Covid-19 misinformation is to rapidly identify and remove tweets that pose the greatest risk of causing harm."
Google has not responded to the BBC's request for comment.
The first peak of viral misinformation appears to have passed - but whether that's down to decisions made by social media platforms is difficult to ascertain.
Misinformation spreading online has evolved over the course of the pandemic and so have social media policies in a bid to keep up.
In the past, elections or terror attacks have resulted in some localised change to policies about tackling misinformation. But the unprecedented threat of the pandemic, affecting people in countries across the globe in a matter of weeks, has left social media sites with little choice but to tighten regulations more quickly than ever before.
It did take a number of weeks for platforms - operating with remote and reduced workforces - to get on top of that initial avalanche of dodgy medical tips and speculation.
However, the adoption of stricter policies, especially when it comes to conspiracies that could cause harm, does appear to have been somewhat effective - especially since false claims linking 5G to coronavirus, or about vaccinations, have dominated the misleading conversation online in recent weeks.
But it's difficult to tell whether the slow-down of this "infodemic" is down to social media sites changing their policies - or just a case of timing. When messages tell you tanks will be rolling down your street and they never appear, you start to get sceptical. So you might stop forwarding on those dodgy WhatsApps - even if WhatsApp hadn't made it a bit harder to do that.
Plus there's the debate as to whether removing conspiracies is always effective. It often leads to cries of censorship or establishment cover-up. But nonetheless it does stem the spread to some extent.