A call by Prime Minister Theresa May to regulate tech firms in the wake of the London attacks has been criticised.
She said areas of the internet used by terrorists must be shut down.
“We cannot allow this ideology the safe space it needs to breed,” she said. “Yet that is precisely what the internet, and the big companies… provide.”
Open Rights Group said more regulation could push “these vile networks into even darker corners of the web”.
“The internet and companies like Facebook are not the cause of hate and violence, but tools that can be abused,” the digital rights group said, while condemning Saturday’s attack on the London Bridge area.
Tech companies such as Google, Twitter and Facebook, which owns the encrypted messaging service WhatsApp, have faced calls for greater regulation to prevent terrorist recruitment and planning.
On Sunday morning Home Secretary Amber Rudd told ITV that an international agreement was needed for social media companies to do more to stop radicalisation on their platforms.
“One is to make sure they do more to take down the material that is radicalising people. And secondly, to help work with us to limit the amount of end-to-end encryption that otherwise terrorists can use to plot their devices”.
Facebook said it did not allow groups or people that engage in terrorist activity or posts that support terrorism.
“Using a combination of technology and human review, we work aggressively to remove terrorist content from our platform as soon as we become aware of it – and if we become aware of an emergency involving imminent harm to someone’s safety, we notify law enforcement.”
Google said it was “committed to working in partnership with the government and NGOs to tackle these challenging and complex problems, and share the government’s commitment to ensuring terrorists do not have a voice online”.
It said it was already working on an “international forum to accelerate and strengthen our existing work in this area” and had invested hundreds of millions of pounds to fight abuse on its platforms.
Analysis: Joe Lynam, BBC business correspondent
Calling for technology companies to “do more” has become one of the first responses by politicians after terror attacks in their country. Theresa May’s comments on that subject were not new – although the tone was. She has already proposed a levy on internet firms as well as sanctions on firms for failing to remove illegal content, in the Conservative party manifesto published three weeks ago.
Given that 400 hours of video are uploaded to YouTube every minute and that there are two billion active Facebook users, clamping down on sites which encourage or promote terror needs a lot of automatic detection using software, as well as the human eye and judgement.
Technology companies such as Microsoft, Google, Twitter and Facebook are all part of an international panel designed to weed out terrorist material and prevent it being promoted worldwide.
That involves digitally fingerprinting violent images and videos as well as sharing a global database of users who may be extremist.
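The fingerprint-sharing idea can be sketched in a few lines. This is a hypothetical illustration only: real systems such as Microsoft's PhotoDNA use proprietary perceptual hashes that survive re-encoding and cropping, whereas the plain SHA-256 digest below matches exact byte-for-byte copies. The function names and sample data are invented for the example.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a hex digest standing in for a content fingerprint.

    Real deployments use perceptual hashes; SHA-256 is used here
    only to illustrate the shared-database lookup.
    """
    return hashlib.sha256(data).hexdigest()

# Illustrative shared database of fingerprints of known violent content,
# of the kind the companies exchange with one another.
shared_database = {fingerprint(b"known-extremist-video-bytes")}

def is_flagged(upload: bytes) -> bool:
    """Check a new upload against the shared fingerprint database."""
    return fingerprint(upload) in shared_database
```

Because only fingerprints are exchanged, companies can flag re-uploads of known material without sharing the underlying files themselves.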