William Chapman, who represents a number of grooming and abuse victims, called on the companies to better police their platforms. “In the real world, a playground created for children would have to be safe…It would not be sufficient to put up a notice which said: ‘Enter at your own risk’,” he said.

Of course, real-world analogies don’t always map neatly onto digital spaces. A playground, for example, sees far less traffic than Google or Bing, which must try to keep an eye on millions of users and trends simultaneously. Generally, though, Chapman’s point is that these platforms have left abusive content discoverable for too long.

Earlier this year, Bing was found to be displaying and recommending underage content for basic terms like “porn kids”. Perhaps more concerning was its display of such content for non-sexual terms like “omegle kids”. Search engines like Bing and Google often implement ‘safe’ modes to prevent children from accessing content above their age range. That’s a separate issue from child pornography, though, which shouldn’t surface at all.

Chapman believes tech giants have the resources not just to remove such content, but to actively detect child sexual abuse. He says features like end-to-end encryption will make it more difficult to eradicate.

It’s unclear exactly what evidence Google, Microsoft, and Facebook will present. Speculatively, the inquiry could be looking for statistics on how much content each company has removed, how their detection algorithms work, and more. Whatever the case, as tech giants become ever more prevalent in daily life, it’s getting harder to pinpoint where the line between privacy and protection should be drawn.