Meta, the parent company of Facebook, Instagram, and Threads, has announced plans to introduce technology that can detect and label images generated by other companies' artificial intelligence (AI) tools. Meta already labels AI-generated images created by its own systems, and it hopes the new technology will build momentum across the industry to tackle AI fakery. However, an AI expert has warned that such detection tools are easily evaded.

Meta intends to expand its labelling of AI fakes in the coming months but acknowledges that its tool will not work for audio and video. Instead, the company is asking users to label their own audio and video posts themselves, and it may apply penalties to those who fail to do so.

Meta's Oversight Board recently criticized the company's policy on manipulated media, calling it incoherent and lacking in justification, and recommended updating the rules to cover synthetic and hybrid content. Sir Nick Clegg, Meta's president of global affairs, agreed with the ruling, admitting that the existing policy is not fit for purpose in an environment with ever more synthetic content.