Tech News : 1000+ ‘Deepfakes’ On LinkedIn
Researchers from the Stanford Internet Observatory have reported finding more than one thousand ‘virtual’ employees on the LinkedIn platform.
Faces Created By Artificial Intelligence
The 'virtual employees' in this case are fake LinkedIn profiles featuring AI-generated photos: composite faces, blended from many real images, that belong to no person living or dead. The researchers allege that the profiles were set up by sales firms to generate more business for themselves. Some 70 businesses, listed as the employers of these 'virtual employees', may have been using the profiles to send out mass messages without needing to hire more staff or breach LinkedIn's messaging limits. However, although the practice goes against LinkedIn's policy that every profile must represent a real person, there is no hard evidence that the fake profiles have broken any laws.
Problems With Fake Accounts – AI Fakes
On other platforms in recent times, computer-generated faces on fake accounts have been linked to state campaigns: distributing pro-Chinese propaganda to discredit opponents of China's government, trolling and harassing activists, and exerting political influence (e.g. posing as Americans supporting Trump, and spreading pro-Kremlin propaganda on Facebook and Instagram to undermine trust in the Ukrainian government).
Easy or Difficult To Spot?
In this case, researchers reported discrepancies in the photos that aroused suspicion, e.g. a woman pictured wearing a single earring, oddly rendered hair, unusual eye placement, and blurry backgrounds.
However, a recent study (Sophie Nightingale and Hany Farid, February 2022) found that AI-synthesised faces can be indistinguishable from, and even rated as more trustworthy than, real faces. The study demonstrated that people have only a 50 per cent chance (i.e. no better than chance) of correctly guessing whether a face has been created by a computer.
Fake, computer-generated photos are just one part of the wider threat posed by 'deepfakes'. Deepfake videos, for example, which are banned from Facebook, use deep learning technology and manipulated images of target individuals found online, often celebrities, politicians, and other well-known people, to create very convincing videos of the subjects saying and doing whatever the video-maker wants.
Examples of how these have been used recently are the deepfake videos posted online of both Russian President Vladimir Putin and Ukrainian President Volodymyr Zelensky. The fake video of President Zelensky talking about surrender was posted on Facebook, Instagram, and the Russian social network VKontakte. Meta quickly removed the video from its platforms for breaking its rules relating to "manipulated media". The deepfake video of Putin, posted on Twitter, showed the Russian president announcing the surrender of Russia and asking Russian troops to drop their weapons and go home. Close examination of the video revealed that it had been made using clips from an address delivered by Putin on Feb 21.
What Does This Mean For Your Business?
Although not illegal, these recently discovered deepfake photos appear to have been designed to gain an unfair competitive advantage and to influence the perceptions of other businesspeople. Businesses should therefore be vigilant when contacted by, and when interacting with, unfamiliar profiles online. Social media platforms already check for and remove many fake profiles (LinkedIn removed more than 15 million fake accounts in the first six months of 2021), but more effort and investment is needed to stop fakes slipping through the net, along with faster and better detection and take-down systems. Also, as deepfakes improve in quality, become more difficult to distinguish from the real thing, and become more targeted and weaponised, social media platforms are likely to need sophisticated AI systems of their own to stay ahead of the fakers, some of whom may be state-backed with plenty of resources at their disposal.