Tech News: Protect Kids from War Content
It’s been reported that some schools in the UK (as well as in Israel and the US) have advised Jewish parents to delete social media apps from their children’s phones over fears that they may see distressing hostage videos or footage of civilians being killed in the Israel-Hamas conflict.
In Israel, schools and parents are reported to have been asking children to delete their social media apps over fears that they may see images and videos, made and posted online by Hamas, showing Israeli citizens being shot (e.g. at the Tribe of Nova Festival near the Gaza-Israel border), children being abducted, and captives of Hamas pleading for their lives. The fear is that children could be subjected to psychological terror and long-lasting psychological damage by witnessing these videos and images, which have reportedly been shared on Instagram, ‘X’ (Twitter), and TikTok, and forwarded on WhatsApp.
In the US
In the US, it’s been reported that a New Jersey school emailed parents asking them to tell their children to delete their social media apps, and that a New York school advised parents to monitor their children’s social media usage and to talk to them about what to do if/when they encounter such images or videos.
In The UK
A similar approach is being taken in the UK with Jewish schools asking parents to ask their children to delete social media apps and/or talk to their children about the kind of content they are seeing.
Social media’s role generally over the Israel-Gaza conflict is now under the spotlight, particularly over how it has been used to spread misinformation (false or incorrect information shared without harmful intent), disinformation (false information shared with the specific intent to deceive), and confusion, and to fan hatred. For example:
– A misleading video (showing a girl being set on fire by a mob) was shared across platforms, wrongly attributing a 2015 event in Guatemala to Hamas.
– A Hamas leader recently reacted to a fake news story from an Israeli TV channel.
– False claims that Qatar had threatened to cut off gas exports.
– Allegations that Hamas “beheaded babies”, which were even published on tabloid front pages and referenced by President Joe Biden in a speech.
Several factors allow falsehoods to spread instantly on social media: mistrust of mainstream media, a surge in the volume of falsehoods being shared, challenges in verifying and fact-checking claims, a lack of moderation guardrails on some platforms, intense emotions about the conflict, and third-party agendas. As a result, social media is playing a part not just in shaping opinion, but perhaps also in affecting the thinking, attitudes, and decisions of key players in the war.
Facing Criticism and Investigations
Examples of how the social media platforms and secure apps are facing scrutiny in relation to the conflict include:
– X, Telegram, and TikTok being criticised by regulators for not doing enough to stop the deluge of misleading information being spread via their platforms.
– The EU launching an investigation into ‘X’ (Twitter) over the spread of disinformation and violent content relating to the Israel-Hamas conflict.
– The Atlantic Council’s Digital Forensic Research Lab reporting that Telegram is the primary means of communication for disseminating statements by Hamas to its supporters.
– The UK’s technology secretary (Michelle Donelan) holding a virtual meeting with bosses at Google, Meta, X, TikTok, and Snapchat and asking the platforms to clearly set out what action they were taking to remove illegal material that breaches their terms and conditions.
What Are The Social Media Platforms Doing To Help?
Examples of what some of the main social media platforms are doing to tackle distressing videos and images from the conflict, as well as misinformation and disinformation being posted on their platforms, include:
– X (Twitter) has emphasised its commitment to tackling misinformation and has implemented stricter rules about misleading information. X says it’s using a combination of technology and human review to flag and, if necessary, remove false or misleading content about the Israel-Gaza conflict. It’s also adding warning labels to potentially distressing or graphic content and offering users the choice to view or skip such posts.
– It’s been reported that Meta has established a special operations centre (staffed with experts, including fluent Hebrew and Arabic speakers) dedicated to the Israel-Gaza situation, focusing on detecting and removing harmful content more rapidly and leveraging third-party fact-checkers to assess the accuracy of potentially misleading posts. Meta has also enhanced its measures to reduce the spread of graphic videos and images of the conflict and has introduced “sensitivity screens” which blur out potentially distressing content until a user chooses to view it.
– TikTok has reinforced its community guidelines, which prohibit content promoting hate or misinformation, and is reported to be working with experts and fact-checkers to identify and combat false narratives about the conflict.
– X, although it has very much touted its ‘free speech’ approach since Musk took ownership, is now reported to have implemented a stronger content moderation system to quickly detect and restrict the spread of graphic videos related to the conflict, and to be using warning labels and restricting the reach of videos that may not violate its policies but could be distressing to some users.
– Although Snapchat focuses on content from trusted news outlets through its ‘Discover’ feature, it’s reported to have enhanced its moderation guidelines for user-generated content about the conflict, especially content that is graphic in nature. Snapchat uses both automated systems and human reviewers to monitor and, when necessary, remove such content, and labels have been introduced for stories or snaps that may contain distressing imagery.
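The platforms above all describe broadly the same pattern: automated scoring of content, escalation of borderline cases to human reviewers, and sensitivity screens for distressing-but-allowed material. As a purely illustrative sketch (the scores, thresholds, and action names here are hypothetical, not any platform’s actual system), such a triage step might look like this:

```python
from dataclasses import dataclass
from enum import Enum

class Action(Enum):
    ALLOW = "allow"
    SENSITIVITY_SCREEN = "blur until the user opts in"
    HUMAN_REVIEW = "queue for a human reviewer"
    REMOVE = "remove"

@dataclass
class Post:
    text: str
    graphic_score: float   # 0..1, from a hypothetical image/video classifier
    misinfo_score: float   # 0..1, from a hypothetical misinformation model

def triage(post: Post) -> Action:
    # Near-certain policy violations are actioned automatically.
    if post.graphic_score > 0.95 or post.misinfo_score > 0.95:
        return Action.REMOVE
    # Borderline cases go to a human reviewer rather than being auto-removed.
    if post.graphic_score > 0.6 or post.misinfo_score > 0.6:
        return Action.HUMAN_REVIEW
    # Distressing-but-permitted content is hidden behind a warning screen.
    if post.graphic_score > 0.3:
        return Action.SENSITIVITY_SCREEN
    return Action.ALLOW

print(triage(Post("", graphic_score=0.97, misinfo_score=0.1)).value)  # remove
print(triage(Post("", graphic_score=0.4, misinfo_score=0.2)).value)   # blur until the user opts in
```

The key design point, reflected in the platforms’ own descriptions, is that automation handles the clear-cut volume while humans handle the ambiguous middle.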
What Does This Mean For Your Business?
With Hamas reportedly using Telegram as their main means of communication with supporters and with anyone on any side able to upload and share videos and images on social media platforms, plus use encrypted apps like WhatsApp to share content, this conflict is a moderation nightmare for social media companies and a source of real concern for parents and schools.
Even though social media platforms are facing investigations and questions and have introduced some measures to help, as the advice from schools shows, perhaps the only sure and trusted way to protect children is to delete social media apps altogether.
This story highlights how, in conflicts such as Russia’s war on Ukraine and now the conflict in Gaza, social media channels are not just sources of information but can be used as tools of information warfare, deployed to deliberately terrorise and horrify people. Vulnerable, inquisitive, and lacking the capacity to cope with many images of war, children are particularly at risk of distress and psychological damage.
It’s not surprising, therefore, that schools and parents are taking time to talk to children about what’s happening and about their feelings and questions, as well as discussing parental monitoring of what children are looking at and the advantages of deleting their much-valued social media apps.
This story also highlights why many feel that social media platforms still have a long way to go in protecting people (particularly their youngest users) from online threats, and perhaps provides some vindication to governments and critics who have called for (and supported the introduction of) protective laws such as the UK’s Online Safety Bill, which may force social media companies to be more socially responsible.
For the social media companies, issues that arise in conflicts are a reminder of the difficulty of maintaining a balance between free speech and preventing harm and influence from bad actors. With a ground invasion by Israel apparently imminent, the situation for those directly affected in the Middle East only looks like getting worse, as do the worries for parents and the challenges for social media companies.