The Big Story

Distortion by design: How social media platforms shaped the first stage of the Mideast crisis

Emerson T. Brooking, Layla Mashkoor, Jacqueline Malaret

December 21, 2023
40 minute read

Platform design and content moderation decisions affect what people see, hear, and believe about the October 7 Hamas attacks on Israel and the conflict in Gaza that has followed. Do X, Meta, Telegram, and TikTok recognize how their algorithms affect people, politics, and history? Do we?

Here's what to watch:
- On Telegram, chaotic, unfiltered, and unverified battlefield footage
- At X, recent policy changes drive a misinformation crisis
- Meta has been here before
- On TikTok, fewer graphic visuals but a similar volume of misinformation
The first declaration of war came via Telegram.
On October 7, at 7:14 a.m. local time, Hamas’s al-Qassam Brigades used its Telegram channel to announce the beginning of a coordinated terror attack against Israel. Posts on Hamas’s press channel followed several minutes later. The press channel then posted a brief video clip of what appeared to be Israeli buildings in flames at 7:30 a.m. At 8:47 a.m., al-Qassam Brigades released a ten-minute propaganda video seeking to justify the terror attack, which was then shared via the press channel four minutes later. At 9:50 a.m., al-Qassam shared the first gruesome images of the actual attack; at 10:22 a.m., a grisly video collage. In both cases, the press channel followed suit.
Then came many more graphic posts, propelled to virality by both Hamas Telegram channels and a constellation of other Hamas-adjacent paramilitary groups. Within a few hours, the al-Qassam Brigades channel’s distribution grew by more than 50 percent, rising to 337,000 subscribers. Within a matter of days it would surpass 600,000 subscribers. Three other Palestinian militant groups rushed to release self-congratulatory statements about their own roles in the attack, not wanting Hamas to get all the credit. All told, Hamas and Hamas-adjacent groups would produce nearly 6,000 Telegram posts in the first seventy-two hours of the war.
In Telegram, Hamas found many features it liked: a large distribution channel, relative security against the abrupt deletion of its accounts, and a built-in encrypted messaging service. By contrast, Israel—with many more communications pathways open to it—did not seem to place the same emphasis on Telegram. As the October 7 terror attack began, the official English-language Israel Defense Forces (IDF) Telegram channel broadcast dozens of automated messages that appeared to be linked to Israel’s emergency alert network. The IDF sent what appeared to be its first non-automated message to its Telegram audience at 12:21 p.m., sharing video of a retaliatory airstrike. From then onward, the IDF used the channel to share raw video of recent operations, as well as other updates.
Left: At 8:47 a.m. local time, a voice and silhouetted image alleged to be that of Mohammed Deif, commander of Hamas’s al-Qassam Brigades, announced the start of “Operation al-Aqsa Storm.” The Arabic-language video was shared first via Telegram. Right: At 11:35 a.m. local time, Israeli Prime Minister Benjamin Netanyahu responded with a brief video that declared, “We are at war.” The Hebrew-language clip was shared simultaneously via X and Facebook.
This was an early glimpse of a conflict that would be largely mediated by the internet. In the aftermath of October 7, an audience of tens of millions would turn to social media to understand a terror attack that killed 1,139 people and resulted in the abduction of roughly 240 hostages by Hamas militants. Social media would remain the primary conduit through which audiences tracked and debated Israel’s ensuing siege of Gaza and military operation—one that would kill at least twenty thousand people in the first seventy days of fighting. Many users experienced the crisis not as a series of static news headlines but as a stream of viral events, often accompanied by unverified claims, decontextualized footage, and salacious imagery.
Almost as soon as the war began, it collided with the engineering and policy decisions of social media companies. Differences in their user interfaces, algorithms, monetization systems, and content restrictions meant that the reality of the war could appear wildly distorted within and across platforms. While a full reckoning of social media’s role in the conflict is not yet possible, we wanted to capture early apparent trends: the rapid and mostly uncontested spread of terrorist content on Telegram; the proliferation of false or unverifiable claims on X, formerly Twitter; the often one-sided content moderation decisions of Meta, which worked to the detriment of Palestinian political expression; and deep confusion around TikTok, due to both the insular nature of TikTok communities and a broad lack of understanding about how the TikTok algorithm works.
Evident across all platforms is the intertwined nature of content moderation and political expression—and the critical role that social media will play in preserving the historical record.
Telegram: Unfiltered, unverified chaos
The IDF’s official Telegram account, programmed to automatically rebroadcast security alerts, offers a unique window into the confusion that prevailed during the first hours of Hamas’s October 7 attack. (Source: @idfofficial)
Other pro-Israel efforts were quicker to embrace Telegram as a means of rapid coordination and content dissemination. Among the most notable was South First Responders, created in the early morning of October 9 by individuals claiming to be affiliated with casualty recovery teams that had begun to clear the sites of massacres at the kibbutzim and Tribe of Nova music festival in southern Israel. They shared a stream of graphic images and videos (including recovered dashcam footage) that revealed the systematic way in which Hamas murdered Israeli civilians. The channel also became a clearinghouse for photographs of dead Hamas fighters and first-hand accounts of Israeli civilian resistance.
As the immediate confusion of October 7 passed, Israeli civil volunteers turned to Telegram to share accumulating evidence of the scale and horror of the Hamas attack. (Source: South First Responders)
As Israel began to respond to the attacks militarily, Telegram continued to host a large volume of primary source material, including the bloody aftermath of airstrikes and shelling in Gaza and clips of IDF soldiers encircling and neutralizing Hamas militants. In this way, Telegram assumed the same role that it has in Russia’s war against Ukraine, offering a chaotic, unfiltered, and unverified view of battlefield realities. This is in spite of the fact that Telegram appears to have been relatively unpopular in the region before the outbreak of the war, as compared with widespread Russian and Ukrainian Telegram use on the eve of that conflict.
Telegram continues to resist content moderation responsibilities
According to prior analysis by our Atlantic Council Digital Forensic Research Lab (DFRLab) colleagues, Telegram remains the least restrictive large point-to-point messaging app in operation today. Although Telegram reportedly passed eight hundred million monthly active users in July 2023, its group structure and emphasis on user freedom and anonymity mean that it has relatively few enforceable content moderation policies. Despite its size and influence, Telegram has also avoided joining the Global Internet Forum to Counter Terrorism. Commonly known as GIFCT, the organization uses data-sharing arrangements and channels for industry coordination to help social media platforms limit the spread of terrorist content. Telegram took few steps to restrict the platform’s exploitation by combatants in the aftermath of the October 7 attack. Its first high-profile content moderation action did not come until October 23, when the company restricted some Hamas-aligned channels without banning them, likely in response to pressure by the Apple iOS and Google Play app stores.
Telegram CEO Pavel Durov (who is famously outspoken about his resistance to government surveillance and content takedown requests) was selective in how he spoke about the conflict unfolding on his platform. “Hundreds of thousands are signing up for Telegram from Israel and the Palestinian Territories,” Durov noted on October 8, announcing Hebrew-language support in addition to Arabic. Only on October 13 did Durov directly address the violent imagery and calls to action being propelled by scores of Hamas-aligned channels. “Tackling war-related content is seldom obvious,” he wrote. Durov explained that Telegram held value for researchers, journalists, and fact-checkers and warned against “[destroying] this source of information.” Moreover, because users had to voluntarily choose which Telegram groups to join, he considered it unlikely that Telegram could “significantly amplify propaganda.”
On October 13, Telegram founder Pavel Durov posted a statement to his personal channel expressing his unwillingness to ban Hamas from the platform. (Source: @durov)
Although Durov was technically correct about how users choose to follow Telegram channels, his explanation did not tell the whole story: channel operators routinely exploit the platform to amplify propaganda. Channel owners can broadcast their messages unidirectionally to a wide audience, allowing them to tightly control which messages the channel amplifies. This level of control makes Telegram an attractive first stop for actors seeking to produce and disseminate content that then spreads quickly to other platforms and audiences.
In the first weeks of the Israel-Hamas propaganda war, this meant that many of the most viral images and videos would begin life on Telegram.
X: A crisis of misinformation

Despite the decline of Twitter’s audience base under Elon Musk’s stewardship, the platform, rebranded as “X” in July, has remained popular among journalists, policymakers, and other political elites. This means that many influential groups still turn first to X to understand—and shape the understanding of—fast-moving events. In the aftermath of the October 7 attacks, unverified claims that went viral on X significantly affected public perception of the war, spreading across the information ecosystem and making their way into newspaper headlines, television broadcasts, and official government statements.
While it can be difficult to characterize the flow of public opinion across a large social media platform like X, especially during times of violent conflict, certain trends quickly emerged over the weekend of October 7—most notably, a chaotic information environment with users struggling to sort fact from fiction, and the proliferation of graphic content for both evidentiary and propaganda purposes. The volume of this conversation was unprecedented. A rudimentary scan of X via the Meltwater Explore social-listening tool finds that users made 342 million English-language war-related posts in the first month of the conflict. By contrast, the largest study of Twitter activity during Russia’s 2022 invasion of Ukraine was based on a dataset of 57.3 million tweets across all languages, assembled over the first thirty-nine days of fighting.
By rough estimate, there were approximately 342 million war-related English-language posts made on X in the first month of the Middle East crisis. Keyword results for “Israel,” “Israeli,” “IDF,” “Hamas,” “Palestinian,” including reposts. (Source: Meltwater Explore)
This confusion—manifesting first in uncertainty about the scale and basic facts of the October 7 attack—would become endemic to X users’ experience of the conflict on the platform. Although the chaotic nature of Twitter conversations contributed to the fog of war during other fast-moving conflict events, including Russia’s invasion of Ukraine, there was no precedent for the volume of misinformation that spread on X during the first phase of the Middle East crisis. We repeatedly observed false claims that reached millions of impressions on the platform, far outpacing any attempt at correction or contextualization.
Much of the spread of this misinformation—and the real-world harm that sometimes resulted from it—was due to specific platform design policies instituted by X in the last year. These include the dismantling of its verification mechanisms and the repurposing of blue checkmarks as a premium accessory for the general public, the introduction of a “pay-per-view” monetization model for premium users, the elimination of headline previews for links shared on the platform, and significant restrictions to X’s application programming interface (API).
Paid verification fuels a misinformation crisis
In April 2023, X formally ended the user verification system that had appended a blue checkmark to “notable” accounts whose identity the platform had previously confirmed. Instead, X began offering all users the opportunity to purchase a blue checkmark for an eighty-four-dollar annual fee and present their accounts as “verified,” without going through any vetting process. For users purchasing verification, X offered algorithmic prominence in search results and elevation of their replies above those of non-premium users in discussion threads. X introduced an additional paid tier—“Verified Organizations”—that granted a gold checkmark for one thousand dollars per month and offered additional services, including dedicated support to avoid account impersonation.
This change severely undermined news media’s presence on X and media organizations’ prominence in conversations on the platform. In 2015, roughly a quarter of all verified users were journalists. As of March 2023, following the initial rollout of paid verification, only about 6,500 of X’s 420,000 legacy verified accounts had willingly made the transition to paid status. At the same time, roughly half of the 444,000 users who did pay for verification had fewer than 1,000 followers. The result was a sharp reduction in the visibility and discoverability of journalists, while previously unknown accounts expanded their reach through paid verification rather than by building trust over time. This impact on media users appears to have been by design; as Musk said of journalists in December 2022, “They think they’re better than everyone else.”
Where verification had once helped amplify fact-based reporting on the platform, now it worked to obfuscate it. According to a Newsguard analysis of war-related misinformation on X during the first week of the conflict, of the 250 most-engaged false or unsubstantiated claims about the war, 74 percent originated and spread via paid verified accounts. This included claims that the Hamas terror attack had been a false flag operation, that Hamas fighters had taunted caged Israeli children, that Hamas acquired weapons from Ukraine, and that certain senior IDF leaders had been captured during the attack. It also included a widely circulated video in which someone inserted fake audio into a CNN clip to make it appear as if reporters were being coaxed into feigning fear during a Hamas rocket attack.
A separate study by the University of Washington’s Center for an Informed Public found that between October 7 and 10, content spread by the seven most influential news aggregators (all of which had paid for verification) outperformed content shared by the most popular news media accounts by nearly a factor of twenty. Accounts that had purchased verification posted at a considerably higher volume than institutional media and commanded significantly more views per post, despite having a small fraction of the total follower count. At one point, Musk personally promoted two of these accounts, overlooking their history of inaccurate reporting or explicit antisemitism and support for Hezbollah.
As these untrue claims spread in the first month of the war, only one prominent account is known to have been stripped of its organizational verification on X: that of the New York Times. The @nytimes account was the third-most-followed institutional media account on the platform, with 55 million followers. X gave no reason for removing the outlet’s gold checkmark, although the decision followed an October 17 incident in which the Times and other mainstream media organizations initially attributed a Gaza hospital explosion to an IDF airstrike, a claim that quickly spread worldwide, before updating their reporting to note that responsibility for the blast had not yet been confirmed. By then, it was too late: violent riots had broken out across the region, including in the West Bank, Morocco, Lebanon, and Jordan, where rioters attempted to storm the Israeli embassy. Multiple outlets later concluded that the blast likely occurred when a rocket launched from Gaza crashed into the hospital, while the New York Times’ own analysis did not draw a conclusion. In contrast, thousands of paid accounts that amplified false or misleading information during the same incident retain their verified status. At the time of publishing, the @nytimes account displayed only the blue checkmark available for purchase by any platform user; its gold checkmark had not been restored.
Eight verified accounts on X, formerly Twitter, amplify the same false narrative that Ukraine supplied Hamas with armaments. (Source: X)
Monetization upends the incentive structure for accounts covering the crisis
As part of X’s effort to encourage users to purchase blue verification checkmarks, the platform announced the launch of its Creator Ads Revenue Sharing program in July 2023. This program offered unspecified “ads revenue sharing” to accounts that purchased verification and met certain follower counts and popularity thresholds. The first tranche of payments for this program went to pre-selected accounts, which the Washington Post noted were disproportionately far-right influencers. Although X has since opened this monetization scheme to all users, basic elements of the program—who receives approval, how payments are calculated, or even how to tell if an account is participating—remain unknown.
It seems likely that some of the most popular aggregators for war-related content are participants in this monetization program. The operator of one such account, OSINTdefender, appeared to confirm as much on July 13, when they noted, “I never expected to ever make much money off of this App because I primarily do it as a Hobby but this could honestly change so much about what I do on here.”
An aggregator account appears to confirm participation in X’s revenue sharing program. (Source: @sentdefender)
Because of a common presumption that payments are calculated based on the total view count of posts originating from an account, creators have an incentive to produce as many posts as possible to maximize their reach.
Here, OSINTdefender’s posting behavior provides an instructive example. From October 7 to October 21, OSINTdefender posted 1,937 times—an average of more than 92 messages per day. Although this content included the expected surfacing of open-source intelligence material and cross-talk with other war-focused observers, it also included a very high volume of editorialization and unsupported suppositions intermingled with fact-based reporting.
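X has not published the formula behind its creator payouts, but the presumption that earnings scale with post impressions is enough to explain the incentive. The sketch below is a toy model built on that assumption; the per-view rate and view counts are hypothetical, not figures from X.

```python
# Toy model of X's ads-revenue-sharing incentive, under the (unconfirmed)
# assumption that creator payouts scale with total impressions on a
# creator's posts. All rates and view counts here are hypothetical.

def estimated_payout(posts_per_day: int, avg_views_per_post: int,
                     days: int, revenue_per_1k_views: float) -> float:
    """Return a hypothetical payout for a given posting cadence."""
    total_views = posts_per_day * avg_views_per_post * days
    return total_views / 1000 * revenue_per_1k_views

# A hypothetical aggregator posting 92 times a day (OSINTdefender's observed
# average from October 7 to 21) versus a newsroom posting 10 times a day.
aggregator = estimated_payout(92, 50_000, 14, 0.01)
newsroom = estimated_payout(10, 50_000, 14, 0.01)
print(f"aggregator: ${aggregator:,.2f}   newsroom: ${newsroom:,.2f}")
# Payout grows linearly with post count, so flooding the feed is rewarded
# regardless of whether individual posts are accurate.
```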
Changes make X harder to understand, harder to leave
Other technical changes have also exacerbated the misinformation crisis on X. In August 2023, X instituted a five-second redirect delay for links to certain websites. As noted by the Washington Post at the time, these included some news media outlets that Musk had previously criticized, such as the New York Times and Reuters, but not others such as Fox News or the New York Post. The policy change also impacted rival social media platforms like Facebook, Instagram, Bluesky, and Substack, but not YouTube.
Just prior to the Hamas attack, X also abruptly removed the display of headline metadata for most outbound web links, taking away important context about their content. Such friction was by design. “Traffic to legacy media websites keeps declining,” Musk stated on October 4, “while X rises.”
As X limited external link-sharing, it also significantly reduced visibility into its own internal processes. In February, X shut down the free API that had previously made Twitter the most widely studied social media platform in the world. A search for “Twitter” on arxiv.org, an archive of scholarly articles in computer science and related fields, produces 3,865 results, more than Facebook, Instagram, YouTube, and TikTok combined. The shutdown effectively terminated hundreds of dependent apps and research dashboards in the process. Instead, X instituted a tiered structure in which the most expensive API access cost $210,000 per month. As the Coalition for Independent Technology Research observed in April, researchers get “an 80% [reduction of data availability] at about 400 times the price” as compared to what Twitter previously made available to researchers.
In February, X introduced a tiered API structure and severely handicapped free API access, leading to a sharp reduction in researcher use of the platform. The most expensive API access can cost up to $210,000 per month. (Source: X Developer Platform)
These API changes have had a chilling effect on the study of the platform. According to a September survey by the Coalition for Independent Technology Research, more than one hundred research projects focused on X have been canceled, suspended, or significantly altered in the wake of technical changes to the service.
Content moderation policies fail to keep up with the conflict
The first notable communication by X about its content policies during the Middle East crisis came on October 10, when the X safety team announced that it was instituting the “highest level of response” in order to address the more than fifty million posts that had already been made in response to the October 7 attack.
On October 12, X CEO Linda Yaccarino provided further detail about X’s efforts to reduce the spread of violent or harmful content. Yaccarino’s letter came in response to European—not US—regulators, who hinted at actions they might take under the European Union’s recently passed Digital Services Act. In her letter, Yaccarino highlighted that X had “identified and removed hundreds of Hamas-affiliated accounts.”
Yet Yaccarino’s letter downplayed the torrent of viral misinformation that had already begun to imperil wartime reporting on the platform. Instead, Yaccarino pointed to the “Community Notes” that X users can add to posts to offer context, fact-checking, or further information. “More than 700 unique notes” had been produced in the first four days of the conflict, Yaccarino wrote, adding context that had been viewed “tens of millions of times.” While it had so far taken a median time of five hours for these notes to appear, Yaccarino expressed confidence that X could achieve “major acceleration” in this process.
It is unclear if X ever achieved the planned speed increase. In a late November investigation that examined four hundred pieces of war-related misinformation, Bloomberg News found that a “typical” Community Note took around seven hours to appear. In some cases, the waiting period could take ten times longer. Because of the brief half-life of material posted to X, this meant that false claims had received most of their shares and engagements well before the first Community Notes appeared.
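A rough calculation shows why a seven-hour delay matters. Assuming, for illustration only, that engagement with a post on X decays exponentially with a half-life of around eighty minutes (the true figure varies by post and is not published by X), nearly all of a false claim's reach accrues before a note arrives:

```python
# Back-of-the-envelope estimate of how much engagement a viral false post
# accrues before a Community Note appears. The seven-hour note delay comes
# from Bloomberg's reporting; the eighty-minute engagement half-life is an
# illustrative assumption, not a measured property of X.
import math

half_life_hours = 80 / 60     # assumed engagement half-life
note_delay_hours = 7.0        # "typical" delay before a note appears

decay_rate = math.log(2) / half_life_hours
share_before_note = 1 - math.exp(-decay_rate * note_delay_hours)
print(f"~{share_before_note:.0%} of lifetime engagement arrives before the note")
# Under these assumptions, roughly 97 percent of a post's engagement has
# already occurred by the time a seven-hour-late note is attached.
```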
Beyond the issue of timeliness, the crowdsourced nature of Community Notes made them dependent on the knowledge of individual contributors and susceptible to both human error and ideological disagreement. Not only could Community Notes carry wrong or inaccurate information, especially during fast-moving conflict events, but the text of a note could itself change as contributor upvotes and downvotes shifted.

According to a study by the data scientist known as Conspirador Norteño (@conspirator0), X users proposed 4,008 Community Notes to append to 2,037 unique war-related posts during the first five days of the conflict. Only 438 of these notes received a “HELPFUL” rating from X users; of these, roughly a quarter were produced in order to overwrite another note that had previously been judged “HELPFUL.” On multiple occasions, we observed that Community Notes elevated by the X community showed signs of editorialization and selective citation that made them scarcely more useful than the posts they were intended to correct.
Community Notes are prone to editorialization, especially during fast-moving events. On October 14, the DFRLab observed that a video—purporting to show an explosion amid a Palestinian refugee column in Gaza that was initially attributed to an IDF airstrike—underwent multiple Community Note changes in real time. As of November 1, the Community Note had been removed from the video entirely. (Source: X)
X’s first comprehensive policy communication came on November 14. In a blog post, the X safety team shared the volume of content it had removed (more than 25,000 pieces of AI-generated content alone) and backed away from the monetization of wartime misinformation. X reiterated the usefulness of its Community Notes program, stating that “people are 30% less likely to agree with the substance of the original post after reading a note about it.” (X did not provide a source for this claim, although it may have been citing old studies that Twitter conducted prior to Musk’s acquisition and rebranding of the company.)
It is instructive to compare X’s response to the October 7 Hamas attack and subsequent fighting with Twitter’s response to the February 2022 invasion of Ukraine. Within a few hours of Russia’s invasion, Twitter began to share Ukrainian-language information on how to delete accounts and stop location tracking in order for Ukrainian users to avoid possible reprisals by Russian military occupiers.
Several weeks later, Twitter published an English-, Ukrainian-, and Russian-language note outlining the range of platform and community-driven initiatives it had undertaken to limit the spread of false information, as well as extensive efforts to limit the propaganda harm of hostile state media. It also announced that it had taken immediate steps to pause monetization. “Content that discusses or focuses on the war, or that is considered false or misleading under the Twitter Rules, is not eligible for monetization,” the policy team wrote. “We’ve also demonetized Search terms related to the war, preventing ads from appearing on the Search results pages for certain words.”
It does not appear that most of the policies first implemented in February 2022 and later codified in Twitter’s May 2022 crisis misinformation policy were followed in the wake of the October 7 attack. Indeed, even if X had been in a position to consider such a response, most of the staff who could have stemmed X’s misinformation crisis in the first place had already departed the company through rounds of layoffs and firings.
Meta: A history of distrust

On October 7, as Hamas carried out its deadly rampage through southern Israel, Meta’s platforms did not play as central a role as Telegram or X as sources of breaking news. Given Meta’s popularity in the region—with at least 200 million Facebook, Instagram, or WhatsApp users as of 2022—the company likely still played some part in the spread of information that day. WhatsApp appears to have served as a vital line of communication, but it is difficult to quantify the spread of information over the private messaging app. The BBC reported that a WhatsApp group for mothers in Be'eri kibbutz offered minute-by-minute updates, with the group name changed to "Be'eri Mothers Emergency" as the attack unfolded.
Meta received criticism early in the conflict over widely circulated reports of Hamas allegedly livestreaming its attack on Facebook. One early source for the claim, Mor Bayder, said of her grandmother’s death, “A terrorist broke into her home, murdered her, took her phone, photographed the horror and posted it on her Facebook wall. This is how we found out.” The New York Times later confirmed that Hamas used the social media accounts of victims to broadcast its attacks in at least four instances, including a livestream showing the aftermath of the murder of Bayder’s grandmother.
In subsequent weeks, Meta platforms have played an expanding role in the conflict’s information space. Instagram emerged as the platform of choice for journalists in Gaza, who used stories, posts, and reels to share updates from the ground at a time when Israel had barred international media from entering Gaza. As the bombardment of Gaza increased, so too did claims from Palestinian journalists and the pro-Palestinian community that their content was being restricted or removed on Meta platforms.
By Meta’s own admission, legitimate content is bound to be wrongfully removed in the course of efforts to control the spread of content that violates Meta’s community standards. Adding to the challenge of moderation is that users are sharing content in various languages, primarily Arabic, Hebrew, and English. According to internal documents accessed by the Wall Street Journal, Meta modified its threshold for hiding comments amid a surge in hateful comments. The company’s standard practice is to hide comments when it is 80 percent certain the content qualifies as “hostile speech.” In response to the war, Meta reduced that threshold by half for large parts of the Middle East, requiring 40 percent certainty to hide comments. In the case of content coming from the Palestinian Territories, the threshold was further reduced to 25 percent. The Journal noted that while Meta was deploying automated systems to monitor Arabic-language content, it did not do the same for Hebrew-language content until later in October due to a lack of data for its newly built Hebrew classifier, according to the company. Actions such as these have fueled the perception of an anti-Palestinian bias on Meta platforms.
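A minimal sketch of how such threshold-based hiding works, based on the thresholds described in the Journal's reporting; the classifier, region labels, and function names here are placeholders rather than Meta's actual systems:

```python
# Sketch of confidence-threshold comment hiding as described in the Wall
# Street Journal's reporting on Meta's wartime changes. Only the threshold
# values (0.80 default, 0.40 for much of the Middle East, 0.25 for the
# Palestinian Territories) come from that reporting; the region labels and
# function are placeholders, not Meta's actual systems.

HOSTILE_SPEECH_THRESHOLDS = {
    "default": 0.80,
    "middle_east": 0.40,
    "palestinian_territories": 0.25,
}

def should_hide_comment(hostile_speech_score: float, region: str) -> bool:
    """Hide a comment when classifier confidence meets the regional threshold."""
    threshold = HOSTILE_SPEECH_THRESHOLDS.get(
        region, HOSTILE_SPEECH_THRESHOLDS["default"])
    return hostile_speech_score >= threshold

# A comment the classifier is only 30 percent sure is hostile stays up under
# the default rules but is hidden if it comes from the Palestinian Territories.
print(should_hide_comment(0.30, "default"))                  # False
print(should_hide_comment(0.30, "palestinian_territories"))  # True
```

Lowering the threshold trades precision for recall: more genuinely hostile comments are caught, but far more benign speech is swept up with them, which is one mechanism behind complaints of over-enforcement.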
As this conflict has exemplified, content moderation systems that are vulnerable to errors and opaque in their implementation can be harmful in times of war when clear communication is of paramount importance.
A history of engagement with the Israeli-Palestinian conflict
Facebook has long played a prominent role in the Israeli-Palestinian conflict. During the 2014 Gaza war, Israelis and Palestinians used the platform to incite violence. In 2015, in a legal first, Israel convicted a Palestinian man over Facebook posts inciting “violent acts and acts of terrorism.” And during the so-called “knife intifada” of 2015-2016, lone-wolf militants were inspired to action by content posted to Facebook and other platforms. At the time, the Israel Security Agency assessed that social media was a key factor driving attacks. Israeli Prime Minister Benjamin Netanyahu, addressing the convergence of social media and terror attacks, had harsh words for Facebook.
In 2015, Israel established a cyber unit within its Ministry of Justice that monitors Palestinian content it claims could be threatening and reports it to Facebook and other platforms. In September 2016, Facebook sent a delegation to Israel to meet with government officials; after the meeting, one minister said the company had agreed to work with the Israeli government to address incitements to violence. Two months later, Facebook invited Palestinian advocates to its California headquarters to discuss concerns over the suppression of speech and Israeli tactics.
Another major test for the platform came with the resurgence of violence between Israelis and Palestinians in May 2021. At the time, Meta faced significant backlash from human rights organizations for removing posts and blocking hashtags during Israeli raids on the al-Aqsa Mosque in Jerusalem. Internal documents showed the content was wrongly flagged for removal because the mosque shares a name with an organization sanctioned by the US government. Twitter also faced criticism for suspending accounts posting about the mosque raids, which the company said were wrongly removed by the automated spam filter. The wrongful removals sparked significant conversation about moderation efforts surrounding the Israeli-Palestinian conflict and Facebook’s failure to effectively moderate non-English content. On May 13, 2021, senior Facebook executives met virtually with Palestinian Prime Minister Mohammad Shtayyeh—the first meeting between the platform and Palestinian leadership. In its 2022 report, the Palestinian digital rights organization 7amleh accused Meta of being “the most restricting company” in “its moderation of the Palestinian digital space.”
In the aftermath of those events and following a recommendation from its Oversight Board, Meta commissioned Business for Social Responsibility, a management consultancy, to conduct an independent due diligence assessment of the platform’s impact on the Israeli-Palestinian conflict. The group’s report concluded that Meta’s policies had an “unintentional impact on Palestinian and Arab communities — primarily on their freedom of expression, freedom of assembly, political participation, and non-discrimination, and therefore on the ability of Palestinians to share information and insights about their experiences as they occurred.” It added, “The data reviewed indicated that Arabic content had greater over-enforcement (e.g., erroneously removing Palestinian voice) on a per user basis.” Business for Social Responsibility also found under-enforcement to be a concern: “Hebrew content experienced greater under-enforcement, largely due to the lack of a Hebrew classifier,” it noted.
Consequently, Meta said that it would take ten of the twenty-one recommendations made by Business for Social Responsibility into consideration. Weeks before the latest escalation of violence between Israelis and Palestinians, Meta released an update on its progress in implementing the recommendations, noting that many of the changes would be rolled out in 2024.
As a result of this history and these efforts, Meta was better prepared to navigate the complex information environment surrounding the current war. The company has, however, continued to face certain content moderation challenges, including the volume of incoming requests. A November 14, 2023 report from Forbes found that Israel’s state prosecutor’s office had sent content takedown requests to Meta, TikTok, X, YouTube, and Telegram, with the most requests—nearly 5,700—going to Meta. The prosecutor’s office said content takedown requests to all platforms had increased tenfold since October 7, 2023, and that 94 percent of the requested takedowns were successful.
A history of distrust
On October 13, Meta released a statement outlining its efforts to address content from the current conflict. It said 795,000 pieces of content were removed in the three days following the October 7 Hamas attack. Citing the high volume of content being reported, Meta acknowledged, “[W]e know content that doesn’t violate our policies may be removed in error.”
These erroneous removals have impacted Palestinian journalists and media organizations, whose work provides a necessary lens into the war in Gaza. On the same day that Meta released its statement, the popular Palestinian journalist Motaz Azaiza had his Instagram account temporarily suspended. Palestinians, still harboring distrust from the events of 2021 and anticipating that their accounts and content will be removed, have made significant efforts to keep their content online. One approach is to create back-up accounts, which Azaiza relied on to continue sharing news during his suspension. On October 23, Azaiza’s X account was also temporarily suspended. On October 24, Azaiza’s back-up Instagram account was temporarily suspended. In a notice shared by the journalist, Instagram cited what it referred to as its “Community Guidelines on account integrity” as the reason for the removal.
A message from Motaz Azaiza’s backup account, which was suspended on October 24. (Source: @motagaza, left; @motaz_azaiza, right)
On October 25, the Instagram account @eye.on.palestine, which had more than six million followers, and its backup account @eye.on.palestine2, were removed from the platform. Associated Facebook and Threads accounts were also removed. When we tried to access the removed Instagram pages, the platform wrongly displayed a “no internet connection” message, which disappeared when accessing other Instagram accounts. Meta spokesperson Andy Stone said, “These accounts were initially locked for security reasons after signs of compromise, and we’re working to make contact with the account owners to ensure they have access. We did not disable these accounts because of any content they were sharing.” On October 27, the primary account was restored and verified. In a statement, @eye.on.palestine said the account was wrongly removed due to “continuous reports” and “technical problems.”
Numerous reports have emerged from Palestinian journalists and the pro-Palestinian community accusing Meta of shadow bans, post removals, and account restrictions. While it is difficult to ascertain the exact chronology of these reports, in the first days of the conflict the reports of wrongful content removals appeared minimal. But as the intensity of the bombardment against Gaza escalated, the reports of content or account restrictions proliferated.
Meta has offered its appeals process as a remedy for erroneous content removals, but many Instagram users believe they have been subject to shadow bans, which are difficult to detect and cannot be appealed. As a result, Palestinian journalists and the pro-Palestinian community have employed a variety of tactics to avoid detection and content removals. Amid the posts documenting the siege and bombardment of Gaza are posts sharing tips for keeping content online. Pro-Palestinian users encourage each other not to use the share feature, but rather to screen-record content and post it directly from their own accounts. Language tricks known as “algo-speak” are also used to avoid auto-deletion by intentionally misspelling words (“att@ck/s” in lieu of “attacks,” for instance). Other tactics focus on confusing the algorithm by posting content or phrases unrelated to the conflict in between posts about Gaza.
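The logic of algo-speak is easiest to see against a naive keyword filter. The sketch below is purely illustrative; the blocklist and matching rules are assumptions, not Meta's actual moderation pipeline:

```python
# Why "algo-speak" works: a naive keyword filter matches exact tokens only.
# The blocklist and tokenizer below are hypothetical, not Meta's pipeline.
import re

BLOCKLIST = {"attack", "attacks"}

def naive_filter(text: str) -> bool:
    """Return True if the text contains a blocklisted keyword verbatim."""
    words = re.findall(r"[a-z]+", text.lower())
    return any(word in BLOCKLIST for word in words)

print(naive_filter("Footage of the attacks in Gaza"))   # True  -> flagged
print(naive_filter("Footage of the att@ck/s in Gaza"))  # False -> slips through

# Countering substitutions requires character normalization or fuzzy matching,
# which in turn raises the false-positive rate on ordinary speech.
```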
When accessing the Eye on Palestine Instagram accounts soon after they were removed, the DFRLab received an incorrect “No internet connection” error message. (Source: DFRLab via @eye.on.palestine/@eye.on.palestine2)
Examples of the tricks used to confuse the Instagram algorithm. (Sources: @belalkh, left; @mohammedelkurd, center; @hamed.sinno, right)
Meta has been among the most communicative platforms during this conflict—an effort to provide transparency amid mounting accusations that it is suppressing pro-Palestinian content. On October 18, the platform released another update, writing, “There is no truth to the suggestion that we are deliberately suppressing voice [sic]. However, content containing praise for Hamas, which is designated by Meta as a Dangerous Organization, or violent and graphic content, for example, is not allowed on our platforms.” Meta cited a number of “bugs” that it said affected users globally and prevented them from re-sharing content. It also noted new features specific to “the region,” without defining what territory is considered part of the region. The new features enable users to lock their profiles in one step and limit by default who can comment on a user’s public Facebook posts (although users can opt out of the limits).
Some examples of “algo-speak” used to prevent Meta’s algorithm from identifying the content as relating to the Middle East crisis. (Source: Najjart_1)
In another sign of apprehension toward Meta, Palestinians have been warning each other not to update the Instagram app amid uncertainty over whether updates will result in account limitations or other changes, as some have claimed. While we cannot verify the authenticity of this claim, its circulation highlights a larger problem: without transparency and clarity about platform updates, rules, and limitations, people are left to fill in the gaps themselves. In another example, Meta issued a statement noting that certain hashtags were being restricted, but it did not list which ones. Meta’s statement also said that it had “established a special operations center staffed with experts,” but the company did not specify who the experts are or what their expertise is.
Social media is a vital space for speech, and explicit and precise guidelines delineating the boundaries of acceptable content are critical. At the moment, the system hinges upon users' adeptness at navigating a labyrinthine series of potential tripwires.
Journalists in Gaza share notices received from Instagram about removing posts or restricting accounts. (Source: @hamdaneldahdouh, left; @belalkh, center; @shehab._2001, right)
Fearing a platform crackdown, the pro-Palestinian community recommended adding hidden Israeli flag emojis to Instagram stories, believing it would help them circumvent the algorithm. The prevailing sense of distrust was exacerbated when, as reported by 404 Media, user bios that included “Palestinian” and an Arabic phrase meaning “praise be to God” were translated to “Palestinian terrorists are fighting for their freedom.” Instagram later apologized for the translation error. The Wall Street Journal reported that an internal investigation at Meta found the translation error was the result of a “hallucination” by its machine learning systems. This is evidence that automated systems are not objective and unbiased but rather reflect the data they are fed, which can reinforce inequalities and prejudices. In another example of a potential error that Meta has yet to address, several claims have emerged of Instagram wrongly citing its policy against nudity and sexual behavior in takedowns of graphic content.
Graphic content is uniquely hard to monitor during war
In times of war, platforms face a difficult balance between permitting graphic content that documents realities on the ground and protecting users from exposure to that content—especially unexpected exposures. The direct experience of multiple DFRLab researchers confirmed that Instagram has been flooded with graphic content, including gruesome depictions of dead children. But removing that content would eliminate vital documentation, particularly in instances of alleged war crimes.
Rather than eliminate such content, Meta offers buffers that attempt to protect users from accidental exposure while allowing those intentionally seeking the content to find it. One example of an existing buffer is the “sensitive content” filter, which allows users to decide what degree of sensitive content they would like to see. Pro-Palestinian accounts are circulating recommendations to adjust the setting to allow for “more” sensitive content for those seeking it. Despite these buffer efforts, DFRLab staff experienced multiple instances of accidental exposure due to the auto-forward function on Instagram Stories, which automatically plays the next story for users after they have viewed an initial one. During times when conflict and violence aren’t dominating public discourse, this feature allows users to experience content that might be of interest to them, but the same functionality can unwittingly expose users to graphic content.
TikTok: Renewed political pressure

In the early hours of October 7, TikTok saw a surge of content showing the initial attack alongside users expressing support for either side. Simply by selecting the platform’s “For You Feed” option, TikTok users in the United States could see first-person graphic imagery of the Hamas attack on the Tribe of Nova music festival followed automatically by unrelated video game footage that was wrongly labeled as being from the attacks. Like all social media platforms, TikTok struggled to manage the volume of content related to the ongoing conflict, and the platform is rife with mis- and disinformation. Unlike other platforms, TikTok took early and direct interventions to eliminate this graphic content from its service. This came at the expense of speech, expression, and documentary content from both sides of the conflict, as well as TikTok’s global user base.
TikTok is registered in the Cayman Islands, and headquartered in both Singapore and Los Angeles, while its parent company, ByteDance Ltd, is based in Beijing. Following a contentious congressional hearing with TikTok CEO Shou Zi Chew, the company attempted to distance itself from allegations of Chinese government control over the company, authoring an April 2023 blog post detailing its structure and ownership. Some details of the relationship between ByteDance and TikTok, and ByteDance and the Chinese government, remain unclear.
Content on TikTok is not amplified via a central “trending topics” list but by the algorithmic “For You Feed” feature. Whereas the algorithms of Facebook, Instagram, and X are built to amplify content to platform-wide virality, TikTok’s For You Feed sorts videos into niches, allowing them to go viral within disparate communities and to achieve platform-wide virality only by breaking out of those predetermined communities. TikTok also lacks a central “trending” interface to aggregate trends across the app. These platform dynamics and a lack of transparency have made it difficult to quantify the amount of content circulating on the platform that violates TikTok’s standards, and to compare it to other large social media platforms.
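The distinction can be illustrated with a toy model. The sketch below is a conceptual illustration, not a description of TikTok's actual recommender; all numbers are hypothetical. A video first circulates inside a single interest cluster and reaches the rest of the platform only if its engagement clears a breakout threshold:

```python
# Toy contrast between platform-wide amplification and niche-first
# distribution. Conceptual illustration only; not TikTok's actual recommender.

def niche_first_reach(cluster_size: int, num_clusters: int,
                      engagement_rate: float, breakout_threshold: float) -> int:
    """Reach of a video first tested inside one interest cluster, which
    'breaks out' to the rest of the platform only if engagement clears
    a threshold."""
    reach = cluster_size
    if engagement_rate >= breakout_threshold:
        reach += cluster_size * (num_clusters - 1)  # platform-wide virality
    return reach

# A video that performs well within its niche but below the breakout bar
# stays confined to that community; users elsewhere may never see it.
print(niche_first_reach(50_000, 200, engagement_rate=0.08,
                        breakout_threshold=0.10))  # 50,000
print(niche_first_reach(50_000, 200, engagement_rate=0.12,
                        breakout_threshold=0.10))  # 10,000,000
```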
One of the core principles of TikTok’s community guidelines is to “ensure any content that may be promoted by our recommendation system is appropriate for a broad audience.” For this reason, certain forms of content are “also ineligible for the [For You Feed] if it contains graphic footage of events that are in the public interest to view.” In practice, TikTok is restricting or removing footage of “real-world torture [and] graphic violence.” TikTok reaffirmed its application of its community guidelines in an October 14 blog post, stating that in the first week of the current conflict in the Middle East it “removed over 500,000 videos and closed 8,000 livestreams in the impacted region.” In a subsequent blog post, TikTok stated that “between October 7 and October 31, 2023, [it] removed more than 925,000 videos in the conflict region for violating our policies around violence, hate speech, misinformation, and terrorism, including content promoting Hamas,” and that “globally, between October 7-31, 2023, we removed 730,000 videos for violating our rules on hateful behavior.”
While TikTok has a limited public-interest exception for what it refers to as “documentary” content, it is unclear how this policy is applied in the context of content removal and algorithmic amplification. At least one pro-Palestinian news organization, Mondoweiss, reported that it had been banned from the platform twice. At the time of writing, the account had been restored. TikTok, however, is much more aggressive with content takedowns than other social media platforms. And because of the way the user community is segmented into niches, it is harder to tell what is trending on TikTok. This makes conducting on-platform digital forensics difficult for researchers, journalists, and the general public.
TikTok’s moderation policies have changed in recent months. On October 7, users could search “Hamas,” but by November 20 they no longer could. TikTok added in its mobile app a notice for some searches of keywords related to the conflict, stating that “when events are unfolding rapidly, content may not always be accurate.” It is unclear if this notice shows up for all users worldwide. How TikTok determines which keywords should prompt this notice and when is also unclear. For a period of time, searches for “Gaza” and “Palestine” carried the notice but “Israel” did not; at the time of publishing, all three keywords generated the notice.
Many TikTokers, long aware of the platform’s contentious relationship with political speech, are practicing self-censorship while simultaneously employing “algo-speak,” like users on Meta, to skirt community guidelines and algorithmically boost their content. Accounts seeking to engage in cross-platform promotion avoid words that may run afoul of TikTok’s moderation guidelines, using substitutes for “kill” such as “k*ll” or “unalive” and changing “sexual assault” to “SA,” while directing TikTokers off-platform to X, where they are more likely to find graphic footage of the conflict.
An example of substituting words to evade detection on TikTok. (Source: @Berikas0)
TikTok has also been used to seed misinformation that later spreads to other platforms, such as old footage shared out of context and presented as current. One notorious example depicted caged children allegedly captured by Hamas; the fact-checking outlet FakeReporter noted that the video had been uploaded prior to the October 7 outbreak of violence (where and when it was actually filmed remains unclear). In another instance, a film production company’s TikTok was presented on X as purported evidence that Israel was falsifying footage of victims.
Still, young Israeli and Palestinian creators are taking to the app to share their point of view, providing first-person accounts about living through the conflict. TikTok discourse is taking the form of commentary, with TikTok’s community of content creators providing explainers, updates, and punditry. On TikTok, the conflict is experienced primarily not through graphic footage, but rather through rhetoric focused on spreading messaging in support of each side. TikTok users have incorporated recent trends on the app, such as “boy math or girl math,” to align themselves with Palestinians, and are creating fancam-style videos identifying celebrities supporting one side or the other. Pro-Palestinian users are employing the song Dammi Falastini (My Blood is Palestinian) by Mohammed Assaf to organize. TikTok has also become an organizing hub for diaspora communities and young people to engage in forms of economic and civil protest around the globe.
As on X, monetization drives the sort of content that creators produce on the platform. TikTok users can earn money from their videos if they are enrolled in the TikTok creator fund. TikTok Live, in which users can earn financial compensation from “gifts,” has incentivized coverage of the conflict that includes “live matches,” which allow two users to co-stream in a five-minute long “match” with the goal of garnering more engagement for one side of the conflict or another.
Screenshot of a “Live Match” on TikTok Live, with users sending financial support. (Source: @dutapetridisi)
There have been multiple instances of harassment of Israeli and Palestinian creators. And as the conflict rages on, both Islamophobia and antisemitism have surged on the platform, despite being ostensibly banned under TikTok’s community guidelines. For instance, some Israeli content creators have dressed as Palestinians, employing racial stereotypes, to mock civilians in Gaza, who are currently without water and electricity. Meanwhile, the Anti-Defamation League documented the spread of antisemitic TikTok memes that exploited gaps in the platform’s content moderation policies. In response to mounting complaints about Islamophobia and antisemitism, TikTok spokespeople said the company was actively working to combat hate speech on the platform.
The government of Israel is using its own official account to share combat footage and interviews with Israeli citizens and soldiers, and to defend Israel’s military actions. The IDF is also active on TikTok, similarly posting patriotic imagery and combat footage. Further, Israel’s prosecutor’s office has directly engaged with TikTok and asked for more aggressive content moderation measures, including the removal of several songs praising Hamas that “served as soundtracks for thousands of videos on TikTok.” In response to this request, TikTok removed the songs in question.
The backlash to the backlash: Distrust and misapprehension of TikTok creates unique policy pressures
Over the past three years, there has been a debate in the United States over whether to ban TikTok because of its potential vulnerability to Chinese government action. The surge of content related to the Middle East crisis is feeding into long-standing cultural and political discourse surrounding alleged Chinese government control over the platform, national security concerns created by the vulnerable US information environment, and a national TikTok ban.
Following the October 7 attack on Israel, a viral X thread alleged that pro-Palestinian user bias on TikTok was influencing young people in the United States to side with the Palestinian cause. The thread compared the global view counts of the hashtags “#standwithisrael” and “#standwithpalestine,” noting that “#standwithpalestine” had received more views. The Washington Post questioned this interpretation, noting that US-only TikTok data from the start of the conflict suggested a very different trend and that the hashtag comparison did not account for instances in which pro-Palestinian hashtags were paired with TikTok videos the Post described as “fiercely critical of Hamas.”
Nevertheless, some US politicians and officials have alleged that the Chinese Communist Party is utilizing TikTok to introduce anti-Israeli narratives to the American public. In a blog post for the Free Press, Representative Mike Gallagher (R-WI) likened the app to “digital fentanyl” and called it “perhaps the largest scale malign influence operation ever conducted.” Additionally, CNN reported that White House aides “are also warily monitoring developments like how the Chinese government-controlled TikTok algorithm just happens to be prioritizing anti-Israel content on the social media platform preferred by many under 30.”
In response to this mounting political pressure, TikTok created a new account on X, @TikTokpolicy, to present the platform’s perspective. It has taken the novel approach of directly engaging with US policymakers on X, with its official account quote-tweeting and arguing against those officials in what appears to be a first for a social media platform of its size and influence. TikTok also released a statement attempting to refute allegations that content on its app favors the Palestinian community in the context of the current crisis. Instead of focusing solely on the platform’s response to the war, the statement notably sought to address “misinformation and mischaracterization about how the TikTok platform actually operates.” It aimed to rebut hashtag analyses supporting the narrative that TikTok is biased toward Palestinians and highlighted the platform’s efforts to remove graphic content and hate speech. It also cited Gallup polling from March 2023, prior to the current conflict, to argue that young Americans are more likely to sympathize with the Palestinians. To maintain that TikTok’s content is on par with that of other large platforms, the statement noted that public data from Instagram and Facebook for the hashtags #standwithisrael and #freepalestine is comparable to TikTok’s. (A Washington Post analysis produced similar findings.)
It is not possible for us to measure with any confidence whether there is inherent platform-wide bias on TikTok regarding the Israeli-Palestinian conflict—be it algorithmic or user-driven—or whether any such bias is influencing public and user perceptions of the conflict. Much of the evidence cited in viral threads on X, by public officials, and by TikTok has been selective and incomplete. Further, we lack evidence to conclude with any degree of certainty that perceived platform-wide bias regarding the Israeli-Palestinian conflict has been at the direction of China.
Can the platforms thread this needle?

The flow of information surrounding this conflict is rapid and chaotic, rife with graphic footage, hate speech, incitements of violence, and disinformation. But caught up in all the content that may violate the policies of social media platforms is also legitimate speech and important documentation for the historical record. Platforms must carefully thread the needle on content moderation. How these technology companies define the difference between over-moderation, under-moderation, and effective moderation in times of war will have consequences for millions of people.
Differences in platform design and content moderation policies can lead to vastly divergent understandings of a particular event. The brutality of the Hamas terrorist attack, for example, became clear within a matter of hours as bloody footage emerged on Telegram before going viral on X, subsequently followed by graphic imagery depicting IDF airstrikes killing civilians in Gaza. By contrast, users of TikTok, which disincentivizes the spread of graphic content, had less ready access to this material as they sought to understand the events of October 7 and their aftermath—and were flooded instead with punditry.
In this image, the official X account of the state of Israel uses an AI-generated image of Voldemort, villain of the “Harry Potter” series, to direct users to an online repository of October 7 massacre videos. (Source: @Israel)
Moreover, user perceptions of platform design matter just as much as actual design decisions because of the way perception shapes user behavior. Because of the broadcast-like nature of Telegram channels, channel operators are free to share any content of their choosing, including propaganda, disinformation, and graphic footage. Influencers on X who directly monetize their content views have an incentive to publish attention-drawing information as often as possible. Because of their previous encounters with perceived algorithmic censorship or suppression by Meta, Palestinians have proven adept at navigating and circumventing limitations on Facebook and Instagram in order to express themselves and document harms. To discuss the war, pro-Palestinian and pro-Israeli voices have embraced the unique vernacular for talking about sensitive subjects that TikTok’s heavy-handed content moderation has inspired.
Beyond issues of platform engineering, it is clear that content moderation policy regarding graphic or violent material cannot be readily separated from basic questions of political expression. TikTok users who reckon with that platform’s zero-tolerance approach to graphic content must find ways to obliquely discuss the price and horrors of war without ever mentioning them directly—or showing them. Meanwhile, on X, Facebook, and Instagram, efforts to reduce the spread of “pro-Hamas” material have begun to collide awkwardly with peaceful expressions of solidarity with the Palestinian people, as well as Israeli efforts to increase the spread of graphic material as part of its public diplomacy efforts to build support for its military operation. Even Telegram’s general avoidance of instituting basic content moderation is itself a question of political expression—one that may anger governments and lead to regulatory pressure whose effects would echo across the social media ecosystem for years to come.
A final issue that platforms must address is the extent to which they are duty-bound to preserve content documenting a war and to facilitate the archiving of that material. Efforts to moderate the online space during a conflict should acknowledge the paramount need to safeguard vital documentation.
Ultimately, platform design and content moderation policy will shape the history of the conflict itself. Many firsthand perspectives on the war have been captured in intimate and unlisted Telegram groups, ephemeral Instagram Stories, or winding X threads that may be deleted at any moment. Should this digital content simply be allowed to disappear, it could one day disappear from human memory as well.
Emerson T. Brooking is a resident senior fellow at the Atlantic Council’s Digital Forensic Research Lab and coauthor of LikeWar: The Weaponization of Social Media. From 2022 to 2023, he served as cyber policy adviser in the Office of the Undersecretary of Defense for Policy.

Layla Mashkoor is an associate editor at the Atlantic Council’s Digital Forensic Research Lab. She has previously reported on disinformation, content moderation, and digital repression and has contributed investigative reporting to the New York Times and Wall Street Journal.
Jacqueline Malaret is an assistant director at the Atlantic Council’s Digital Forensic Research Lab.
Note: Meta has been a funder of the Atlantic Council’s Digital Forensic Research Lab. This article, which did not involve Meta, reflects the authors’ views.