A small group of volunteers from Israel’s tech sector is working tirelessly to remove content it says doesn’t belong on platforms like Facebook and TikTok, tapping personal connections at those and other Big Tech companies to have posts deleted outside official channels, the project’s founder told The Intercept.

The project’s moniker, “Iron Truth,” echoes the Israeli military’s vaunted Iron Dome rocket interception system. The brainchild of Dani Kaganovitch, a Tel Aviv-based software engineer at Google, Iron Truth claims its tech industry back channels have led to the removal of roughly 1,000 posts tagged by its members as false, antisemitic, or “pro-terrorist” across platforms such as X, YouTube, and TikTok.

In an interview, Kaganovitch said he launched the project after the October 7 Hamas attack, when he saw a Facebook video that cast doubt on alleged Hamas atrocities. “It had some elements of disinformation,” he told The Intercept. “The person who made the video said there were no beheaded babies, no women were raped, 200 bodies is a fake. As I saw this video, I was very pissed off. I copied the URL of the video and sent it to a team in [Facebook parent company] Meta, some Israelis that work for Meta, and I told them that this video needs to be removed and actually they removed it after a few days.”

Billed as both a fight against falsehood and a “fight for public opinion,” according to a post announcing the project on Kaganovitch’s LinkedIn profile, Iron Truth vividly illustrates the perils and pitfalls of terms like “misinformation” and “disinformation” in wartime, as well as the mission creep they enable. The project’s public face is a Telegram bot that crowdsources reports of “inflammatory” posts, which Iron Truth’s organizers then forward to sympathetic insiders. “We have direct channels with Israelis who work in the big companies,” Kaganovitch said in an October 13 message to the Iron Truth Telegram group. “There are compassionate ones who take care of a quick removal.” The Intercept used Telegram’s built-in translation feature to review the Hebrew-language chat transcripts.

So far, nearly 2,000 participants have flagged a wide variety of posts for removal, from content that’s clearly racist or false to posts that are merely critical of Israel or sympathetic to Palestinians, according to chat logs reviewed by The Intercept. “In the U.S. there is free speech,” Kaganovitch explained. “Anyone can say anything with disinformation. This is very dangerous, we can see now.”

“The interests of a fact checking or counter-disinformation group working in the context of a war belongs to one belligerent or another. Their job is to look out for the interests of their side,” explained Emerson Brooking, a fellow with the Atlantic Council’s Digital Forensic Research Lab. “They’re not trying to ensure an open, secure, accessible online space for all, free from disinformation. They’re trying to target and remove information and disinformation that they see as harmful or dangerous to Israelis.”

While Iron Truth appears to have frequently conflated criticism or even mere discussion of Israeli state violence with misinformation or antisemitism, Kaganovitch says his views on this are evolving. “In the beginning of the war, it was anger, most of the reporting was anger,” he told The Intercept. “Anti-Israel, anti-Zionist, anything related to this was received as fake, even if it was not.”

The Intercept was unable to independently confirm that sympathetic workers at Big Tech firms are responding to the group’s complaints, or to verify that the group was behind the removal of the content it has taken credit for deleting. Iron Truth’s founder declined to share the names of its “insiders,” saying they did not want to discuss their back channels with the press. In general, “they are not from the policy team but they have connections to the policy team,” Kaganovitch told The Intercept, referring to the personnel at social media firms who set rules for permissible speech. “Most of them are product managers, software developers. … They work with the policy teams with an internal set of tools to forward links and explanations about why they need to be removed.” While companies like Meta routinely engage with civil society groups and NGOs to discuss and remove content, those discussions are typically run through official content policy teams, not rank-and-file employees.

The Iron Truth Telegram account regularly credits these supposed insiders. “Thanks to the TikTok Israel team who fight for us and for the truth,” read an October 28 post on the group’s Telegram channel. “We work closely with Facebook, today we spoke with more senior managers,” according to another post on October 17. Soon after a Telegram chat member complained that something they’d posted to LinkedIn had attracted “inflammatory commenters,” the Iron Truth account replied, “Kudos to the social network LinkedIn who recruited a special team and have so far removed 60% of the content we reported on.”

Kaganovitch said the project has allies outside Israel’s Silicon Valley annexes as well. Iron Truth’s organizers met with the director of a controversial Israeli government cyber unit, he said, and its core team of more than 50 volunteers and 10 programmers includes a former member of the Israeli Parliament.

“Eventually our main goal is to get the tech companies to differentiate between freedom of speech and posts that their only goal is to harm Israel and to interfere with the relationship between Israel and Palestine to make the war even worse,” Inbar Bezek, the former Knesset member working with Iron Truth, told The Intercept in a WhatsApp message.

“Across our products, we have policies in place to mitigate abuse, prevent harmful content and help keep users safe. We enforce them consistently and without bias,” Google spokesperson Christa Muldoon told The Intercept. “If a user or employee believes they’ve found content that violates these policies, we encourage them to report it through the dedicated online channels.” Muldoon added that Google “encourages employees to use their time and skills to volunteer for causes they care about.” In interviews with The Intercept, Kaganovitch emphasized that he works on Iron Truth only in his free time, and said the project is entirely distinct from his day job at Google.

Meta spokesperson Ryan Daniels pushed back on the notion that Iron Truth was able to get content taken down outside the platform’s official processes, but declined to comment on Iron Truth’s underlying claim of a back channel to company employees. “Multiple pieces of content this group claims to have gotten removed from Facebook and Instagram are still live and visible today because they don’t violate our policies,” Daniels told The Intercept in an emailed statement. “The idea that we remove content based on someone’s personal beliefs, religion, or ethnicity is simply inaccurate.” Daniels added, “We receive feedback about potentially violating content from a variety of people, including employees, and we encourage anyone who sees this type of content to report it so we can investigate and take action according to our policies,” noting that Meta employees have access to internal content reporting tools, but that this system can only be used to remove posts that violate the company’s public Community Standards.

Neither TikTok nor LinkedIn responded to questions about Iron Truth. X could not be reached for comment.

A Palestinian woman cries in the garden of Al-Ahli Arab Hospital after it was hit in Gaza City, Gaza, on Oct. 18, 2023.
Photo by Mustafa Hassona/Anadolu via Getty Images

“Keep Bombing!”

Though confusion and recrimination are natural byproducts of any armed conflict, Iron Truth has routinely used the fog of war as evidence of anti-Israeli disinformation.

At the start of the project in the week after Hamas’s attack, for example, Iron Truth volunteers were encouraged to find and report posts expressing skepticism about claims of the mass decapitation of babies in an Israeli kibbutz. They quickly surfaced posts casting doubt on reports of “40 beheaded babies” during the Hamas attack, tagging them “fake news” and “disinformation” and sending them to platforms for removal. Among a list of LinkedIn content that Iron Truth told its Telegram followers it had passed along to the company was a post demanding evidence for the beheaded baby claim, categorized by the project as “Terror/Fake.”

But the skepticism they were attacking proved warranted. While many of Hamas’s atrocities against Israelis on October 7 are indisputable, the Israeli government itself ultimately said it couldn’t verify the horrific claim about beheaded babies. Similarly, Iron Truth’s early efforts to take down “disinformation” about Israel bombing hospitals now contrast with weeks of well-documented airstrikes against multiple hospitals and the deaths of hundreds of doctors from Israeli bombs.

On October 16, Iron Truth shared a list of Facebook and Instagram posts it claimed responsibility for removing, writing on Telegram, “Significant things reported today and deleted. Good job! Keep bombing!”

While most of the links no longer work, several are still active. One is a video of grievously wounded Palestinians in a hospital, including young children, with a caption accusing Israel of crimes against humanity. Another is a video from Mohamed El-Attar, a Canadian social media personality who posts under the name “That Muslim Guy.” In the post, shared the day after the Hamas attack, El-Attar argued the October 7 assault was not an act of terror, but of armed resistance to Israeli occupation. While this statement is no doubt inflammatory to many, particularly in Israel, Meta is supposed to allow for this sort of discussion, according to internal policy guidance previously reported by The Intercept. The internal language, which detailed the company’s Dangerous Individuals and Organizations policy, lists this sentence among examples of permitted speech: “The IRA were pushed towards violence by the brutal practices of the British government in Ireland.”

While it’s possible for Meta posts to be deleted by moderators and later reinstated, Daniels, the spokesperson, disputed Iron Truth’s claim, saying links from the list that remain active had never been taken down in the first place. Daniels added that other links on the list had indeed been removed because they violated Meta policy but declined to comment on specific posts.

Under their own rules, the major social platforms aren’t supposed to remove content simply because it is controversial. While content moderation trigger-happiness around mere mentions of designated terror organizations has led to undue censorship of Palestinian and other Middle Eastern users, Big Tech policies on misinformation are, on paper, much more conservative. Facebook, Instagram, TikTok, and YouTube, for example, only prohibit misinformation when it might cause physical harm, like snake oil cures for Covid-19, or posts meant to interfere with civic functions such as elections. None of the platforms targeted by Iron Truth prohibit merely “inflammatory” speech; indeed, such a policy would likely be the end of social media as we know it.

Still, content moderation rules are known to be vaguely conceived and erratically enforced. Meta, for instance, says it categorically prohibits violent incitement and touts various machine learning-based technologies to detect and remove such speech. Last month, however, The Intercept reported that the company had approved Facebook ads calling for the assassination of a prominent Palestinian rights advocate, along with explicit calls for the murder of civilians in Gaza. On Instagram, users leaving comments containing Palestinian flag emojis have seen those comments inexplicably vanish. 7amleh, a Palestinian digital rights organization that formally partners with Meta on speech issues, has documented over 800 reports of undue social media censorship since the war’s start, according to its public database.

Disinformation in the Eye of the Beholder

“It’s really hard to identify disinformation,” Kaganovitch acknowledged in an interview, conceding that what’s considered a conspiracy today might be corroborated tomorrow, and pointing to a recent Haaretz report that an Israel Defense Forces helicopter may have inadvertently killed Israelis on October 7 in the course of firing at Hamas.

Throughout October, Iron Truth provided a list of suggested keywords for volunteers in the project’s Telegram group to use when searching for content to report to the bot. Some of these terms, like “Kill Jewish” and “Kill Israelis,” pertained to content flagrantly against the rules of major social media platforms, which uniformly ban explicit violent incitement. Others reflected stances that might understandably offend Israeli social media users still reeling from the Hamas attack, like “Nazi flag israel.”

But many other suggestions included terms commonly found in news coverage or general discussion of the war, particularly in reference to Israel’s brutal bombardment of Gaza. Some of those phrases — including “Israel bomb hospital”; “Israel bomb churches”; “Israel bomb humanitarian”; and “Israel committing genocide” — were suggested as disinformation keywords as the Israeli military was being credibly accused of doing those very things. While some allegations against both Hamas and the IDF were and continue to be bitterly disputed — notably who bombed the Al-Ahli Arab Hospital on October 17 — Iron Truth routinely treated contested claims as “fake news,” siding against the sort of analysis or discussion often necessary to reach the truth.

Even the words “Israel lied” were suggested to Iron Truth volunteers on the grounds that they could be used in “false posts.” On October 16, two days after an Israeli airstrike killed 70 Palestinians evacuating from northern Gaza, one Telegram group member shared a TikTok containing imagery of one of the bombed convoys. “This post must be taken down, he is a really annoying liar and the amount of exposure he has is crazy,” the member added. A minute later, the Iron Truth administrator account encouraged this member to report the post to the Iron Truth bot.

Although The Intercept is unable to see which links have been submitted to the bot, Telegram transcripts show the group’s administrator frequently encouraged users to flag posts accusing Israel of genocide or other war crimes. When a chat member shared a link to an Instagram post arguing “It has BEEN a genocide since the Nakba in 1948 when Palestinians were forcibly removed from their land by Israel with Britain’s support and it has continued for the past 75 years with US tax payer dollars,” the group administrator encouraged them to report the post to the bot three minutes later. Links to similar allegations of Israeli war crimes from figures such as popular Twitch streamer Hasan Piker; Colombian President Gustavo Petro; psychologist Gabor Maté; and a variety of obscure, ordinary social media users have received the same treatment.

Iron Truth has acknowledged its alleged back channel has limits: “It’s not immediate unfortunately, things go through a chain of people on the way,” Kaganovitch explained to one Telegram group member who complained a post they’d reported was still online. “There are companies that implement faster and there are companies that work more slowly. There is internal pressure from the Israelis in the big companies to speed up the reports and removal of the content. We are in constant contact with them 24/7.”

Since the war began, social media users in Gaza and beyond have complained that content has been censored without any clear violation of a given company’s policies, a well-documented phenomenon long before the current conflict. But Brooking, of the Atlantic Council, cautioned that it can be difficult to determine the process that led to the removal of a given social media post. “There are almost certainly people from tech companies who are receptive to and will work with a civil society organization like this,” he said. “But there’s a considerable gulf between claiming those tech company contacts and having a major influence on tech company decision making.”

Iron Truth has found targets outside social media too. On November 27, one volunteer shared a link to NoThanks, an Android app that helps users boycott companies related to Israel. The Iron Truth administrator account quickly noted that the complaint had been forwarded to Google. Days later, Google pulled NoThanks from its app store, though it was later reinstated.

The group has also gone after efforts to fundraise for Gaza. “These cuties are raising money,” said one volunteer, sharing a link to the Instagram account of Medical Aid for Palestinians. Again, the Iron Truth admin quickly followed up, saying the post had been “transferred” accordingly.

But Kaganovitch says his thinking around the topic of Israeli genocide has shifted. “I changed my thoughts a bit during the war,” he explained. Though he doesn’t agree that Israel is committing a genocide in Gaza, where the death toll has exceeded 20,000, according to the Gaza Health Ministry, he understands how others might. “The genocide, I stopped reporting it in about the third week [of the war].”

Several weeks after its launch, Iron Truth shared an infographic in its Telegram channel asking its followers not to pass along posts that were simply anti-Zionist. But OCT7, an Israeli group that “monitors the social web in real-time … and guides digital warriors,” lists Iron Truth as one of its partner organizations, alongside the Israeli Ministry for Diaspora Affairs, and cites “anti-Zionist bias” as part of the “challenge” it’s “battling against.”

Despite Iron Truth’s occasional attempts to rein in its volunteers and focus them on finding posts that might actually violate platform rules, getting everyone on board has proven difficult. Chat transcripts show many Iron Truth volunteers conflating Palestinian advocacy with material support for Hamas or characterizing news coverage as “misinformation” or “disinformation,” perennially vague terms whose meaning is further diluted in times of war and crisis.

“By the way, it would not be bad to go through the profiles of [United Nations] employees, the majority are local there and they are all supporters of terrorists,” recommended one follower in October. “Friends, report a profile of someone who is raising funds for Gaza!” said another Telegram group member, linking to the Instagram account of a New York-based beauty influencer. “Report this profile, it’s someone I met on a trip and it turns out she’s completely pro-Palestinian!” the same user added later that day. Social media accounts of Palestinian journalist Yara Eid; Palestinian photojournalist Motaz Azaiza; and many others involved in Palestinian human rights advocacy were similarly flagged by Iron Truth volunteers for allegedly spreading “false information.”

Iron Truth has at times struggled with its own followers. When one proposed reporting a link about Israeli airstrikes at the Rafah border crossing between Gaza and Egypt, the administrator account pointed out that the IDF had indeed conducted the attacks, urging the group: “Let’s focus on disinformation, we are not fighting media organizations.” On another occasion, the administrator discouraged a user from reporting a page belonging to a news organization: “What’s the problem with that?” the administrator asked, noting that the outlet was “not pro-Israel, but is there fake news?”

But Iron Truth’s standards often seem muddled or contradictory. When one volunteer suggested going after B’Tselem, an Israeli human rights organization that advocates against the country’s military occupation and broader repression of Palestinians, the administrator account replied: “With all due respect, B’Tselem does publish pro-Palestinian content and this was also reported to us and passed on to the appropriate person. But B’Tselem is not Hamas bots or terrorist supporters, we have tens of thousands of posts to deal with.”

Israeli flags fly in front of the Knesset, the unicameral parliament of the state of Israel, on Sept. 11, 2022, in Jerusalem.
Photo: Christophe Gateau/AP

Friends in High Places

Though Iron Truth is largely a byproduct of Israel’s thriving tech economy — the country is home to many regional offices of American tech giants — it also claims support from the Israeli government.

The group’s founder says that Iron Truth leadership have met with Haim Wismonsky, director of the controversial Cyber Unit of the Israeli State Attorney’s Office. While the Cyber Unit purports to combat terrorism and miscellaneous cybercrime, critics say it’s used to censor unwanted criticism and Palestinian perspectives, relaying thousands upon thousands of content takedown demands. American Big Tech has proven largely willing to play ball with these demands: A 2018 report from the Israeli Ministry of Justice claimed a 90 percent compliance rate across social media platforms.

Following an in-person presentation to the Cyber Unit, Iron Truth’s organizers have remained in contact, and sometimes forward the office links they need help removing, Kaganovitch said. “We showed them the presentation, they asked us also to monitor Reddit and Discord, but Reddit is not really popular here in Israel, so we focus on the big platforms right now.”

Wismonsky did not respond to a request for comment.

Kaganovitch noted that Bezek, the former Knesset member, “helps us with diplomatic and government relationships.” In an interview, Bezek confirmed her role and corroborated the group’s claims, saying that while Iron Truth had contacts with “many other employees” at social media firms, she is not involved in that aspect of the group’s work, adding, “I took on myself to be more like the legislation and legal connection.”

“What we’re doing on a daily basis is that we have a few groups of people who have social media profiles in different medias — LinkedIn, X, Meta, etc. — and if one of us is finding content that is antisemitic or content that is hate claims against Israel or against Jews, we are informing the other people in the group, and few people at the same time are reporting to the tech companies,” Bezek explained.

Bezek’s governmental outreach has so far included organizing meetings with Israel’s Ministry of Foreign Affairs and “European ambassadors in Israel.” Bezek declined to name the Israeli politicians or European diplomatic personnel involved because their communications are ongoing. These meetings have included allegations of foreign, state-sponsored “antisemitic campaigns and anti-Israeli campaigns,” which Bezek says Iron Truth is collecting evidence about in the hope of pressuring the United Nations to act.

Iron Truth has also collaborated with Digital Dome, a similar volunteer effort spearheaded by the Israeli anti-disinformation organization FakeReporter, which helps coordinate the mass reporting of unwanted social media content. Israeli American investment fund J-Ventures, which has reportedly worked directly with the IDF to advance Israeli military interests, has promoted both Iron Truth and Digital Dome.

FakeReporter did not respond to a request for comment.

While most counter-misinformation efforts betray some geopolitical loyalty, Iron Truth is openly nationalistic. An October 28 write-up in the popular Israeli news website Ynet — “Want to Help With Public Diplomacy? This is How You Start” — cited the Telegram bot as an example of how ordinary Israelis could help their country, noting: “In the absence of a functioning Information Ministry, Israeli men and women hope to be able to influence even a little bit the sounding board on the net.” A mention in the Israeli financial news website BizPortal described Iron Truth as fighting “false and inciting content against Israel.”

Iron Truth is “a powerful reminder that it’s still people who run these companies at the end of the day,” said Brooking. “I think it’s natural to try to create these coordinated reporting groups when you feel that your country is at war or in danger, and it’s natural to use every tool at your disposal, including the language of disinformation or fact checking, to try to remove as much content as possible if you think it’s harmful to you or people you love.”

The real risk, Brooking said, lies not in the back channel, but in the extent to which companies that control the speech of billions around the world are receptive to insiders arbitrarily policing expression. “If it’s elevating content for review that gets around trust and safety teams, standing policy, policy [into] which these companies put a lot of work,” he said, “then that’s a problem.”
