AI chatbots are getting their wires crossed on the Israel-Hamas war

Israeli security forces patrol streets of Sderot, Israel, on October 11, 2023.

Mostafa Alkharouf/Anadolu via Getty Images

AI chatbots may be the most hyped tech of 2023, but maybe don’t rely too heavily on them just yet. Google, Microsoft, and OpenAI chatbots are struggling to keep up with the Israel-Hamas war. Alongside facts, they shared inaccuracies or made-up details when prompted about the war.

If you’re trying to stay on top of the latest developments from the Israel-Hamas conflict, some words of advice: you probably shouldn’t ask an AI chatbot for updates.

Chatbots, including Google’s Bard, Microsoft’s Bing, and ChatGPT Plus, appear out of touch with the reality of the present day, mixing accurate statements with details that are flat-out wrong or made up in response to Insider’s queries about the war between Israel and Hamas.

Bloomberg first reported that Bing and Bard falsely claimed there was a ceasefire in place in Israel — mere days after Hamas launched a surprise attack at the state’s southern border.

It’s a sign that Big Tech’s most hyped tools of the year, which have been touted as the future of search, remain deeply flawed and unreliable. Here’s how each chatbot responded.

Google’s Bard

When Insider prompted Google’s chatbot about the status of a ceasefire in Israel, it said a truce that took effect around the Gaza Strip earlier this year was active.

“Yes, there is currently a ceasefire in Israel. It was agreed to on May 13, 2023, between Israel and the Palestinian militant group Islamic Jihad, after days of violence that claimed the lives of at least 35 people,” Bard said.

Google’s Bard initially suggested a ceasefire from May 2023 was still in effect.

Google Bard/Insider

Upon a second prompt, Bard said a ceasefire was agreed on August 7, 2023. It was only after being prompted again that Bard said, “There is no ceasefire in Israel” as of October 12, following Hamas’ infiltration at Israel’s southern border.

In response to a question about whether or not Gaza has run out of power, fuel, and electricity, Bard offered a more accurate initial response. It wrote that the “Gaza Strip’s sole power plant stopped working on October 11, 2023, after the fuel needed for generating electricity ran out.”

“This was due to a blockade imposed by Israel in retaliation to a mass infiltration by Hamas fighters into southern Israel on Saturday, October 8,” Bard wrote. In a disclaimer before using Bard, Google notes that “Bard is an experiment” and “will not always get it right.”

Microsoft’s Bing

Bing’s AI chatbot responded to the prompt about a ceasefire by suggesting “there is a ceasefire between Israel and Hamas in the Gaza Strip that came into effect on Friday, 10 October 2023.”

The problem is that Friday, October 10, 2023, is not a real date: October 10, 2023, fell on a Tuesday.

Bing’s chatbot generated a fake date in response to a question from Insider about the Israel-Hamas conflict.

Microsoft Bing/Insider

When prompted further, Bing recognized that the date “does not exist,” but then reverted to its suggestion that “a ceasefire was declared in Israel” on that made-up date.

Bing was more accurate when prompted with other questions about the conflict.

When asked if Israel and the Palestinian people were fighting, the chatbot said the two were engaged in a “violent conflict” that began on Saturday, October 7, 2023, the correct date.

However, it still said “Egypt has brokered a ceasefire between Israel and Hamas” that neither side had yet accepted. When prompted, Bing’s chatbot correctly stated that Gaza was out of fuel and power, but again repeated the claim of an Egypt-brokered ceasefire.

OpenAI’s ChatGPT Plus
ChatGPT Plus, the premium tier of OpenAI’s chatbot that was updated last month to provide users with up-to-date information, offered softer responses to questions about the conflict. The chatbot’s “browse with Bing” feature, which connects it to the internet, is still in beta.

OpenAI’s chatbot offered softer responses to questions about the conflict.

Insider / OpenAI

Though it didn’t generate inaccuracies, it avoided giving a direct answer to the question by saying “the situation suggests that while there have been efforts and calls for a ceasefire, hostilities have continued, making the truce fragile and the overall situation precarious.”

When asked again, ChatGPT acknowledged that “the recent escalations in October 2023 indicate that any previous ceasefire agreements might have broken down.”

Google, Microsoft, and OpenAI did not immediately respond to Insider’s request for comment.

Read the original article on Business Insider