
The political polarization of media coverage is pervasive globally, but it has appeared recently and potently in the press treatment of the 2020 US elections. As the President of the United States remains arguably the most powerful and influential position in the world, the upcoming election matters.
An international study conducted before the 2016 US election found that 83% of US respondents described themselves as interested in the US presidential election. Comparatively, 84% of Australian respondents described themselves as interested in the US presidential election, making Australians, a nation on the other side of the world, apparently more interested than Americans themselves. There is no doubt that this election is of immense importance both to the US and to the rest of the world, as is its media coverage, which is the direct line of communication to the masses. The issue of media outlet bias is an ongoing one, and I do not claim to offer a solution. However, a closer examination of the problem, and a discussion of innovative Israeli technology working actively to reduce that bias, aids in deciphering the meaning from the medium.
The United States is undergoing a hotly contested 2020 presidential election year, with new studies from Pew Research Center finding that Republicans and Democrats place their trust in two nearly inverse media environments. The study asked about the use of, trust in, and distrust of 30 different sources. Greater portions of Republicans expressed distrust rather than trust of 20 of the 30 sources asked about. Only seven of the outlets generated more trust than distrust among Republicans, including Fox News, The Rush Limbaugh Show (a popular conservative American talk radio show), and similarly The Sean Hannity Show.
Conversely, the Democratic numbers were almost exactly reversed. Larger portions of Democratic supporters expressed trust rather than distrust in 22 of the sources. Only eight generated more distrust than trust, including Fox News, Rush Limbaugh, and Sean Hannity.
In September of 2020, a poll taken by Gallup revealed: “69% of Americans say they are more concerned about bias in the news other people consume than its presence in their own news (29%).” In other words, Americans perceive bias far more readily in information sources that do not fit their own ideology than in those that do. Further, America currently has record-high voter engagement, but nearly half of voters say they will have difficulties voting. A recent study reveals 50% of Americans say it will be ‘very’ or ‘somewhat’ easy to vote, while about the same share, 49%, say they will have difficulties. By comparison, in October 2018, 85% of voters indicated it would be easy to vote. Engagement has never been higher; confusion has never been worse. In these grave circumstances, how can technology leverage this engagement and address the confusion brought on by media outlets?
The first Presidential debate of 2020 was widely criticized for the childlike behaviour exhibited by the candidates and their wholly fruitless discourse. The debate consisted of opinions delivered in largely belligerent tones that, more often than not, produced hollow arguments of polarized views rather than anything resembling informative discussion. Israeli tech company OpenWeb tackles the issue of toxic political conversation at a grassroots level.
OpenWeb’s technology leverages decentralized principles of the internet to create an online environment that enables all points of view to be part of the conversation, and empowers online media publishers to build healthy communities with engaged users. Founded in 2012, OpenWeb has offices in Tel Aviv and New York, and is used by top-tier publishers including, but not limited to, AOL, Huffington Post, MSN, and SkySports. OpenWeb is one platform striving to take a stance and provide a solution to the dichotomy of political views and the toxicity that often ensues in dialogue.
In addition to the ill-tempered, opposed dialogues between leaders, and often among ourselves, we can find a further source of our problematic media, namely fake news. Americans rated untrue information intended to damage the reputation of a person or entity, or to make money through advertising revenue, otherwise known as fake news, as a larger problem than racism or climate change. Social media has exacerbated the issue of information authenticity; millions of bots use these platforms to help propagate fake news. Unfortunately, fake news is a highly lucrative industry, generating hundreds of millions of dollars in advertising a year. It additionally serves to galvanize already heavily prejudiced points of view and further segment those on either side of the political fence.
According to Tamir Pardo, former Chief of the Mossad (Israel’s secret intelligence service), “what we’ve seen so far concerning bots and the distortion of information is just the tip of the iceberg. It is the greatest threat of recent years, and it threatens the basic values that we share: democracy and the world order created since World War Two”. Cheq, an Israeli artificial intelligence company that made the 2019 CNBC Upstart 100 list, provides a solution to the dissemination of disinformation by identifying fake advertisements and fraudulent content on the web.
Cheq is led by CEO Guy Tytunovich, a former member of the Israel Defense Forces’ Unit 8200, which deals with the military’s cybersecurity. Tytunovich drew upon the cybersecurity and language-processing knowledge he gained during his military service to prevent advertisers from appearing alongside harmful content. Now, in the lead-up to the 2020 elections, Cheq is using artificial intelligence to identify fake news sites and ensure brand agencies are not placing ads on them.
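To make the idea of brand-safe ad placement concrete, here is a deliberately simplified, hypothetical sketch of the kind of check such a system performs: deciding whether an ad may appear on a given page. Cheq’s actual models are proprietary AI systems; the domain names, marker phrases, and function below are invented purely for illustration.

```python
# Hypothetical sketch of a brand-safety check: block ad placement on pages
# from known disinformation domains or containing suspect phrases.
# All domains and phrases here are invented examples, not real data.

DISINFO_DOMAINS = {"totally-real-news.example", "shock-headlines.example"}
SUSPECT_PHRASES = {"miracle cure", "they don't want you to know"}

def is_brand_safe(domain: str, page_text: str) -> bool:
    """Return True if an ad may be placed on this page, False otherwise."""
    if domain in DISINFO_DOMAINS:
        return False
    text = page_text.lower()
    return not any(phrase in text for phrase in SUSPECT_PHRASES)

print(is_brand_safe("totally-real-news.example", "Breaking story"))       # False
print(is_brand_safe("local-paper.example", "Council approves new budget"))  # True
```

Real systems replace the static lists above with continuously trained language models, but the decision they feed, place the ad or withhold it, has this same shape.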
The issue of fake news serves to reinforce the chasm between political opinions. Studies of the dissemination of disinformation have shown that fake news actively targets a specific demographic of the population who, based on collected data, are likely to believe it. In addition to Cheq, further technology coming out of Israel can assist with this issue, including GeoQuant.
GeoQuant uses machine-learning software that searches the web for large volumes of reputable data, news, and social media content. This data fuels intelligent algorithms that generate highly objective risk data and analytics. Its forecasts are highly accurate because they are built on data-driven models and systems rather than on pundits, whose analysis is inherently coloured by human prejudice. GeoQuant CEO Mark Rosenberg has said the company designed a SaaS platform that delivers political and country risk assessments in real time through a customizable dashboard.
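The data-driven approach described above can be illustrated with a minimal, hypothetical sketch: several normalized signals harvested from news and social media are combined by fixed weights into a single risk score, with no pundit in the loop. The signal names and weights below are invented for illustration and do not reflect GeoQuant’s actual model.

```python
# Hypothetical sketch of combining normalized data signals (each in [0, 1])
# into a single 0-100 political-risk score. Signal names and weights are
# invented examples, not GeoQuant's real inputs.

def risk_score(signals: dict, weights: dict) -> float:
    """Weighted average of normalized signals, scaled to a 0-100 score."""
    total_weight = sum(weights.values())
    weighted_sum = sum(signals[name] * w for name, w in weights.items())
    return round(100 * weighted_sum / total_weight, 1)

signals = {"news_negativity": 0.6, "unrest_mentions": 0.4, "policy_uncertainty": 0.5}
weights = {"news_negativity": 2.0, "unrest_mentions": 1.0, "policy_uncertainty": 1.0}
print(risk_score(signals, weights))  # 52.5
```

Because the score is computed mechanically from the inputs, two analysts running the same data get the same number, which is the objectivity claim the article makes for model-based rather than pundit-based forecasting.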
Human understanding is immersed in polarity. Look at any great story or tale of heroism and there is a clear definition of good and evil: Star Wars, Braveheart, Harry Potter, James Bond, The Lord of the Rings, Avatar, and so on.
We think in stories, we understand in stories; we are story-telling animals. We do not think in facts or statistics. We think in stories of good and evil. What happens when the story of good and evil is too muddied and complex to cite a clear hero and villain? We fill in the gaps with our own bias. In a turbulent political landscape, further convoluted by the volatility of a pandemic, it is vital to approach information from media outlets with caution about its source and motive, and to approach political views on an individual level with the patience and empathy necessary for meaningful conversations.
To overcome the flood of disinformation, news must be approached with a critical eye and individuals with an open ear, in conjunction with the clarity that technology can provide.