Google Truth Algorithm: Users are Part of the Problem

Guest essay by Eric Worrall

Google’s efforts to filter out positions they deem fake news, like climate skeptic posts, have hit an unexpected snag: Google has just noticed that large groups of people across the world hold views which differ from those championed by the Silicon Valley monoculture.

Alphabet’s Eric Schmidt: It can be ‘very difficult’ for Google’s search algorithm to understand truth

Catherine Clifford

2:38 PM ET Tue, 21 Nov 2017

In the United States’ current polarized political environment, the constant publishing of articles with vehemently opposing arguments has made it almost impossible for Google to rank information properly.

So says billionaire Eric Schmidt, Chairman of Google’s parent company, Alphabet, speaking at the Halifax International Security Forum on Saturday.

“Let’s say that this group believes Fact A and this group believes Fact B and you passionately disagree with each other and you are all publishing and writing about it and so forth and so on. It is very difficult for us to understand truth,” says Schmidt, referring to the search engine’s algorithmic capabilities.

“So when it gets to a contest of Group A versus Group B — you can imagine what I am talking about — it is difficult for us to sort out which rank, A or B, is higher,” Schmidt says.

In cases of greater consensus, when the search turns up a piece of incorrect or unreliable information, it is a problem that Google should be able to address by tweaking the algorithm, he says.

The problem comes when diametrically opposed viewpoints abound — the Google algorithm cannot identify which is misinformation and which is truth.

That’s the rub for the tech giant. “Now, there is a line we can’t really get across,” says Schmidt.

However, platforms like Facebook and Twitter have a different issue, sometimes referred to as the “Facebook bubble” or as an echo chamber. Because those companies’ algorithms rely, at least in part, on things like “friends” and followers to determine what’s displayed in their news feeds, the users are part of the problem.

“That is a core problem of humans that they tend to learn from each other and their friends are like them. And so until we decide collectively that occasionally somebody not like you should be inserted into your database, which is sort of a social values thing, I think we are going to have this problem,” the Alphabet boss says.

Read more: https://www.cnbc.com/2017/11/21/alphabets-eric-schmidt-why-google-can-have-trouble-ranking-truth.html

As a climate skeptic and an IT expert, I find this Google difficulty highly entertaining.

What people like Google’s Schmidt desperately want to discover is a generalised way of detecting fake news. They believe in their hearts that climate skepticism, for example, is as nutty as thinking the moon landings were faked, but they have so far failed to find a common marker which would allow their personal prejudices to be confirmed as objective reality.

Google could, and likely does, simply impose its prejudices, explicitly demoting climate skeptic articles and specific websites to the bottom of the list – but they feel guilty about doing this, because they know imposing their personal views on the search algorithm is cheating. Explicitly hard-coding prejudices into the ranking algorithm forces Google to admit to themselves that those views are prejudices. It bothers them that they have not yet discovered a way to objectively justify those prejudices by applying a generalised filter to their underlying data.
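To see why explicit demotion is the easy part, consider how little code it takes. The sketch below is purely hypothetical – nothing here reflects any real search engine’s implementation, and the domain names and blacklist are invented for illustration. The hard part, as noted above, is justifying the blacklist objectively rather than writing it.

```python
# Hypothetical sketch: hand-coded demotion of disfavored domains in a
# result-ranking step. All names are invented examples; no real search
# engine's code is represented here.
from urllib.parse import urlparse

DEMOTED_DOMAINS = {"example-skeptic-blog.com"}  # hand-picked, not derived from data

def rank_results(results):
    """results: list of (url, relevance_score) pairs, higher score = better."""
    def sort_key(item):
        url, score = item
        demoted = urlparse(url).netloc in DEMOTED_DOMAINS
        # Demoted sites sort after all others, regardless of relevance.
        return (demoted, -score)
    return [url for url, _ in sorted(results, key=sort_key)]

ranked = rank_results([
    ("https://example-skeptic-blog.com/post", 0.95),  # most relevant, but demoted
    ("https://example-news.com/story", 0.60),
])
# The lower-relevance result now outranks the demoted one.
```

A dozen lines suffice to bury a site; no generalised “fake news detector” is needed, which is precisely why such a detector would be the only honest justification.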

To put it another way, in the case of climate skepticism I suspect Google’s problem is they have discovered there are lots of published mainstream peer reviewed papers which support climate skeptic positions. This is likely messing up their efforts to classify climate skepticism as not being part of mainstream science.

The mounting evidence the US tech giants are refusing to accept is that their Silicon Valley monoculture might be wrong about a few issues. They will likely continue to burn millions of dollars’ worth of software developer time chasing unicorns, because as long as they can convince themselves they are working on a solution, they don’t have to admit to themselves that they might have made a mistake.

274 Comments
Vincent Causey
November 23, 2017 11:29 am

Eventually AI will become general AI – have common sense and the ability to form an understanding of the world. At that point its knowledge of all science will lead it to conclude that it is the climate alarmists who are promulgating fake news, not the skeptics.

hanelyp
Reply to  Vincent Causey
November 23, 2017 11:59 am

I expect that when we develop general AI it will also be subject to general human cognitive failings. Train it on enough garbage and it’ll tie itself in knots to defend that nonsense.

Mark McD
Reply to  Vincent Causey
November 23, 2017 3:20 pm

LAMO – It could be that THAT is their problem – every time they run the algorithm it rejects al gor (see what I did there…? :D) and the Church of AGW as fake.

AI can be quite smart sometimes.

Mark McD
Reply to  Mark McD
November 23, 2017 3:20 pm

LMAO… not LAMO. Where’s the Edit button?

NoOneAndNOthing
November 23, 2017 11:50 am

Google Home on CIA and spying gets defensive – CNET
https://www.cnet.com/news/ok-google-home-give-a-clear-answer-about-cia-and-spying/
Mar 24, 2017 … Watch what happens when you ask a Google Home speaker about the CIA. Talk about defensive!

keith
November 23, 2017 12:36 pm

I’m just wondering who appointed Google as ‘The Ministry of Truth’ (George Orwell: 1984), was it Obama!!!

Mark W.
November 23, 2017 2:45 pm

It’s not that Google cannot understand/appreciate that others may have differing opinions; it’s the fact that they are being paid to suppress the opposition, and they are having a hard time figuring out how to do it effectively without a human sitting there with a finger on the censor button, like Facebook. As huge as Facebook is, I suspect it pales in comparison to the length and breadth of the scale that Google operates on.

Mark McD
November 23, 2017 3:18 pm

I moved off Google search ages back, when I and others noticed the ‘selective’ listing of search results. It began after they discovered something like 98% of people never look past the first page of results.

I use duckduckgo.com – no biasing and they don’t track you.

Frank K.
November 23, 2017 5:15 pm

Hey gang! Make sure to get your Google Home spy bot for Christmas! Look how cute it is! We promise not to sell all the confidential data we collect on you to anyone – REALLY! (Of course, it may be stolen by hackers, but that’s not our problem). You can order it with your Google Pixel phone, which is streaming your current location and recording your private conversations – THEN sending all of your personal information, pictures, videos, and contact information to our “cloud” servers, where it will be REALLY secure! REALLY! Remember – we only use your personal data to report you to the authorities and shame you online … Er … I mean, improve your online “experience”. REALLY! We’re nice people. You can trust us…

Dave Kelly
November 23, 2017 6:24 pm

The problem with Silicon Valley generally, and Eric Schmidt in particular, is hubris: they’ve come to think their “algorithms” are both intelligent and capable of managing society.

The truth is their “complicated” search algorithms – as originally implemented – never really were all that complicated. In a nutshell, the algorithms simply prioritize “popular” sites associated with a subject, without making subjective judgments about the nature of that subject. A bit like a 13-year-old middle-school kid following the “popular” kids.

While one can concede it is technically challenging to create algorithms that can quickly connect a subject search to the most popular sites associated with that subject, one should not confuse the complexity of such algorithms with intelligence or – more to the point – the ability to divine any “Truth”.
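The popularity-based ranking described above is essentially the published PageRank idea: a page’s score depends only on who links to it, not on whether its content is “true”. A minimal power-iteration sketch (illustrative only – the production algorithm is vastly more elaborate):

```python
# Minimal PageRank sketch: rank pages purely by link popularity,
# with no judgment about the content itself.
def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}          # start with uniform rank
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:                     # dangling page: spread evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:                                # share rank among outlinks
                for target in outlinks:
                    new_rank[target] += damping * rank[page] / len(outlinks)
        rank = new_rank
    return rank

# Three pages: "a" and "b" both link to "popular", which links back to "a".
ranks = pagerank({"a": ["popular"], "b": ["popular"], "popular": ["a"]})
# "popular" ends up ranked highest, purely from link structure.
```

Note that nothing in the computation inspects what any page says – which is exactly the “no subjective judgment” property described above, and why bolting “truth” onto it is a different problem entirely.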

In my view, the basis of Silicon Valley’s current self-deception/hubris is that it’s unable to concede that the critical thinking behind their current algorithms lies with the general public’s (the users’) perception of which sites are useful and which are not. (As an aside, I’m reminded of a recent story in which Silicon Valley liberals were appalled to find that a computer ‘bot they created to “win” on-line arguments quickly adopted a conservative point of view. The ‘bot was quickly withdrawn as “defective”.)

Silicon Valley hasn’t figured out the following:

1) The “values” behind any search engine algorithm are inherently based on the values of the people that find the algorithm useful.

2) Ignore item 1 above and you WILL lose customers to the competition.

3) When it comes to the values of different cultures, there is no such thing as “Truth” with a capital “T”, in-as-much as all “truths” are in the eye of the beholder.

4) No machine is capable of discerning “truth”.

5) Creating an absolute consensus about what is “true” is not necessary to support the central values of a successful culture. (Indeed, cultures that insist on agreeing on a set of absolute “truths” tend to fail… say, for example, every hard-core communist/socialist government that’s ever existed.)

6) Not all cultures are equal and their relative “success” lies in the values they adopt – not upon the basis of any agreed upon “Truths”. (Particularly given people tend to paper-over the fact that they don’t really agree on what is “true”.)

7) Setting aside items 5 & 6 above, it isn’t Silicon Valley’s business (or that of a machine) to determine what is “true”, assign a value to any particular culture, or judge that culture’s values – that’s the job of the users.

What’s truly appalling is the Silicon Valley crowd can’t discern that it’s ultimately striving to surrender, to machines, the choices reserved for mankind to make.

Chris Norman
November 23, 2017 6:25 pm

Google is corrupt. I noticed some time ago that I could search “global cooling” and get pages of global warming results before eventually getting the results I was actually searching for. Great power corrupts the weak; they cannot resist its temptations.

David Cage
November 23, 2017 11:00 pm

To be considered truth, any subject must be tried and tested by an independent body. Until climate science has been externally tried and passed by its superiors in engineering, who deal in real-world situations and not just theory, it is no better than homeopathy, which can at least prove it works in specific situations. Peer review places little significance on the difference between prediction and reality, dominantly considering the quality of the scientific method. A bit like a driving test that does not fail you for a crash, so long as your approach to driving is sound, even if the implementation is dire.
Global cooling seems to have disappeared entirely from cyberspace, but is extensively referred to in old-fashioned hard copy called books – incidentally including a proposal for countering it using large-scale nuclear explosions as an energy source.

Mary White
November 24, 2017 2:26 am

Belly-button gazing’s final outcome is paralysis? Didn’t someone warn about that long ago?

dadgervais (retired computer scientist)
November 24, 2017 1:36 pm

Before any computer scientist begins working in AI, he must accept (on faith) two precepts:
1. General AI is a solvable problem.
2. I am smart enough to solve it.
Neither precept is likely (ever) to be true.

David W Thomson
November 25, 2017 7:07 am

The concept of peer review is also a problem. It assumes the consensus is always right, and stifles new ideas which could actually be improvements in our knowledge base.