Scientific American article on Computer Chess

Discussion of anything and everything relating to chess playing software and machines.

Moderators: hgm, Rebel, chrisw

syzygy
Posts: 5566
Joined: Tue Feb 28, 2012 11:56 pm

Re: Scientific American article on Computer Chess

Post by syzygy »

hgm wrote:
Lyudmil Tsvetkov wrote:what could an electrical circuit understand, no matter how complicated?
Uh? The human brain is an electrical circuit. Conduction is by ions rather than electrons, but that just makes it a bit slower.
And it seems to me that, for the most part, the human brain functions just like Watson without any "understanding". Even most of the cognitive work we do is done in our subconscious, with a thin "rational" layer filtering out the sense from the nonsense.

From the Roger Schank link:
Suppose I told you that I heard a friend was buying a lot of sleeping pills and I was worried. Would Watson say I hear you are thinking about suicide? Would Watson suggest we hurry over and talk to our friend about their problems? Of course not. People understand in context because they know about the world and real issues in people's lives. They don't count words.
This is nonsense. There is no reason why something like Watson could not associate sleeping pills with suicide on the basis of all the data it has processed. It is of course true that humans do not consciously count words; it is our subconscious that is doing the counting.
mcostalba
Posts: 2684
Joined: Sat Jun 14, 2008 9:17 pm

Re: Scientific American article on Computer Chess

Post by mcostalba »

Steve Maughan wrote: I sort of agree. Is there any computer application which you'd say is really AI?
Today, in 2017, there is broad consensus that deep learning comes much closer to what can be called AI than traditional software does (a chess engine or anything else). The main difference is that in a deep neural network there is no explicit logical rule, written by the programmer, that defines how the software behaves for a given input. Nobody writes the neural net's "code", at least not in the traditional sense. In deep learning the programmer specifies how the net is structured and designed, but not what the net does with an input; that is all determined by a phase called "training".
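A minimal sketch of that point in Python (a toy example, not any real deep-learning framework): the programmer writes only the structure, a single sigmoid neuron with two inputs. What it does with an input is fixed entirely by training; here it learns the AND function from examples, and no rule like "output 1 only when both inputs are 1" is ever written down.

```python
import math
import random

# The "structure" the programmer chooses: one sigmoid neuron,
# two inputs, a bias. Its behaviour emerges from training alone.
random.seed(0)
w = [random.uniform(-1, 1), random.uniform(-1, 1)]
b = random.uniform(-1, 1)

def forward(x):
    return 1.0 / (1.0 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))

# Training examples for AND -- the only place the desired behaviour appears.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

for _ in range(5000):                      # the "training" phase
    for x, target in examples:
        y = forward(x)
        grad = (y - target) * y * (1 - y)  # squared-error gradient
        w[0] -= 0.5 * grad * x[0]
        w[1] -= 0.5 * grad * x[1]
        b -= 0.5 * grad

print([round(forward(x)) for x, _ in examples])  # [0, 0, 0, 1]
```

Swap in different training examples (say, OR) and the same structure learns a different behaviour, with not one line of the program changed.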

Is this AI? In my opinion, still not, but it is a step closer.

P.S.: For me AI has nothing to do with being intelligent or smart (I find that definition amusing, naive, and very superficial). It has everything to do with being independent: the actions the entity performs for a given input are not rigidly determined a priori, in a deterministic way, by an external designer (the programmer).
Last edited by mcostalba on Sat Jun 03, 2017 3:30 pm, edited 1 time in total.
D Sceviour
Posts: 570
Joined: Mon Jul 20, 2015 5:06 pm

Re: Scientific American article on Computer Chess

Post by D Sceviour »

This paragraph in the Scientific American article seems interesting:
An important part of what we’re doing right now is taking very advanced artificial neural network–based systems that tend to be very black box—they aren’t particularly good at explaining why they’re recommending what they’re recommending—and giving them the capability to explain themselves. How can you really trust a recommendation coming out of a system if it can’t explain it?
Here there are at least two types of AI: (1) trying to assist human understanding, and (2) trying to replace human reasoning. Explanations are not necessary for the second case.

It is not clear what neural-network "black boxes" have to do with the conclusion of the article. Computers look at millions of moves per second, while a human player looks at maybe 4 moves per second to find the same move. Even if a neural-network system could be developed that resolves chess positions at 4 moves per second, it would still remain to be shown whether its purpose is to assist human understanding or to replace human reasoning.

Sometimes it is impossible to understand a game between Stockfish and Houdini. Are the moves of any benefit to human understanding of chess? They are probably no longer of any use for chess understanding, but serve only as a demonstration of the replacement of human reasoning.
Uri Blass
Posts: 10282
Joined: Thu Mar 09, 2006 12:37 am
Location: Tel-Aviv Israel

Re: Scientific American article on Computer Chess

Post by Uri Blass »

syzygy wrote:
mcostalba wrote:The term AI associated with a chess engine is totally misleading. There is no more AI in a traditional chess engine than there is in a CAD program or a word processor. This article is a blast from the 90's, but we are in 2017 now.
What is AI?
Artificial intelligence (AI) is intelligence exhibited by machines. In computer science, the field of AI research defines itself as the study of "intelligent agents": any device that perceives its environment and takes actions that maximize its chance of success at some goal.[1]

[1] The intelligent agent paradigm:
  • Russell & Norvig 2003, pp. 27, 32–58, 968–972
  • Poole, Mackworth & Goebel 1998, pp. 7–21
  • Luger & Stubblefield 2004, pp. 235–240
  • Hutter 2005, pp. 125–126
The definition used in this article, in terms of goals, actions, perception and environment, is due to Russell & Norvig (2003). Other definitions also include knowledge and learning as additional criteria.
Certainly a chess engine perceives its environment (the chess board) and takes actions (selects moves) that maximize its chance of success at some goal (winning the game). Chess engines also encode knowledge, and many include a basic form of learning.

So a chess engine is an example of AI as that term is used in the field.
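As a toy illustration of that perceive/act/goal schema (every name here is made up for the example; this is not any real engine's code), a minimal "intelligent agent" in Python might look like:

```python
# A toy instance of the Russell & Norvig agent schema: perceive the
# environment, then take the action that maximizes the estimated
# chance of success at a goal. A one-dimensional walk stands in for
# the chess board.

GOAL = 7

def perceive(state):
    return state                      # percept: the current position

def utility(state):
    return -abs(GOAL - state)         # closer to the goal is better

def agent(percept):
    # choose, among the available actions, the one whose
    # outcome the agent rates highest
    return max((-1, 0, +1), key=lambda a: utility(percept + a))

state = 0
for _ in range(20):
    state += agent(perceive(state))

print(state)  # the agent has reached the goal: 7
```

Replace the integer state with a chess position, the three actions with legal moves, and the utility with an evaluation function, and this loop is recognizably the skeleton of a chess engine.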

Whether there is "real intelligence" in a chess engine is another question. To answer it one first needs to have a definition of "real intelligence".

If a chess engine does not exhibit "real intelligence" because we understand how it works, then I guess the human brain will stop being called "intelligent" once we fully understand how it functions...
I think that by definition the human brain cannot fully understand how it works.

If the human brain could fully understand how it works, then when you tell me to choose a number (1 or 2), there would be an algorithm that calculates exactly whether I choose 1 or 2.

There may be an algorithm that calculates it, but if I understand that algorithm, I can easily show that it can give a wrong result, by using the following procedure:
1) calculate the number that I am supposed to choose;
2) choose a different number.

The conclusion is that there is no way I can fully understand how my own brain works.
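The two-step procedure above is a diagonal argument, and it fits in a few lines of Python (the function names are illustrative): whatever the predictor does, the chooser consults it and picks the other number, so the prediction is wrong by construction.

```python
def contrarian(predictor):
    predicted = predictor()               # step 1: compute the predicted number
    return 2 if predicted == 1 else 1     # step 2: choose a different number

def some_predictor():
    # stands in for ANY algorithm claiming to predict the choice
    return 1

choice = contrarian(some_predictor)
print(choice != some_predictor())  # True: the prediction always fails
```

No matter what body `some_predictor` is given, `contrarian` defeats it, which is the point of the argument.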
syzygy
Posts: 5566
Joined: Tue Feb 28, 2012 11:56 pm

Re: Scientific American article on Computer Chess

Post by syzygy »

Uri Blass wrote:I think that by definition the human brain cannot fully understand how it works.

If the human brain could fully understand how it works, then when you tell me to choose a number (1 or 2), there would be an algorithm that calculates exactly whether I choose 1 or 2.
Not at all. I can fully understand how a random generator based on atmospheric noise functions without being able to predict its result on any particular run.
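A minimal sketch of that point, with `os.urandom` standing in for the atmospheric-noise source: the mechanism below is completely transparent, yet no particular run can be predicted, because the unpredictability enters only through the external noise.

```python
import os

def noisy_bit():
    # one bit of external entropy -- the part nobody can predict
    return os.urandom(1)[0] & 1

def choose_one_or_two():
    # the rule itself is trivial and fully understood
    return 1 + noisy_bit()

print(choose_one_or_two())  # prints 1 or 2
```

Full understanding of the generator's code does not translate into the ability to predict its output on any given run.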
Michel
Posts: 2272
Joined: Mon Sep 29, 2008 1:50 am

Re: Scientific American article on Computer Chess

Post by Michel »

Lots of black and white here as usual. "AI is this". "AI is not that" with people making up arbitrary definitions to justify their preset conclusions.
Ideas=science. Simplification=engineering.
Without ideas there is nothing to simplify.
mjlef
Posts: 1494
Joined: Thu Mar 30, 2006 2:08 pm

Re: Scientific American article on Computer Chess

Post by mjlef »

mcostalba wrote: It doesn't understand what it's reading.
An argument could be made that we do not understand what we are reading either. Although science now allows finer and finer CAT-scan/MRI resolution, we still do not know exactly how our brains work. But if we define "AI" to mean "plays chess as well as or better than a human", then machines did surpass humans, although with so few games it is hard to get an Elo rating. Deep Blue was great engineering, and its builders knew how it worked. Neural networks are black boxes: they learn, but it is very hard to look at the artificial neurons and their interconnections and know what they are doing. Yet both techniques seem to mimic intelligence.
pilgrimdan
Posts: 405
Joined: Sat Jul 02, 2011 10:49 pm

Re: Scientific American article on Computer Chess

Post by pilgrimdan »

hgm wrote:
Lyudmil Tsvetkov wrote:what could an electrical circuit understand, no matter how complicated?
Uh? The human brain is an electrical circuit. Conduction is by ions rather than electrons, but that just makes it a bit slower.
I would imagine that, at its most basic level, the brain (intelligence) is a mass collection of randomness and determinism.
whereagles
Posts: 565
Joined: Thu Nov 13, 2014 12:03 pm

Re: Scientific American article on Computer Chess

Post by whereagles »

mjlef wrote:
mcostalba wrote: It doesn't understand what it's reading.
An argument could be made that we do not understand what we are reading either. Although science now allows finer and finer CAT scan/MRI resolution, we still do not know exactly how our brains work.
Science today understands how the brain works at the micro and macro levels. Neural-cell biology is more or less understood, as is collective brain behaviour, e.g. how humans behave in given situations. What we don't understand is how one level links to the other.

I call this "the intermediate-level problem", and it cuts across many areas of science. In physics, for instance, the missing link is the connection between general relativity and quantum mechanics.

The solution to the intermediate-level problem is likely to apply to most of these fields, so whoever finds it will push mankind one level up in knowledge. Hopefully in wisdom too...
Daniel Shawul
Posts: 4185
Joined: Tue Mar 14, 2006 11:34 am
Location: Ethiopia

Re: Scientific American article on Computer Chess

Post by Daniel Shawul »

Here is the chess vs. Go comparison from a Stanford website, http://www-formal.stanford.edu/jmc/whatisai/node1.html, written before the deep-learning era:

========
Q. What about chess?

A. Alexander Kronrod, a Russian AI researcher, said ``Chess is the Drosophila of AI.'' He was making an analogy with geneticists' use of that fruit fly to study inheritance. Playing chess requires certain intellectual mechanisms and not others. Chess programs now play at grandmaster level, but they do it with limited intellectual mechanisms compared to those used by a human chess player, substituting large amounts of computation for understanding. Once we understand these mechanisms better, we can build human-level chess programs that do far less computation than do present programs.

Unfortunately, the competitive and commercial aspects of making computers play chess have taken precedence over using chess as a scientific domain. It is as if the geneticists after 1910 had organized fruit fly races and concentrated their efforts on breeding fruit flies that could win these races.

Q. What about Go?

A. The Chinese and Japanese game of Go is also a board game in which the players take turns moving. Go exposes the weakness of our present understanding of the intellectual mechanisms involved in human game playing. Go programs are very bad players, in spite of considerable effort (not as much as for chess). The problem seems to be that a position in Go has to be divided mentally into a collection of subpositions which are first analyzed separately followed by an analysis of their interaction. Humans use this in chess also, but chess programs consider the position as a whole. Chess programs compensate for the lack of this intellectual mechanism by doing thousands or, in the case of Deep Blue, many millions of times as much computation.

Sooner or later, AI research will overcome this scandalous weakness.
========