Fruit fly races on steroids?

Discussion of anything and everything relating to chess playing software and machines.

Moderators: hgm, Rebel, chrisw

chesskobra
Posts: 154
Joined: Thu Jul 21, 2022 12:30 am
Full name: Chesskobra

Re: Fruit fly races on steroids?

Post by chesskobra »

towforce wrote: Thu Apr 18, 2024 10:18 am
chesskobra wrote: Wed Apr 17, 2024 8:02 pmSo your interpretation that the number of positions grows polynomially with n is wrong - it is neither a fact nor something the authors say. Let us discuss what the paper actually says.
What the paper has to say about the growth of the scale of chess on an NxN board as N increases is entirely in the section "POLYNOMIALITY OF TRANSFORMATION". It takes a little over half a page, and it's on page 213 of the paper.

It does not explicitly say that the size of the game tree increases polynomially with the size of the board - but this is my best interpretation of what it does say. Game tree size is mentioned earlier in the paper, and is a sensible interpretation of what the section says IMO.
So again you give a 'sensible' interpretation of something the paper doesn't say. Did you ask ChatGPT to summarise it? Or did you hallucinate on your own?

Do you know what simulating one problem in another means? Do you know what polynomial time reduction or transformation means? Do you know what intractability means? (Your example of KR vs K is completely irrelevant to intractability.) These are standard notions in complexity theory. In fact because they are so standard, and not made up by the authors for this paper, they have been mentioned only informally in the introduction. Look them up in some other reference for formal definitions. Also look up what nondeterministic polynomial time means, which is also a standard notion.

I have no interest in talking about your interpretations of what the authors are saying. Let us only discuss what the authors actually say. After all it is a mathematics paper, and anybody who understands the area (complexity theory in this case) would understand it quite clearly and quickly. I think you have zero understanding of what is in the paper.

To me what seems to be happening here is:

- some mathematicians and computer scientists write a technical mathematical paper in a mainstream mathematics journal, in an area in which they have a track record
- someone outside the area (and likely even outside of maths/CS) looks at the paper superficially and claims to understand what the paper is essentially doing
- makes up some naive interpretations as to what the paper is saying
- starts trashing the paper on a random forum on the internet, calls the conclusions of the paper click-bait, quickly comes up with counter examples, etc.
- when flaws in his interpretations are pointed out, he doubles down on his attack on the paper.

This is what you are doing. It is a classic case of trolling. Hence all the harsh comments above. You don't have to accept someone's authority, but the authors in this case are from the Weizmann Institute and UC Berkeley, have a track record, and have published papers with other known authorities in the domain. Serious work also goes into refereeing such papers. It is OK not to know or understand anything about many domains. But when you start faking knowledge or expertise, it gets exposed very quickly. So stop trashing work that you don't understand at all.
User avatar
towforce
Posts: 11607
Joined: Thu Mar 09, 2006 12:57 am
Location: Birmingham UK

Re: Fruit fly races on steroids?

Post by towforce »

petero2 wrote: Thu Apr 18, 2024 12:11 pm1. It is known from prior work that the game G3 is EXPTIME-complete.

The prior work is this paper - link. We now need to work our way through that! :)
Writing is the antidote to confusion.
It's not "how smart you are", it's "how are you smart".
Your brain doesn't work the way you want, so train it!
User avatar
towforce
Posts: 11607
Joined: Thu Mar 09, 2006 12:57 am
Location: Birmingham UK

Re: Fruit fly races on steroids?

Post by towforce »

towforce wrote: Thu Apr 18, 2024 12:40 pm
petero2 wrote: Thu Apr 18, 2024 12:11 pm1. It is known from prior work that the game G3 is EXPTIME-complete.

The prior work is this paper - link. We now need to work our way through that! :)

OK - I've read through that paper, and my impression is that the proof concerns the upper limit on the complexity of such games - which nobody is going to argue with. Games like chess, though, are full of patterns, symmetries and fixed interactions between squares/pieces that almost certainly introduce simplifications - and, as chess players, we're probably only aware of a tiny fraction of these.
Writing is the antidote to confusion.
It's not "how smart you are", it's "how are you smart".
Your brain doesn't work the way you want, so train it!
chrisw
Posts: 4345
Joined: Tue Apr 03, 2012 4:28 pm

Re: Fruit fly races on steroids?

Post by chrisw »

chesskobra wrote: Thu Apr 18, 2024 12:14 pm
towforce wrote: Thu Apr 18, 2024 10:18 am
chesskobra wrote: Wed Apr 17, 2024 8:02 pmSo your interpretation that the number of positions grows polynomially with n is wrong - it is neither a fact nor something the authors say. Let us discuss what the paper actually says.
What the paper has to say about the growth of the scale of chess on an NxN board as N increases is entirely in the section "POLYNOMIALITY OF TRANSFORMATION". It takes a little over half a page, and it's on page 213 of the paper.

It does not explicitly say that the size of the game tree increases polynomially with the size of the board - but this is my best interpretation of what it does say. Game tree size is mentioned earlier in the paper, and is a sensible interpretation of what the section says IMO.
So again you give a 'sensible' interpretation of something the paper doesn't say. Did you ask ChatGPT to summarise it? Or did you hallucinate on your own?

Do you know what simulating one problem in another means? Do you know what polynomial time reduction or transformation means? Do you know what intractability means? (Your example of KR vs K is completely irrelevant to intractability.) These are standard notions in complexity theory. In fact because they are so standard, and not made up by the authors for this paper, they have been mentioned only informally in the introduction. Look them up in some other reference for formal definitions. Also look up what nondeterministic polynomial time means, which is also a standard notion.

I have no interest in talking about your interpretations of what the authors are saying. Let us only discuss what the authors actually say. After all it is a mathematics paper, and anybody who understands the area (complexity theory in this case) would understand it quite clearly and quickly. I think you have zero understanding of what is in the paper.

To me what seems to be happening here is:

- some mathematicians and computer scientists write a technical mathematical paper in a mainstream mathematics journal, in an area in which they have a track record
- someone outside the area (and likely even outside of maths/CS) looks at the paper superficially and claims to understand what the paper is essentially doing
- makes up some naive interpretations as to what the paper is saying
- starts trashing the paper on a random forum on the internet, calls the conclusions of the paper click-bait, quickly comes up with counter examples, etc.
- when flaws in his interpretations are pointed out, he doubles down on his attack on the paper.

This is what you are doing. It is a classic case of trolling. Hence all the harsh comments above. You don't have to accept someone's authority, but the authors in this case are from the Weizmann Institute and UC Berkeley, have a track record, and have published papers with other known authorities in the domain. Serious work also goes into refereeing such papers. It is OK not to know or understand anything about many domains. But when you start faking knowledge or expertise, it gets exposed very quickly. So stop trashing work that you don't understand at all.
Entirely agree. There's quite a problem with attention-seeking narcissism on this forum, whether from people who endlessly splatter spam in bold upper-case multicolour or those who posture with faux-knowledge they manifestly don't have. Whatever the cause, it drives off smarter people who are used to, and expect, a higher level of debate and would otherwise participate. Can moderation solve the problem?
User avatar
towforce
Posts: 11607
Joined: Thu Mar 09, 2006 12:57 am
Location: Birmingham UK

Re: Fruit fly races on steroids?

Post by towforce »

chesskobra wrote: Thu Apr 18, 2024 12:14 pmDo you know what intractability means?

This is a class assignment rather than a peer-reviewed paper - link - but it is instructive to see how wrong it is!
UMSL wrote:EXAMPLE #1: Factoring a number into primes.
I'm actually going to concede this one: there are sieves for finding primes - but they don't make factoring large numbers tractable (and the difficulty of factoring is the basis of RSA public-key encryption).
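To make the distinction concrete, here is a minimal Python sketch (illustrative only, nothing like the algorithms used in practice): a sieve finds all primes up to a bound quickly, while naive trial-division factoring takes time exponential in the number of digits of n.

```python
def sieve(limit):
    """Sieve of Eratosthenes: all primes up to limit, in roughly
    O(n log log n) time - finding primes is cheap."""
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            for m in range(p * p, limit + 1, p):
                is_prime[m] = False
    return [p for p, ok in enumerate(is_prime) if ok]

def trial_factor(n):
    """Naive factoring by trial division: the loop runs up to sqrt(n),
    which is exponential in the number of digits of n - factoring is
    not made easy just because primes are easy to find."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors
```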

UMSL wrote:SAT, the satisfiability problem to test whether a given Boolean formula is satisfiable. All sets of input must be tried systematically (brute-force) until a satisfying case is discovered.
Most SAT instances can be cracked far more quickly than by brute-force testing all 2^n assignments of the Boolean variables (though some instances remain extremely difficult to solve right now).
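For illustration, here is a minimal DPLL-style solver in Python (a sketch only - real solvers such as MiniSat add clause learning, watched literals and branching heuristics on top). Even this toy version shows how unit propagation prunes huge parts of the 2^n assignment space rather than testing every assignment:

```python
def dpll(clauses, assignment=None):
    """Minimal DPLL SAT solver. Clauses are lists of nonzero ints
    (positive = variable, negative = negated variable). Returns a
    satisfying assignment dict, or None if unsatisfiable."""
    if assignment is None:
        assignment = {}
    # Unit propagation: repeatedly satisfy clauses with one literal left.
    changed = True
    while changed:
        changed = False
        simplified = []
        for clause in clauses:
            lits = []
            satisfied = False
            for lit in clause:
                val = assignment.get(abs(lit))
                if val is None:
                    lits.append(lit)          # still undecided
                elif (lit > 0) == val:
                    satisfied = True          # clause already true
                    break
            if satisfied:
                continue
            if not lits:
                return None                   # empty clause: conflict
            if len(lits) == 1:                # unit clause: forced value
                assignment[abs(lits[0])] = lits[0] > 0
                changed = True
            else:
                simplified.append(lits)
        clauses = simplified
    if not clauses:
        return assignment                     # all clauses satisfied
    # Branch on the first unassigned variable.
    var = abs(clauses[0][0])
    for choice in (True, False):
        trial = dict(assignment)
        trial[var] = choice
        result = dpll(clauses, trial)
        if result is not None:
            return result
    return None
```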

UMSL wrote:Integer partition: Can you partition n integers into two subsets such that the sums of the subsets are equal? ...we are forced to use the brute-force method to test the subsets of each division and their sums
Absolute nonsense. This is the problem of finding a subset of the integers satisfying sum(subset) = sum(integers) / 2, for which an obvious algorithm is backtracking. No doubt better ones could be found.
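One well-known "better one" is the classic pseudo-polynomial dynamic program over reachable subset sums - O(n · total) time instead of brute-force O(2^n). A short Python sketch:

```python
def can_partition(nums):
    """Decide whether nums can be split into two subsets of equal sum.
    Dynamic programming over the set of reachable subset sums:
    O(n * total) time, versus O(2^n) for brute-force enumeration."""
    total = sum(nums)
    if total % 2:
        return False                 # odd total can never split evenly
    target = total // 2
    reachable = {0}                  # subset sums achievable so far
    for x in nums:
        reachable |= {s + x for s in reachable if s + x <= target}
    return target in reachable
```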

UMSL wrote:Bin packing: How many bins of a given size do you need to hold n items of variable size? Again, the best algorithm for this problem involves going through all subsets of n items, seeing how they fit into the bins, and backtracking to test for better fits among subsets until all possible subsets have been tested to achieve the proper answer. Once again, brute-force. Once again, exponential.
That is ABSOLUTELY NOT "the best algorithm" (UMSL's words) for solving that problem! Integer programming* would provide the mathematically optimal solution in a tiny, tiny fraction of the time that UMSL's brute-force method would.

*a simplified explanation of integer programming: solve the problem over the reals with linear programming (LP), round to the nearest integers, and if the rounded solution doesn't work, use cutting planes to add new constraints to the LP model and solve again, repeating until an integer solution is found.
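Even without integer programming, exhaustive subset enumeration is unnecessary. As a simple illustration (a heuristic, not the cutting-plane approach described above), first-fit decreasing runs in polynomial time and is provably within 11/9 · OPT + 6/9 bins of optimal:

```python
def first_fit_decreasing(items, capacity):
    """First-fit decreasing heuristic for bin packing: sort items
    largest-first, place each into the first bin with room, opening a
    new bin only when none fits. O(n^2) time, guaranteed to use at
    most 11/9 * OPT + 6/9 bins - no subset enumeration needed."""
    bins = []  # each bin is [remaining_capacity, [items]]
    for item in sorted(items, reverse=True):
        for b in bins:
            if b[0] >= item:
                b[1].append(item)
                b[0] -= item
                break
        else:
            bins.append([capacity - item, [item]])
    return [contents for _, contents in bins]
```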

I'm not trying to be clever here and find fault with a class exercise - I'm using simple examples to show that some intractable problems can actually be solved without resorting to brute force. A very good example is the travelling salesman problem: it has been solved (using cutting planes) for an instance with 85,900 towns - a search space of 85900! (about 9.6e386526).
Writing is the antidote to confusion.
It's not "how smart you are", it's "how are you smart".
Your brain doesn't work the way you want, so train it!
User avatar
towforce
Posts: 11607
Joined: Thu Mar 09, 2006 12:57 am
Location: Birmingham UK

Re: Fruit fly races on steroids?

Post by towforce »

petero2 wrote: Thu Apr 18, 2024 12:11 pm1. It is known from prior work that the game G3 is EXPTIME-complete.

Actually, there is a concise version of what the paper proves in that very article (see the paragraph starting "Other examples of EXPTIME-complete problems include the problem of evaluating a position in generalized chess").

On an 8x8 board, the maximum length of a chess game is 5,898.5 moves due to the fifty-move rule. This number grows rapidly as the board size increases.

Chess on 5x5 boards ("Minichess") has actually been solved - link.

Anyway, I'm thinking now that Peter Osterlund's post from the 6th April (link) might contain a better answer than endlessly discussing whether EXPTIME-complete means that it's impossible to create an algorithm to solve a problem that runs in less than exponential time: in that post, he said, "A potential way out is that you write "close to perfect" and not "perfect"". Not wanting to point fingers or attribute blame, but originally it was Srdja who applied the term "God Algorithm" to what I am talking about! :)

Nobody would dispute that it's possible to make a "very good" algorithm for chess that would not require an amount of time that is exponential with regard to the size of the board.

Also, earlier in this thread, one of you said that we only need to resolve chess on an 8x8 board. This is true. 100x100 chess will never be a popular game: too slow for 99.9% of people. I brought up larger boards to illustrate my belief that, once you'd worked out how to make a very good algorithm for an 8x8 board, the lessons learned could then be applied to making an extremely good algorithm for chess on an NxN board. In other words, the solution would have wide applications. This could not be said of the traditional computer chess methodology, "selective search".
Writing is the antidote to confusion.
It's not "how smart you are", it's "how are you smart".
Your brain doesn't work the way you want, so train it!
petero2
Posts: 692
Joined: Mon Apr 19, 2010 7:07 pm
Location: Sweden
Full name: Peter Osterlund

Re: Fruit fly races on steroids?

Post by petero2 »

towforce wrote: Thu Apr 18, 2024 6:35 pm Anyway, I'm thinking now that Peter Osterlund's post from the 6th April (link) might contain a better answer than endlessly discussing whether EXPTIME-complete means that it's impossible to create an algorithm to solve a problem that runs in less than exponential time:
Indeed, that is not very interesting to discuss, since it has been known that such an algorithm is impossible since the deterministic time hierarchy theorem was proven more than 50 years ago.
User avatar
towforce
Posts: 11607
Joined: Thu Mar 09, 2006 12:57 am
Location: Birmingham UK

Re: Fruit fly races on steroids?

Post by towforce »

petero2 wrote: Fri Apr 19, 2024 6:37 am
towforce wrote: Thu Apr 18, 2024 6:35 pm Anyway, I'm thinking now that Peter Osterlund's post from the 6th April (link) might contain a better answer than endlessly discussing whether EXPTIME-complete means that it's impossible to create an algorithm to solve a problem that runs in less than exponential time:
Indeed, that is not very interesting to discuss, since it has been known that such an algorithm is impossible since the deterministic time hierarchy theorem was proven more than 50 years ago.

The problem with an NxN board has clicked with me now: yesterday I wrote a proof (not a mathematically watertight one, just one good enough for myself) that it might (not "is", but "might") be possible to create a fast algorithm for large N. I was about to post it when I realised that, in my proof, the space available for such an algorithm converges towards zero very rapidly as N increases (it never reaches zero, but it shrinks so quickly that I feel safe saying such an algorithm is likely to be impractical for large values of N). This doesn't preclude creating fast "good" algorithms, though: such algorithms are known for many NP-hard problems.

If the game of chess actually has a simple deep underlying pattern which enables it to be resolved very easily with a quick algorithm (like the game of nim does - no matter how big the starting position), then it might be possible - but I don't think there's a way to find that out other than actually uncovering the deep pattern.
Writing is the antidote to confusion.
It's not "how smart you are", it's "how are you smart".
Your brain doesn't work the way you want, so train it!
User avatar
towforce
Posts: 11607
Joined: Thu Mar 09, 2006 12:57 am
Location: Birmingham UK

Re: Fruit fly races on steroids?

Post by towforce »

towforce wrote: Fri Apr 19, 2024 9:34 amThe problem with an NxN board has clicked with me now: yesterday, I wrote a proof (not a mathematically watertight one - just one that was good enough for myself) that it might (not "is", but "might") be possible to create a fast algorithm for a large N.

I'll share a simplified version of my thinking: basically, use a statistical distribution (the binomial distribution) to think about it rather than combinatorial complexity.

* if a deep underlying pattern can be found for chess which always gives the correct answer (possible because, unlike, say, the SAT problem, there is a lot of dependency between the variables in chess, so there are going to be patterns), then the problem is solved

* assume, though, that a quick algorithm one can build gets, say, at least 99.99% of the positions right

* further assume that this algorithm can be improved by tuning

* given that, unlike nim, this algorithm doesn't have a theoretical underpinning, getting the edge-case positions solved is going to require some luck

* the binomial distribution will show you that as the number of edge cases rises, the probability of getting all these edge cases by luck converges quickly towards zero

* as the size of the board increases, one would expect the number of edge cases to rise exponentially

So unless a deep pattern can be found which solves chess, a merely "good" algorithm is not going to solve it on an NxN board. As I said before, unless somebody comes up with a new way to approach solving chess, probably the only way to demonstrate that a good, fast algorithm can be created is to actually build it.
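The binomial argument above can be sketched in a few lines of Python (illustrative only, with made-up accuracy figures): the probability of getting all k independent edge cases right with per-case accuracy p is p^k, the binomial probability of zero failures, and it collapses quickly as k grows.

```python
def prob_all_correct(p, k):
    """Probability that an algorithm with per-case accuracy p gets all
    k independent edge cases right: the binomial probability of zero
    failures, which is simply p ** k."""
    return p ** k

# Even at 99.99% per-case accuracy, the chance of handling every edge
# case drops below 50% once k exceeds roughly 6,900 cases, and is
# astronomically small by a million cases.
for k in (100, 10_000, 1_000_000):
    print(k, prob_all_correct(0.9999, k))
```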
Writing is the antidote to confusion.
It's not "how smart you are", it's "how are you smart".
Your brain doesn't work the way you want, so train it!