Bitboards using 2 DOUBLE's??

Discussion of chess software programming and technical issues.

Carey
Posts: 313
Joined: Wed Mar 08, 2006 7:18 pm
Contact:

Bitboards using 2 DOUBLE's??

Post by Carey » Tue Jun 02, 2009 12:47 am

Here's a question that popped into my head the other day...

Is it possible to do bitboards using two 64 bit (53 bit mantissa) DOUBLE variables?

Without undue conversion to / from integer, of course. (Pretend conversion is difficult so it's best to avoid it if at all possible.)

I know several people have developed math packages that create a fake 128 bit (106 bit mantissa) 'quad' or 'double double' math package. So it is possible to do a lot of things if you want to be creative. (I know from experience those packages tend to be fragile, though. Most were written for non-PC FORTRAN where a 64 bit float really is 64 bits, unlike on Windows where it can be 64 bits or 80 bits or even change without your knowledge. I've had that happen. A whole lotta fun to debug a math package when you don't know your precision is changing behind your back!)

1) Setting & clearing of bits. Not too hard provided you assume the bits are in a known state. So you don't try to set a bit already set, for example. It'd just be addition & subtraction.

2) Clear specific bits. Can be done. Multiply by some huge value to make the lower bits underflow, then divide by it and add back the other bits you forced off. Clumsy, but I've done things like that before. It used to be done as a quick way to convert a double to integer because floor() was so slow.

3) XOR for the Zobrist hash. Don't think so. Although with 106 bits of mantissa, you might be able to make do with just adding & subtracting 80+ bit numbers. I suppose a single DOUBLE could work tolerably well provided the numbers don't get so big you start getting round-off errors.

4) Bit counting. I don't know...

5) Find first / next / last bit. Not easy. At least not in a convenient way. Right off hand, I can't think of any reasonable way.

6) Magic attacks... I'd say you probably could do the Kindergarten stuff but anything else might be too complicated.

7) Bit-Ops. AND & OR. Pretty hard, I think. (My way of saying not possible....)

8) And what else....?
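Items 1 and 2 can be sketched directly. This is my own illustration, not code from the thread; it assumes a board of at most 53 bits, so every value below is exactly representable in one IEEE double and every add/subtract is exact.

```python
# Set/clear of bits whose state is already known, using only FP add and
# subtract.  Illustrative sketch: the helper names are mine, and the caller
# must guarantee the precondition in each docstring.

def set_bit(board, i):
    """Assumes bit i is currently 0 in board."""
    return board + float(2 ** i)

def clear_bit(board, i):
    """Assumes bit i is currently 1 in board."""
    return board - float(2 ** i)

b = 0.0
b = set_bit(b, 3)       # board = {3}
b = set_bit(b, 42)      # board = {3, 42}
b = clear_bit(b, 3)     # board = {42}
print(b == float(2 ** 42))   # True: no rounding occurs below 2**53
```

If the precondition is violated (setting an already-set bit), the addition carries into the next bit, which is exactly why the post insists the bits be in a known state.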


I know it wouldn't actually be practical.

As I said, it was just something that popped into my head the other day and I started getting really curious.

Any thoughts on this?

Carey

bob
Posts: 20923
Joined: Mon Feb 27, 2006 6:30 pm
Location: Birmingham, AL

Re: Bitboards using 2 DOUBLE's??

Post by bob » Tue Jun 02, 2009 4:20 am

Carey wrote:Here's a question that popped into my head the other day...

Is it possible to do bitboards using two 64 bit (53 bit mantissa) DOUBLE variables?

Without undue conversion to / from integer, of course. (Pretend conversion is difficult so it's best to avoid it if at all possible.)

I know several people have developed math packages that create a fake 128 bit (106 bit mantissa) 'quad' or 'double double' math package. So it is possible to do a lot of things if you want to be creative. (I know from experience those packages tend to be fragile, though. Most were written for non-PC FORTRAN where a 64 bit float really is 64 bits, unlike on Windows where it can be 64 bits or 80 bits or even change without your knowledge. I've had that happen. A whole lotta fun to debug a math package when you don't know your precision is changing behind your back!)

1) Setting & clearing of bits. Not too hard provided you assume the bits are in a known state. So you don't try to set a bit already set, for example. It'd just be addition & subtraction.

2) Clear specific bits. Can be done. Multiply by some huge value to make the lower bits underflow, then divide by it and add back the other bits you forced off. Clumsy, but I've done things like that before. It used to be done as a quick way to convert a double to integer because floor() was so slow.

3) XOR for the Zobrist hash. Don't think so. Although with 106 bits of mantissa, you might be able to make do with just adding & subtracting 80+ bit numbers. I suppose a single DOUBLE could work tolerably well provided the numbers don't get so big you start getting round-off errors.

4) Bit counting. I don't know...

5) Find first / next / last bit. Not easy. At least not in a convenient way. Right off hand, I can't think of any reasonable way.

6) Magic attacks... I'd say you probably could do the Kindergarten stuff but anything else might be too complicated.

7) Bit-Ops. AND & OR. Pretty hard, I think. (My way of saying not possible....)

8) And what else....?


I know it wouldn't actually be practical.

As I said, it was just something that popped into my head the other day and I started getting really curious.

Any thoughts on this?

Carey
There are issues with various architectures in doing this. For example, a bitboard with the pattern "NaN" (not a number) will cause a floating exception. There are other special cases as well. However, there is no reason nor benefit for doing this, since most processors today are already 64 bits and can deal with 64 bit values natively.

I am not sure why you would want to use two doubles, since that is twice as long as a bitboard, unless you go into Gerd's wild way of thinking with his SSE stuff...

sje
Posts: 4675
Joined: Mon Mar 13, 2006 6:43 pm

Re: Bitboards using 2 DOUBLE's??

Post by sje » Tue Jun 02, 2009 10:56 am

What is old is new again, for this is what Slate and Atkin did thirty five years ago on a CDC 6400 mainframe with their Chess 4.x program.

But I'm unable to see why the idea should be making a comeback.


Re: Bitboards using 2 DOUBLE's??

Post by Carey » Tue Jun 02, 2009 1:08 pm

bob wrote:There are issues with various architectures in doing this. For example, a bitboard with the pattern "NaN" (not a number) will cause a floating exception.
NaN's and other special codes wouldn't be a problem at all because you won't ever do any math that would cause them to show up.
There are other special cases as well. However, there is no reason nor benefit for doing this, since most processors today are already 64 bits and can deal with 64 bit values natively.
I *did* say it was just something that popped into my head and I was curious about.

I even said it wasn't practical. That doesn't mean I can't be curious though.

I am not sure why you would want to use two doubles, since that is twice as long as a bitboard, unless you go into Gerd's wild way of thinking with his SSE stuff...
I'd want to use two doubles because one only has 53 bits to play with and that's not enough to hold a 64 bit bitboard.

The only way to do it would be to do some sort of 'double-double' style math package to create a 106 bit mantissa which could hold a 64 bit bitboard.
Last edited by Carey on Tue Jun 02, 2009 1:13 pm, edited 1 time in total.


Re: Bitboards using 2 DOUBLE's??

Post by Carey » Tue Jun 02, 2009 1:13 pm

sje wrote:What is old is new again, for this is what Slate and Atkin did thirty five years ago on a CDC 6400 mainframe with their Chess 4.x program.
I know about Chess 4.x.

I've read their papers and looked at the source etc.

They generally don't use floats though. They tend to treat things as easily accessible integers that happen to be stored in a 60 bit float word. I can't remember for sure, but I think right off hand they only treat them as floats when they are trying to find the next bit or something, taking advantage of the floating-point normalization hardware.

I was curious about what would happen if you couldn't even do that, and had to do almost everything with real floating-point operations.
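The normalization trick alluded to here can be shown in modern terms (my own sketch, not Slate and Atkin's actual code): frexp() hands back the exponent the FPU computed when it normalized the value, which locates the highest set bit with no loop at all.

```python
import math

# Find the highest set bit of an integer-valued double using only the FPU's
# normalization, exposed through frexp().  Sketch; find_last_bit is my name.

def find_last_bit(board):
    """board: a nonzero integer-valued double; returns index of its top bit."""
    mantissa, exp = math.frexp(board)   # board == mantissa * 2**exp, 0.5 <= mantissa < 1
    return exp - 1

print(find_last_bit(1.0))                # 0
print(find_last_bit(float(2 ** 42 + 5))) # 42
```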
But I'm unable to see why the idea should be making a comeback.
Curiosity. Isn't that enough??

Let's be honest here... *NOTHING* about computer chess is practical anymore. Everybody does it simply because they want to, because they are interested, because they are curious about something.


Re: Bitboards using 2 DOUBLE's??

Post by bob » Tue Jun 02, 2009 4:59 pm

Carey wrote:
bob wrote:There are issues with various architectures in doing this. For example, a bitboard with the pattern "NaN" (not a number) will cause a floating exception.
NaN's and other special codes wouldn't be a problem at all because you won't ever do any math that would cause them to show up.
For some values, just putting 'em in a FP register will cause a problem. When I started on Crafty in 1994, I started using doubles (did not know about long longs at the time). On a Sun, things were crashing right and left because of invalid floating point numbers. I moved over to an older VAX and danged if it didn't do the same thing. Moving FP numbers on the VAX went through a floating point register. I had to compile the program, then change all the moves to not use floating point (changed the opcode to movq, which treated the thing as an integer value 64 bits long) to make it work.

I then discovered long long which completely solved the problem.
There are other special cases as well. However, there is no reason nor benefit for doing this, since most processors today are already 64 bits and can deal with 64 bit values natively.
I *did* say it was just something that popped into my head and I was curious about.

I even said it wasn't practical. That doesn't mean I can't be curious though.

I am not sure why you would want to use two doubles, since that is twice as long as a bitboard, unless you go into Gerd's wild way of thinking with his SSE stuff...
I'd want to use two doubles because one only has 53 bits to play with and that's not enough to hold a 64 bit bitboard.

The only way to do it would be to do some sort of 'double-double' style math package to create a 106 bit mantissa which could hold a 64 bit bitboard.
Why? If you use the entire number and never treat it as a FP value, you could use the exponent bits just as easily???

BTW, one error above. You have 52 bits, not 53. The 53rd bit _must_ be a 1 bit (the IEEE "hidden" bit).


Re: Bitboards using 2 DOUBLE's??

Post by bob » Tue Jun 02, 2009 5:03 pm

sje wrote:What is old is new again, for this is what Slate and Atkin did thirty five years ago on a CDC 6400 mainframe with their Chess 4.x program.

But I'm unable to see why the idea should be making a comeback.
I don't remember them using floating point numbers. They just ran on a 60 bit word CDC and had to use 2 words to store one value...


Re: Bitboards using 2 DOUBLE's??

Post by Carey » Tue Jun 02, 2009 6:48 pm

bob wrote:
Carey wrote:
bob wrote:There are issues with various architectures in doing this. For example, a bitboard with the pattern "NaN" (not a number) will cause a floating exception.
NaN's and other special codes wouldn't be a problem at all because you won't ever do any math that would cause them to show up.
For some values, just putting 'em in a FP register will cause a problem. When I started on Crafty in 1994, I started using doubles (did not know about long longs at the time). On a Sun, things were crashing right and left because of invalid floating point numbers. I moved over to an older VAX and danged if it didn't do the same thing. Moving FP numbers on the VAX went through a floating point register. I had to compile the program, then change all the moves to not use floating point (changed the opcode to movq, which treated the thing as an integer value 64 bits long) to make it work.
Right, right. I'm well aware of those kinds of things.

You've got lots of special floating point codes, plus the FPU will try to normalize the vars every time it touches them. One thing you never ever do is use a float to pick up arbitrary data. It can change it or cause exceptions. (Technically that is also possible for integers in C. That's why you can only safely use unsigned chars to copy data. Every other built-in data type is allowed to actually change the contents upon load or store. Everybody ignores it, though, because there are so few systems it would affect.)


But I'm not talking about putting arbitrary *64 bit* data into a double and trying to work with it.

I'm talking about using just the mantissa (the integer part) to hold the bitboard data.

To hold a 64 bit bitboard you'd have to use either a software based 'double-double' (which gives 106 bits of mantissa) or use Intel's 'long double' (which Windows doesn't like.)

For the sake of this discussion, you can pretend we are working with Intel's 80 bit long double which can safely hold 64 bit integers, or that we are using a software based 'double-double' which can hold 106 bit integers, or even that a bitboard is only 50 bits and will safely fit into a regular 'double'.

Any of those are okay because....


My original question was: Can you do all the needed bitboard stuff using *only* floating point operations.

Not hold it in FPU registers or such. But actually do the needed bitboard operations using floating point operations. Or if not all, then at least most.

As near as I can tell, the only things really causing problems are AND & OR bit operations.

I can't think of any way other than to do things a bit at a time or use some sort of precomputed truth table. Both would be very cumbersome.
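The truth-table approach can at least be fleshed out as a sketch (the table and helper names are mine, not from the thread). Bytes are peeled off with floor(), which is itself a floating-point operation (FRNDINT on the x87, ROUNDSD in SSE4), and int() appears only as the array-index step the idea requires.

```python
import math

# AND of two 32-bit values held in doubles, via a precomputed byte-pair
# truth table.  Hypothetical sketch: the table is built once with integer
# ops; afterwards only FP divide/floor/multiply/subtract are used, plus
# int() purely to index the table.

AND_TABLE = [[a & b for b in range(256)] for a in range(256)]

def fp_and32(a, b):
    result, scale = 0.0, 1.0
    for _ in range(4):
        a_hi = math.floor(a / 256.0)     # division by 2**8 is exact
        b_hi = math.floor(b / 256.0)
        a_lo = a - a_hi * 256.0          # low byte, still a double
        b_lo = b - b_hi * 256.0
        result += scale * AND_TABLE[int(a_lo)][int(b_lo)]
        scale *= 256.0
        a, b = a_hi, b_hi
    return result

print(fp_and32(float(0xF0F0F0F0), float(0x0FF000FF)) == float(0x00F000F0))  # True
```

Every intermediate value stays below 2**53, so all the FP arithmetic here is exact; the 64 KB table is the cumbersome part the post anticipates.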

I then discovered long long which completely solved the problem.
There are other special cases as well. However, there is no reason nor benefit for doing this, since most processors today are already 64 bits and can deal with 64 bit values natively.
I *did* say it was just something that popped into my head and I was curious about.

I even said it wasn't practical. That doesn't mean I can't be curious though.

I am not sure why you would want to use two doubles, since that is twice as long as a bitboard, unless you go into Gerd's wild way of thinking with his SSE stuff...
I'd want to use two doubles because one only has 53 bits to play with and that's not enough to hold a 64 bit bitboard.

The only way to do it would be to do some sort of 'double-double' style math package to create a 106 bit mantissa which could hold a 64 bit bitboard.
Why? If you use the entire number and never treat it as a FP value, you could use the exponent bits just as easily???
If you never treat it as a floating point number then that kind of invalidates my original question.

I'm not wanting to use a double var to hold a bitboard. I'm curious if it's possible to actually do all the bitboard stuff *as* floating point operations.

Just sheer curiosity about whether it'd be possible to do all (or nearly all) the bitboard stuff using nothing but floating point operations.

BTW, one error above. You have 52 bits, not 53. The 53rd bit _must_ be a 1 bit (the IEEE "hidden" bit).
Sorry Bob, but no. Obvious example is the number zero. It has no bits set. (Yes, I know it uses a special exponent code to indicate zero.)

You can safely have any arbitrary 53 bit integer stored into a double without any risk of corruption. The FPU will represent it however it needs to.

Yes, there are only 52 bits available, but the FPU does what it needs to hold the full 53 bits of integer data.

It normalizes things and has a special code for zero, etc. If the 53rd bit is '1' then things are fine. If it's zero then the FPU normalizes it and the exponent is adjusted. It all works out that you can safely hold 53 bits of integer data.
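The 53-bit claim is easy to check empirically (an illustration of the point, not part of the original post): every integer of up to 53 bits round-trips through a double exactly, and the first failure is at 2**53 + 1.

```python
# Python floats are IEEE-754 binary64, so float() conversion exercises
# exactly the representation under discussion.

for n in (0, 1, 2**52, 2**53 - 1, 2**53):
    assert float(n) == n and int(float(n)) == n   # exact round-trip

# 2**53 + 1 needs 54 significant bits; it rounds to the nearest even double.
assert float(2**53 + 1) == float(2**53)
```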


*HOWEVER*, just to make you happy, I'll say that you can only hold 40 bits of mantissa in a double. Doesn't matter because I'm talking about using software based 'double-double' math that doubles the number of mantissa bits anyway.

So in this case instead of 53*2=106 it'd be 40*2=80 which is still more than enough for what I'm talking about.

Makes no difference because I'm curious about the bit operations as floating points.


Re: Bitboards using 2 DOUBLE's??

Post by bob » Tue Jun 02, 2009 7:02 pm

Carey wrote:
bob wrote:
Carey wrote:
bob wrote:There are issues with various architectures in doing this. For example, a bitboard with the pattern "NaN" (not a number) will cause a floating exception.
NaN's and other special codes wouldn't be a problem at all because you won't ever do any math that would cause them to show up.
For some values, just putting 'em in a FP register will cause a problem. When I started on Crafty in 1994, I started using doubles (did not know about long longs at the time). On a Sun, things were crashing right and left because of invalid floating point numbers. I moved over to an older VAX and danged if it didn't do the same thing. Moving FP numbers on the VAX went through a floating point register. I had to compile the program, then change all the moves to not use floating point (changed the opcode to movq, which treated the thing as an integer value 64 bits long) to make it work.
Right, right. I'm well aware of those kinds of things.

You've got lots of special floating point codes, plus the FPU will try to normalize the vars every time it touches them. One thing you never ever do is use a float to pick up arbitrary data. It can change it or cause exceptions. (Technically that is also possible for integers in C. That's why you can only safely use unsigned chars to copy data. Every other built-in data type is allowed to actually change the contents upon load or store. Everybody ignores it, though, because there are so few systems it would affect.)


But I'm not talking about putting arbitrary *64 bit* data into a double and trying to work with it.

I'm talking about using just the mantissa (the integer part) to hold the bitboard data.

To hold a 64 bit bitboard you'd have to use either a software based 'double-double' (which gives 106 bits of mantissa) or use Intel's 'long double' (which Windows doesn't like.)

For the sake of this discussion, you can pretend we are working with Intel's 80 bit long double which can safely hold 64 bit integers, or that we are using a software based 'double-double' which can hold 106 bit integers, or even that a bitboard is only 50 bits and will safely fit into a regular 'double'.

Any of those are okay because....


My original question was: Can you do all the needed bitboard stuff using *only* floating point operations.

Not hold it in FPU registers or such. But actually do the needed bitboard operations using floating point operations. Or if not all, then at least most.

As near as I can tell, the only things really causing problems are AND & OR bit operations.

I can't think of any way other than to do things a bit at a time or use some sort of precomputed truth table. Both would be very cumbersome.

I then discovered long long which completely solved the problem.
There are other special cases as well. However, there is no reason nor benefit for doing this, since most processors today are already 64 bits and can deal with 64 bit values natively.
I *did* say it was just something that popped into my head and I was curious about.

I even said it wasn't practical. That doesn't mean I can't be curious though.

I am not sure why you would want to use two doubles, since that is twice as long as a bitboard, unless you go into Gerd's wild way of thinking with his SSE stuff...
I'd want to use two doubles because one only has 53 bits to play with and that's not enough to hold a 64 bit bitboard.

The only way to do it would be to do some sort of 'double-double' style math package to create a 106 bit mantissa which could hold a 64 bit bitboard.
Why? If you use the entire number and never treat it as a FP value, you could use the exponent bits just as easily???
If you never treat it as a floating point number then that kind of invalidates my original question.

I'm not wanting to use a double var to hold a bitboard. I'm curious if it's possible to actually do all the bitboard stuff *as* floating point operations.

Just sheer curiosity about whether it'd be possible to do all (or nearly all) the bitboard stuff using nothing but floating point operations.

BTW, one error above. You have 52 bits, not 53. The 53rd bit _must_ be a 1 bit (the IEEE "hidden" bit).
Sorry Bob, but no. Obvious example is the number zero. It has no bits set. (Yes, I know it uses a special exponent code to indicate zero.)
Here we are going to have to disagree, because I teach IEEE floating point in my asm class. Zero is a special case: when the exponent field is all zero, we have an unnormalized number (perfectly legal), which eliminates that hidden 1 bit. But you can't use it. For 32-bit IEEE, you have 1 sign bit and 8 exponent bits, and 32 - 1 - 8 leaves 23 bits, not 24. Ditto for 64-bit IEEE, which has 52 bits stored in the value, plus the hidden 1 bit that is always present except for non-normalized numbers.

But you show me any way to represent a 53 bit number with the high-order bit 0 or 1, and I'll kiss you. :) You can get it to be zero with a zero exponent. You can get it to be 1 with a normal exponent. But you can't do boolean operations on that bit. It would require fiddling with the exponent instead since that 53rd bit does not exist in the actual number in IEEE format.
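The field layout being argued over can be inspected directly (my illustration): the stored significand of 1.0 is all zero, with the leading 1 implied rather than stored, while 0.0 uses the all-zero exponent encoding that has no hidden bit.

```python
import struct

# Decode the sign / biased-exponent / significand fields of a binary64
# value by reinterpreting its bytes as a 64-bit integer.

def fields(x):
    bits = struct.unpack('<Q', struct.pack('<d', x))[0]
    sign = bits >> 63
    biased_exp = (bits >> 52) & 0x7FF
    significand = bits & ((1 << 52) - 1)
    return sign, biased_exp, significand

print(fields(1.0))   # (0, 1023, 0): significand field empty; hidden bit implied
print(fields(0.0))   # (0, 0, 0): all-zero exponent, no hidden bit
```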



You can safely have any arbitrary 53 bit integer stored into a double without any risk of corruption. The FPU will represent it however it needs to.
See above. One of those 53 bits does not physically exist. Which means that to manipulate the rightmost 52 bits, you twiddle those directly; to manipulate that non-existent bit, you have to twiddle the entire left end (the exponent field) of the number rather than a single bit.

Yes, there are only 52 bits available, but the FPU does what it needs to hold the full 53 bits of integer data.

It normalizes things and has a special code for zero, etc. If the 53'rd bit is '1' then things are fine. If it's a zero then the FPU normalizes it and the exponent is adjusted. It all works out that you can safely hold 53 bits of integer data.


*HOWEVER*, just to make you happy, I'll say that you can only hold 40 bits of mantissa in a double. Doesn't matter because I'm talking about using software based 'double-double' math that doubles the number of mantissa bits anyway.

So in this case instead of 53*2=106 it'd be 40*2=80 which is still more than enough for what I'm talking about.

Makes no difference because I'm curious about the bit operations as floating points.
Those do not naturally exist in any processor I know of (IEEE math processor). One can do any arbitrary boolean function as a mathematical operation: NOT means subtract from all 1's, AND means multiply bit by bit, etc. Not going to be fast at all compared to using two normal 32-bit integer values, which do have AND/OR/XOR operations that operate on all bits at once...
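That recipe can be spelled out as a sketch (the function names are mine): extract bit i with two floor-divisions, AND by multiplying the extracted bits, NOT by subtracting from all ones. Everything stays exact for integer-valued doubles below 2**53.

```python
import math

# Boolean operations as FP arithmetic, bit by bit.  Slow by construction,
# which is exactly the point being made.

def fp_bit(x, i):
    """Extract bit i of an integer-valued double using only FP ops."""
    return math.floor(x / float(2 ** i)) - 2.0 * math.floor(x / float(2 ** (i + 1)))

def fp_not(x, width):
    """NOT: subtract from a field of all ones."""
    return float(2 ** width - 1) - x

def fp_and(a, b, width=32):
    """AND: multiply corresponding bits and re-sum."""
    out = 0.0
    for i in range(width):
        out += fp_bit(a, i) * fp_bit(b, i) * float(2 ** i)
    return out

print(fp_and(12.0, 10.0))   # 8.0  (0b1100 & 0b1010)
print(fp_not(0.0, 8))       # 255.0
```

OR falls out the same way, e.g. as a + b - AND(a, b), again with FP add and subtract only.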


Re: Bitboards using 2 DOUBLE's??

Post by Carey » Tue Jun 02, 2009 9:16 pm

bob wrote:
BTW, one error above. You have 52 bits, not 53. The 53rd bit _must_ be a 1 bit (the IEEE "hidden" bit).
Sorry Bob, but no. Obvious example is the number zero. It has no bits set. (Yes, I know it uses a special exponent code to indicate zero.)
Here we are going to have to disagree, because I teach IEEE floating point in my asm class.
Then you don't teach it very well, Sir.

I mean no disrespect, especially considering how much I respect your chess programming abilities and history. When I tell somebody "Hyatt said...", I know it's right because I know you've tested it thoroughly.

But if you are going to brag about teaching float in your asm class, then I have to stand up and say that I don't think you do it well. That in this case, you are wrong.
Zero is a special case: when the exponent field is all zero, we have an unnormalized number (perfectly legal), which eliminates that hidden 1 bit. But you can't use it. For 32-bit IEEE, you have 1 sign bit and 8 exponent bits, and 32 - 1 - 8 leaves 23 bits, not 24. Ditto for 64-bit IEEE, which has 52 bits stored in the value, plus the hidden 1 bit that is always present except for non-normalized numbers.

But you show me any way to represent a 53 bit number with the high-order bit 0 or 1, and I'll kiss you. :)
The specs say that there *are* 52 bits of genuine data space physically available, with the 53rd bit being implicitly and unchangeably set to '1'. You are saying that as well.

So, for a 53 bit integer where the 53rd bit *is* set, then there's no problem. Because you have the high bit set and 52 bits of other data. The very thing the float specs explicitly allow with the 53rd bit being set but not physically present.

For the case where you have 53 bits of data where the 53rd bit is *NOT* set then you don't actually have 53 bits of data. You have 52 bits. Or 51 or 50 or wherever the highest bit is set. In which case the 52 bits that are physically there are enough to hold it without issue. The FPU will normalize things so the highest bit is set and adjust its exponent accordingly.

For the case where there are no bits at all set, the FPU has a special flag to indicate that.

So you've covered all the bases. Bit 53 set and 52 bits of other data. Bit 53 not set and up to 52 bits of other data. No bits set.

A total of 53 bits of user data.

You can get it to be zero with a zero exponent. You can get it to be 1 with a normal exponent. But you can't do boolean operations on that bit. It would require fiddling with the exponent instead since that 53rd bit does not exist in the actual number in IEEE format.
Doing boolean operations would require some fiddling with the numbers.

It's possible some sort of MAGIC multiplication would work, but I'm not sure. I suspect it would, but I don't know the values or how many bits you could do at a time.

The best I've been able to come up with that I know would work: since we would have to use two doubles to hold 64 bits of real data, we'd keep 32 bits in each one. (Just to make things consistent across both doubles.)

We'd then add a huge value to shift our 32 bits down to the lower word.

We then use the lowest 8 bits as an index into an array of truth tables. Repeat with the other bytes of data. We are using an integer to grab the lowest 8 bits and use it as an index, but we aren't really doing math as integers.

I'm not real fond of using truth tables but that's the best I've come up with.

(The addition of huge values to shift our data to a lower part of the word was a classic x87 hack for conversion to integer, truncating, etc. The x87 was so slow doing those operations it was actually faster to do them by hand with some clever math. Not common today, but the ideas are well tested and would still work.)
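That classic magic-constant trick still works on any IEEE double (sketch; assumes the default round-to-nearest-even mode and |x| < 2**51): adding 1.5 * 2**52 pushes the fraction bits off the bottom of the significand, rounding x to an integer with no conversion instruction at all.

```python
# Round a double to the nearest integer using only FP add and subtract.
# MAGIC = 1.5 * 2**52 keeps the sum in [2**52, 2**53) for |x| < 2**51,
# where the unit in the last place is exactly 1.0.

MAGIC = float(3 * 2 ** 51)

def fp_round(x):
    # Python evaluates FP expressions as written (no reassociation),
    # so the add/subtract pair is not optimized away.
    return (x + MAGIC) - MAGIC

print(fp_round(2.3))    # 2.0
print(fp_round(2.7))    # 3.0
print(fp_round(-2.3))   # -2.0
```

In C the same idiom needs care (a compiler may fold the expression under fast-math flags), which is part of why these hacks earned their fragile reputation.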


You can safely have any arbitrary 53 bit integer stored into a double without any risk of corruption. The FPU will represent it however it needs to.
See above. One of those 53 bits does not physically exist. Which means that to manipulate the rightmost 52 bits, you twiddle those directly; to manipulate that non-existent bit, you have to twiddle the entire left end (the exponent field) of the number rather than a single bit.
1) We don't have to actually store a full 53 bits into each double. We could store just 32, that way we have some extra space to mess with if we want.

2) If we did do some floating point operation that involved the hidden 53rd bit, then the FPU would take care of that automatically. It's designed to do that.

No, it's not designed to do boolean operations, but with the right kind of math we might be able to trick it. If not, then there's always the 'cheating' method of using truth tables like I mentioned above.


Yes, there are only 52 bits available, but the FPU does what it needs to hold the full 53 bits of integer data.

It normalizes things and has a special code for zero, etc. If the 53rd bit is '1' then things are fine. If it's zero then the FPU normalizes it and the exponent is adjusted. It all works out that you can safely hold 53 bits of integer data.


*HOWEVER*, just to make you happy, I'll say that you can only hold 40 bits of mantissa in a double. Doesn't matter because I'm talking about using software based 'double-double' math that doubles the number of mantissa bits anyway.

So in this case instead of 53*2=106 it'd be 40*2=80 which is still more than enough for what I'm talking about.

Makes no difference because I'm curious about the bit operations as floating points.
Those do not naturally exist in any processor I know of (IEEE math processor). One can do any arbitrary boolean function as a mathematical operation: NOT means subtract from all 1's, AND means multiply bit by bit, etc. Not going to be fast at all compared to using two normal 32-bit integer values, which do have AND/OR/XOR operations that operate on all bits at once...
No, they don't exist. I thought I was pretty clear about that in my first post. And it was one of the reasons I was asking whether anybody had thought about this kind of useless stuff.

And no, it's not going to be fast. But who cares? As I've said several times this wasn't a practical method. It was just something I got curious about.

Just a "Can it be done?" kind of thing.
