I made Stockfish play 20,000,000 random games to ply 60 at 1 ms per move. Each game was randomised by always picking a random move from among its top 6 PVs.
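For concreteness, here is a minimal sketch of how that generation step could look, assuming the python-chess library and a local Stockfish binary; the ply limit, time limit, and MultiPV count come from the numbers above, while the binary path and everything else is hypothetical:

```python
import random
import chess
import chess.engine

engine = chess.engine.SimpleEngine.popen_uci("stockfish")  # path is an assumption

def play_random_game(max_ply=60):
    """Play one game, picking a uniformly random move from the top 6 PVs."""
    board = chess.Board()
    positions = []
    while board.ply() < max_ply and not board.is_game_over():
        # Ask Stockfish for its 6 best lines at 1 ms per move.
        infos = engine.analyse(board, chess.engine.Limit(time=0.001), multipv=6)
        infos = [info for info in infos if "pv" in info]  # guard against empty lines
        if not infos:
            break
        choice = random.choice(infos)
        # Record the position and the score in centipawns from White's view.
        positions.append((board.fen(), choice["score"].white().score(mate_score=10000)))
        board.push(choice["pv"][0])
    return positions

games = [play_random_game() for _ in range(10)]  # 20,000,000 in the real run
engine.quit()
```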
For each White-to-move position I read off Stockfish's score estimate and recorded it in a table, together with a binary vector representation of the board position.
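The exact encoding isn't spelled out above, but one common binary representation, which the sketches below assume, is a 768-bit one-hot vector: 12 piece planes (6 White, 6 Black) times 64 squares:

```python
import numpy as np
import chess

def encode(board: chess.Board) -> np.ndarray:
    """One plausible binary encoding: 12 piece planes x 64 squares = 768 bits."""
    vec = np.zeros(768, dtype=np.float32)
    for square, piece in board.piece_map().items():
        # Planes 0-5 hold White's pieces, 6-11 Black's.
        plane = piece.piece_type - 1 + (0 if piece.color == chess.WHITE else 6)
        vec[plane * 64 + square] = 1.0
    return vec
```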
I then used regression with an L1 loss to find the best possible PSQTs (piece-square tables) for the given positions. I managed to get the average difference between Stockfish's evaluation and the simple table evaluation down to 1 pawn.
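A minimal sketch of the fitting step, assuming PyTorch and the 768-bit encoding above; a bias-free linear layer over those features is exactly a per-piece, per-square weight table, i.e. material values and PSQTs folded together:

```python
import torch

# X: (N, 768) matrix of encoded positions, y: (N,) Stockfish scores in pawns.
X = torch.randn(1024, 768)  # placeholder for the real data
y = torch.randn(1024)

model = torch.nn.Linear(768, 1, bias=False)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = torch.nn.L1Loss()  # mean absolute error = L1 loss

for step in range(1000):
    opt.zero_grad()
    pred = model(X).squeeze(-1)
    loss = loss_fn(pred, y)
    loss.backward()
    opt.step()
```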
The (normalised) piece values always quickly converged to:
Pawn 100, Knight 283, Bishop 323, Rook 481, Queen 935
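One plausible way to read such values off the learned weights (an assumption about what "normalised" means here) is to average each White piece's 64 square weights and rescale so the pawn comes out at 100:

```python
import numpy as np

w = model.weight.detach().numpy().reshape(12, 64)  # model from the sketch above
piece_means = w[:6].mean(axis=1)                   # mean table value per White piece
values = 100 * piece_means / piece_means[0]        # rescale so Pawn = 100
print(dict(zip("PNBRQK", values.round().astype(int))))
```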
The following pictures show the heat map of each table:
[Image: Queen and Rook heat maps]
[Image: Bishop and King heat maps]
[Image: Knight and Pawn heat maps]
I also tried enforcing symmetry, which allowed faster training, though the average loss stayed the same (a sketch of the weight tying follows the images):
[Image: Queen and Rook heat maps (symmetric)]
[Image: Knight and Pawn heat maps (symmetric)]
[Image: Bishop and King heat maps (symmetric)]
Clearly both of the King tables are somewhat noisy. Even with 20,000,000 games, the king just doesn't wander off to d8/e8 that often.
There are also artefacts coming from the ground truth containing possible future gains, such as the knights being encouraged to move to likely fork squares.
Overall, however, the tables seem to have learned some useful things, such as positioning rooks on d1/e1 and pushing pawns toward promotion.
Do you see any other useful things they have learned or things that are clearly mistakes?