2 Comments

Also note what ChessBase themselves say about whether Let's Check can be manipulated. It appears only in the German manual; anyone able to read German (I know this is the case for Ken Regan) can check the original at http://help.chessbase.com/CBase/14/Deu/index.html?grundlagen.htm - for others, here is my partial translation:

"Because Let's Check is open for all engines, it is possible to use old, bad, or manipulated engines. Destructive things will happen, as always when people in some way contribute to a community on the Internet. ..... The system is self-correcting: unconfirmed variations will disappear over time and also outdated results from old engines will gradually vanish."

Here "old engines" probably means "old analysis", not analysis from old engines submitted recently for a "destructive" purpose. Some people suggest that this was done by FM Yosha Iglesias, or rather her anonymous source "gambit-man" - there seems to be no 100% correlation with modern engines. self-correction may happen but could take days, weeks or months.

Let's Check can be used for its stated purpose - better engine analysis than you could manage yourself, with your choice of which engines from the cloud collection you prefer. It should not, however, be used for cheating detection: it can be manipulated, and the 100% verdict seems rather meaningless - a move matching one engine out of 20, 50 or 100 still counts fully (not as 5%, 2% or 1%), so several or many engines together can produce a 100% verdict for any game.
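To make that arithmetic concrete, here is a minimal sketch of how such a per-move, any-engine match criterion behaves. The function name and the toy data are mine and purely hypothetical - this is not ChessBase's actual implementation, just the counting rule described above:

```python
# Sketch (assumption): "engine correlation" counts a move as matched if ANY
# engine in the cloud collection suggested it. Adding engines can therefore
# only raise the score, never lower it.
from typing import Dict, List

def engine_correlation(played_moves: List[str],
                       cloud_suggestions: List[Dict[str, str]]) -> float:
    """Fraction of played moves matching at least one engine's top choice.

    cloud_suggestions[i] maps engine name -> suggested move for position i.
    """
    matched = 0
    for move, suggestions in zip(played_moves, cloud_suggestions):
        if move in suggestions.values():   # one engine out of N is enough
            matched += 1
    return matched / len(played_moves)

# Toy game: 4 moves, 3 engines per position (hypothetical data).
played = ["e4", "Nf3", "Bb5", "O-O"]
cloud = [
    {"EngineA": "e4",  "EngineB": "d4",  "OldEngineX": "e4"},
    {"EngineA": "Nc3", "EngineB": "Nf3", "OldEngineX": "c3"},
    {"EngineA": "Bc4", "EngineB": "d3",  "OldEngineX": "Bb5"},  # only the odd engine matches
    {"EngineA": "O-O", "EngineB": "O-O", "OldEngineX": "a3"},
]
print(engine_correlation(played, cloud))  # 1.0, i.e. a "100%" verdict
```

Note that the third move is matched only by the outlier engine, yet it contributes exactly as much to the "100%" as the moves every engine agrees on.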

Others apparently used the PGN-Spy software to find "evidence" against Niemann. The developer, MGleason, clearly states - both on GitHub, where the program can be downloaded, and later on Reddit - that it is "quick and dirty": at most a reason for further investigation with more advanced methods, not evidence of guilt.

Oct 3, 2022·edited Oct 3, 2022

See (i.e., hear) GM David Smerdon discuss Let's Check at 1:20:00 on the recent Ben Johnson podcast at https://podcasts.apple.com/gb/podcast/perpetual-chess-podcast/id1185023674?i=1000580966031. This supplements what I've said about massive confirmation bias from counting a move at any time (by any one of multiple engines found in the cloud) as a match. If the utility is run backwards through a game while preserving hash, as the ChessBase/Fritz "Blundercheck" utility used to do, then you get upwards of 10 percentage points of bias from the resulting higher value of played moves in positions with clear advantage.
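As a rough back-of-the-envelope illustration of that "any one of multiple engines" inflation (my simplification, not a real model - engines are strongly correlated rather than independent, so treat the numbers as qualitative only):

```python
# If each of n engines independently suggested a strong player's move with
# probability p, the chance that AT LEAST ONE of them matches grows quickly
# with n, so an "any engine counts" criterion inflates apparent match rates.

def any_engine_match_rate(p: float, n_engines: int) -> float:
    """P(at least one of n independent engines suggests the played move)."""
    return 1.0 - (1.0 - p) ** n_engines

for n in (1, 3, 10, 20):
    print(n, round(any_engine_match_rate(0.55, n), 3))
# prints roughly: 1 0.55, 3 0.909, 10 1.0, 20 1.0
```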

And see my appearance yesterday with Jim Eade on Sasha Starr's chess show for some updates: https://www.youtube.com/watch?v=KsxmmEg7_U4
