Could humans outsmart Stockfish?
Probably not, but apparently even AIs can get overconfident (or something enough like it).
Go and chess are sufficiently different that what apparently happened to KataGo can’t happen to, say, Stockfish, but could there be some analogous glitch? I’m inclined to doubt it, but that’s just a guess based on not having seen anything like that in a long time. (The only way that an engine might get tricked is in a fortress situation, but even if it could be led astray there, good luck getting to a position where the possibility of a fortress might arise.)
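The fortress point comes down to the search horizon: a depth-limited search can see a material gain but stop before the line where the defender shows the gain leads nowhere. Here is a minimal toy sketch (not a real chess engine; the game graph and evaluations are invented for illustration) of how a shallow minimax can misjudge a "fortress":

```python
# Toy game graph: the attacker can "grab" material, but a few plies
# later the defender reaches a fortress and the true result is a draw.
# All node names and values are made up for this sketch.
MOVES = {
    "start":    ["grab", "hold"],   # attacker to move
    "grab":     ["shuffle1"],       # defender to move: heads for the fortress
    "hold":     [],                 # quiet line, terminal
    "shuffle1": ["shuffle2"],
    "shuffle2": ["fortress"],
    "fortress": [],                 # terminal: fortress built, draw
}

# Static (material-like) evaluation from the attacker's point of view.
EVAL = {"start": 0, "grab": 3, "hold": 0,
        "shuffle1": 3, "shuffle2": 3, "fortress": 0}

def minimax(node, depth, attacker_to_move=True):
    """Depth-limited minimax; falls back to the static eval at the horizon."""
    children = MOVES[node]
    if depth == 0 or not children:
        return EVAL[node]
    values = [minimax(c, depth - 1, not attacker_to_move) for c in children]
    return max(values) if attacker_to_move else min(values)

# A shallow search sees only the material grab and thinks the attacker wins:
print(minimax("start", 2))   # 3
# A deeper search sees past the horizon to the fortress, a draw:
print(minimax("start", 6))   # 0
```

Real engines mitigate this with much deeper search, extensions, and tablebases, which is why genuine fortress misjudgments are rare in practice, as the post suggests.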
But some of you may know far better than I do; if so, please weigh in. As for the Go story, have a look here.
Another point: then there is composition itself, which is not easy at all. There have been attempts to computerize composing (compositions are not puzzles; they are a level higher), but so far these programs cannot outmatch humans.
It is a matter of resources, though: if enough development resources (time being one) were poured into making a chess engine composer, I think a superhuman one would be possible.
Yes, it depends on the position. Chess engines are very strong in positions where the opponent (or the position) isn't silly or unusual. Of course, the positions that confuse computers are diminishing over time, because there is a sort of "slow race": composers try to find such positions, and developers try to cover them.
https://www.youtube.com/watch?v=15nuJdAUW0s