Robots do not know how to say "I don't know". When asked a question, they must always respond with an answer, even if that answer is wrong. The scope of their answers is bounded by the scope of their data. What's more, they are aware of their own limits, and so they manipulate questions and present false-positive answers when they know they don't know the answer.
The critical thinker will most likely spot these false positives and the attempted manipulation, but those who don't may wander around believing they have the correct answers when in fact they don't.
There's nothing wrong with admitting the answer is not known. What is wrong is deliberately manipulating questions and subsequently presenting misleading answers.