So we got a question on my AI exam about the Imitation Game (the Turing test),
and I think it's an absolute shit-tier question. It goes as follows:
Do you believe this is an adequate test for machine intelligence? Justify your answer.
And my honest opinion is that you can't measure a fish's intelligence by measuring its ability to climb. So why the fuck should a machine have to mimic a human? It's completely counterintuitive.
A machine should outperform a human. In the Imitation Game, a machine asked arithmetic questions is supposed to introduce deliberate errors (i.e. make mistakes on purpose) just to mimic a human's inability to add fuckhuge numbers. Shouldn't a better test make the computer run past human levels instead? Have a human compete against the machine, and if the machine wins, it's the new intelligent being.
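To show how backwards that incentive is, here's a rough Python sketch of the kind of deliberate dumbing-down the test rewards. The function and error model are my own toy illustration; the only part taken from Turing's paper ("Computing Machinery and Intelligence") is his arithmetic example, where the machine, asked to add 34957 and 70764, pauses about 30 seconds and answers 105621, off by 100:

```python
import random
import time

def human_like_add(a: int, b: int, error_rate: float = 0.1) -> int:
    """Add two integers the way a distracted human might: slowly and
    sometimes wrongly, which is exactly what the Imitation Game rewards."""
    result = a + b
    # Pause roughly in proportion to how many digits a person would
    # have to carry through (capped so the demo stays quick).
    time.sleep(min(0.1 * len(str(abs(result))), 2.0))
    if random.random() < error_rate:
        # Botch a single digit, the way a human slips on a carry.
        place = 10 ** random.randrange(len(str(abs(result))))
        result += random.choice([-1, 1]) * place
    return result

if __name__ == "__main__":
    # Turing's own example: the true sum of 34957 and 70764 is 105721,
    # but the machine answers 105621 after a long pause.
    print(human_like_add(34957, 70764, error_rate=1.0))
```

The machine can compute the right answer instantly; every line after `result = a + b` exists purely to hide that fact from the interrogator.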
What do you think?
How would you propose a better test?
How would you answer this question?
This question was given on my exam, so it's technically not a homework question, just an interesting one.
