Ha - the thing to think about here is: pretend you were asked why you labeled a picture of a horse as a horse.
You might say something like "well, it has hooves, and a mane, and a long snout, and four legs, etc." But that sort of answer from a computer would probably be dismissed as too generic, since the description could fit any number of animals.
The issue is that, to you, a horse is some complicated 'average' of all the examples of horses you've seen in your lifetime, run through a complicated mechanism before you classify it. Specifically, probabilities over the different features your eyes have extracted from seeing them.
Similarly, understanding why a neural network is doing something is very possible. The problem is that the answer doesn't mean much when it's a list of, say, hundreds of vectors and weightings that contributed to the prediction.
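To make that last point concrete, here's a toy sketch (plain Python, all numbers and sizes invented for illustration) of what a model's honest "explanation" actually looks like: a long list of weighted feature contributions, none of which maps onto a human concept like "mane" or "hooves".

```python
import random

random.seed(0)

# Pretend the network has learned hundreds of visual features.
n_features = 300
weights = [random.uniform(-1, 1) for _ in range(n_features)]   # learned weights
features = [random.random() for _ in range(n_features)]        # activations for one image

# Each feature's "contribution" to the horse score is weight * activation.
contributions = [w * f for w, f in zip(weights, features)]
score = sum(contributions)

# The honest answer to "why did you say horse?" is this list of 300 numbers.
print(f"score = {score:.3f}")
print("five largest contributions:", sorted(contributions, reverse=True)[:5])
```

Asking the model to "explain itself" just surfaces those 300 raw numbers; turning them into something like "it has a mane" is a separate, much harder problem.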