According to this article, "new evidence suggests that, to our brains, reading and hearing a story might not be so different." I have serious doubts about this study (33 page PDF), and even more about the reporting. So much is left to interpretation. The study uses brain scan analyses, presented as "color-coded maps of the brain" that "show the semantic similarities during listening and reading," based on volumetric pixels, or voxels. But when we read, at least some of us sound out the words in our minds (I know I do, some of the time). Why wouldn't the voxels (assuming they are a thing at all, which I doubt) simply correspond to sounds? There's no reason to believe they carry any semantic import at all. Here's the phrenology, er, I mean 'pragmatic semantic atlas' (a zip file from Box containing an .npz file, a format from the Python NumPy library that stores array data in a compressed zip archive).
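For readers curious what an .npz file actually is: it is nothing more than a zip archive of NumPy arrays, one `.npy` entry per array. A minimal sketch of writing and inspecting one, assuming a hypothetical filename `atlas.npz` and an invented array name `voxels` (the real atlas download will use its own names):

```python
import numpy as np

# Create a compressed .npz archive with one named array.
# (.npz is a zip archive; savez_compressed uses deflate compression.)
np.savez_compressed("atlas.npz", voxels=np.zeros((3, 3)))

# Load it lazily and inspect the contents.
with np.load("atlas.npz") as data:
    print(data.files)            # names of the stored arrays
    print(data["voxels"].shape)  # shape of one array
```

Running `data.files` first is a handy way to discover what a downloaded archive contains before indexing into it.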