The information would probably be delivered raw into our minds. We’d just know what time it is, what the weather will be, who’s calling us and what the news is saying, without having to process the information.
I wonder if people with hyperphantasia will still receive the information through whatever senses their imagination is overactive in.
I have aphantasia, so I’m curious what the experience would be like for that as well.
Like if the whole thing is designed to evoke mental imagery rather than actual sight, what happens to those who don’t have that ability?
Maybe we aphantasics will get a version with mental voice narration instead of mental images, much like blind people have today for visual media and information.
I wonder how bad that would be when competing with the inner monologue… cuz my brain talks to me a lot.
It could create a really bad loop of the inner monologue and the implant voice interacting and discussing with each other.
I imagine the information would come to you the same as any thought.
Maybe, but if it’s designed for a brain that works “normally”, the way a thought “works” (or is served up) might be incompatible.
For example, consider the prevalence of the phrase “close your eyes and picture X”, particularly in meditation and other relaxation techniques. I am incapable of doing this exercise (and for most of my life I was very confused by the phrase). Most people, from what I’ve been able to gather, are incapable of imagining a mind without that inner picture, so why would they design the interface to accommodate a deficit they can’t even conceive of?