06-05-2023, 07:07 PM
#2839
Franchise Player
Join Date: Mar 2006
Location: Shanghai
Very cool account of some of the tech that will make up the user experience for this new device. Both cool and creepy.
https://twitter.com/user/status/1665792422914453506
Quote:
...
The work I did supported the foundational development of Vision Pro, the mindfulness experiences, ▇▇▇▇▇▇ products, and also more ambitious moonshot research with neurotechnology. Like, predicting you’ll click on something before you do, basically mind reading. I was there for 3.5 years and left at the end of 2021, so I’m excited to experience how the last two years brought everything together. I’m really curious what made the cut and what will be released later on.
...
Generally as a whole, a lot of the work I did involved detecting the mental state of users based on data from their body and brain when they were in immersive experiences.
So, a user is in a mixed reality or virtual reality experience, and AI models are trying to predict if you are feeling curious, mind wandering, scared, paying attention, remembering a past experience, or some other cognitive state. And these may be inferred through measurements like eye tracking, electrical activity in the brain, heartbeats and rhythms, muscle activity, blood density in the brain, blood pressure, skin conductance, etc.
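To make the idea concrete, here's a minimal sketch of that kind of multimodal inference: per-sensor windows are collapsed into summary features and mapped to a coarse state label. Everything here is invented for illustration (the sensor names, thresholds, and rule-based stand-in for the actual ML models), not Apple's pipeline.

```python
# Hypothetical multimodal cognitive-state inference sketch. The signals
# mirror those listed in the quote (pupil size, heart rate, skin
# conductance); the thresholds and state labels are illustrative only.
from dataclasses import dataclass
from statistics import mean

@dataclass
class SensorWindow:
    pupil_mm: list[float]    # eye-tracker pupil diameter samples (mm)
    heart_bpm: list[float]   # heart-rate samples (beats per minute)
    skin_uS: list[float]     # skin-conductance samples (microsiemens)

def features(w: SensorWindow) -> dict[str, float]:
    """Collapse each raw signal window into one summary feature."""
    return {
        "pupil": mean(w.pupil_mm),
        "hr": mean(w.heart_bpm),
        "eda": mean(w.skin_uS),
    }

def infer_state(w: SensorWindow) -> str:
    """Toy rule-based stand-in for the ML models the quote mentions."""
    f = features(w)
    if f["hr"] > 100 and f["eda"] > 8.0:
        return "scared"
    if f["pupil"] > 5.0:
        return "paying attention"  # dilation as a crude arousal proxy
    return "mind wandering"
```

A real system would replace the hand-written rules with trained classifiers, but the shape of the problem (raw signal windows in, a discrete cognitive state out) is the same.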
There were a lot of tricks involved to make specific predictions possible, which the handful of patents I’m named on go into detail about. One of the coolest results involved predicting a user was going to click on something before they actually did. That was a ton of work and something I’m proud of. Your pupil reacts before you click in part because you expect something will happen after you click. So you can create biofeedback with a user's brain by monitoring their eye behavior, and redesigning the UI in real time to create more of this anticipatory pupil response. It’s a crude brain computer interface via the eyes, but very cool. And I’d take that over invasive brain surgery any day.
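The "predict the click before it happens" trick could be sketched as a simple baseline comparison on pupil diameter: flag a click as imminent when the recent samples rise some fraction above the running baseline. The window sizes and the 3% rise threshold are assumptions made up for this example, not values from the patents.

```python
# Illustrative pre-click detector based on the anticipatory pupil
# response described in the quote. Parameters are invented for the sketch.
from statistics import mean

def click_imminent(pupil_mm: list[float],
                   baseline_n: int = 20,
                   recent_n: int = 5,
                   rise: float = 0.03) -> bool:
    """True if the last `recent_n` pupil samples sit `rise` (fractionally)
    above the mean of the `baseline_n` samples preceding them."""
    if len(pupil_mm) < baseline_n + recent_n:
        return False  # not enough history to establish a baseline
    baseline = mean(pupil_mm[-(baseline_n + recent_n):-recent_n])
    recent = mean(pupil_mm[-recent_n:])
    return recent > baseline * (1.0 + rise)
```

In the biofeedback framing from the quote, a positive detection wouldn't just pre-load the click target; the UI could also be tuned in real time to elicit more of this anticipatory response.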
Other tricks to infer cognitive state involved quickly flashing visuals or sounds to a user in ways they may not perceive, and then measuring their reaction to it.
Another patent goes into detail about using machine learning and signals from the body and brain to predict how focused or relaxed you are, or how well you are learning, and then updating virtual environments to enhance those states. So, imagine an adaptive immersive environment that helps you learn, or work, or relax by changing what you’re seeing and hearing in the background.
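That closed loop could be sketched as a predicted state driving an environment parameter, which in turn shifts the next prediction. The proportional update rule, target, and gain below are assumptions for illustration, not anything from the patent.

```python
# Hedged sketch of the adaptive-environment loop: a predicted focus score
# in [0, 1] nudges a background-dimming parameter toward whatever level
# keeps focus near a target. All constants are invented for the example.
def adapt_ambience(focus: float, dimness: float,
                   target: float = 0.8, gain: float = 0.5) -> float:
    """Dim background visuals more when predicted focus falls short.

    focus:   predicted focus from the body/brain models, in [0, 1]
    dimness: current background dimming, in [0, 1]
    """
    error = target - focus      # how far below the desired focus level
    dimness += gain * error     # dim more when focus is low, less when high
    return min(1.0, max(0.0, dimness))  # clamp to a valid dimming level
```

Called once per prediction window, this forms the feedback cycle the quote describes: sense, predict, adapt the environment, sense again.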
__________________
"If stupidity got us into this mess, then why can't it get us out?"