The overall goal of the following experiment is to demonstrate that interacting with a virtual environment allows for the transfer of spatial navigation skills that can later be assessed in the actual physical environment. This is achieved by first learning the keystrokes used to navigate the virtual environment during gameplay. Next, the participants learn to identify sounds that convey the identity of objects and their relative location.
After gameplay, a series of virtual and physical navigation tasks is presented in order to assess the participants' spatial understanding of the environment. The results show that participants are able to navigate through a real physical building that was represented by the virtual environment during gameplay. The main advantage of this technique over existing methods, such as standard orientation and mobility training, is that the spatial layout of an unfamiliar building can be learned through virtual exploration in a safe and controlled manner.
Demonstrating this procedure will be Aaron Connors and Lindsey Aino, both of whom are study coordinators in my laboratory. Participants in this study are legally blind and have no previous knowledge of the spatial layout of the target building to be learned, nor are they aware of the overall purpose of gameplay with regard to learning that layout. To begin, provide the participant with a blindfold and headphones.
Make sure that the headphones are properly positioned over the ears. Next, instruct the participant on how to use the assigned keys and the information presented by the audio cues in the Audio-based Environment Simulator (AbES). Specific keystrokes, as seen here, are used to navigate through and explore the virtual environment.
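For readers interested in how such a keystroke-to-step mapping could work, here is a minimal sketch in Python; the specific key bindings and the grid-based movement model are illustrative assumptions, not the actual AbES controls.

```python
# Illustrative sketch of a keystroke-to-movement mapping in which each virtual
# step advances the player by one grid cell of the modeled building.
# The key bindings and grid model are assumptions, not the actual AbES controls.

HEADINGS = ["north", "east", "south", "west"]
STEP = {"north": (0, 1), "east": (1, 0), "south": (0, -1), "west": (-1, 0)}

class Player:
    def __init__(self, x=0, y=0, heading="north"):
        self.x, self.y, self.heading = x, y, heading

    def handle_key(self, key):
        """Translate one keystroke into one virtual step or a 90-degree turn."""
        if key == "up":        # take one step forward
            dx, dy = STEP[self.heading]
            self.x, self.y = self.x + dx, self.y + dy
        elif key == "left":    # turn 90 degrees counterclockwise
            self.heading = HEADINGS[(HEADINGS.index(self.heading) - 1) % 4]
        elif key == "right":   # turn 90 degrees clockwise
            self.heading = HEADINGS[(HEADINGS.index(self.heading) + 1) % 4]
        return self.x, self.y, self.heading
```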
Each virtual step approximates one step in the real physical building. After explaining the controls, explain the premise and the rules of the game. Inform the participant of the various audio cues used during gameplay.
Examples include the sound of locating jewels and the sound of monsters nearby. As the user navigates through the building, auditory and spatial information is acquired sequentially and dynamically updated after each step taken in the virtual environment. Text-to-speech is used to provide further information regarding the user's current location, orientation, and heading, as well as the identity of objects and obstacles in their path. The spatial localization of sounds is updated to match the user's egocentric heading.
For example, if a door is located on the person's right side, the knocking sound is heard in the user's right ear. If the person then turns around 180 degrees so that the same door is located on their left side, the same knocking sound is heard in the left channel. Allow the participant to play the game for three 30-minute sessions.
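The egocentric updating of sound sources just described can be sketched as follows; the panning function, angle convention, and gain model are assumptions for illustration and are not taken from the AbES source code.

```python
import math

def stereo_pan(user_x, user_y, user_heading_deg, obj_x, obj_y):
    """Return left/right channel gains for an object's sound based on the
    user's egocentric heading. Illustrative assumption: simple linear panning.
    """
    # Absolute bearing from the user to the object (0 deg = +y axis, clockwise).
    bearing = math.degrees(math.atan2(obj_x - user_x, obj_y - user_y))
    # Relative angle: positive means the object is to the user's right.
    relative = (bearing - user_heading_deg + 180) % 360 - 180
    pan = math.sin(math.radians(relative))   # -1 = fully left, +1 = fully right
    left_gain = (1 - pan) / 2
    right_gain = (1 + pan) / 2
    return left_gain, right_gain

# A door directly to the right of a north-facing user is heard in the right ear:
print(stereo_pan(0, 0, 0, 1, 0))      # ~ (0.0, 1.0)
# After the user turns 180 degrees, the same door is heard in the left ear:
print(stereo_pan(0, 0, 180, 1, 0))    # ~ (1.0, 0.0)
```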
Note any difficulties or challenges occurring during gameplay. Following each training session, provide positive reinforcement and any clarifications or further instruction. Later, score their performance on gameplay by tallying the number, time, and location of the jewels the participant finds. To test performance on the virtual navigation task, explain to the participant that they will complete 10 predetermined navigation tasks presented sequentially by the AbES software.
The paths are of comparable difficulty and are chosen based on 10 predetermined pairings of start and stop locations. Inform the participant that they will have a maximum of six minutes to complete each task.
At the start of each task, instructions describing the start location and the target destination are provided automatically. Timing starts automatically once the participant receives the audio instructions and ends once they arrive at the target location. Outcome measures are automatically recorded and include the successful completion of the navigation task and the time taken to reach the target.
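A minimal sketch of this kind of automated timing and outcome logging is shown below; the field names, the `navigate` placeholder, and the handling of the six-minute limit are hypothetical and only illustrate the recorded measures.

```python
import time

MAX_TASK_SECONDS = 6 * 60   # six-minute limit per navigation task

def run_virtual_task(task_id, start_location, target_location, navigate):
    """Time one virtual navigation task and record its outcome.

    `navigate` is a placeholder callable standing in for the participant's
    gameplay; it should return True when the target is reached and False on
    failure or timeout. This is an illustrative sketch, not the AbES logger.
    """
    t0 = time.monotonic()                      # timing starts after the audio instructions
    reached = navigate(start_location, target_location, MAX_TASK_SECONDS)
    elapsed = time.monotonic() - t0
    return {
        "task": task_id,
        "start": start_location,
        "target": target_location,
        "success": bool(reached) and elapsed <= MAX_TASK_SECONDS,
        "time_s": round(elapsed, 1),
    }
```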
To test the participant's performance in the physical location, explain that they will be given 10 predetermined navigation tasks under the supervision of an experienced investigator. Inform the participant that they will have a maximum of six minutes to complete each task and that they are allowed to use their cane for mobility support.
Be prepared with a stopwatch, the list of navigation tasks, and a clipboard for manual scoring. Square off the participant and describe the start location and the target destination. Timing begins immediately once the subject takes their first step and ends when the participant verbally reports arriving at the destination.
The successful completion of the task and the time taken to reach the target are recorded. Next, explain to the participant that they will complete five navigation tasks with the goal of exiting the building using the shortest route possible, under the supervision of an experienced investigator. Inform the participant that they will have a maximum of six minutes to complete each navigation task and that they are allowed to use their cane for mobility support.
Five predetermined starting locations are used such that three exit paths of different lengths and complexity are possible. Be prepared with a stopwatch, the list of navigation tasks, and a clipboard for manual scoring. Square off the participant and describe the start location.
Timing begins immediately when the participant takes their first step and ends when they verbally report arriving at an exit door of the building. The successful completion of the drop-off task, the path followed, and the time taken to exit the building are recorded. Furthermore, the paths are scored such that the shortest path is worth more points than the longer ones.
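The path-weighted scoring could be tallied along the lines of the sketch below; the specific point values are hypothetical, since the protocol only states that the shortest path earns more points than the longer ones.

```python
# Hypothetical point values: the protocol only specifies that the shortest of the
# three possible exit paths is worth more points than the longer ones.
PATH_POINTS = {"shortest": 3, "intermediate": 2, "longest": 1}

def score_drop_off(trials):
    """Sum path-weighted scores over drop-off trials.

    Each trial is a dict like {"path": "shortest", "success": True, "time_s": 52.0}.
    Unsuccessful trials (no exit reached within the time limit) score zero.
    """
    return sum(PATH_POINTS[t["path"]] for t in trials if t["success"])

example_trials = [
    {"path": "shortest", "success": True, "time_s": 48.0},
    {"path": "intermediate", "success": True, "time_s": 93.0},
    {"path": "shortest", "success": False, "time_s": 360.0},
]
print(score_drop_off(example_trials))   # 5
```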
The average performance of three participants on the navigation tasks is presented here. The percentage correct for the virtual and physical navigation tasks illustrates a high level of success on both. The time taken to navigate to the target was longer for virtual navigation than for physical navigation.
Performance on the drop-off task suggests that participants often selected the shortest route possible to exit the building, as evidenced by the shorter average time taken to reach an exit. The results shown here are from one individual participant on all three navigation tasks.
During virtual navigation, it took 79 seconds to complete the task, as seen by the path marked in yellow. Performance on the same path in the physical building took 46 seconds. In the drop-off task, the participant took the shortest path possible to exit the building.
Don't forget that working with blind participants using new technology requires encouragement. We're grateful for their time and willingness to participate in this demonstration and study.