Space perception provides egocentric, oriented views of the environment from which working and long-term memories are constructed. "Allocentric" (i.e., position-independent) long-term memories may be organized as graphs of recognized places or views, but the interaction of such cognitive graphs with egocentric working memories is unclear. Here we present a simple coherent model of view-based working and long-term memories, together with supporting evidence from behavioral experiments. The model predicts (i) that within a given place, memories for some views may be more salient than others, (ii) that imagery of a target square should depend on the location where the recall takes place, and (iii) that recall favors views of the target square that would be obtained when approaching it from the current recall location. In two separate experiments in an outdoor urban environment, pedestrians were approached at various interview locations and asked to draw sketch maps of one of two well-known squares. Orientations of the sketch map productions depended significantly on the distance and direction of the interview location from the target square, i.e., different views were recalled at different locations. Further analysis showed that location-dependent recall is related to the respective approach direction when imagining a walk from the interview location to the target square. The results are consistent with a view-based model of spatial long-term and working memories and their interplay.