In a lexicographic semiorders model for preference, cues are searched in a subjective order, and an alternative is preferred if its value on a cue exceeds those of other alternatives by a threshold Δ, akin to a just noticeable difference in perception. We generalized this model from preference to inference and refer to it as Δ-inference. Unlike with preference, where accuracy is difficult to define, the problem a mind faces when making an inference is to select a Δ that can lead to accurate judgments. To find a solution to this problem, we applied Clyde Coombs's theory of single-peaked preference functions. We show that the accuracy of Δ-inference can be understood as an approach-avoidance conflict between the decreasing usefulness of the first cue and the increasing usefulness of subsequent cues as Δ grows larger, resulting in a single-peaked function between accuracy and Δ. The peak of this function varies with the properties of the task environment: The more redundant the cues and the larger the differences in their information quality, the smaller the Δ. An analysis of 39 real-world task environments led to the surprising result that the best inferences are made when Δ is 0, which implies relying almost exclusively on the best cue and ignoring the rest. This finding provides a new perspective on the take-the-best heuristic. Overall, our study demonstrates the potential of integrating and extending established concepts, models, and theories from perception and preference to improve our understanding of how the mind makes inferences.
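The pairwise comparison rule described in this abstract can be sketched in a few lines of Python; the function name, interface, and cue values below are illustrative assumptions, not taken from the study:

```python
def delta_inference(cues_a, cues_b, delta):
    """Sketch of Delta-inference for a pair of alternatives.

    Cues are inspected in subjective order; the first cue on which one
    alternative exceeds the other by more than the threshold delta
    decides. If no cue discriminates that strongly, return None (guess).
    """
    for a, b in zip(cues_a, cues_b):
        if a - b > delta:
            return "A"
        if b - a > delta:
            return "B"
    return None

# With delta = 0 the first discriminating cue decides (take-the-best-like);
# a larger delta passes the decision on to later cues.
print(delta_inference([0.9, 0.2], [0.8, 0.7], delta=0.0))  # A
print(delta_inference([0.9, 0.2], [0.8, 0.7], delta=0.3))  # B
```

Note how the same pair of alternatives yields different choices for different thresholds, which is exactly why selecting Δ matters for accuracy.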
What are the dynamics and regularities underlying social contact, and how can contact with the people in one's social network be predicted? In order to characterize distributional and temporal patterns underlying contact probability, we asked 40 participants to keep a diary of their social contacts for 100 consecutive days. Using a memory framework previously used to study environmental regularities, we predicted that the probability of future contact would follow in systematic ways from the frequency, recency, and spacing of previous contact. The distribution of contact probability across the members of a person's social network was highly skewed, following an exponential function. As predicted, it emerged that future contact scaled linearly with frequency of past contact, proportionally to a power function with recency of past contact, and differentially according to the spacing of past contact. These relations emerged across different contact media and irrespective of whether the participant initiated or received contact. We discuss how the identification of these regularities might inspire more realistic analyses of behavior in social networks (e.g., attitude formation, cooperation).
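Two of the reported regularities can be combined into a toy scoring rule for future contact; the functional form, parameter value, and numbers below are illustrative only, not estimates from the diary study:

```python
def contact_score(frequency, recency_days, decay=0.5):
    """Toy score for future-contact probability, combining two reported
    regularities: linear scaling with frequency of past contact and
    power-function decay with recency. The exponent is hypothetical."""
    return frequency * recency_days ** -decay

# A frequently contacted but not-recently-seen friend can outscore a
# rarely contacted one seen yesterday.
print(contact_score(frequency=30, recency_days=9))  # 10.0
print(contact_score(frequency=2, recency_days=1))   # 2.0
```

The power function means recency matters most for very recent contacts and flattens out quickly, while the linear frequency term never saturates.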
In the animal kingdom, camouflage refers to patterns that help potential prey avoid detection. Camouflage is mostly thought of as helping prey blend in with their background. In contrast, disruptive or dazzle patterns protect moving targets and have been suggested as an evolutionary force in shaping the dorsal patterns of animals. Dazzle patterns, such as stripes and zigzags, are thought to reduce the probability with which moving prey will be captured by impairing predators' perception of speed. We investigated how different patterns of stripes (longitudinal, i.e., parallel to the movement direction, and vertical, i.e., perpendicular to the movement direction) affect the probability with which humans can hit moving objects and whether differences in hitting probability are caused by a misperception of speed. A first experiment showed that longitudinally striped objects were hit more often than unicolored objects. However, vertically striped objects did not differ from unicolored objects. A second study examining the link between perceived speed and hitting probability showed that longitudinally and vertically striped objects were both perceived as moving faster and were hit more often than unicolored objects. In sum, our results provide evidence that striped patterns disrupt the perception of speed, which in turn influences how often objects are hit. However, the magnitude and direction of the effects depend on additional factors, such as speed and the task setup.
How do people select among different strategies to accomplish a given task? Across disciplines, the strategy selection problem represents a major challenge. We propose a quantitative model that predicts how selection emerges through the interplay among strategies, cognitive capacities, and the environment. This interplay carves out for each strategy a cognitive niche, that is, a limited number of situations in which the strategy can be applied, simplifying strategy selection. To illustrate our proposal, we consider selection in the context of 2 theories: the simple heuristics framework and the ACT-R (adaptive control of thought-rational) architecture of cognition. From the heuristics framework, we adopt the thesis that people make decisions by selecting from a repertoire of simple decision strategies that exploit regularities in the environment and draw on cognitive capacities, such as memory and time perception. ACT-R provides a quantitative theory of how these capacities adapt to the environment. In 14 simulations and 10 experiments, we consider the choice between strategies that operate on the accessibility of memories and those that depend on elaborate knowledge about the world. Based on Internet statistics, our model quantitatively predicts people's familiarity with and knowledge of real-world objects, the distributional characteristics of the associated speed of memory retrieval, and the cognitive niches of classic decision strategies, including those of the fluency, recognition, integration, lexicographic, and sequential-sampling heuristics. In doing so, the model specifies when people will be able to apply different strategies and how accurate, fast, and effortless people's decisions will be.
Models of decision making are distinguished by those that aim for an optimal solution in a world that is precisely specified by a set of assumptions (a so-called "small world") and those that aim for a simple but satisfactory solution in an uncertain world where the assumptions of optimization models may not be met (a so-called "large world"). Few connections have been drawn between these 2 families of models. In this study, the authors show how psychological concepts originating in the classic signal-detection theory (SDT), a small-world approach to decision making, can be used to understand the workings of a class of simple models known as fast-and-frugal trees (FFTs). Results indicate that (a) the setting of the subjective decision criterion in SDT corresponds directly to the choice of exit structure in an FFT; (b) the sensitivity of an FFT (measured in d′) is reflected by the order of cues searched and the properties of cues in an FFT, including the mean and variance of cues' individual d′s, the intercue correlation, and the number of cues; and (c) compared with the ideal and the optimal sequential sampling models in SDT and a majority model with an information search component, FFTs are extremely frugal (i.e., do not search for much cue information), highly robust, and well adapted to the payoff structure of a task. These findings demonstrate the potential of theory integration in understanding the common underlying psychological structures of apparently disparate theories of cognition.
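A fast-and-frugal tree can be sketched as an ordered list of cues, each non-final cue carrying a single exit; the cues, exit structure, and labels below are invented for illustration and do not come from the study:

```python
def fft_classify(levels, final_cue, obj):
    """Minimal fast-and-frugal tree (sketch). Each non-final level is a
    triple (cue_fn, exit_on, decision): if cue_fn(obj) equals exit_on,
    the tree exits with that decision; otherwise search continues.
    The final cue decides either way."""
    for cue_fn, exit_on, decision in levels:
        if cue_fn(obj) == exit_on:
            return decision
    return "signal" if final_cue(obj) else "noise"

# Invented toy tree: exit "signal" on a positive first cue,
# exit "noise" on a negative second cue; the third cue decides otherwise.
levels = [
    (lambda x: x["c1"], True, "signal"),
    (lambda x: x["c2"], False, "noise"),
]
final = lambda x: x["c3"]
print(fft_classify(levels, final, {"c1": False, "c2": True, "c3": True}))  # signal
```

Changing which outcome of each cue exits (the `exit_on` flags) is the FFT analogue of shifting the SDT decision criterion toward more liberal or more conservative responding.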
The recognition heuristic is a prime example of how, by exploiting a match between mind and environment, a simple mental strategy can lead to efficient decision making. The proposal of the heuristic initiated a debate about the processes underlying the use of recognition in decision making. We review research addressing four key aspects of the recognition heuristic: (a) that recognition is often an ecologically valid cue; (b) that people often follow recognition when making inferences; (c) that recognition supersedes further cue knowledge; and (d) that its use can produce the less-is-more effect: the phenomenon that lesser states of recognition knowledge can lead to more accurate inferences than more complete states. After we contrast the recognition heuristic to other related concepts, including availability and fluency, we carve out, from the existing findings, some boundary conditions of the use of the recognition heuristic as well as key questions for future research. Moreover, we summarize developments concerning the connection of the recognition heuristic with memory models. We suggest that the recognition heuristic is used adaptively and that, compared to other cues, recognition seems to have a special status in decision making. Finally, we discuss how systematic ignorance is exploited in other cognitive mechanisms (e.g., estimation and preference).
Heuristics embodying limited information search and noncompensatory processing of information can yield robust performance relative to computationally more complex models. One criticism raised against heuristics is the argument that complexity is hidden in the calculation of the cue order used to make predictions. We discuss ways to order cues that do not entail individual learning. Then we propose and test the thesis that when orders are learned individually, people's necessarily limited knowledge will curtail computational complexity while also achieving robustness. Using computer simulations, we compare the performance of the take-the-best heuristic--with dichotomized or undichotomized cues--to benchmarks such as the naïve Bayes algorithm across 19 environments. Even with minute sizes of training sets, take-the-best using undichotomized cues excels. For 10 environments, we probe people's intuitions about the direction of the correlation between cues and criterion. On the basis of these intuitions, in most of the environments take-the-best achieves the level of performance that would be expected from learning cue orders from 50% of the objects in the environments. Thus, ordinary information about cues--either gleaned from small training sets or intuited--can support robust performance without requiring Herculean computations.
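With dichotomized cues, take-the-best reduces to a short loop over a cue order; the interface and example values below are an assumed sketch, not the simulation code of the study:

```python
def take_the_best(obj_a, obj_b, cue_order):
    """Take-the-best with dichotomized (0/1) cues: look up cues in the
    given order (typically by estimated validity); the first cue that
    discriminates decides; if none does, return None (guess)."""
    for i in cue_order:
        if obj_a[i] != obj_b[i]:
            return "A" if obj_a[i] > obj_b[i] else "B"
    return None

# Cue 1 is checked first here and does not discriminate; cue 0 decides.
print(take_the_best([1, 0, 1], [0, 0, 1], cue_order=[1, 0, 2]))  # A
```

All of the model's "complexity" sits in choosing `cue_order`, which is why the abstract's question of how orders can be learned or intuited is the crux.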
The recognition heuristic is a noncompensatory strategy for inferring which of two alternatives, one recognized and the other not, scores higher on a criterion. According to it, such inferences are based solely on recognition. We generalize this heuristic to tasks with multiple alternatives, proposing a model of how people identify the consideration sets from which they make their final decisions. In doing so, we address concerns about the heuristic's adequacy as a model of behavior: Past experiments have led several authors to conclude that there is no evidence for a noncompensatory use of recognition but clear evidence that recognition is integrated with other information. Surprisingly, however, in no study was this competing hypothesis--the compensatory integration of recognition--formally specified as a computational model. In four studies, we specify five competing models, conducting eight model comparisons. In these model comparisons, the recognition heuristic emerges as the best predictor of people's inferences.
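The two-alternative heuristic and its multi-alternative generalization can be sketched as follows; the function names, interfaces, and city names are illustrative assumptions, not the models specified in the studies:

```python
def recognition_heuristic(recognized_a, recognized_b):
    """If exactly one of two alternatives is recognized, infer that it
    scores higher on the criterion; otherwise the heuristic is silent."""
    if recognized_a != recognized_b:
        return "A" if recognized_a else "B"
    return None

def consideration_set(alternatives, recognized):
    """Multi-alternative generalization (sketch): the final decision is
    made only among the recognized alternatives."""
    return [a for a in alternatives if recognized(a)]

cities = ["Berlin", "Heidelberg", "Gummersbach"]
known = {"Berlin", "Heidelberg"}
print(consideration_set(cities, lambda c: c in known))  # ['Berlin', 'Heidelberg']
```

The noncompensatory claim is visible in the code: once an alternative is unrecognized, no amount of other cue knowledge can bring it back into the consideration set.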
Simple heuristics exploit basic human abilities, such as recognition memory, to make decisions based on sparse information. Based on the relative speed of recognizing two objects, the fluency heuristic infers that the one recognized more quickly has the higher value with respect to the criterion of interest. Behavioral data show that reliance on retrieval fluency enables quick inferences. Our goal with the present functional magnetic resonance imaging study was to isolate fluency-heuristic-based judgments and map the use of fluency onto specific brain areas that might give a better understanding of the heuristic's underlying processes. We found activation within the claustrum for fluency-heuristic decisions. Given that claustrum activation is thought to reflect the integration of perceptual and memory elements into a conscious gestalt, we suggest this activation correlates with the experience of fluency.
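The fluency heuristic's comparison of retrieval speeds can be sketched like this; the margin parameter (the smallest retrieval-time difference treated as discriminable) and the example times are hypothetical:

```python
def fluency_heuristic(rt_a, rt_b, margin=0.1):
    """Fluency heuristic sketch: given two recognized objects with
    retrieval times rt_a and rt_b (seconds), infer that the one
    retrieved faster has the higher criterion value. Differences
    smaller than the hypothetical margin are treated as ties."""
    if rt_b - rt_a > margin:
        return "A"
    if rt_a - rt_b > margin:
        return "B"
    return None

print(fluency_heuristic(0.45, 0.80))  # A
print(fluency_heuristic(0.45, 0.50))  # None
```

Unlike the recognition heuristic, which needs one recognized and one unrecognized object, this rule applies precisely when both objects are recognized and only their retrieval speeds differ.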
Two ways of eliciting conceptual content have been to instruct participants to list the intrinsic properties that concept exemplars possess or to report any thoughts that come to mind about the concept. It has been argued that the open, unconstrained probe is better able to elicit the situational information that concepts contain. We evaluated this proposal in two experiments comparing the two probes with regard to the content that they yield for object concepts at the superordinate and basic levels. The results showed that the open probe was better able to elicit situated conceptual knowledge and point out differences in the representations of superordinate and basic concepts.
The recognition heuristic, which predicts that a recognized object scores higher on some criterion than an unrecognized one, is a simple inference strategy and thus an attractive mental tool for making inferences with limited cognitive resources--for instance, in old age. In spite of its simplicity, the recognition heuristic might be negatively affected in old age by too much knowledge, inaccurate memory, or deficits in its adaptive use. Across 2 studies, we investigated the impact of cognitive aging on the applicability, accuracy, and adaptive use of the recognition heuristic. Our results show that (a) young and old adults' recognition knowledge was an equally useful cue for making inferences about the world; (b) as with young adults, old adults adjusted their use of the recognition heuristic between environments with high and low recognition validities; and (c) old adults, however, showed constraints in their ability to adaptively suspend the recognition heuristic on specific items. Measures of fluid intelligence mediated these age-related constraints.
Aggregating snippets from the semantic memories of many individuals may not yield a good map of an individual's semantic memory. The authors analyze the structure of semantic networks that they sampled from individuals through a new snowball sampling paradigm during approximately 6 weeks of 1-hr daily sessions. The semantic networks of individuals have a small-world structure with short distances between words and high clustering. The distribution of links follows a power law truncated by an exponential cutoff, meaning that most words are poorly connected and a minority of words has a high, although bounded, number of connections. Existing aggregate networks mirror the individual link distributions, and so they are not scale-free, as has been previously assumed; still, there are properties of individual structure that the aggregate networks do not reflect. A simulation of the new sampling process suggests that it can uncover the true structure of an individual's semantic memory.
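The reported link distribution (a power law truncated by an exponential cutoff) has a simple functional form; the exponent and cutoff values below are illustrative, not the fitted parameters from the study:

```python
import math

def truncated_power_law(k, gamma=2.0, cutoff=50.0):
    """Unnormalized degree distribution of the reported form:
    P(k) proportional to k**(-gamma) * exp(-k / cutoff).
    gamma and cutoff here are hypothetical values."""
    return k ** -gamma * math.exp(-k / cutoff)

# Most words are poorly connected; highly connected words are rare, and
# the exponential factor bounds the maximum connectivity.
print(truncated_power_law(1) > truncated_power_law(10) > truncated_power_law(100))  # True
```

For small k the power-law term dominates (heavy-tailed behavior), while beyond the cutoff the exponential term suppresses very high-degree words, which is what distinguishes this distribution from a scale-free one.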
The notion of ecological rationality sees human rationality as the result of the adaptive fit between the human mind and the environment. Ecological rationality focuses the study of decision making on two key questions: First, what are the environmental regularities to which people's decision strategies are matched, and how frequently do these regularities occur in natural environments? Second, how well can people adapt their use of specific strategies to particular environmental regularities? Research on aging suggests a number of changes in cognitive function, for instance, deficits in learning and memory that may impact decision-making skills. However, it has been shown that simple strategies can work well in many natural environments, which suggests that age-related deficits in strategy use may not necessarily translate into reduced decision quality. Consequently, we argue that predictions about the impact of aging on decision performance depend not only on how aging affects decision-relevant capacities but also on the decision environment in which decisions are made. In sum, we propose that the concept of ecological rationality is crucial to understanding and aiding the aging decision maker.