The valuation of health-related states, including pain, is a critical issue in clinical practice, health economics, and pain neuroscience. Surprisingly, the monetary value people associate with pain is highly context-dependent: participants are willing to pay more to avoid medium-level pain when it is presented in a context of low-intensity, rather than high-intensity, pain. Here, we ask whether context impacts upon the neural representation of pain itself, or alternatively upon the transformation of pain into valuation-driven behavior. While undergoing fMRI, human participants declared how much money they would be willing to pay to avoid repeated instances of painful cutaneous electrical stimuli delivered to the foot. We also implemented a contextual manipulation that involved presenting medium-level painful stimuli in blocks with either low- or high-level stimuli. We found no evidence of context-dependent activity within a conventional "pain matrix," where pain-evoked activity reflected absolute stimulus intensity. By contrast, in right lateral orbitofrontal cortex, a strong contextual dependency was evident, with activity tracking the contextual rank of the pain. The findings are in keeping with an architecture in which an absolute pain valuation system and a rank-dependent system interact to influence willingness to pay to avoid pain, with context impacting value-based behavior high in a processing hierarchy. This segregated processing hints that distinct neural representations reflect the sensory aspects of pain and less directly nociceptive components, whose integration also guides pain-related actions. A dominance of the latter might account for puzzling phenomena seen in somatization disorders, where perceived pain is a dominant driver of behavior.
An essential element of goal-directed decision-making in social contexts is that agents' actions may be mutually interdependent. However, the most well-developed approaches to such strategic interactions, based on the Nash equilibrium concept in game theory, are sometimes too broad and at other times 'overlook' good solutions to fundamental social dilemmas and coordination problems. The authors propose a new theory of social decision-making, virtual bargaining, in which individuals decide among a set of moves on the basis of what they would agree to do if they could openly bargain. The core principles of a formal account are outlined (vis-à-vis the notions of 'feasible agreement' and explicit negotiation) and further illustrated with the introduction of a new game, dubbed the 'Boobytrap game' (a modification of the canonical Prisoner's Dilemma paradigm). In the first empirical data on how individuals play the Boobytrap game, participants' experimental choices accord well with a virtual bargaining perspective, but do not match predictions from a standard Nash account. Alternative frameworks are discussed, with specific empirical tests between these and virtual bargaining identified as future research directions. Lastly, it is proposed that virtual bargaining underpins a vast range of human activities, from social decision-making to joint action and communication.
Cognitive science views thought as computation; and computation, by its very nature, can be understood in both rational and mechanistic terms. In rational terms, a computation solves some information-processing problem (e.g., mapping sensory information into a description of the external world; parsing a sentence; selecting among a set of possible actions). In mechanistic terms, a computation corresponds to a causal chain of events in a physical device (in an engineering context, a silicon chip; in a biological context, the nervous system). The discipline is thus at the interface between two very different styles of explanation: as the papers in the current special issue well illustrate, it explores the interplay of rational and mechanistic forces.
Many social interactions require humans to coordinate their behavior across a range of scales. However, aspects of intentional coordination remain puzzling from within several approaches in cognitive science. Sketching a new perspective, we propose that the complex behavioral patterns, or 'unwritten rules', governing such coordination emerge from an ongoing process of 'virtual bargaining'. Social participants behave on the basis of what they would agree to do if they were explicitly to bargain, provided the agreement that would arise from such discussion is commonly known. Although intuitively simple, this interpretation has implications for understanding a broad spectrum of social, economic, and cultural phenomena (including joint action, team reasoning, communication, and language) that, we argue, depend fundamentally on the virtual bargains themselves.
Recent studies provide convincing evidence that data on online information gathering, alongside massive real-world datasets, can give new insights into real-world collective decision making and can even anticipate future actions. We argue that Bentley et al.'s timely account should consider the full breadth, and, above all, the predictive power of big data.
Judea Pearl has argued that counterfactuals and causality are central to intelligence, whether natural or artificial, and has helped create a rich mathematical and computational framework for formally analyzing causality. Here, we draw out connections between these notions and various current issues in cognitive science, including the nature of mental "programs" and mental representation. We argue that programs (consisting of algorithms and data structures) have a causal (counterfactual-supporting) structure; these counterfactuals can reveal the nature of mental representations. Programs can also provide a causal model of the external world. Such models are, we suggest, ubiquitous in perception, cognition, and language processing.
Networks of interconnected nodes have long played a key role in Cognitive Science, from artificial neural networks to spreading activation models of semantic memory. Recently, however, a new Network Science has been developed, providing insights into the emergence of global, system-scale properties in contexts as diverse as the Internet, metabolic reactions, and collaborations among scientists. Today, the inclusion of network theory into the Cognitive Sciences, and the expansion of complex-systems science, promises to significantly change the way in which the organization and dynamics of cognitive and behavioral processes are understood. In this paper, we review recent contributions of network theory at different levels and domains within the Cognitive Sciences.
Children learn their native language by exposure to their linguistic and communicative environment, but apparently without requiring that their mistakes be corrected. Such learning from "positive evidence" has been viewed as raising "logical" problems for language acquisition. In particular, without correction, how is the child to recover from conjecturing an over-general grammar, which will be consistent with any sentence that the child hears? There have been many proposals concerning how this "logical problem" can be dissolved. In this study, we review recent formal results showing that the learner has sufficient data to learn successfully from positive evidence, if it favors the simplest encoding of the linguistic input. Results include the learnability of linguistic prediction, grammaticality judgments, language production, and form-meaning mappings. The simplicity approach can also be "scaled down" to analyze the learnability of specific linguistic constructions, and it is amenable to empirical testing as a framework for describing human language acquisition.
This article reviews a number of different areas in the foundations of formal learning theory. After outlining the general framework for formal models of learning, the Bayesian approach to learning is summarized. This leads to a discussion of Solomonoff's Universal Prior Distribution for Bayesian learning. Gold's model of identification in the limit is also outlined. We next discuss a number of aspects of learning theory raised in contributed papers, related to both computational and representational complexity. The article concludes with a description of how semi-supervised learning can be applied to the study of cognitive learning models. Throughout this overview, the specific points raised by our contributing authors are connected to the models and methods under review.
We propose a simple model for genetic adaptation to a changing environment, describing a fitness landscape characterized by two maxima. One is associated with "specialist" individuals that are adapted to the environment; this maximum moves over time as the environment changes. The other maximum is static, and represents "generalist" individuals not affected by environmental changes. The rest of the landscape is occupied by "maladapted" individuals. Our analysis considers the evolution of these three subpopulations. Our main result is that, in the presence of a sufficiently stable environmental feature, as in the case of an unchanging aspect of a physical habitat, specialists can dominate the population. By contrast, rapidly changing environmental features, such as language or cultural habits, are a moving target for the genes; here, generalists dominate, because the best evolutionary strategy is to adopt neutral alleles not specialized for any specific environment. The model we propose is based on simple assumptions about evolutionary dynamics and describes all possible scenarios in a non-trivial phase diagram. The approach provides a general framework to address such fundamental issues as the Baldwin effect, the biological basis for language, or the ecological consequences of a rapid climate change.
Six experiments studied relative frequency judgment and recall of sequentially presented items drawn from 2 distinct categories (i.e., city and animal). The experiments show that judged frequencies of categories of sequentially encountered stimuli are affected by certain properties of the sequence configuration. We found (a) a first-run effect whereby people overestimated the frequency of a given category when that category was the first repeated category to occur in the sequence and (b) a dissociation between judgments and recall; respondents may judge 1 event more likely than the other and yet recall more instances of the latter. Specifically, the distribution of recalled items does not correspond to the frequency estimates for the event categories, indicating that participants do not make frequency judgments by sampling their memory for individual items as implied by other accounts such as the availability heuristic (Tversky & Kahneman, 1973) and the availability process model (Hastie & Park, 1986). We interpret these findings as reflecting the operation of a judgment heuristic sensitive to sequential patterns and offer an account of the relationship between memory and judged frequencies of sequentially encountered stimuli.
How do people choose between options? At one extreme, the value-first view is that the brain computes the value of different options and simply favours options with higher values. An intermediate position, taken by many psychological models of judgment and decision making, is that values are computed but that the resulting choices depend heavily on the context of available options. At the other extreme, the comparison-only view argues that choice depends directly on comparisons, with or even without any intermediate computation of value. In this paper, we place past and current psychological and neuroscientific theories on this spectrum, and review empirical data that have led to an increasing focus on comparison rather than value as the driver of choice.
Although there may be no true language universals, it is nonetheless possible to discern several family resemblance patterns across the languages of the world. Recent work on the cultural evolution of language indicates the source of these patterns is unlikely to be an innate universal grammar evolved through biological adaptations for arbitrary linguistic features. Instead, it has been suggested that the patterns of resemblance emerge because language has been shaped by the brain, with individual languages representing different but partially overlapping solutions to the same set of nonlinguistic constraints. Here, we use computational simulations to investigate whether biological adaptation for functional features of language, deriving from cognitive and communicative constraints, may nonetheless be possible alongside rapid cultural evolution. Specifically, we focus on the Baldwin effect as an evolutionary mechanism by which previously learned linguistic features might become innate through natural selection across many generations of language users. The results indicate that cultural evolution of language does not necessarily prevent functional features of language from becoming genetically fixed, thus potentially providing a particularly informative source of constraints on cross-linguistic resemblance patterns.
There is much debate over the degree to which language learning is governed by innate language-specific biases, or acquired through cognition-general principles. Here we examine the probabilistic language acquisition hypothesis on three levels: We outline a novel theoretical result showing that it is possible to learn the exact generative model underlying a wide class of languages, purely from observing samples of the language. We then describe a recently proposed practical framework, which quantifies natural language learnability, allowing specific learnability predictions to be made for the first time. In previous work, this framework was used to make learnability predictions for a wide variety of linguistic constructions, for which learnability has been much debated. Here, we present a new experiment which tests these learnability predictions. We find that our experimental results support the possibility that these linguistic constructions are acquired probabilistically from cognition-general principles.
In this paper, two experiments are reported investigating the nature of the cognitive representations underlying causal conditional reasoning performance. The predictions of causal and logical interpretations of the conditional diverge sharply when inferences involving pairs of conditionals, such as if P1 then Q and if P2 then Q, are considered. From a causal perspective, the causal direction of these conditionals is critical: are the Pi causes of Q, or symptoms caused by Q? The rich variety of inference patterns can naturally be modelled by Bayesian networks. A pair of causal conditionals where Q is an effect corresponds to a "collider" structure, where the two causes (Pi) converge on a common effect. In contrast, a pair of causal conditionals where Q is a cause corresponds to a network where two effects (Pi) diverge from a common cause. Very different predictions are made by fully explicit or initial mental models interpretations. These predictions were tested in two experiments, each of which yielded data most consistent with causal model theory, rather than with mental models.
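The distinctive prediction of the collider structure is "explaining away": learning that one cause is present reduces belief in the other. A minimal sketch of this computation, with purely illustrative probabilities (the numbers below are not taken from the experiments), can be written by enumeration:

```python
# Collider structure: two independent causes P1, P2 converge on effect Q.
# All prior and conditional probabilities are hypothetical illustrations.

def collider_posteriors(p1_prior, p2_prior, cpt):
    """Return P(P1=T | Q=T) and P(P1=T | Q=T, P2=T) by enumeration.
    cpt maps (p1, p2) -> P(Q=True | p1, p2)."""
    def joint(p1, p2):
        pr1 = p1_prior if p1 else 1 - p1_prior
        pr2 = p2_prior if p2 else 1 - p2_prior
        return pr1 * pr2 * cpt[(p1, p2)]
    # P(P1=T | Q=T): sum over the unobserved cause P2
    num = joint(True, True) + joint(True, False)
    den = num + joint(False, True) + joint(False, False)
    p_given_q = num / den
    # P(P1=T | Q=T, P2=T): the other cause accounts for Q
    p_given_q_and_p2 = joint(True, True) / (joint(True, True) + joint(False, True))
    return p_given_q, p_given_q_and_p2

# Noisy-OR-style table: either cause alone makes Q very likely
cpt = {(True, True): 0.99, (True, False): 0.9,
       (False, True): 0.9, (False, False): 0.01}
a, b = collider_posteriors(0.1, 0.1, cpt)
assert b < a  # explaining away: knowing P2 holds lowers belief in P1
```

In the diverging (common-cause) structure the opposite pattern holds: observing one effect raises belief in the cause, and hence in the other effect.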
A central puzzle for theories of choice is that people's preferences between options can be reversed by the presence of decoy options (that are not chosen) or by the presence of other irrelevant options added to the choice set. Three types of reversal effect reported in the decision-making literature, the attraction, compromise, and similarity effects, have been explained by a number of theoretical proposals. Yet a major theoretical challenge is capturing all 3 effects simultaneously. We review the range of mechanisms that have been proposed to account for decoy effects and analyze in detail 2 computational models, decision field theory (Roe, Busemeyer, & Townsend, 2001) and leaky competing accumulators (Usher & McClelland, 2004), that aim to combine several such mechanisms into an integrated account. By simulating the models, we examine differences in the ways the decoy effects are predicted. We argue that the LCA framework, which follows on Tversky's relational evaluation with loss aversion (Tversky & Kahneman, 1991), provides a more robust account, suggesting that common mechanisms are involved in both high-level decision making and perceptual choice, for which LCA was originally developed.
Debates concerning the types of representations that aid reading acquisition have often been influenced by the relationship between measures of early phonological awareness (the ability to process speech sounds) and later reading ability. Here, a complementary approach is explored, analyzing how the functional utility of different representational units, such as whole words, bodies (letters representing the vowel and final consonants of a syllable), and graphemes (letters representing a phoneme), may change as the number of words that can be read gradually increases. Utility is measured by applying a Simplicity Principle to the problem of mapping from print to sound; that is, assuming that the "best" representational units for reading are those which allow the mapping from print to sounds to be encoded as efficiently as possible. Results indicate that when only a small number of words can be read, whole-word representations are most useful, whereas when many words can be read, graphemic representations have the highest utility.
We investigated whether financial risk preferences are dependent on the financial domain (i.e., the context) in which the risky choice options are presented. Previous studies have demonstrated that risk attitudes change when gambles are framed as gains, losses, or as insurance. Our study explores this directly by offering choices between identical gambles, framed in terms of seven financial domains. Three factors were extracted, explaining 68.6% of the variance: Factor 1 (Positive): opportunity to win, pension provision, and job salary change; Factor 2 (Positive-Complex): investments and mortgage buying; Factor 3 (Negative): possibility of loss and insurance. Inspection of the solution revealed context effects on risk perceptions across the seven scenarios. We also found that the commonly accepted assumption that women are more risk averse cannot be confirmed with the context structure suggested in this research; however, it is acknowledged that in a student population the variance across genders might be considerably less. These results suggest that our financial risk attitude measures may be tapping into a stable aspect of "context dependence" of relevance to real-world decision making.
Recent research suggests that language evolution is a process of cultural change, in which linguistic structures are shaped through repeated cycles of learning and use by domain-general mechanisms. This paper draws out the implications of this viewpoint for understanding the problem of language acquisition, which is cast in a new, and much more tractable, form. In essence, the child faces a problem of induction, where the objective is to coordinate with others (C-induction), rather than to model the structure of the natural world (N-induction). We argue that, of the two, C-induction is dramatically easier. More broadly, we argue that understanding the acquisition of any cultural form, whether linguistic or otherwise, during development, requires considering the corresponding question of how that cultural form arose through processes of cultural evolution. This perspective helps resolve the "logical" problem of language acquisition and has far-reaching implications for evolutionary psychology.
Cognitive science aims to reverse-engineer the mind, and many of the engineering challenges the mind faces involve induction. The probabilistic approach to modeling cognition begins by identifying ideal solutions to these inductive problems. Mental processes are then modeled using algorithms for approximating these solutions, and neural processes are viewed as mechanisms for implementing these algorithms, with the result being a top-down analysis of cognition starting with the function of cognitive processes. Typical connectionist models, by contrast, follow a bottom-up approach, beginning with a characterization of neural mechanisms and exploring what macro-level functional phenomena might emerge. We argue that the top-down approach yields greater flexibility for exploring the representations and inductive biases that underlie human cognition.
Natural language is full of patterns that appear to fit with general linguistic rules but are ungrammatical. There has been much debate over how children acquire these "linguistic restrictions," and whether innate language knowledge is needed. Recently, it has been shown that restrictions in language can be learned asymptotically via probabilistic inference using the minimum description length (MDL) principle. Here, we extend the MDL approach to give a simple and practical methodology for estimating how much linguistic data are required to learn a particular linguistic restriction. Our method provides a new research tool, allowing arguments about natural language learnability to be made explicit and quantified for the first time. We apply this method to a range of classic puzzles in language acquisition. We find some linguistic rules appear easily statistically learnable from language experience only, whereas others appear to require additional learning mechanisms (e.g., additional cues or innate constraints).
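The MDL logic sketched above can be illustrated with toy numbers (all values below are hypothetical, not estimates from the paper): a restricted grammar costs extra bits to state, but saves bits on every attested sentence because it concentrates probability on fewer strings, so a data-requirement estimate falls out of the trade-off.

```python
import math

# Toy MDL comparison of a restricted vs. an over-general grammar.
# Per-sentence code length is -log2(probability the grammar assigns).
# All numbers here are hypothetical illustrations.

def sentences_needed(extra_grammar_bits, restricted_prob, general_prob):
    """Smallest corpus size at which the restricted grammar yields
    the shorter total description (grammar + data encoded under it)."""
    # Bits saved per attested sentence by the restricted grammar
    savings = math.log2(restricted_prob / general_prob)
    return math.ceil(extra_grammar_bits / savings)

# Restricted grammar: 50 extra bits to state; assigns each attested
# sentence twice the probability of the over-general grammar,
# i.e., it saves exactly 1 bit per sentence.
n = sentences_needed(50, restricted_prob=0.5, general_prob=0.25)
assert n == 50
```

Comparing such data-requirement estimates against the amount of input children actually receive is what allows learnability arguments to be made quantitative.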
This special issue describes important recent developments in applying reinforcement learning models to capture neural and cognitive function. But reinforcement learning, as a theoretical framework, can apply at two very different levels of description: mechanistic and rational. Reinforcement learning is often viewed in mechanistic terms, as describing the operation of aspects of an agent's cognitive and neural machinery. Yet it can also be viewed as a rational level of description, specifically, as describing a class of methods for learning from experience, using minimal background knowledge. This paper considers how rational and mechanistic perspectives differ, and what types of evidence distinguish between them. Reinforcement learning research in the cognitive and brain sciences is often implicitly committed to the mechanistic interpretation. Here the opposite view is put forward: that accounts of reinforcement learning should apply at the rational level, unless there is strong evidence for a mechanistic interpretation. Implications of this viewpoint for reinforcement-based theories in the cognitive and brain sciences are discussed.
In 5 experiments, we studied precautionary decisions in which participants decided whether or not to buy insurance with specified cost against an undesirable event with specified probability and cost. We compared the risks taken for precautionary decisions with those taken for equivalent monetary gambles. Fitting these data to Tversky and Kahneman's (1992) prospect theory, we found that the weighting function required to model precautionary decisions differed from that required for monetary gambles. This result indicates a failure of the descriptive invariance axiom of expected utility theory. For precautionary decisions, people overweighted small, medium-sized, and moderately large probabilities: they exaggerated risks. This effect is not anticipated by prospect theory or experience-based decision research (Hertwig, Barron, Weber, & Erev, 2004). We found evidence that exaggerated risk is caused by the accessibility of events in memory: The weighting function varies as a function of the accessibility of events. This suggests that people's experiences of events leak into decisions even when risk information is explicitly provided. Our findings highlight a need to investigate how variation in decision content produces variation in preferences for risk.
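The weighting function at issue can be sketched with the standard one-parameter form from Tversky and Kahneman (1992), w(p) = p^g / (p^g + (1-p)^g)^(1/g); the parameter value below is their published median estimate for gains, used purely for illustration, not a fit to the precautionary-decision data:

```python
# Cumulative prospect theory probability weighting function.
# gamma < 1 yields the inverse-S shape: small probabilities are
# overweighted and large probabilities underweighted.

def weight(p, gamma):
    """w(p) = p^g / (p^g + (1-p)^g)^(1/g)."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

# gamma = 0.61: median estimate for gains in Tversky & Kahneman (1992)
small, large = weight(0.01, 0.61), weight(0.9, 0.61)
assert small > 0.01  # small probability overweighted
assert large < 0.9   # large probability underweighted
```

The abstract's finding amounts to the claim that, for precautionary decisions, the fitted function is elevated relative to the one fitted to equivalent monetary gambles.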
This paper contrasts two structural accounts of psychological similarity: structural alignment (SA) and Representational Distortion (RD). SA proposes that similarity is determined by how readily the structures of two objects can be brought into alignment; RD measures similarity by the complexity of the transformation that "distorts" one representation into the other. We assess RD by defining a simple coding scheme of psychological transformations for the experimental materials. In two experiments, this "concrete" version of RD provides compelling fits of the data and compares favourably with SA. Finally, stepping back from particular models, we argue that perceptual theory suggests that transformations and alignment processes should generally be viewed as complementary, in contrast to the current distinction in the literature.
Herding is a form of convergent social behaviour that can be broadly defined as the alignment of the thoughts or behaviours of individuals in a group (herd) through local interaction and without centralized coordination. We suggest that herding has a broad application, from intellectual fashion to mob violence; and that understanding herding is particularly pertinent in an increasingly interconnected world. An integrated approach to herding is proposed, describing two key issues: mechanisms of transmission of thoughts or behaviour between agents, and patterns of connections between agents. We show how bringing together the diverse, often disconnected, theoretical and methodological approaches illuminates the applicability of herding to many domains of cognition and suggest that cognitive neuroscience offers a novel approach to its study.
When making decisions involving risky outcomes on the basis of verbal descriptions of the outcomes and their associated probabilities, people behave as if they overweight small probabilities. In contrast, when the same outcomes are instead experienced in a series of samples, people behave as if they underweight small probabilities. We present two experiments showing that the existing explanations of the underweighting observed in decisions from experience are not sufficient to account for the effect. Underweighting was observed when participants experienced representative samples of events, so it cannot be attributed to undersampling of the small probabilities. In addition, earlier samples predicted decisions just as well as later samples did, so underweighting cannot be attributed to recency weighting. Finally, frequency judgments were accurate, so underweighting cannot be attributed to judgment error. Furthermore, we show that the underweighting of small probabilities is also reflected in the best-fitting parameter values obtained when prospect theory, the dominant model of risky choice, is applied to the data.
Estimating the financial value of pain informs issues as diverse as the market price of analgesics, the cost-effectiveness of clinical treatments, compensation for injury, and the response to public hazards. Such valuations are assumed to reflect a stable trade-off between relief of discomfort and money. Here, using an auction-based health-market experiment, we show that the price people pay for relief of pain is strongly determined by the local context of the market, that is, by recent intensities of pain or immediately disposable income (but not overall wealth). The absence of a stable valuation metric suggests that the dynamic behavior of health markets is not predictable from the static behavior of individuals. We conclude that the results follow the dynamics of habit-formation models of economic theory, and thus, this study provides the first scientific basis for this type of preference modeling.
According to Aristotle, humans are the rational animal. The borderline between rationality and irrationality is fundamental to many aspects of human life including the law, mental health, and language interpretation. But what is it to be rational? One answer, deeply embedded in the Western intellectual tradition since ancient Greece, is that rationality concerns reasoning according to the rules of logic, the formal theory that specifies the inferential connections that hold with certainty between propositions. Piaget viewed logical reasoning as defining the end-point of cognitive development; and contemporary psychology of reasoning has focussed on comparing human reasoning against logical standards. Bayesian Rationality argues that rationality is defined instead by the ability to reason about uncertainty. Although people are typically poor at numerical reasoning about probability, human thought is sensitive to subtle patterns of qualitative Bayesian, probabilistic reasoning. In Chapters 1-4 of Bayesian Rationality (Oaksford & Chater, 2007), the case is made that cognition in general, and human everyday reasoning in particular, is best viewed as solving probabilistic, rather than logical, inference problems. In Chapters 5-7 the psychology of "deductive" reasoning is tackled head-on: It is argued that purportedly "logical" reasoning problems, revealing apparently irrational behaviour, are better understood from a probabilistic point of view. Data from conditional reasoning, Wason's selection task, and syllogistic inference are captured by recasting these problems probabilistically. The probabilistic approach makes a variety of novel predictions which have been experimentally confirmed. The book considers the implications of this work, and the wider "probabilistic turn" in cognitive science and artificial intelligence, for understanding human rationality.
A key challenge for theories of language evolution is to explain why language is the way it is and how it came to be that way. It is clear that how we learn and use language is governed by genetic constraints. However, the nature of these innate constraints has been the subject of much debate. Although many accounts of language evolution have emphasized the importance of biological adaptations specific to language, we discuss evidence from computer simulations pointing to strong restrictions on such adaptations. Instead, we argue that processes of cultural evolution have been the primary factor affecting the evolution of linguistic structure, suggesting that the genetic constraints on language largely predate the emergence of language itself.
Language acquisition and processing are governed by genetic constraints. A crucial unresolved question is how far these genetic constraints have coevolved with language, perhaps resulting in a highly specialized and species-specific language "module," and how much language acquisition and processing redeploy preexisting cognitive machinery. In the present work, we explored the circumstances under which genes encoding language-specific properties could have coevolved with language itself. We present a theoretical model, implemented in computer simulations, of key aspects of the interaction of genes and language. Our results show that genes for language could have coevolved only with highly stable aspects of the linguistic environment; a rapidly changing linguistic environment does not provide a stable target for natural selection. Thus, a biological endowment could not coevolve with properties of language that began as learned cultural conventions, because cultural conventions change much more rapidly than genes. We argue that this rules out the possibility that arbitrary properties of language, including abstract syntactic principles governing phrase structure, case marking, and agreement, have been built into a "language module" by natural selection. The genetic basis of human language acquisition and processing did not coevolve with language, but primarily predates the emergence of language. As suggested by Darwin, the fit between language and its underlying mechanisms arose because language has evolved to fit the human brain, rather than the reverse.
Objective: A standard view in health economics is that, although there is no market that determines the "prices" of health states, people can nonetheless associate health states with monetary values (or with other scales, such as quality-adjusted life years [QALYs] and disability-adjusted life years [DALYs]). Such valuations can be used to shape health policy, and a major research challenge is to elicit these values from people; creating experimental "markets" for health states is a theoretically attractive way to address this. We explore the possibility that this framework may be fundamentally flawed, because there may not be any stable values to be revealed. Instead, people may construct ad hoc values, influenced by contextual factors such as the observed decisions of others. Method: Participants bid to buy relief from equally painful electrical shocks to the leg and arm in an experimental health market based on an interactive second-price auction. Thirty subjects were randomly assigned to two experimental conditions in which the bids of "others" were manipulated to follow increasing or decreasing price trends for one, but not the other, pain. After the auction, a preference test asked participants to choose which pain they would prefer to experience for a longer duration. Results: Players remained indifferent between the two pain types throughout the auction. However, their bids were differentially attracted toward what others bid for each pain, with overbidding during decreasing prices and underbidding during increasing prices. Conclusion: Health preferences are dissociated from market prices, which are strongly referenced to others' choices. This suggests that the price of health care in a free market can become critically detached from people's underlying preferences.
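The market mechanism used here is a standard second-price (Vickrey) auction, in which the highest bidder wins but pays the second-highest bid, a rule that makes truthful bidding the dominant strategy. A minimal sketch of the resolution step (bidder names are illustrative):

```python
def second_price_auction(bids):
    """Resolve a sealed-bid second-price (Vickrey) auction.
    `bids` maps bidder id -> bid amount; returns (winner, price)."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1]  # the runner-up's bid sets the price paid
    return winner, price

winner, price = second_price_auction({"p1": 3.0, "p2": 5.5, "p3": 4.0})
# "p2" wins the relief but pays 4.0, the second-highest bid
```

Because the price paid is decoupled from one's own bid, any drift of bids toward the manipulated prices of "others" reflects social reference effects rather than strategic shading.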
In contrast with animal communication systems, diversity is characteristic of almost every aspect of human language. Languages variously employ tones, clicks, or manual signs to signal differences in meaning; some languages lack the noun-verb distinction (e.g., Straits Salish), whereas others have a proliferation of fine-grained syntactic categories (e.g., Tzeltal); and some languages do without morphology (e.g., Mandarin), while others pack a whole sentence into a single word (e.g., Cayuga). A challenge for evolutionary biology is to reconcile the diversity of languages with the high degree of biological uniformity of their speakers. Here, we model processes of language change and geographical dispersion and find a consistent pressure for flexible learning, irrespective of the language being spoken. This pressure arises because flexible learners can best cope with the observed high rates of linguistic change associated with divergent cultural evolution following human migration. Thus, rather than genetic adaptations for specific aspects of language, such as recursion, the coevolution of genes and fast-changing linguistic structure provides the biological basis for linguistic diversity. Only biological adaptations for flexible learning combined with cultural evolution can explain how each child has the potential to learn any human language.
In a series of experiments, Kusev et al. (Journal of Experimental Psychology: Human Perception and Performance 37:1874-1886, 2011) studied relative-frequency judgments of items drawn from two distinct categories. The experiments showed that the judged frequencies of categories of sequentially encountered stimuli are affected by the properties of the experienced sequences. Specifically, a first-run effect was observed, whereby people overestimated the frequency of a given category when that category was the first repeated category to occur in the sequence. Here, we (1) interpret these findings as reflecting the operation of a judgment heuristic sensitive to sequential patterns, (2) present mathematical definitions of the sequences used by Kusev et al. (2011), and (3) present a mathematical formalization of the first-run effect, the judgments-relative-to-patterns model, to account for the judged frequencies of sequentially encountered stimuli. The model parameter w captures the effect of the length of the first run on frequency estimates, given the total sequence length. We fitted the model to the data of Kusev et al. (2011): with increasing values of w, subsequent items in the first run have less influence on judgments. We see the model as essential for advancing knowledge in the psychology of judgments, as well as in other disciplines, such as computer science, cognitive neuroscience, artificial intelligence, and human-computer interaction.
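One way to picture such a first-run weighting is the following toy formalization (our illustrative sketch with an assumed geometric decay; the published judgments-relative-to-patterns model may use a different functional form):

```python
def first_run(sequence):
    """Locate the first run: the first maximal block of length >= 2.
    Returns (category, start index, run length), or (None, None, 0)."""
    for i in range(len(sequence) - 1):
        if sequence[i] == sequence[i + 1]:
            j = i
            while j < len(sequence) and sequence[j] == sequence[i]:
                j += 1
            return sequence[i], i, j - i
    return None, None, 0

def judged_frequency(sequence, category, w=0.5):
    """Judged relative frequency of `category`: every item has weight
    1, and the k-th item of the first run gets an extra (1 - w)**k,
    so larger w means later first-run items influence judgments less.
    This decay form is an assumption for illustration."""
    run_cat, start, length = first_run(sequence)
    weights = [1.0] * len(sequence)
    for k in range(length):
        weights[start + k] += (1 - w) ** k
    mass = sum(wt for wt, item in zip(weights, sequence)
               if item == category)
    return mass / sum(weights)

seq = ["A", "A", "A", "B", "A", "B", "B", "B"]
# The true relative frequency of "A" is 0.5, but "A" forms the first
# run, so its judged frequency is inflated above 0.5.
```

The overweighting of the first run reproduces the qualitative first-run effect: the category that repeats first is judged more frequent than it actually is.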
Human choice behavior exhibits many paradoxical and challenging patterns. Traditional explanations focus on how values are represented, but little is known about how values are integrated. Here we outline a psychophysical task for value integration that can be used as a window on high-level, multiattribute decisions. Participants choose between alternative rapidly presented streams of numerical values. By controlling the temporal distribution of the values, we demonstrate that this process underlies many puzzling choice paradoxes, such as temporal, risk, and framing biases, as well as preference reversals. These phenomena can be explained by a simple mechanism based on the integration of values, weighted by their salience. The salience of a sampled value depends on its temporal order and momentary rank in the decision context, whereas the direction of the weighting is determined by the task framing. We show that many known choice anomalies may arise from the microstructure of the value integration process.
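The salience-weighted integration idea can be illustrated with a toy sketch (our own simplification; the weight form and parameter are assumptions, not the paper's fitted model): each sampled value is weighted by its rank within the stream before averaging.

```python
def integrate(stream, salience_slope=0.5):
    """Rank-weighted integration of a value stream: each sample's
    weight grows with its rank in the stream (ties take the lowest
    rank), so extreme high samples pull the integrated value up.
    The linear weight form is an illustrative assumption."""
    ranked = sorted(stream)
    weights = [1 + salience_slope * ranked.index(v) for v in stream]
    return sum(w * v for w, v in zip(weights, stream)) / sum(weights)

# Two streams with the same mean (5.0): a narrow one and a wide one.
narrow = [4, 5, 6, 5, 5]
wide = [1, 9, 2, 8, 5]
```

With positive weighting of high ranks, the wide (risky) stream integrates higher than the narrow (safe) one despite equal means, producing a risk-seeking-like preference; reversing the weighting direction, as task framing does in the paper, reverses the bias.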
Bowers and Davis (2012) criticize Bayesian modelers for telling "just so" stories about cognition and neuroscience. Their criticisms are weakened by not giving an accurate characterization of the motivation behind Bayesian modeling or the ways in which Bayesian models are used and by not evaluating this theoretical framework against specific alternatives. We address these points by clarifying our beliefs about the goals and status of Bayesian models and by identifying what we view as the unique merits of the Bayesian approach.
Many human interactions are built on trust, so widespread confidence in first impressions generally favors individuals with trustworthy-looking appearances. However, few studies have explicitly examined (1) the contribution of unfakeable facial features to trust-based decisions and (2) how these cues are integrated with information about past behavior.