Chronic neural recording in behaving animals is an essential method for studies of neural circuit function. However, stable recordings from small, densely packed neurons remain challenging, particularly over time-scales relevant for learning.
Bird songs range in form from the simple notes of a Chipping Sparrow to the rich performance of the nightingale. Non-adjacent correlations can be found in the syntax of some birdsongs, indicating that the choice of what to sing next is determined not only by the current syllable, but also by previous syllables sung. Here we examine the song of the domesticated canary, a complex singer whose song consists of syllables, grouped into phrases that are arranged in flexible sequences. Phrases are defined by a fundamental time-scale that is independent of the underlying syllable duration. We show that the ordering of phrases is governed by long-range rules: the choice of what phrase to sing next in a given context depends on the history of the song, and for some syllables, highly specific rules produce correlations in song over timescales of up to ten seconds. The neural basis of these long-range correlations may provide insight into how complex behaviors are assembled from more elementary, stereotyped modules.
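One way to make the notion of long-range rules concrete is to ask whether knowing more of a song's history reduces uncertainty about the next phrase. The Python sketch below estimates the conditional entropy of the next phrase type for progressively longer contexts on a toy set of phrase sequences; the toy data, function names, and the entropy measure are illustrative assumptions, not the analysis used in the study.

```python
import math
from collections import Counter, defaultdict

# Toy phrase-type sequences (one letter per phrase); illustrative only, not
# data from the study.  Songs that open with "A" end with "D", and songs that
# open with "E" end with "F", so the final choice depends on the distant past.
songs = [
    ["A", "B", "C", "D"],
    ["A", "B", "C", "D"],
    ["E", "B", "C", "F"],
    ["E", "B", "C", "F"],
]

MAX_ORDER = 3  # longest history considered

def next_phrase_counts(songs, order, start=MAX_ORDER):
    """Count next-phrase occurrences given the preceding `order` phrases.

    Counting starts at position `start` so every model order predicts the
    same set of positions and the entropies are directly comparable.
    """
    counts = defaultdict(Counter)
    for song in songs:
        for i in range(start, len(song)):
            context = tuple(song[i - order:i])
            counts[context][song[i]] += 1
    return counts

def conditional_entropy(counts):
    """Average uncertainty (bits) about the next phrase given its context."""
    total = sum(sum(c.values()) for c in counts.values())
    h = 0.0
    for c in counts.values():
        n = sum(c.values())
        h += (n / total) * -sum((k / n) * math.log2(k / n) for k in c.values())
    return h

# A first-order Markov song would show no drop in entropy as the context
# grows; a drop indicates non-adjacent (long-range) correlations.
for order in range(1, MAX_ORDER + 1):
    h = conditional_entropy(next_phrase_counts(songs, order))
    print(f"order {order}: H(next phrase | context) = {h:.2f} bits")
```

On this toy data the entropy drops only once the context reaches back to the song's opening phrase; a first-order Markov song would show no such drop, so the drop is the signature of a non-adjacent correlation.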
All primates depend for their survival on being able to rapidly learn about and recognize objects. Objects may be visually detected at multiple positions, sizes, and viewpoints. How does the brain rapidly learn and recognize objects while scanning a scene with eye movements, without causing a combinatorial explosion in the number of cells that are needed? How does the brain avoid the problem of erroneously classifying parts of different objects together at the same or different positions in a visual scene? In monkeys and humans, a key area for such invariant object category learning and recognition is the inferotemporal cortex (IT). A neural model is proposed to explain how spatial and object attention coordinate the ability of IT to learn invariant category representations of objects that are seen at multiple positions, sizes, and viewpoints. The model clarifies how interactions within a hierarchy of processing stages in the visual brain accomplish this. These stages include the retina, lateral geniculate nucleus, and cortical areas V1, V2, V4, and IT in the brain's What cortical stream, as they interact with spatial attention processes within the parietal cortex of the Where cortical stream. The model builds upon the ARTSCAN model, which proposed how view-invariant object representations are generated. The positional ARTSCAN (pARTSCAN) model proposes how the following additional processes in the What cortical processing stream also enable position-invariant object representations to be learned: IT cells with persistent activity, and a combination of normalizing object category competition and a view-to-object learning law which together ensure that unambiguous views have a larger effect on object recognition than ambiguous views. The model explains how such invariant learning can be fooled when monkeys, or other primates, are presented with an object that is swapped with another object during eye movements to foveate the original object. The swapping procedure is predicted to prevent the reset of spatial attention, which would otherwise keep the representations of multiple objects from being combined by learning. Li and DiCarlo (2008) have presented neurophysiological data from monkeys showing how unsupervised natural experience in a target swapping experiment can rapidly alter object representations in IT. The model quantitatively simulates the swapping data by showing how the swapping procedure fools the spatial attention mechanism. More generally, the model provides a unifying framework, and testable predictions in both monkeys and humans, for understanding object learning data using neurophysiological methods in monkeys, and spatial attention, episodic learning, and memory retrieval data using functional imaging methods in humans.
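The pairing of a normalizing category competition with a learning law that favors unambiguous views can be sketched compactly. The toy code below is a loose illustration under those two assumptions only: divisively normalized object-category activity, and an instar-like weight update scaled by the winner's normalized activity, so that ambiguous views barely move the weights. Variable names, parameter values, and the toy views are invented and are not the pARTSCAN equations.

```python
import numpy as np

rng = np.random.default_rng(0)

N_VIEW, N_OBJ = 8, 3                                # view feature cells, object category cells
W = rng.uniform(0.0, 0.1, size=(N_OBJ, N_VIEW))     # view-to-object weights

def object_activities(view, W):
    """Divisively normalized object-category responses to a view vector."""
    drive = W @ view
    return drive / (drive.sum() + 1e-9)             # normalizing competition

def learn(view, W, rate=0.5):
    """Instar-like update gated by how unambiguous the winning category is.

    If one category dominates the normalized activity (an unambiguous view),
    its weights move strongly toward the view; if activity is spread across
    categories (an ambiguous view), the update is weak.
    """
    y = object_activities(view, W)
    winner = int(np.argmax(y))
    confidence = y[winner]                          # near 1 for unambiguous views
    W[winner] += rate * confidence * (view - W[winner])
    return winner, confidence

# Toy "views": two distinct views of the same object share most features.
view_a1 = np.array([1, 1, 1, 1, 0, 0, 0, 0], dtype=float)
view_a2 = np.array([1, 1, 1, 0, 1, 0, 0, 0], dtype=float)

for view in [view_a1, view_a2] * 20:
    learn(view, W)

for name, view in [("view_a1", view_a1), ("view_a2", view_a2)]:
    y = object_activities(view, W)
    print(name, "-> object", int(np.argmax(y)), f"(top activity {y.max():.2f})")
```

After a few presentations the shared features dominate one category's weights, so both views come to activate the same object cell, whereas a view that drove two categories almost equally would change the weights very little.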
Visual object recognition is an essential accomplishment of advanced brains. Object recognition needs to be tolerant, or invariant, with respect to changes in object position, size, and view. In monkeys and humans, a key area for recognition is the anterior inferotemporal cortex (ITa). Recent neurophysiological data show that ITa cells with high object selectivity often have low position tolerance. We propose a neural model whose cells learn to simulate this tradeoff, as well as ITa responses to image morphs, while explaining how invariant recognition properties may arise in stages due to processes across multiple cortical areas. These processes include the cortical magnification factor, multiple receptive field sizes, and top-down attentive matching and learning properties that may be tuned by task requirements to attend to either concrete or abstract visual features with different levels of vigilance. The model predicts that data from the tradeoff and image morph tasks emerge from different levels of vigilance in the animals performing them. This result illustrates how different vigilance requirements of a task may change the course of category learning, notably the critical features that are attended and incorporated into learned category prototypes. The model outlines a path for developing an animal model of how defective vigilance control can lead to symptoms of various mental disorders, such as autism and amnesia.
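The role of vigilance described here can be illustrated with a minimal ART-style categorization sketch: an input joins the best-matching learned prototype only if the match exceeds a vigilance threshold, and otherwise recruits a new category, so higher vigilance yields more numerous, more concrete categories. The matching rule, learning rate, and toy patterns below are simplifying assumptions rather than the model in this study.

```python
import numpy as np

def match(pattern, prototype):
    """Fraction of the input pattern's active features covered by a prototype."""
    return np.minimum(pattern, prototype).sum() / pattern.sum()

def learn_categories(patterns, vigilance, beta=0.5):
    """Greedy ART-like category learning.

    A pattern is assigned to the best-matching prototype whose match exceeds
    the vigilance threshold; otherwise a new category is created.  Higher
    vigilance therefore yields more, narrower categories.
    """
    prototypes, assignments = [], []
    for p in patterns:
        scores = [match(p, w) for w in prototypes]
        best = int(np.argmax(scores)) if scores else -1
        if best >= 0 and scores[best] >= vigilance:
            # Refine the winning prototype toward the shared (critical) features.
            prototypes[best] = (1 - beta) * prototypes[best] + beta * np.minimum(p, prototypes[best])
            assignments.append(best)
        else:
            prototypes.append(p.astype(float).copy())
            assignments.append(len(prototypes) - 1)
    return prototypes, assignments

# Toy binary "views": two exemplars per object, sharing most features.
patterns = np.array([
    [1, 1, 1, 1, 0, 0, 0, 0],
    [1, 1, 1, 0, 1, 0, 0, 0],
    [0, 0, 0, 0, 1, 1, 1, 1],
    [0, 0, 0, 1, 1, 1, 1, 0],
], dtype=float)

for rho in (0.6, 0.9):
    protos, assign = learn_categories(patterns, vigilance=rho)
    print(f"vigilance {rho}: {len(protos)} categories, assignments {assign}")
```

On the toy patterns, vigilance 0.6 groups the two exemplars of each object into a single category, while vigilance 0.9 gives every exemplar its own narrow category, mirroring the point that the vigilance demanded by a task shifts which features end up in the learned prototypes.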
Multiple options are available for closure of hysterectomy incisions. This study compared postoperative clinical and economic outcomes using topical skin adhesive (2-octyl cyanoacrylate; OCA) vs. conventional skin closure in women undergoing total abdominal hysterectomy.
A neural model is described of how spontaneous retinal waves are formed in infant mammals, and how these waves organize activity-dependent development of a topographic map in the lateral geniculate nucleus, with connections from each eye segregated into separate anatomical layers. The model simulates the spontaneous behavior of starburst amacrine cells and retinal ganglion cells during the production of retinal waves during the first few weeks of mammalian postnatal development. It proposes how excitatory and inhibitory mechanisms within individual cells, such as Ca(2+)-activated K(+) channels, and cAMP currents and signaling cascades, can modulate the spatiotemporal dynamics of waves, notably by controlling the after-hyperpolarization currents of starburst amacrine cells. Given the critical role of the geniculate map in the development of visual cortex, these results provide a foundation for analyzing the temporal dynamics whereby the visual cortex itself develops.
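The qualitative wave mechanism described here (spontaneous initiation, recruitment of excitable neighbors, and a long after-hyperpolarization that keeps a wave from re-invading territory it has just crossed) can be caricatured with a toy lattice simulation. The sketch below is only that caricature; the grid size, probabilities, and refractory duration are invented parameters, not the model's biophysics.

```python
import numpy as np

rng = np.random.default_rng(1)

GRID = 40                # cells per side of the simulated retinal patch
P_SPONT = 1e-4           # spontaneous firing probability per cell per step
REFRACTORY_STEPS = 60    # after-hyperpolarization: steps a cell stays silent
NEIGHBOR_DRIVE = 0.5     # probability an active neighbor recruits a cell
STEPS = 500

active = np.zeros((GRID, GRID), dtype=bool)
refractory = np.zeros((GRID, GRID), dtype=int)

wave_sizes = []
for t in range(STEPS):
    # Count active 4-neighbors of each cell.
    drive = np.zeros_like(refractory)
    drive[1:, :] += active[:-1, :]
    drive[:-1, :] += active[1:, :]
    drive[:, 1:] += active[:, :-1]
    drive[:, :-1] += active[:, 1:]

    excitable = refractory == 0
    p_fire = 1.0 - (1.0 - NEIGHBOR_DRIVE) ** drive      # recruitment by neighbors
    p_fire = np.maximum(p_fire, P_SPONT)                 # plus spontaneous firing
    new_active = excitable & (rng.random((GRID, GRID)) < p_fire)

    # Cells that just fired enter a long after-hyperpolarization, so a wave
    # cannot immediately re-invade territory it has just crossed.
    refractory[new_active] = REFRACTORY_STEPS
    refractory[refractory > 0] -= 1
    active = new_active
    wave_sizes.append(int(active.sum()))

print("peak simultaneous activity:", max(wave_sizes), "cells")
```

In this caricature, spontaneous events seed waves, local recruitment spreads them, and the long refractory period is what bounds each wave and prevents immediate re-initiation in recently active territory.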
Stereotyped sequences of neural activity underlie learned vocal behavior in songbirds; principal neurons in the cortical motor nucleus HVC fire in stereotyped sequences with millisecond precision across multiple renditions of a song. The geometry of neural connections underlying these sequences is not known in detail, though feed-forward chains are commonly assumed in theoretical models of sequential neural activity. In songbirds, a well-defined cortical-thalamic motor circuit exists, but little is known about the fine-grained structure of connections within each song nucleus. To examine whether the structure of song is critically dependent on long-range connections within HVC, we bilaterally transected the nucleus along the anterior-posterior axis in normal-hearing and deafened birds. The disruption leads to a slowing of song as well as an increase in acoustic variability. These effects are reversed on a time-scale of days, even in deafened birds or in birds that are prevented from singing post-transection. The stereotyped song of zebra finches includes acoustic details spanning milliseconds to seconds, making it one of the most precise learned behaviors in the animal kingdom. This detailed motor pattern is resilient to disruption of connections at the cortical level, and the details of song variability and duration are maintained by offline homeostasis of the song circuit.