JoVE Visualize
PubMed Article
Paleoindian unifacial stone tool spurs: intended accessories or incidental accidents?
PUBLISHED: 01-01-2013
Paleoindian unifacial stone tools frequently exhibit distinct, sharp projections, known as "spurs". During the last two decades, a theoretically and empirically informed interpretation (based on individual artifact analysis, use-wear, tool-production techniques, and studies of resharpening) suggested that spurs were sometimes created intentionally via retouch, and other times created incidentally via resharpening or knapping accidents. However, more recently Weedman strongly criticized the inference that Paleoindian spurs were ever intentionally produced or served a functional purpose, and asserted that ethnographic research "demonstrates that the presence of so called graver spurs does not have a functional significance." While ethnographic data cannot serve as a direct test of the archaeological record, we used Weedman's ethnographic observations to create two quantitative predictions of the Paleoindian archaeological record in order to directly examine the hypothesis that Paleoindian spurs were predominantly accidents occurring incidentally via resharpening and reshaping. The first prediction is that the frequency of spurs should increase as tool reduction proceeds. The second prediction is that the frequency of spurs should increase as tool breakage increases. An examination of 563 unbroken tools and 629 tool fragments from the Clovis archaeological record of the North American Lower Great Lakes region showed that neither prediction was consistent with the notion that spurs were predominantly accidents. Instead, our results support the prevailing viewpoint that spurs were sometimes created intentionally via retouch, and other times, created incidentally via resharpening or knapping accidents. Behaviorally, this result is consistent with the notion that unifacial stone tools were multifunctional implements that enhanced the mobile lifestyle of Pleistocene hunter-gatherers.
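To make the two predictions concrete, the sketch below shows how spur frequency could be compared across reduction stages and between unbroken tools and fragments with a chi-square test. All counts are invented placeholders for illustration; they are not the published Clovis data, and the authors' actual statistical treatment may differ.

```python
# Hypothetical illustration of the two predictions described above:
# (1) spur frequency vs. degree of tool reduction, (2) spur frequency vs. breakage.
# Counts are invented for demonstration only; they are NOT the published data.
from scipy.stats import chi2_contingency

# Rows: reduction stage (early, middle, late); columns: tools with / without spurs.
reduction_table = [
    [12, 180],   # early-stage tools: 12 spurred, 180 unspurred (hypothetical)
    [14, 190],   # middle-stage tools
    [13, 154],   # late-stage tools
]

# Rows: unbroken tools vs. tool fragments; columns: with / without spurs.
breakage_table = [
    [39, 524],   # unbroken tools (n = 563 in the study; counts here are invented)
    [41, 588],   # tool fragments (n = 629 in the study; counts here are invented)
]

for name, table in [("reduction", reduction_table), ("breakage", breakage_table)]:
    chi2, p, dof, _ = chi2_contingency(table)
    freqs = [row[0] / sum(row) for row in table]
    print(f"{name}: spur frequencies = {[round(f, 3) for f in freqs]}, "
          f"chi2 = {chi2:.2f}, p = {p:.3f}")
```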
Plants provide multiple benefits for the production of biopharmaceuticals including low costs, scalability, and safety. Transient expression offers the additional advantage of short development and production times, but expression levels can vary significantly between batches thus giving rise to regulatory concerns in the context of good manufacturing practice. We used a design of experiments (DoE) approach to determine the impact of major factors such as regulatory elements in the expression construct, plant growth and development parameters, and the incubation conditions during expression, on the variability of expression between batches. We tested plants expressing a model anti-HIV monoclonal antibody (2G12) and a fluorescent marker protein (DsRed). We discuss the rationale for selecting certain properties of the model and identify its potential limitations. The general approach can easily be transferred to other problems because the principles of the model are broadly applicable: knowledge-based parameter selection, complexity reduction by splitting the initial problem into smaller modules, software-guided setup of optimal experiment combinations and step-wise design augmentation. Therefore, the methodology is not only useful for characterizing protein expression in plants but also for the investigation of other complex systems lacking a mechanistic description. The predictive equations describing the interconnectivity between parameters can be used to establish mechanistic models for other complex systems.
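As a loose illustration of the design-of-experiments logic described above (coded factor levels, a reduced set of runs, and a fitted predictive equation), here is a minimal sketch using a full two-level factorial for three factors and an ordinary least-squares fit. The factor names, levels, and yield values are invented and do not come from the study, which used software-guided optimal designs rather than this simple full factorial.

```python
# Minimal two-level factorial design and linear-model fit, illustrating the DoE idea.
# Factor names and yield values are hypothetical, not taken from the study.
import itertools
import numpy as np

factors = ["incubation_temp", "plant_age", "promoter_variant"]  # coded -1 / +1 levels
design = np.array(list(itertools.product([-1, 1], repeat=len(factors))), dtype=float)

# Hypothetical measured responses (e.g., relative antibody yield) for the 8 runs.
yields = np.array([0.8, 1.1, 0.9, 1.4, 1.0, 1.3, 1.2, 1.8])

# Model: intercept + main effects (no interactions), fitted by least squares.
X = np.column_stack([np.ones(len(design)), design])
coefs, *_ = np.linalg.lstsq(X, yields, rcond=None)

for name, c in zip(["intercept"] + factors, coefs):
    print(f"{name:>17}: {c:+.3f}")
```

The fitted coefficients play the role of the "predictive equations" mentioned above: each main-effect coefficient estimates how much the response changes when that factor moves from its low to its high level.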
29 Related JoVE Articles
EEG Mu Rhythm in Typical and Atypical Development
Authors: Raphael Bernier, Benjamin Aaronson, Anna Kresse.
Institutions: University of Washington.
Electroencephalography (EEG) is an effective, efficient, and noninvasive method of assessing and recording brain activity. Given its excellent temporal resolution, EEG can be used to examine the neural response related to specific behaviors, states, or external stimuli. An example of this utility is the assessment of the mirror neuron system (MNS) in humans through examination of the EEG mu rhythm. The EEG mu rhythm, oscillatory activity in the 8-12 Hz frequency range recorded from centrally located electrodes, is suppressed when an individual executes, or simply observes, goal-directed actions. As such, it has been proposed to reflect activity of the MNS. It has been theorized that dysfunction in the MNS plays a contributing role in the social deficits of autism spectrum disorder (ASD). The MNS can then be noninvasively examined in clinical populations by using EEG mu rhythm attenuation as an index of its activity. The described protocol provides an avenue to examine social cognitive functions theoretically linked to the MNS in individuals with typical and atypical development, such as ASD.
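Mu attenuation of the kind described here is commonly quantified as a log ratio of 8-12 Hz power during action execution or observation to power during a baseline condition. The sketch below assumes evenly sampled single-channel EEG arrays and uses Welch's method; the sampling rate, signals, and variable names are placeholders rather than details of the published protocol.

```python
# Sketch of a mu-suppression index: log(8-12 Hz power during condition / baseline).
# Negative values indicate suppression. Signals and sampling rate are placeholders.
import numpy as np
from scipy.signal import welch

def band_power(signal, fs, lo=8.0, hi=12.0):
    """Mean power spectral density of `signal` within [lo, hi] Hz."""
    freqs, psd = welch(signal, fs=fs, nperseg=int(2 * fs))
    band = (freqs >= lo) & (freqs <= hi)
    return psd[band].mean()

fs = 500.0                      # hypothetical sampling rate (Hz)
rng = np.random.default_rng(0)
baseline = rng.standard_normal(int(10 * fs))     # stand-in for central-electrode EEG at rest
observation = rng.standard_normal(int(10 * fs))  # stand-in for EEG during action observation

mu_index = np.log(band_power(observation, fs) / band_power(baseline, fs))
print(f"mu suppression index: {mu_index:+.3f}")
```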
Medicine, Issue 86, Electroencephalography (EEG), mu rhythm, imitation, autism spectrum disorder, social cognition, mirror neuron system
Chemotactic Response of Marine Micro-Organisms to Micro-Scale Nutrient Layers
Authors: Justin R. Seymour, Marcos, Roman Stocker.
Institutions: MIT - Massachusetts Institute of Technology.
The degree to which planktonic microbes can exploit microscale resource patches will have considerable implications for oceanic trophodynamics and biogeochemical flux. However, to take advantage of nutrient patches in the ocean, swimming microbes must overcome the influences of physical forces including molecular diffusion and turbulent shear, which will limit the availability of patches and the ability of bacteria to locate them. Until recently, methodological limitations have precluded direct examinations of microbial behaviour within patchy habitats and realistic small-scale flow conditions. Hence, much of our current knowledge regarding microbial behaviour in the ocean has been procured from theoretical predictions. To obtain new information on microbial foraging behaviour in the ocean we have applied soft lithographic fabrication techniques to develop two microfluidic devices, which we have used to create (i) microscale nutrient patches with dimensions and diffusive characteristics relevant to oceanic processes and (ii) microscale vortices, with shear rates corresponding to those expected in the ocean. These microfluidic devices have permitted a first direct examination of microbial swimming and chemotactic behaviour within a heterogeneous and dynamic seascape. The combined use of epifluorescence and phase contrast microscopy allows direct examination of the physical dimensions and diffusive characteristics of nutrient patches, while observing the population-level aggregative response, in addition to the swimming behaviour of individual microbes. These experiments have revealed that some species of phytoplankton, heterotrophic bacteria and phagotrophic protists are adept at locating and exploiting diffusing microscale resource patches within very short time frames. We have also shown that up to moderate shear rates, marine bacteria are able to fight the flow and swim through their environment of their own accord. However, beyond a threshold high shear level, bacteria are aligned in the shear flow and are less capable of swimming without disturbance from the flow. Microfluidics represents a novel and inexpensive approach for studying aquatic microbial ecology, and due to its suitability for accurately creating realistic flow fields and substrate gradients at the microscale, is ideally applicable to examinations of microbial behaviour at the smallest scales of interaction. We therefore suggest that microfluidics represents a valuable tool for obtaining a better understanding of the ecology of microorganisms in the ocean.
Microbiology, Issue 4, microbial community, chemotaxis, microfluidics
A Behavioral Assay to Measure Responsiveness of Zebrafish to Changes in Light Intensities
Authors: Farida Emran, Jason Rihel, John E. Dowling.
Institutions: Harvard.
The optokinetic reflex (OKR) is a basic visual reflex exhibited by most vertebrates and plays an important role in stabilizing the eye relative to the visual scene. However, the OKR requires that an animal detect moving stripes, and it is possible that fish that fail to exhibit an OKR may not be completely blind. One zebrafish mutant, no optokinetic response c (nrc), has no OKR under any light conditions tested and was reported to be completely blind. Previously, we have shown that OFF-ganglion cell activity can be recorded in these mutants. To determine whether mutant fish with no OKR, such as the nrc mutant, can detect simple light increments and decrements, we developed the visual motor behavioral assay (VMR). In this assay, single zebrafish larvae are placed in each well of a 96-well plate, allowing the simultaneous monitoring of larvae using an automated video-tracking system. The locomotor responses of each larva to 30 minutes of light ON and 30 minutes of light OFF were recorded and quantified. WT fish show a brief spike of motor activity upon lights ON, known as the startle response, followed by a return to lower-than-baseline activity, called a freeze. WT fish also sharply increase their locomotor activity immediately following lights OFF and only gradually (over several minutes) return to baseline locomotor activity. The nrc mutants respond to light OFF similarly to WT fish, but exhibit a slight reduction in their average activity as compared to WT fish. Motor activity in response to light ON in nrc mutants is delayed and sluggish, with a slow rise time compared to the WT light ON response. The results indicate that nrc fish are not completely blind. Because teleosts can detect light through non-retinal tissues, we confirmed that the immediate behavioral responses to light-intensity changes require intact eyes by using the chokh (chk) mutants, which completely lack eyes from the earliest stages of development. In our VMR assay, the chk mutants exhibit no startle response to either light ON or OFF, showing that the lateral eyes mediate this behavior. The VMR assay described here complements the well-established OKR assay, which does not test the ability of zebrafish larvae to respond to changes in light intensities. Additionally, the automation of the VMR assay lends itself to high-throughput screening for defects in light-intensity driven visual responses.
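As an illustration of how per-larva locomotor traces from such a 96-well recording might be summarized around a light transition, here is a small sketch. The array shapes, sampling interval, and activity values are hypothetical and are not those of the tracking system used in the study.

```python
# Sketch: mean locomotor activity per larva before and after a lights-OFF transition.
# Data are simulated; a real analysis would load per-well activity from the tracker.
import numpy as np

n_larvae, fs = 96, 1.0                 # 96 wells, one activity sample per second (assumed)
t_off = 1800                           # lights OFF at 30 min into a 60-min record
rng = np.random.default_rng(1)

activity = rng.poisson(lam=2.0, size=(n_larvae, 3600)).astype(float)
activity[:, t_off:t_off + 60] += 6.0   # simulate the sharp increase after lights OFF

pre = activity[:, t_off - 300:t_off].mean(axis=1)     # 5 min before the transition
post = activity[:, t_off:t_off + 300].mean(axis=1)    # 5 min after the transition
print(f"mean activity before OFF: {pre.mean():.2f}, after OFF: {post.mean():.2f}")
```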
Developmental Biology, Issue 20, vision, ON- and OFF-responses, behavior, zebrafish
Technique and Considerations in the Use of 4x1 Ring High-definition Transcranial Direct Current Stimulation (HD-tDCS)
Authors: Mauricio F. Villamar, Magdalena Sarah Volz, Marom Bikson, Abhishek Datta, Alexandre F. DaSilva, Felipe Fregni.
Institutions: Spaulding Rehabilitation Hospital and Massachusetts General Hospital, Harvard Medical School, Pontifical Catholic University of Ecuador, Charité University Medicine Berlin, The City College of The City University of New York, University of Michigan.
High-definition transcranial direct current stimulation (HD-tDCS) has recently been developed as a noninvasive brain stimulation approach that increases the accuracy of current delivery to the brain by using arrays of smaller "high-definition" electrodes, instead of the larger pad-electrodes of conventional tDCS. Targeting is achieved by energizing electrodes placed in predetermined configurations. One of these is the 4x1-ring configuration. In this approach, a center ring electrode (anode or cathode) overlying the target cortical region is surrounded by four return electrodes, which help circumscribe the area of stimulation. Delivery of 4x1-ring HD-tDCS is capable of inducing significant neurophysiological and clinical effects in both healthy subjects and patients. Furthermore, its tolerability is supported by studies using intensities as high as 2.0 milliamperes for up to twenty minutes. Even though 4x1 HD-tDCS is simple to perform, correct electrode positioning is important in order to accurately stimulate target cortical regions and exert its neuromodulatory effects. The use of electrodes and hardware that have specifically been tested for HD-tDCS is critical for safety and tolerability. Given that most published studies on 4x1 HD-tDCS have targeted the primary motor cortex (M1), particularly for pain-related outcomes, the purpose of this article is to systematically describe its use for M1 stimulation, as well as the considerations to be taken for safe and effective stimulation. However, the methods outlined here can be adapted for other HD-tDCS configurations and cortical targets.
Medicine, Issue 77, Neurobiology, Neuroscience, Physiology, Anatomy, Biomedical Engineering, Biophysics, Neurophysiology, Nervous System Diseases, Diagnosis, Therapeutics, Anesthesia and Analgesia, Investigative Techniques, Equipment and Supplies, Mental Disorders, Transcranial direct current stimulation, tDCS, High-definition transcranial direct current stimulation, HD-tDCS, Electrical brain stimulation, Transcranial electrical stimulation (tES), Noninvasive Brain Stimulation, Neuromodulation, non-invasive, brain, stimulation, clinical techniques
Computerized Dynamic Posturography for Postural Control Assessment in Patients with Intermittent Claudication
Authors: Natalie Vanicek, Stephanie A. King, Risha Gohil, Ian C. Chetter, Patrick A. Coughlin.
Institutions: University of Sydney, University of Hull, Hull and East Yorkshire Hospitals, Addenbrookes Hospital.
Computerized dynamic posturography with the EquiTest is an objective technique for measuring postural strategies under challenging static and dynamic conditions. As part of a diagnostic assessment, the early detection of postural deficits is important so that appropriate and targeted interventions can be prescribed. The Sensory Organization Test (SOT) on the EquiTest determines an individual's use of the sensory systems (somatosensory, visual, and vestibular) that are responsible for postural control. Somatosensory and visual input are altered by the calibrated sway-referenced support surface and visual surround, which move in the anterior-posterior direction in response to the individual's postural sway. This creates a conflicting sensory experience. The Motor Control Test (MCT) challenges postural control by creating unexpected postural disturbances in the form of backwards and forwards translations. The translations are graded in magnitude and the time to recover from the perturbation is computed. Intermittent claudication, the most common symptom of peripheral arterial disease, is characterized by a cramping pain in the lower limbs and is caused by muscle ischemia secondary to reduced blood flow to working muscles during physical exertion. Claudicants often display poor balance, making them susceptible to falls and activity avoidance. The Ankle Brachial Pressure Index (ABPI) is a noninvasive method for indicating the presence of peripheral arterial disease and intermittent claudication, a common symptom in the lower extremities. ABPI is measured as the highest systolic pressure from either the dorsalis pedis or posterior tibial artery divided by the highest brachial artery systolic pressure from either arm. This paper will focus on the use of computerized dynamic posturography in the assessment of balance in claudicants.
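The ABPI arithmetic described above is simple enough to make explicit. The sketch below assumes one systolic pressure reading (in mmHg) per site and follows the highest-pressure convention stated in the text; the example values are hypothetical.

```python
# Ankle Brachial Pressure Index (ABPI) as described above:
# highest ankle systolic pressure (dorsalis pedis or posterior tibial)
# divided by the highest brachial systolic pressure from either arm.
def abpi(dorsalis_pedis, posterior_tibial, left_brachial, right_brachial):
    return max(dorsalis_pedis, posterior_tibial) / max(left_brachial, right_brachial)

# Hypothetical readings in mmHg; values well below 1.0 are consistent with
# peripheral arterial disease.
value = abpi(dorsalis_pedis=95, posterior_tibial=90, left_brachial=130, right_brachial=125)
print(f"ABPI = {value:.2f}")
```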
Medicine, Issue 82, Posture, Computerized dynamic posturography, Ankle brachial pressure index, Peripheral arterial disease, Intermittent claudication, Balance, Posture, EquiTest, Sensory Organization Test, Motor Control Test
Thinned-skull Cortical Window Technique for In Vivo Optical Coherence Tomography Imaging
Authors: Jenny I. Szu, Melissa M. Eberle, Carissa L. Reynolds, Mike S. Hsu, Yan Wang, Christian M. Oh, M. Shahidul Islam, B. Hyle Park, Devin K. Binder.
Institutions: University of California, Riverside.
Optical coherence tomography (OCT) is a biomedical imaging technique with high spatial-temporal resolution. Because of its minimally invasive approach, OCT has been used extensively in ophthalmology, dermatology, and gastroenterology1-3. Using a thinned-skull cortical window (TSCW), we employ the spectral-domain OCT (SD-OCT) modality as a tool to image the cortex in vivo. Commonly, an opened skull has been used for neuroimaging as it provides more versatility; however, a TSCW approach is less invasive and is an effective means for long-term imaging in neuropathology studies. Here, we present a method of creating a TSCW in a mouse model for in vivo OCT imaging of the cerebral cortex.
Neuroscience, Issue 69, Bioengineering, Medicine, Biomedical Engineering, Anatomy, Physiology, Thinned-skull cortical window (TSCW), Optical coherence tomography (OCT), Spectral-domain OCT (SD-OCT), cerebral cortex, brain, imaging, mouse model
Adjustable Stiffness, External Fixator for the Rat Femur Osteotomy and Segmental Bone Defect Models
Authors: Vaida Glatt, Romano Matthys.
Institutions: Queensland University of Technology, RISystem AG.
The mechanical environment around a healing bone fracture is very important, as it determines how the fracture will heal. Over the past decade there has been great clinical interest in improving bone healing by altering the mechanical environment through the fixation stability around the lesion. One constraint of preclinical animal research in this area is the lack of experimental control over the local mechanical environment within large segmental defects, as well as osteotomies, as they heal. In this paper we report on the design and use of an external fixator to study the healing of large segmental bone defects or osteotomies. This device not only allows for controlled axial stiffness on the bone lesion as it heals, but also enables the stiffness to be changed during the healing process in vivo. The conducted experiments have shown that the fixators were able to maintain a 5 mm femoral defect gap in rats in vivo during unrestricted cage activity for at least 8 weeks. Likewise, we observed no distortion or infections, including pin infections, during the entire healing period. These results demonstrate that our newly developed external fixator achieved reproducible and standardized stabilization and allowed the mechanical environment of large bone defects and osteotomies of various sizes to be altered in vivo in the rat. This confirms that the external fixation device is well suited for preclinical research investigations using a rat model in the field of bone regeneration and repair.
Medicine, Issue 92, external fixator, bone healing, small animal model, large bone defect and osteotomy model, rat model, mechanical environment, mechanobiology.
Absolute Quantum Yield Measurement of Powder Samples
Authors: Luis A. Moreno.
Institutions: Hitachi High Technologies America.
Measurement of fluorescence quantum yield has become an important tool in the search for new solutions in the development, evaluation, quality control and research of illumination, AV equipment, organic EL material, films, filters and fluorescent probes for bio-industry. Quantum yield is calculated as the ratio of the number of photons emitted by a material to the number of photons it absorbs. The higher the quantum yield, the better the efficiency of the fluorescent material. For the measurements featured in this video, we will use the Hitachi F-7000 fluorescence spectrophotometer equipped with the Quantum Yield measuring accessory and Report Generator program. All the information provided applies to this system. Measurement of quantum yield in powder samples is performed following these steps: Generation of instrument correction factors for the excitation and emission monochromators. This is an important requirement for the correct measurement of quantum yield. It has been performed in advance for the full measurement range of the instrument and will not be shown in this video due to time limitations. Measurement of integrating sphere correction factors. The purpose of this step is to take into consideration the reflectivity characteristics of the integrating sphere used for the measurements. Reference and sample measurement using direct excitation and indirect excitation. Quantum yield calculation using direct and indirect excitation. Direct excitation is when the sample directly faces the excitation beam, which would be the normal measurement setup. However, because we use an integrating sphere, a portion of the emitted photons resulting from the sample fluorescence is reflected by the integrating sphere and will re-excite the sample, so we need to take indirect excitation into consideration. This is accomplished by measuring the sample placed in the port facing the emission monochromator, calculating the indirect quantum yield, and correcting the direct quantum yield calculation. Corrected quantum yield calculation. Chromaticity coordinates calculation using the Report Generator program. The Hitachi F-7000 Quantum Yield Measurement System offers advantages for this application, as follows: High sensitivity (S/N ratio 800 or better RMS; signal is the Raman band of water measured with Ex wavelength 350 nm, band pass Ex and Em 5 nm, and response 2 sec; noise is measured at the maximum of the Raman peak). High sensitivity allows measurement of samples even with low quantum yield. Using this system we have measured quantum yields as low as 0.1 for a sample of salicylic acid and as high as 0.8 for a sample of magnesium tungstate. Highly accurate measurement with a dynamic range of 6 orders of magnitude allows for measurements of both sharp scattering peaks with high intensity, as well as broad fluorescence peaks of low intensity, under the same conditions. High measuring throughput and reduced light exposure to the sample, due to a high scanning speed of up to 60,000 nm/minute and an automatic shutter function. Measurement of quantum yield over a wide wavelength range from 240 to 800 nm. Accurate quantum yield measurements are the result of collecting instrument spectral response and integrating sphere correction factors before measuring the sample. Large selection of calculated parameters provided by dedicated and easy to use software. During this video we will measure sodium salicylate in powder form, which is known to have a quantum yield value of 0.4 to 0.5.
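To make the photon-ratio definition concrete, here is a minimal sketch of the core calculation from integrated spectra: photons absorbed are estimated from the drop in the scattered excitation peak between a non-fluorescent reference and the sample, and photons emitted from the integrated emission band. The synthetic spectra and wavelength windows are placeholders; the indirect-excitation correction and chromaticity calculations performed by the instrument software are not reproduced here.

```python
# Minimal absolute quantum yield estimate from integrating-sphere spectra:
# QY = photons emitted / photons absorbed. Spectra below are synthetic placeholders.
import numpy as np

wavelengths = np.arange(300.0, 701.0, 1.0)            # nm

def gaussian(center, width, height):
    return height * np.exp(-((wavelengths - center) ** 2) / (2 * width ** 2))

reference = gaussian(350, 3, 1000.0)                   # scattered excitation, blank reference
sample = gaussian(350, 3, 550.0) + gaussian(430, 20, 20.0)  # attenuated excitation + emission

ex_window = (wavelengths >= 340) & (wavelengths <= 360)
em_window = (wavelengths >= 380) & (wavelengths <= 550)

def photon_count(spectrum, window):
    # Detected intensity scales with photon number times hc/lambda,
    # so relative photon counts are intensity multiplied by wavelength.
    return np.sum(spectrum[window] * wavelengths[window])

absorbed = photon_count(reference, ex_window) - photon_count(sample, ex_window)
emitted = photon_count(sample, em_window)
print(f"estimated quantum yield: {emitted / absorbed:.2f}")
```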
Molecular Biology, Issue 63, Powders, Quantum, Yield, F-7000, Quantum Yield, phosphor, chromaticity, Photo-luminescence
High Throughput Single-cell and Multiple-cell Micro-encapsulation
Authors: Todd P. Lagus, Jon F. Edd.
Institutions: Vanderbilt University.
Microfluidic encapsulation methods have been previously utilized to capture cells in picoliter-scale aqueous, monodisperse drops, providing confinement from a bulk fluid environment with applications in high throughput screening, cytometry, and mass spectrometry. We describe a method to not only encapsulate single cells, but to repeatedly capture a set number of cells (here we demonstrate one- and two-cell encapsulation) to study both isolation and the interactions between cells in groups of controlled sizes. By combining drop generation techniques with cell and particle ordering, we demonstrate controlled encapsulation of cell-sized particles for efficient, continuous encapsulation. Using an aqueous particle suspension and immiscible fluorocarbon oil, we generate aqueous drops in oil with a flow focusing nozzle. The aqueous flow rate is sufficiently high to create ordering of particles which reach the nozzle at integer multiple frequencies of the drop generation frequency, encapsulating a controlled number of cells in each drop. For representative results, 9.9 μm polystyrene particles are used as cell surrogates. This study shows a single-particle encapsulation efficiency Pk=1 of 83.7% and a double-particle encapsulation efficiency Pk=2 of 79.5%, compared to their respective Poisson efficiencies of 39.3% and 33.3%. The effect of consistent cell and particle concentration is demonstrated to be of major importance for efficient encapsulation, and dripping to jetting transitions are also addressed. Continuous media aqueous cell suspensions share a common fluid environment which allows cells to interact in parallel and also homogenizes the effects of specific cells in measurements from the media. High-throughput encapsulation of cells into picoliter-scale drops confines the samples to protect drops from cross-contamination, enable a measure of cellular diversity within samples, prevent dilution of reagents and expressed biomarkers, and amplify signals from bioreactor products. Drops also provide the ability to re-merge drops into larger aqueous samples or with other drops for intercellular signaling studies.1,2 The reduction in dilution implies stronger detection signals for higher accuracy measurements as well as the ability to reduce potentially costly sample and reagent volumes.3 Encapsulation of cells in drops has been utilized to improve detection of protein expression,4 antibodies,5,6 enzymes,7 and metabolic activity8 for high throughput screening, and could be used to improve high throughput cytometry.9 Additional studies present applications in bio-electrospraying of cell containing drops for mass spectrometry10 and targeted surface cell coatings.11 Some applications, however, have been limited by the lack of ability to control the number of cells encapsulated in drops. Here we present a method of ordered encapsulation12 which increases the demonstrated encapsulation efficiencies for one and two cells and may be extrapolated for encapsulation of a larger number of cells. To achieve monodisperse drop generation, microfluidic "flow focusing" enables the creation of controllable-size drops of one fluid (an aqueous cell mixture) within another (a continuous oil phase) by using a nozzle at which the streams converge.13 For a given nozzle geometry, the drop generation frequency f and drop size can be altered by adjusting oil and aqueous flow rates Qoil and Qaq.
As the flow rates increase, the flows may transition from drop generation to unstable jetting of aqueous fluid from the nozzle.14 When the aqueous solution contains suspended particles, particles become encapsulated and isolated from one another at the nozzle. For drop generation using a randomly distributed aqueous cell suspension, the average fraction of drops Dk containing k cells is dictated by Poisson statistics, where Dk = λ^k exp(-λ)/k! and λ is the average number of cells per drop. The fraction of cells which end up in the "correctly" encapsulated drops is calculated using Pk = (k × Dk)/Σ(k' × Dk'). The subtle difference between the two metrics is that Dk relates to the utilization of aqueous fluid and the amount of drop sorting that must be completed following encapsulation, and Pk relates to the utilization of the cell sample. As an example, one could use a dilute cell suspension (low λ) to encapsulate drops where most drops containing cells would contain just one cell. While the efficiency metric Pk would be high, the majority of drops would be empty (low Dk), thus requiring a sorting mechanism to remove empty drops, also reducing throughput.15 Combining drop generation with inertial ordering provides the ability to encapsulate drops with more predictable numbers of cells per drop and higher throughputs than random encapsulation. Inertial focusing was first discovered by Segre and Silberberg16 and refers to the tendency of finite-sized particles to migrate to lateral equilibrium positions in channel flow. Inertial ordering refers to the tendency of the particles and cells to passively organize into equally spaced, staggered, constant velocity trains. Both focusing and ordering require sufficiently high flow rates (high Reynolds number) and particle sizes (high particle Reynolds number).17,18 Here, the Reynolds number Re = uDh/ν and the particle Reynolds number Rep = Re(a/Dh)^2, where u is a characteristic flow velocity, Dh [= 2wh/(w+h)] is the hydraulic diameter, ν is the kinematic viscosity, a is the particle diameter, w is the channel width, and h is the channel height. Empirically, the length required to achieve fully ordered trains decreases as Re and Rep increase. Note that the high Re and Rep requirements (for this study on the order of 5 and 0.5, respectively) may conflict with the need to keep aqueous flow rates low to avoid jetting at the drop generation nozzle. Additionally, high flow rates lead to higher shear stresses on cells, which are not addressed in this protocol. The previous ordered encapsulation study demonstrated that over 90% of singly encapsulated HL60 cells under similar flow conditions to those in this study maintained cell membrane integrity.12 However, the effect of the magnitude and time scales of shear stresses will need to be carefully considered when extrapolating to different cell types and flow parameters. The overlapping of the cell ordering, drop generation, and cell viability aqueous flow rate constraints provides an ideal operational regime for controlled encapsulation of single and multiple cells. Because very few studies address inter-particle train spacing,19,20 determining the spacing is most easily done empirically and will depend on channel geometry, flow rate, particle size, and particle concentration. Nonetheless, the equal lateral spacing between trains implies that cells arrive at predictable, consistent time intervals.
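The Poisson and Reynolds-number relations above translate directly into a few lines of code. The sketch below evaluates Dk and Pk for an example λ and the two dimensionless numbers for example channel and particle values; all numeric inputs are illustrative placeholders chosen only to land in the same order of magnitude as the numbers quoted in the text, not the actual experimental conditions.

```python
# Poisson encapsulation statistics and channel Reynolds numbers from the relations above.
# All numeric inputs are illustrative; they are not the experimental parameters.
from math import exp, factorial

def D(k, lam):
    """Fraction of drops containing exactly k cells for mean occupancy lam."""
    return lam**k * exp(-lam) / factorial(k)

def P(k, lam, k_max=50):
    """Fraction of cells that end up in drops containing exactly k cells."""
    total = sum(kp * D(kp, lam) for kp in range(1, k_max + 1))
    return k * D(k, lam) / total

lam = 1.0                      # example mean number of cells per drop
print(f"D1 = {D(1, lam):.3f}, P1 = {P(1, lam):.3f}")   # random (Poisson) single-cell case

# Channel and particle Reynolds numbers: Re = u*Dh/nu, Re_p = Re*(a/Dh)^2, Dh = 2wh/(w+h).
u, w, h = 0.15, 50e-6, 25e-6   # mean velocity (m/s), channel width and height (m)
nu = 1e-6                      # kinematic viscosity of water (m^2/s)
a = 9.9e-6                     # particle diameter (m), as for the polystyrene surrogates
Dh = 2 * w * h / (w + h)
Re = u * Dh / nu
Re_p = Re * (a / Dh) ** 2
print(f"Dh = {Dh*1e6:.1f} um, Re = {Re:.1f}, Re_p = {Re_p:.2f}")
```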
When drop generation occurs at the same rate at which ordered cells arrive at the nozzle, the cells become encapsulated within the drop in a controlled manner. This technique has been utilized to encapsulate single cells with throughputs on the order of 15 kHz,12 a significant improvement over previous studies reporting encapsulation rates on the order of 60-160 Hz.4,15 In the controlled encapsulation work, over 80% of drops contained one and only one cell, a significant efficiency improvement over Poisson (random) statistics, which predicts less than 40% efficiency on average.12 In previous controlled encapsulation work,12 the average number of particles per drop λ was tuned to provide single-cell encapsulation. We hypothesize that through tuning of flow rates, we can efficiently encapsulate any number of cells per drop when λ is equal or close to the number of desired cells per drop. While single-cell encapsulation is valuable in determining individual cell responses from stimuli, multiple-cell encapsulation provides information relating to the interaction of controlled numbers and types of cells. Here we present a protocol, representative results using polystyrene microspheres, and discussion for controlled encapsulation of multiple cells using a passive inertial ordering channel and drop generation nozzle.
Bioengineering, Issue 64, Drop-based microfluidics, inertial microfluidics, ordering, focusing, cell encapsulation, single-cell biology, cell signaling
A Standardized Obstacle Course for Assessment of Visual Function in Ultra Low Vision and Artificial Vision
Authors: Amy Catherine Nau, Christine Pintar, Christopher Fisher, Jong-Hyeon Jeong, KwonHo Jeong.
Institutions: University of Pittsburgh.
We describe an indoor, portable, standardized course that can be used to evaluate obstacle avoidance in persons who have ultralow vision. Six sighted controls and 36 completely blind but otherwise healthy adult male (n=29) and female (n=13) subjects (age range 19-85 years) were enrolled in one of three studies involving testing of the BrainPort sensory substitution device. Subjects were asked to navigate the course prior to, and after, BrainPort training. They completed a total of 837 course runs in two different locations. Means and standard deviations were calculated across control types, courses, lights, and visits. We used a linear mixed effects model to compare different categories in the PPWS (percent preferred walking speed) and error percent data to show that the course iterations were properly designed. The course is relatively inexpensive, simple to administer, and has been shown to be a feasible way to test mobility function. Data analysis demonstrates that for the outcome of percent error, as well as for percentage preferred walking speed, each of the three courses is different, and within each level, each of the three iterations is equal. This allows for randomization of the courses during administration. Abbreviations: preferred walking speed (PWS); course speed (CS); percentage preferred walking speed (PPWS).
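For the percent preferred walking speed outcome, the usual arithmetic is the course speed expressed as a percentage of the subject's preferred walking speed; the sketch below states that relation explicitly with hypothetical values, and is offered only as an interpretation of the abbreviations listed above rather than the authors' exact computation.

```python
# Percent preferred walking speed: PPWS = course speed / preferred walking speed x 100.
# Walking speeds below (m/s) are hypothetical example values.
def ppws(course_speed, preferred_walking_speed):
    return 100.0 * course_speed / preferred_walking_speed

pws = 1.2     # preferred walking speed measured on an unobstructed walkway (m/s)
cs = 0.8      # speed achieved while navigating the obstacle course (m/s)
print(f"PPWS = {ppws(cs, pws):.1f}%")   # values below 100% indicate slowing on the course
```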
Medicine, Issue 84, Obstacle course, navigation assessment, BrainPort, wayfinding, low vision
Construction of Vapor Chambers Used to Expose Mice to Alcohol During the Equivalent of all Three Trimesters of Human Development
Authors: Russell A. Morton, Marvin R. Diaz, Lauren A. Topper, C. Fernando Valenzuela.
Institutions: University of New Mexico Health Sciences Center.
Exposure to alcohol during development can result in a constellation of morphological and behavioral abnormalities that are collectively known as Fetal Alcohol Spectrum Disorders (FASDs). At the most severe end of the spectrum is Fetal Alcohol Syndrome (FAS), characterized by growth retardation, craniofacial dysmorphology, and neurobehavioral deficits. Studies with animal models, including rodents, have elucidated many molecular and cellular mechanisms involved in the pathophysiology of FASDs. Ethanol administration to pregnant rodents has been used to model human exposure during the first and second trimesters of pregnancy. Third trimester ethanol consumption in humans has been modeled using neonatal rodents. However, few rodent studies have characterized the effect of ethanol exposure during the equivalent to all three trimesters of human pregnancy, a pattern of exposure that is common in pregnant women. Here, we show how to build vapor chambers from readily obtainable materials that can each accommodate up to six standard mouse cages. We describe a vapor chamber paradigm that can be used to model exposure to ethanol, with minimal handling, during all three trimesters. Our studies demonstrate that pregnant dams developed significant metabolic tolerance to ethanol. However, neonatal mice did not develop metabolic tolerance and the number of fetuses, fetus weight, placenta weight, number of pups/litter, number of dead pups/litter, and pup weight were not significantly affected by ethanol exposure. An important advantage of this paradigm is its applicability to studies with genetically-modified mice. Additionally, this paradigm minimizes handling of animals, a major confound in fetal alcohol research.
Medicine, Issue 89, fetal, ethanol, exposure, paradigm, vapor, development, alcoholism, teratogenic, animal, mouse, model
Characterization of Electrode Materials for Lithium Ion and Sodium Ion Batteries Using Synchrotron Radiation Techniques
Authors: Marca M. Doeff, Guoying Chen, Jordi Cabana, Thomas J. Richardson, Apurva Mehta, Mona Shirpour, Hugues Duncan, Chunjoong Kim, Kinson C. Kam, Thomas Conry.
Institutions: Lawrence Berkeley National Laboratory, University of Illinois at Chicago, Stanford Synchrotron Radiation Lightsource, Haldor Topsøe A/S, PolyPlus Battery Company.
Intercalation compounds such as transition metal oxides or phosphates are the most commonly used electrode materials in Li-ion and Na-ion batteries. During insertion or removal of alkali metal ions, the redox states of transition metals in the compounds change and structural transformations such as phase transitions and/or lattice parameter increases or decreases occur. These behaviors in turn determine important characteristics of the batteries such as the potential profiles, rate capabilities, and cycle lives. The extremely bright and tunable x-rays produced by synchrotron radiation allow rapid acquisition of high-resolution data that provide information about these processes. Transformations in the bulk materials, such as phase transitions, can be directly observed using X-ray diffraction (XRD), while X-ray absorption spectroscopy (XAS) gives information about the local electronic and geometric structures (e.g. changes in redox states and bond lengths). In situ experiments carried out on operating cells are particularly useful because they allow direct correlation between the electrochemical and structural properties of the materials. These experiments are time-consuming and can be challenging to design due to the reactivity and air-sensitivity of the alkali metal anodes used in the half-cell configurations, and/or the possibility of signal interference from other cell components and hardware. For these reasons, it is appropriate to carry out ex situ experiments (e.g. on electrodes harvested from partially charged or cycled cells) in some cases. Here, we present detailed protocols for the preparation of both ex situ and in situ samples for experiments involving synchrotron radiation and demonstrate how these experiments are done.
Physics, Issue 81, X-Ray Absorption Spectroscopy, X-Ray Diffraction, inorganic chemistry, electric batteries (applications), energy storage, Electrode materials, Li-ion battery, Na-ion battery, X-ray Absorption Spectroscopy (XAS), in situ X-ray diffraction (XRD)
The ITS2 Database
Authors: Benjamin Merget, Christian Koetschan, Thomas Hackl, Frank Förster, Thomas Dandekar, Tobias Müller, Jörg Schultz, Matthias Wolf.
Institutions: University of Würzburg.
The internal transcribed spacer 2 (ITS2) has been used as a phylogenetic marker for more than two decades. As ITS2 research mainly focused on the very variable ITS2 sequence, it confined this marker to low-level phylogenetics only. However, the combination of the ITS2 sequence and its highly conserved secondary structure improves the phylogenetic resolution1 and allows phylogenetic inference at multiple taxonomic ranks, including species delimitation2-8. The ITS2 Database9 presents an exhaustive dataset of internal transcribed spacer 2 sequences from NCBI GenBank11 accurately reannotated10. Following an annotation by profile Hidden Markov Models (HMMs), the secondary structure of each sequence is predicted. First, it is tested whether a minimum energy based fold12 (direct fold) results in a correct, four helix conformation. If this is not the case, the structure is predicted by homology modeling13. In homology modeling, an already known secondary structure is transferred to another ITS2 sequence, whose secondary structure was not able to fold correctly in a direct fold. The ITS2 Database is not only a database for storage and retrieval of ITS2 sequence-structures. It also provides several tools to process your own ITS2 sequences, including annotation, structural prediction, motif detection and BLAST14 search on the combined sequence-structure information. Moreover, it integrates trimmed versions of 4SALE15,16 and ProfDistS17 for multiple sequence-structure alignment calculation and Neighbor Joining18 tree reconstruction. Together they form a coherent analysis pipeline from an initial set of sequences to a phylogeny based on sequence and secondary structure. In a nutshell, this workbench simplifies first phylogenetic analyses to only a few mouse-clicks, while additionally providing tools and data for comprehensive large-scale analyses.
Genetics, Issue 61, alignment, internal transcribed spacer 2, molecular systematics, secondary structure, ribosomal RNA, phylogenetic tree, homology modeling, phylogeny
The ex vivo Isolated Skeletal Microvessel Preparation for Investigation of Vascular Reactivity
Authors: Joshua T. Butcher, Adam G. Goodwill, Jefferson C. Frisbee.
Institutions: West Virginia University .
The isolated microvessel preparation is an ex vivo preparation that allows for examination of the different contributions of factors that control vessel diameter, and thus, perfusion resistance1-5. This is a classic experimental preparation that was, in large measure, initially described by Uchida et al.15 several decades ago. This initial description provided the basis for a technique that was extensively modified and enhanced, primarily in the laboratory of Dr. Brian Duling at the University of Virginia6-8, and we present a current approach in the following pages. This preparation will specifically refer to the gracilis arteriole in a rat as the microvessel of choice, but the basic preparation can readily be applied to vessels isolated from nearly any other tissue or organ across species9-13. Mechanical (i.e., dimensional) changes in the isolated microvessels can easily be evaluated in response to a broad array of physiological (e.g., hypoxia, intravascular pressure, or shear) or pharmacological challenges, and can provide insight into mechanistic elements comprising integrated responses in an intact, although ex vivo, tissue. The significance of this method is that it allows for facile manipulation of the influences on the integrated regulation of microvessel diameter, while also allowing for the control of many of the contributions from other sources, including intravascular pressure (myogenic), autonomic innervation, hemodynamic (e.g., shear stress), endothelial dependent or independent stimuli, hormonal, and parenchymal influences, to provide a partial list. Under appropriate experimental conditions and with appropriate goals, this can serve as an advantage over in vivo or in situ tissue/organ preparations, which do not readily allow for the facile control of broader systemic variables. The major limitation of this preparation is essentially the consequence of its strengths. By definition, the behavior of these vessels is being studied under conditions where many of the most significant contributors to the regulation of vascular resistance have been removed, including neural, humoral, metabolic, etc. As such, the investigator is cautioned to avoid over-interpretation and extrapolation of the data that are collected utilizing this preparation. The other significant area of concern with regard to this preparation is that it can be very easy to damage cellular components such as the endothelial lining or the vascular smooth muscle, such that variable sources of error can be introduced. It is strongly recommended that the individual investigator utilize appropriate measurements to ensure the quality of the preparation, both at the initiation of the experiment and periodically throughout the course of a protocol.
Physiology, Issue 62, isolated microvessel preparation, skeletal muscle arterioles, resistance arteriole, microcirculation, arteriolar wall mechanics
Protein WISDOM: A Workbench for In silico De novo Design of BioMolecules
Authors: James Smadbeck, Meghan B. Peterson, George A. Khoury, Martin S. Taylor, Christodoulos A. Floudas.
Institutions: Princeton University.
The aim of de novo protein design is to find the amino acid sequences that will fold into a desired 3-dimensional structure with improvements in specific properties, such as binding affinity, agonist or antagonist behavior, or stability, relative to the native sequence. Protein design lies at the center of current advances in drug design and discovery. Not only does protein design provide predictions for potentially useful drug targets, but it also enhances our understanding of the protein folding process and protein-protein interactions. Experimental methods such as directed evolution have shown success in protein design. However, such methods are restricted by the limited sequence space that can be searched tractably. In contrast, computational design strategies allow for the screening of a much larger set of sequences covering a wide variety of properties and functionality. We have developed a range of computational de novo protein design methods capable of tackling several important areas of protein design. These include the design of monomeric proteins for increased stability and complexes for increased binding affinity. To disseminate these methods for broader use we present Protein WISDOM, a tool that provides automated methods for a variety of protein design problems. Structural templates are submitted to initialize the design process. The first stage of design is an optimization-based sequence selection stage that aims at improving stability through minimization of potential energy in the sequence space. Selected sequences are then run through a fold specificity stage and a binding affinity stage. A rank-ordered list of the sequences for each step of the process, along with relevant designed structures, provides the user with a comprehensive quantitative assessment of the design. Here we provide the details of each design method, as well as several notable experimental successes attained through the use of the methods.
Genetics, Issue 77, Molecular Biology, Bioengineering, Biochemistry, Biomedical Engineering, Chemical Engineering, Computational Biology, Genomics, Proteomics, Protein, Protein Binding, Computational Biology, Drug Design, optimization (mathematics), Amino Acids, Peptides, and Proteins, De novo protein and peptide design, Drug design, In silico sequence selection, Optimization, Fold specificity, Binding affinity, sequencing
Perceptual and Category Processing of the Uncanny Valley Hypothesis' Dimension of Human Likeness: Some Methodological Issues
Authors: Marcus Cheetham, Lutz Jäncke.
Institutions: University of Zurich.
Mori's Uncanny Valley Hypothesis1,2 proposes that the perception of humanlike characters such as robots and, by extension, avatars (computer-generated characters) can evoke negative or positive affect (valence) depending on the object's degree of visual and behavioral realism along a dimension of human likeness (DHL) (Figure 1). But studies of affective valence of subjective responses to variously realistic non-human characters have produced inconsistent findings 3, 4, 5, 6. One of a number of reasons for this is that human likeness is not perceived as the hypothesis assumes. While the DHL can be defined following Mori's description as a smooth linear change in the degree of physical humanlike similarity, subjective perception of objects along the DHL can be understood in terms of the psychological effects of categorical perception (CP) 7. Further behavioral and neuroimaging investigations of category processing and CP along the DHL and of the potential influence of the dimension's underlying category structure on affective experience are needed. This protocol therefore focuses on the DHL and allows examination of CP. Based on the protocol presented in the video as an example, issues surrounding the methodology in the protocol and the use in "uncanny" research of stimuli drawn from morph continua to represent the DHL are discussed in the article that accompanies the video. The use of neuroimaging and morph stimuli to represent the DHL in order to disentangle brain regions neurally responsive to physical human-like similarity from those responsive to category change and category processing is briefly illustrated.
Behavior, Issue 76, Neuroscience, Neurobiology, Molecular Biology, Psychology, Neuropsychology, uncanny valley, functional magnetic resonance imaging, fMRI, categorical perception, virtual reality, avatar, human likeness, Mori, uncanny valley hypothesis, perception, magnetic resonance imaging, MRI, imaging, clinical techniques
Automated, Quantitative Cognitive/Behavioral Screening of Mice: For Genetics, Pharmacology, Animal Cognition and Undergraduate Instruction
Authors: C. R. Gallistel, Fuat Balci, David Freestone, Aaron Kheifets, Adam King.
Institutions: Rutgers University, Koç University, New York University, Fairfield University.
We describe a high-throughput, high-volume, fully automated, live-in 24/7 behavioral testing system for assessing the effects of genetic and pharmacological manipulations on basic mechanisms of cognition and learning in mice. A standard polypropylene mouse housing tub is connected through an acrylic tube to a standard commercial mouse test box. The test box has 3 hoppers, 2 of which are connected to pellet feeders. All are internally illuminable with an LED and monitored for head entries by infrared (IR) beams. Mice live in the environment, which eliminates handling during screening. They obtain their food during two or more daily feeding periods by performing in operant (instrumental) and Pavlovian (classical) protocols, for which we have written protocol-control software and quasi-real-time data analysis and graphing software. The data analysis and graphing routines are written in a MATLAB-based language created to simplify greatly the analysis of large time-stamped behavioral and physiological event records and to preserve a full data trail from raw data through all intermediate analyses to the published graphs and statistics within a single data structure. The data-analysis code harvests the data several times a day and subjects it to statistical and graphical analyses, which are automatically stored in the "cloud" and on in-lab computers. Thus, the progress of individual mice is visualized and quantified daily. The data-analysis code talks to the protocol-control code, permitting the automated advance from protocol to protocol of individual subjects. The behavioral protocols implemented are matching, autoshaping, timed hopper-switching, risk assessment in timed hopper-switching, impulsivity measurement, and the circadian anticipation of food availability. Open-source protocol-control and data-analysis code makes the addition of new protocols simple. Eight test environments fit in a 48 in x 24 in x 78 in cabinet; two such cabinets (16 environments) may be controlled by one computer.
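To give a flavor of the kind of quasi-real-time analysis described (harvesting time-stamped behavioral event records and summarizing each subject's daily progress), here is a small sketch. The event format, field names, and the summary statistic are invented for illustration and do not describe the authors' MATLAB-based package.

```python
# Sketch: summarize a time-stamped event record (one subject) into daily head-entry
# counts and the fraction of entries at hopper 1. Events are simulated placeholders.
from collections import defaultdict
import random

random.seed(0)
# Each event: (time in seconds from the start of screening, hopper id 1-3).
events = [(random.uniform(0, 3 * 86400), random.choice([1, 2, 3])) for _ in range(600)]

daily = defaultdict(lambda: {"entries": 0, "hopper1": 0})
for t, hopper in events:
    day = int(t // 86400)
    daily[day]["entries"] += 1
    daily[day]["hopper1"] += (hopper == 1)

for day in sorted(daily):
    d = daily[day]
    frac = d["hopper1"] / d["entries"]
    print(f"day {day}: {d['entries']} head entries, {frac:.0%} at hopper 1")
```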
Behavior, Issue 84, genetics, cognitive mechanisms, behavioral screening, learning, memory, timing
Inducing Plasticity of Astrocytic Receptors by Manipulation of Neuronal Firing Rates
Authors: Alison X. Xie, Kelli Lauderdale, Thomas Murphy, Timothy L. Myers, Todd A. Fiacco.
Institutions: University of California Riverside.
Close to two decades of research has established that astrocytes in situ and in vivo express numerous G protein-coupled receptors (GPCRs) that can be stimulated by neuronally-released transmitter. However, the ability of astrocytic receptors to exhibit plasticity in response to changes in neuronal activity has received little attention. Here we describe a model system that can be used to globally scale up or down astrocytic group I metabotropic glutamate receptors (mGluRs) in acute brain slices. Included are methods on how to prepare parasagittal hippocampal slices, construct chambers suitable for long-term slice incubation, bidirectionally manipulate neuronal action potential frequency, load astrocytes and astrocyte processes with fluorescent Ca2+ indicator, and measure changes in astrocytic Gq GPCR activity by recording spontaneous and evoked astrocyte Ca2+ events using confocal microscopy. In essence, a “calcium roadmap” is provided for how to measure plasticity of astrocytic Gq GPCRs. Applications of the technique for study of astrocytes are discussed. Having an understanding of how astrocytic receptor signaling is affected by changes in neuronal activity has important implications for both normal synaptic function as well as processes underlying neurological disorders and neurodegenerative disease.
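Spontaneous and evoked astrocyte Ca2+ events of the kind recorded here are typically expressed as a relative fluorescence change, ΔF/F0. The short sketch below assumes a 1D fluorescence trace from one astrocyte region of interest and uses a simple percentile baseline and fixed threshold, all of which are placeholders rather than parameters from the published protocol.

```python
# Sketch: convert a raw fluorescence trace to dF/F0 and count threshold-crossing Ca2+ events.
# The trace, baseline definition, and threshold are placeholders for illustration.
import numpy as np

rng = np.random.default_rng(2)
trace = 100 + rng.normal(0, 2, 6000)          # raw fluorescence, 6000 frames (simulated)
trace[1000:1050] += 40                        # two simulated Ca2+ transients
trace[4000:4060] += 55

f0 = np.percentile(trace, 10)                 # baseline F0: a low percentile of the trace
dff = (trace - f0) / f0                       # dF/F0

threshold = 0.2                               # event threshold on dF/F0 (arbitrary here)
above = dff > threshold
onsets = np.flatnonzero(above[1:] & ~above[:-1]) + 1
print(f"F0 = {f0:.1f}, events detected: {len(onsets)}, peak dF/F0 = {dff.max():.2f}")
```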
Neuroscience, Issue 85, astrocyte, plasticity, mGluRs, neuronal Firing, electrophysiology, Gq GPCRs, Bolus-loading, calcium, microdomains, acute slices, Hippocampus, mouse
Oscillation and Reaction Board Techniques for Estimating Inertial Properties of a Below-knee Prosthesis
Authors: Jeremy D. Smith, Abbie E. Ferris, Gary D. Heise, Richard N. Hinrichs, Philip E. Martin.
Institutions: University of Northern Colorado, Arizona State University, Iowa State University.
The purpose of this study was two-fold: 1) demonstrate a technique that can be used to directly estimate the inertial properties of a below-knee prosthesis, and 2) contrast the effects of the proposed technique and that of using intact limb inertial properties on joint kinetic estimates during walking in unilateral, transtibial amputees. An oscillation and reaction board system was validated and shown to be reliable when measuring inertial properties of known geometrical solids. When direct measurements of inertial properties of the prosthesis were used in inverse dynamics modeling of the lower extremity compared with inertial estimates based on an intact shank and foot, joint kinetics at the hip and knee were significantly lower during the swing phase of walking. Differences in joint kinetics during stance, however, were smaller than those observed during swing. Therefore, researchers focusing on the swing phase of walking should consider the impact of prosthesis inertia property estimates on study outcomes. For stance, either one of the two inertial models investigated in our study would likely lead to similar outcomes with an inverse dynamics assessment.
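For context on the oscillation technique itself, the moment of inertia of a rigid body swinging as a pendulum about a fixed axis follows from its mass, oscillation period, and the distance from the axis to the center of mass (which a reaction board can provide): T = 2π sqrt(I_axis/(m g d)), with the parallel-axis theorem giving the value about the center of mass. The sketch below uses hypothetical prosthesis values and standard physical-pendulum relations; it is not the authors' validated procedure.

```python
# Physical-pendulum estimate of a prosthesis' moment of inertia from its oscillation period.
# I_axis = m*g*d*(T/(2*pi))^2 ; I_cm = I_axis - m*d^2 (parallel-axis theorem).
# Input values are hypothetical, not measurements from the study.
from math import pi

m = 1.3        # prosthesis mass (kg), from a scale
d = 0.18       # distance from suspension axis to center of mass (m), e.g. from a reaction board
T = 0.95       # mean small-amplitude oscillation period (s), averaged over many swings
g = 9.81       # gravitational acceleration (m/s^2)

I_axis = m * g * d * (T / (2 * pi)) ** 2
I_cm = I_axis - m * d ** 2
print(f"I about axis = {I_axis:.4f} kg*m^2, I about center of mass = {I_cm:.4f} kg*m^2")
```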
Bioengineering, Issue 87, prosthesis inertia, amputee locomotion, below-knee prosthesis, transtibial amputee
Flexible Colonoscopy in Mice to Evaluate the Severity of Colitis and Colorectal Tumors Using a Validated Endoscopic Scoring System
Authors: Tomohiro Kodani, Alex Rodriguez-Palacios, Daniele Corridoni, Loris Lopetuso, Luca Di Martino, Brian Marks, James Pizarro, Theresa Pizarro, Amitabh Chak, Fabio Cominelli.
Institutions: Case Western Reserve University School of Medicine, Cleveland.
The use of modern endoscopy for research purposes has greatly facilitated our understanding of gastrointestinal pathologies. In particular, experimental endoscopy has been highly useful for studies that require repeated assessments in a single laboratory animal, such as those evaluating mechanisms of chronic inflammatory bowel disease and the progression of colorectal cancer. However, the methods used across studies are highly variable. At least three endoscopic scoring systems have been published for murine colitis and published protocols for the assessment of colorectal tumors fail to address the presence of concomitant colonic inflammation. This study develops and validates a reproducible endoscopic scoring system that integrates evaluation of both inflammation and tumors simultaneously. This novel scoring system has three major components: 1) assessment of the extent and severity of colorectal inflammation (based on perianal findings, transparency of the wall, mucosal bleeding, and focal lesions), 2) quantitative recording of tumor lesions (grid map and bar graph), and 3) numerical sorting of clinical cases by their pathological and research relevance based on decimal units with assigned categories of observed lesions and endoscopic complications (decimal identifiers). The video and manuscript presented herein were prepared, following IACUC-approved protocols, to allow investigators to score their own experimental mice using a well-validated and highly reproducible endoscopic methodology, with the system option to differentiate distal from proximal endoscopic colitis (D-PECS).
Medicine, Issue 80, Crohn's disease, ulcerative colitis, colon cancer, Clostridium difficile, SAMP mice, DSS/AOM-colitis, decimal scoring identifier
The Use of Magnetic Resonance Spectroscopy as a Tool for the Measurement of Bi-hemispheric Transcranial Electric Stimulation Effects on Primary Motor Cortex Metabolism
Authors: Sara Tremblay, Vincent Beaulé, Sébastien Proulx, Louis-Philippe Lafleur, Julien Doyon, Małgorzata Marjańska, Hugo Théoret.
Institutions: University of Montréal, McGill University, University of Minnesota.
Transcranial direct current stimulation (tDCS) is a neuromodulation technique that has been increasingly used over the past decade in the treatment of neurological and psychiatric disorders such as stroke and depression. Yet, the mechanisms underlying its ability to modulate brain excitability to improve clinical symptoms remain poorly understood33. To help improve this understanding, proton magnetic resonance spectroscopy (1H-MRS) can be used, as it allows the in vivo quantification of brain metabolites such as γ-aminobutyric acid (GABA) and glutamate in a region-specific manner41. In fact, a recent study demonstrated that 1H-MRS is indeed a powerful means to better understand the effects of tDCS on neurotransmitter concentration34. This article aims to describe the complete protocol for combining tDCS (NeuroConn MR compatible stimulator) with 1H-MRS at 3 T using a MEGA-PRESS sequence. We will describe the impact of a protocol that has shown great promise for the treatment of motor dysfunctions after stroke, which consists of bilateral stimulation of primary motor cortices27,30,31. Methodological factors to consider and possible modifications to the protocol are also discussed.
Neuroscience, Issue 93, proton magnetic resonance spectroscopy, transcranial direct current stimulation, primary motor cortex, GABA, glutamate, stroke
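As a rough illustration of the MEGA-PRESS editing logic (edit-ON minus edit-OFF yields the GABA signal, which is then referenced to creatine), here is a minimal Python sketch on simulated spectra; the peak positions, widths, and integration windows are assumptions for demonstration only, not part of the published protocol.

```python
import numpy as np

# Minimal sketch of MEGA-PRESS post-processing: the edited GABA signal is
# obtained by subtracting edit-OFF from edit-ON spectra and quantified
# relative to a reference peak (creatine). The spectra below are simulated
# placeholders, not real MRS data.
ppm = np.linspace(0.5, 4.5, 2048)

def peak(center_ppm, amplitude, width=0.05):
    """Gaussian stand-in for a resonance line."""
    return amplitude * np.exp(-((ppm - center_ppm) ** 2) / (2 * width ** 2))

edit_off = peak(3.03, 10.0) + peak(2.01, 4.0)        # creatine and other signals
edit_on = edit_off + peak(3.01, 1.2, width=0.08)     # GABA revealed by editing
difference = edit_on - edit_off                       # edited difference spectrum

gaba_area = difference[(ppm > 2.8) & (ppm < 3.2)].sum()
creatine_area = edit_off[(ppm > 2.9) & (ppm < 3.15)].sum()
print(f"GABA/Cr (arbitrary units): {gaba_area / creatine_area:.3f}")
```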
Analysis of Tubular Membrane Networks in Cardiac Myocytes from Atria and Ventricles
Authors: Eva Wagner, Sören Brandenburg, Tobias Kohl, Stephan E. Lehnart.
Institutions: Heart Research Center Goettingen, University Medical Center Goettingen, German Center for Cardiovascular Research (DZHK) partner site Goettingen, University of Maryland School of Medicine.
In cardiac myocytes, a complex network of membrane tubules - the transverse-axial tubule system (TATS) - controls deep intracellular signaling functions. While the outer surface membrane and associated TATS membrane components appear to be continuous, there are substantial differences in lipid and protein content. In ventricular myocytes (VMs), certain TATS components are highly abundant, contributing to rectilinear tubule networks and regular branching 3D architectures. It is thought that peripheral TATS components propagate action potentials from the cell surface to thousands of remote intracellular sarcoendoplasmic reticulum (SER) membrane contact domains, thereby activating intracellular Ca2+ release units (CRUs). In contrast to VMs, the organization and functional role of TATS membranes in atrial myocytes (AMs) are significantly different and much less understood. Taken together, quantitative structural characterization of TATS membrane networks in healthy and diseased myocytes is an essential prerequisite towards better understanding of functional plasticity and pathophysiological reorganization. Here, we present a strategic combination of protocols for direct quantitative analysis of TATS membrane networks in living VMs and AMs. For this, we accompany primary cell isolations of mouse VMs and/or AMs with critical quality control steps and direct membrane staining protocols for fluorescence imaging of TATS membranes. Using an optimized workflow for confocal or superresolution TATS image processing, binarized and skeletonized data are generated for quantitative analysis of the TATS network and its components. Unlike previously published indirect regional aggregate image analysis strategies, our protocols enable direct characterization of specific components and derivation of complex physiological properties of TATS membrane networks in living myocytes with high throughput and open-access software tools. In summary, the combined protocol strategy can be readily applied for quantitative TATS network studies during physiological myocyte adaptation or disease changes, comparison of different cardiac or skeletal muscle cell types, phenotyping of transgenic models, and pharmacological or therapeutic interventions.
Bioengineering, Issue 92, cardiac myocyte, atria, ventricle, heart, primary cell isolation, fluorescence microscopy, membrane tubule, transverse-axial tubule system, image analysis, image processing, T-tubule, collagenase
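To illustrate the binarize-and-skeletonize step in general terms, the following Python sketch uses scikit-image on a synthetic image; the Otsu threshold, small-object cutoff, and pixel size are assumptions, and this is not the authors' exact workflow.

```python
import numpy as np
from skimage import filters, morphology

def tats_network_metrics(image: np.ndarray, pixel_size_um: float = 0.1):
    """Binarize a membrane-stained image and skeletonize the tubule network.

    Returns the tubule area fraction and total skeleton length; the
    thresholding choice (Otsu) and small-object cutoff are illustrative
    assumptions, not the published pipeline parameters.
    """
    binary = image > filters.threshold_otsu(image)
    binary = morphology.remove_small_objects(binary, min_size=64)
    skeleton = morphology.skeletonize(binary)
    area_fraction = binary.mean()
    skeleton_length_um = skeleton.sum() * pixel_size_um
    return area_fraction, skeleton_length_um

# Example with a synthetic image standing in for a confocal TATS recording
rng = np.random.default_rng(0)
demo = rng.random((256, 256))
demo[::16, :] += 1.0   # fake rectilinear tubules
print(tats_network_metrics(demo))
```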
Analysis of Nephron Composition and Function in the Adult Zebrafish Kidney
Authors: Kristen K. McCampbell, Kristin N. Springer, Rebecca A. Wingert.
Institutions: University of Notre Dame.
The zebrafish model has emerged as a relevant system to study kidney development, regeneration and disease. Both the embryonic and adult zebrafish kidneys are composed of functional units known as nephrons, which are highly conserved with other vertebrates, including mammals. Research in zebrafish has recently demonstrated that two distinctive phenomena transpire after adult nephrons incur damage: first, there is robust regeneration within existing nephrons that replaces the destroyed tubule epithelial cells; second, entirely new nephrons are produced from renal progenitors in a process known as neonephrogenesis. In contrast, humans and other mammals seem to have only a limited ability for nephron epithelial regeneration. To date, the mechanisms responsible for these kidney regeneration phenomena remain poorly understood. Since adult zebrafish kidneys undergo both nephron epithelial regeneration and neonephrogenesis, they provide an outstanding experimental paradigm to study these events. Further, there is a wide range of genetic and pharmacological tools available in the zebrafish model that can be used to delineate the cellular and molecular mechanisms that regulate renal regeneration. One essential aspect of such research is the evaluation of nephron structure and function. This protocol describes a set of labeling techniques that can be used to gauge renal composition and test nephron functionality in the adult zebrafish kidney. Thus, these methods are widely applicable to the future phenotypic characterization of adult zebrafish kidney injury paradigms, which include, but are not limited to, nephrotoxicant exposure regimes or genetic methods of targeted cell death such as the nitroreductase-mediated cell ablation technique. Further, these methods could be used to study genetic perturbations in adult kidney formation and could also be applied to assess renal status during chronic disease modeling.
Cellular Biology, Issue 90, zebrafish; kidney; nephron; nephrology; renal; regeneration; proximal tubule; distal tubule; segment; mesonephros; physiology; acute kidney injury (AKI)
Tissue-simulating Phantoms for Assessing Potential Near-infrared Fluorescence Imaging Applications in Breast Cancer Surgery
Authors: Rick Pleijhuis, Arwin Timmermans, Johannes De Jong, Esther De Boer, Vasilis Ntziachristos, Gooitzen Van Dam.
Institutions: University Medical Center Groningen, Technical University of Munich.
Inaccuracies in intraoperative tumor localization and evaluation of surgical margin status result in suboptimal outcome of breast-conserving surgery (BCS). Optical imaging, in particular near-infrared fluorescence (NIRF) imaging, might reduce the frequency of positive surgical margins following BCS by providing the surgeon with a tool for pre- and intraoperative tumor localization in real-time. In the current study, the potential of NIRF-guided BCS is evaluated using tissue-simulating breast phantoms for standardization and training purposes. Breast phantoms with optical characteristics comparable to those of normal breast tissue were used to simulate breast-conserving surgery. Tumor-simulating inclusions containing the fluorescent dye indocyanine green (ICG) were incorporated in the phantoms at predefined locations and imaged for pre- and intraoperative tumor localization, real-time NIRF-guided tumor resection, NIRF-guided evaluation of the extent of surgery, and postoperative assessment of surgical margins. A customized NIRF camera was used as a clinical prototype for imaging purposes. Breast phantoms containing tumor-simulating inclusions offer a simple, inexpensive, and versatile tool to simulate and evaluate intraoperative tumor imaging. The gelatinous phantoms have elastic properties similar to human tissue and can be cut using conventional surgical instruments. Moreover, the phantoms contain hemoglobin and intralipid for mimicking absorption and scattering of photons, respectively, creating uniform optical properties similar to human breast tissue. The main drawback of NIRF imaging is the limited penetration depth of photons when propagating through tissue, which hinders (noninvasive) imaging of deep-seated tumors with epi-illumination strategies.
Medicine, Issue 91, Breast cancer, tissue-simulating phantoms, NIRF imaging, tumor-simulating inclusions, fluorescence, intraoperative imaging
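The limited penetration depth noted above can be estimated with the diffusion-approximation effective attenuation coefficient; the short sketch below uses assumed, order-of-magnitude optical coefficients for breast-like tissue in the NIR window, not measured phantom values.

```python
import math

# Illustrative estimate of NIR photon penetration depth using the
# diffusion-approximation effective attenuation coefficient:
#   mu_eff = sqrt(3 * mu_a * (mu_a + mu_s'))
# The absorption (mu_a) and reduced scattering (mu_s') values below are
# order-of-magnitude assumptions for breast-like tissue in the NIR window.
mu_a = 0.005       # absorption coefficient, mm^-1
mu_s_prime = 1.0   # reduced scattering coefficient, mm^-1

mu_eff = math.sqrt(3 * mu_a * (mu_a + mu_s_prime))
penetration_depth_mm = 1 / mu_eff
print(f"Effective penetration depth ~{penetration_depth_mm:.1f} mm")
# ~8 mm: consistent with the limitation that deep-seated tumors are hard
# to visualize with epi-illumination NIRF imaging.
```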
Determining heat and mechanical pain threshold in inflamed skin of human subjects
Authors: Martin S. Angst, Martha Tingle, Nicholas G. Phillips, Brendan Carvalho.
Institutions: Stanford University School of Medicine.
In a previous article in the Journal of Visualized Experiments, we demonstrated skin microdialysis techniques for the collection of tissue-specific nociceptive and inflammatory biochemicals in humans. In this article, we show pain-testing paradigms that are often used in tandem with microdialysis procedures. Combining pain tests with microdialysis provides the critical link between behavioral and biochemical data that allows identification of the key biochemicals responsible for generating and propagating pain. Two models of evoking pain in inflamed skin of human study participants are shown. The first model evokes pain with the aid of heat stimuli. Heat-evoked pain as described here is predominantly mediated by small, non-myelinated peripheral nociceptive nerve fibers (C-fibers). The second model evokes pain via punctuated pressure stimuli. Punctuated pressure-evoked pain is predominantly mediated by small, myelinated peripheral nociceptive nerve fibers (A-delta fibers). The two models are mechanistically distinct and independently examine nociceptive processing by the two major peripheral nerve fiber populations involved in pain signaling. Heat pain is evoked with the aid of the TSA II, a commercially available thermo-sensory analyzer (Medoc Advanced Medical Systems, Durham, NC). Stimulus configuration and delivery are handled with the aid of dedicated software. Thermodes vary in size and shape but in principle consist of a metal plate that can be heated or cooled at various rates and for different periods of time. Algorithms assessing heat-evoked pain are manifold. In the experiments shown here, study participants are asked to indicate at what point they start experiencing pain while the thermode in contact with the skin is heated at a predetermined rate, starting at a temperature that does not evoke pain. The thermode temperature at which a subject starts experiencing pain constitutes the heat pain threshold. Mechanical pain is evoked with punctuated probes. Such probes are commercially available from several manufacturers (von Frey hairs). However, the accuracy of von Frey hairs has been criticized and many investigators use custom-made punctuated pressure probes. In the experiments shown here, eight custom-made punctuated probes of different weights are applied in consecutive order, a procedure called the up-down algorithm, to identify perceptional deflection points, i.e., changes from feeling no pain to feeling pain or vice versa. The average weight causing a perceptional deflection constitutes the mechanical pain threshold.
Medicine, Issue 23, Experimental pain, experimental inflammation, human, skin, heat stimuli, mechanical stimuli, pain threshold, psychophysics, non-myelinated nociceptive nerve fiber, small myelinated nociceptive nerve fiber
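The up-down procedure can be made concrete with a short sketch; the probe weights, stepping rule, and stopping criterion below are illustrative assumptions rather than the protocol's actual parameters.

```python
# Minimal sketch of an up-down staircase for the mechanical pain threshold.
# The probe weights, step rule, and number of deflection points used here
# are illustrative assumptions, not the published protocol parameters.

PROBE_WEIGHTS_G = [8, 16, 32, 64, 128, 256, 512, 1024]   # eight punctuated probes

def updown_threshold(reports_pain, n_deflections=4, max_trials=50):
    """Estimate the threshold as the mean probe weight at perceptional deflections.

    `reports_pain(weight_g)` returns True if the subject reports pain for that
    probe weight (here a callable; in practice, the subject's verbal report).
    """
    idx = 0
    last_response = reports_pain(PROBE_WEIGHTS_G[idx])
    deflection_weights = []
    for _ in range(max_trials):
        if len(deflection_weights) >= n_deflections:
            break
        idx += -1 if last_response else 1          # step down after pain, up after no pain
        idx = max(0, min(idx, len(PROBE_WEIGHTS_G) - 1))
        response = reports_pain(PROBE_WEIGHTS_G[idx])
        if response != last_response:              # a perceptional deflection point
            deflection_weights.append(PROBE_WEIGHTS_G[idx])
        last_response = response
    return sum(deflection_weights) / len(deflection_weights)

# Example: a simulated subject whose true threshold is ~100 g
print(updown_threshold(lambda weight_g: weight_g >= 100))   # ~96 g
```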
Facilitating the Analysis of Immunological Data with Visual Analytic Techniques
Authors: David C. Shih, Kevin C. Ho, Kyle M. Melnick, Ronald A. Rensink, Tobias R. Kollmann, Edgardo S. Fortuno III.
Institutions: University of British Columbia.
Visual analytics (VA) has emerged as a new way to analyze large datasets through interactive visual displays. We demonstrated the utility and flexibility of a VA approach in the analysis of biological datasets. Examples of these datasets in immunology include flow cytometry, Luminex data, and genotyping (e.g., single nucleotide polymorphism) data. In contrast to the traditional information visualization approach, VA puts analytical power back in the hands of the analyst by allowing the analyst to engage in a real-time data exploration process. We selected the VA software called Tableau after evaluating several VA tools. Two types of analysis tasks, analysis within and between datasets, were demonstrated in the video presentation using an approach called paired analysis. Paired analysis, as defined in VA, is an analysis approach in which a VA tool expert works side-by-side with a domain expert during the analysis. The domain expert is the one who understands the significance of the data and asks the questions that the collected data might address. The tool expert then creates visualizations to help find patterns in the data that might answer these questions. The short lag time between hypothesis generation and the rapid visual display of the data is the main advantage of a VA approach.
Immunology, Issue 47, Visual analytics, flow cytometry, Luminex, Tableau, cytokine, innate immunity, single nucleotide polymorphism
Cross-Modal Multivariate Pattern Analysis
Authors: Kaspar Meyer, Jonas T. Kaplan.
Institutions: University of Southern California.
Multivariate pattern analysis (MVPA) is an increasingly popular method of analyzing functional magnetic resonance imaging (fMRI) data [1-4]. Typically, the method is used to identify a subject's perceptual experience from neural activity in certain regions of the brain. For instance, it has been employed to predict the orientation of visual gratings a subject perceives from activity in early visual cortices [5] or, analogously, the content of speech from activity in early auditory cortices [6]. Here, we present an extension of the classical MVPA paradigm, according to which perceptual stimuli are not predicted within, but across sensory systems. Specifically, the method we describe addresses the question of whether stimuli that evoke memory associations in modalities other than the one through which they are presented induce content-specific activity patterns in the sensory cortices of those other modalities. For instance, seeing a muted video clip of a glass vase shattering on the ground automatically triggers in most observers an auditory image of the associated sound; is the experience of this image in the "mind's ear" correlated with a specific neural activity pattern in early auditory cortices? Furthermore, is this activity pattern distinct from the pattern that could be observed if the subject were, instead, watching a video clip of a howling dog? In two previous studies [7,8], we were able to predict sound- and touch-implying video clips based on neural activity in early auditory and somatosensory cortices, respectively. Our results are in line with a neuroarchitectural framework proposed by Damasio [9,10], according to which the experience of mental images that are based on memories - such as hearing the shattering sound of a vase in the "mind's ear" upon seeing the corresponding video clip - is supported by the re-construction of content-specific neural activity patterns in early sensory cortices.
Neuroscience, Issue 57, perception, sensory, cross-modal, top-down, mental imagery, fMRI, MRI, neuroimaging, multivariate pattern analysis, MVPA
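For readers unfamiliar with MVPA, the following sketch shows the generic pattern-classification step (train a linear classifier on multi-voxel patterns and estimate cross-validated decoding accuracy) on synthetic data; it is not the authors' analysis pipeline, and all values are simulated.

```python
# Illustrative sketch of pattern classification as used in MVPA, with
# synthetic "voxel" data standing in for early auditory cortex activity
# evoked by two categories of video clips. Not the authors' pipeline;
# just the generic train/test logic.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n_trials, n_voxels = 60, 200
labels = np.repeat([0, 1], n_trials // 2)          # two video-clip categories
patterns = rng.normal(size=(n_trials, n_voxels))
patterns[labels == 1, :20] += 0.6                  # weak category-specific signal

clf = SVC(kernel="linear")
accuracy = cross_val_score(clf, patterns, labels, cv=5).mean()
print(f"Cross-validated decoding accuracy: {accuracy:.2f}")   # above 0.5 = above chance
```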
A Strategy to Identify de Novo Mutations in Common Disorders such as Autism and Schizophrenia
Authors: Gauthier Julie, Fadi F. Hamdan, Guy A. Rouleau.
Institutions: Universite de Montreal.
There are several lines of evidence supporting the role of de novo mutations as a mechanism for common disorders, such as autism and schizophrenia. First, the de novo mutation rate in humans is relatively high, so new mutations are generated at a high frequency in the population. However, de novo mutations have not been reported in most common diseases. Mutations in genes leading to severe diseases where there is a strong negative selection against the phenotype, such as lethality in embryonic stages or reduced reproductive fitness, will not be transmitted to multiple family members, and therefore will not be detected by linkage gene mapping or association studies. The observation of very high concordance in monozygotic twins and very low concordance in dizygotic twins also strongly supports the hypothesis that a significant fraction of cases may result from new mutations. Such is the case for diseases such as autism and schizophrenia. Second, despite reduced reproductive fitness [1] and extremely variable environmental factors, the incidence of some diseases is maintained worldwide at a relatively high and constant rate. This is the case for autism and schizophrenia, with an incidence of approximately 1% worldwide. Mutational load can be thought of as a balance between selection for or against a deleterious mutation and its production by de novo mutation. Lower rates of reproduction constitute a negative selection factor that should reduce the number of mutant alleles in the population, ultimately leading to decreased disease prevalence. These selective pressures tend to be of different intensity in different environments. Nonetheless, these severe mental disorders have been maintained at a constant, relatively high prevalence in the worldwide population across a wide range of cultures and countries despite a strong negative selection against them [2]. This is not what one would predict in diseases with reduced reproductive fitness, unless there was a high new mutation rate. Finally, there is the effect of paternal age: the risk of disease increases significantly with increasing paternal age, which could result from the age-related increase in paternal de novo mutations. This is the case for autism and schizophrenia [3]. The male-to-female ratio of mutation rate is estimated at about 4–6:1, presumably due to a higher number of germ-cell divisions with age in males. Therefore, one would predict that de novo mutations would more frequently come from males, particularly older males [4]. A high rate of new mutations may in part explain why genetic studies have so far failed to identify many genes predisposing to complex diseases such as autism and schizophrenia, and why disease associations have been identified for a mere 3% of genes in the human genome. Identification of de novo mutations as a cause of a disease requires a targeted molecular approach, which includes studying parents and affected subjects. The process for determining if the genetic basis of a disease may result in part from de novo mutations and the molecular approach to establish this link will be illustrated, using autism and schizophrenia as examples.
Medicine, Issue 52, de novo mutation, complex diseases, schizophrenia, autism, rare variations, DNA sequencing
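The mutation-selection argument above can be quantified with a back-of-the-envelope calculation; the prevalence and fitness values in the sketch below are illustrative assumptions, not figures from the article.

```python
# Back-of-the-envelope mutation-selection balance calculation supporting the
# argument above. For a dominant-acting mutation with fitness reduction s,
# equilibrium prevalence P is roughly 2*mu/s, so the aggregate de novo
# mutation rate needed to maintain prevalence P is mu = P * s / 2.
# The prevalence and fitness values are illustrative assumptions.
prevalence = 0.01        # ~1% worldwide, as for autism or schizophrenia
fitness_reduction = 0.5  # assumed 50% reduction in reproductive fitness

required_mutation_rate = prevalence * fitness_reduction / 2
print(f"Required aggregate de novo mutation rate: {required_mutation_rate:.4f} per gamete")
# ~2.5e-3 per gamete per generation, i.e. orders of magnitude above a typical
# single-locus rate (~1e-5 to 1e-6), consistent with many target genes
# and/or a substantial contribution of new mutations.
```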
Expired CO2 Measurement in Intubated or Spontaneously Breathing Patients from the Emergency Department
Authors: Franck Verschuren, Maidei Gugu Kabayadondo, Frédéric Thys.
Institutions: Université Catholique de Louvain, Cliniques Universitaires Saint-Luc.
Carbon dioxide (CO2) and oxygen (O2) share the role of being the most important gases in the human body. The measurement of expired CO2 at the mouth has attracted growing clinical interest among physicians in the emergency department for various indications: (1) surveillance and monitoring of the intubated patient; (2) verification of the correct positioning of an endotracheal tube; (3) monitoring of a patient in cardiac arrest; (4) achieving normocapnia in intubated head trauma patients; (5) monitoring ventilation during procedural sedation. The video allows physicians to familiarize themselves with the use of capnography, and the text offers a review of the theory and principles involved. In particular, the importance of CO2 for the organism, the relevance of measuring expired CO2, the differences between arterial and expired CO2, and the equipment used for capnography, with its artifacts and pitfalls, will be reviewed. Since the main reluctance to use expired CO2 measurement stems from physicians' limited knowledge of CO2 physiopathology, we hope that this explanation and the accompanying video sequences will help resolve this limitation.
Medicine, Issue 47, capnography, CO2, emergency medicine, end-tidal CO2
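To make the arterial-versus-expired CO2 relationship concrete, the following sketch computes the PaCO2–EtCO2 gradient and the Bohr-Enghoff dead-space fraction from assumed, typical values; the numbers are illustrative, not patient data from the article.

```python
# Worked example of the arterial vs. expired CO2 relationship discussed above.
# Values are illustrative: PaCO2 from a blood gas, EtCO2 from capnography, and
# mixed-expired PCO2 for the Bohr-Enghoff dead-space estimate.
pa_co2 = 40.0      # arterial PCO2, mmHg
et_co2 = 36.0      # end-tidal PCO2, mmHg
pe_co2 = 27.0      # mixed-expired PCO2, mmHg

a_et_gradient = pa_co2 - et_co2                    # normally ~2-5 mmHg
dead_space_fraction = (pa_co2 - pe_co2) / pa_co2   # Bohr-Enghoff equation
print(f"PaCO2-EtCO2 gradient: {a_et_gradient:.0f} mmHg")
print(f"Physiologic dead space fraction (Vd/Vt): {dead_space_fraction:.2f}")
```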

What is Visualize?

JoVE Visualize is a tool created to match the last 5 years of PubMed publications to methods in JoVE's video library.

How does it work?

We use abstracts found on PubMed and match them to JoVE videos to create a list of 10 to 30 related methods videos.

Video X seems to be unrelated to Abstract Y...

In developing our video relationships, we compare around 5 million PubMed articles to our library of over 4,500 methods videos. In some cases, the language used in the PubMed abstracts makes it difficult to match that content to a JoVE video. In other cases, our video library simply contains no content relevant to the topic of a given abstract. In these cases, our algorithms do their best to display videos with relevant content, which can sometimes result in matched videos with only a slight relation.
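As a purely illustrative example of how abstract-to-video matching can work in general (this is not JoVE's actual algorithm), the sketch below ranks hypothetical video descriptions against an abstract by TF-IDF cosine similarity.

```python
# Purely illustrative sketch of abstract-to-video text matching with TF-IDF
# and cosine similarity; this is NOT JoVE's actual matching algorithm, just a
# generic example of how such relevance ranking can be done.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

abstract = "endoscopic scoring of colitis and colorectal tumors in mice"
video_descriptions = [
    "colonoscopy-based assessment of murine colitis severity",
    "magnetic resonance spectroscopy of motor cortex metabolites",
    "isolation of cardiac myocytes for membrane imaging",
]

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform([abstract] + video_descriptions)
scores = cosine_similarity(matrix[0], matrix[1:]).ravel()
for score, title in sorted(zip(scores, video_descriptions), reverse=True):
    print(f"{score:.2f}  {title}")
```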