Spring-like materials are ubiquitous in nature and of interest in nanotechnology for energy harvesting, hydrogen storage, and biological sensing applications. For predictive simulations, it has become increasingly important to model the structure of nanohelices accurately, and studying the effect of local structure on the properties of these complex geometries requires realistic models. To date, software packages for creating atomistic helical models are rather limited. This work focuses on producing atomistic models of silica glass (SiO2) nanoribbons and nanosprings for molecular dynamics (MD) simulations. Starting from an MD model of "bulk" silica glass, two computational procedures to precisely create the shape of nanoribbons and nanosprings are presented. The first method employs the AWK programming language and open-source software to carve various shapes of silica nanoribbons from the initial bulk model, using desired dimensions and the parametric equations that define a helix. With this method, accurate atomistic silica nanoribbons can be generated for a range of pitch values and dimensions. The second method involves a more robust code that allows flexibility in modeling nanohelical structures. This approach uses a C++ code written specifically to implement pre-screening methods as well as the mathematical equations for a helix, resulting in greater precision and efficiency when creating nanospring models. Using these codes, well-defined and scalable nanoribbons and nanosprings suited for atomistic simulations can be created. An added value of both open-source codes is that they can be adapted to reproduce different helical structures, independent of material. In addition, a MATLAB graphical user interface (GUI) lets a general user explore the atomistic helical structures through visualization and interaction, enhancing learning.
One application of these methods is the recent study of nanohelices via MD simulations for mechanical energy harvesting purposes.
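The carving step common to both methods can be illustrated with a short sketch (in Python here, not the authors' AWK or C++): atoms from a bulk model are kept only if they lie within a ribbon half-width of the parametric helix x = r cos t, y = r sin t, z = (p/2π)t. The box size, helix parameters, and cutoff below are illustrative values, not those of the paper.

```python
import numpy as np

def helix_points(radius, pitch, turns, n=500):
    """Sample the parametric helix x = r cos t, y = r sin t, z = (pitch/2pi) t."""
    t = np.linspace(0.0, 2.0 * np.pi * turns, n)
    return np.column_stack([radius * np.cos(t),
                            radius * np.sin(t),
                            pitch * t / (2.0 * np.pi)])

def carve(atoms, radius, pitch, turns, half_width):
    """Keep atoms lying within half_width of the helix curve (a crude
    stand-in for the pre-screened carving step described in the paper)."""
    curve = helix_points(radius, pitch, turns)
    # distance of every atom to every sampled curve point
    d = np.linalg.norm(atoms[:, None, :] - curve[None, :, :], axis=2)
    return atoms[d.min(axis=1) < half_width]

# toy "bulk" block of atom coordinates (illustrative units)
rng = np.random.default_rng(0)
bulk = rng.uniform([-15.0, -15.0, 0.0], [15.0, 15.0, 40.0], size=(2000, 3))
spring = carve(bulk, radius=10.0, pitch=20.0, turns=2.0, half_width=2.0)
```

A real carving code would also respect Si-O connectivity (removing dangling atoms), which this geometric filter ignores.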
Using an EEG-Based Brain-Computer Interface for Virtual Cursor Movement with BCI2000
Institutions: University of Wisconsin-Madison, New York State Dept. of Health.
A brain-computer interface (BCI) functions by translating a neural signal, such as the electroencephalogram (EEG), into a signal that can be used to control a computer or other device. The amplitudes of the EEG signals in selected frequency bins are measured and translated into a device command, in this case the horizontal and vertical velocity of a computer cursor. First, the EEG electrodes are applied to the user's scalp using a cap to record brain activity. Next, a calibration procedure is used to find the EEG electrodes and features that the user will learn to voluntarily modulate to use the BCI. In humans, the power in the mu (8-12 Hz) and beta (18-28 Hz) frequency bands decreases in amplitude during a real or imagined movement. These changes can be detected in the EEG in real time and used to control a BCI. Therefore, during a screening test, the user is asked to make several different imagined movements with their hands and feet to determine the unique EEG features that change with the imagined movements. The results from this calibration show the best channels to use, which are configured so that amplitude changes in the mu and beta frequency bands move the cursor either horizontally or vertically. In this experiment, the general-purpose BCI system BCI2000 is used to control signal acquisition, signal processing, and feedback to the user.
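The translation from band power to cursor velocity can be sketched in a few lines. This is a minimal illustration of the idea, not BCI2000's actual signal-processing chain, and the sampling rate, gain, and offset are hypothetical placeholders.

```python
import numpy as np

FS = 256  # assumed sampling rate in Hz (placeholder)

def band_power(signal, low, high, fs=FS):
    """Mean magnitude-squared FFT power in the [low, high] Hz band."""
    spec = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    mask = (freqs >= low) & (freqs <= high)
    return spec[mask].mean()

def cursor_velocity(signal, gain=1.0, offset=0.0):
    """Linear translation of mu-band (8-12 Hz) power into a 1-D cursor
    velocity; gain and offset stand in for per-user calibration."""
    return gain * (band_power(signal, 8, 12) - offset)

# demo: imagined movement desynchronizes (attenuates) the mu rhythm
t = np.arange(FS) / FS
rest = np.sin(2 * np.pi * 10 * t)            # strong 10 Hz rhythm
imagery = 0.2 * np.sin(2 * np.pi * 10 * t)   # suppressed rhythm
```

With the suppressed rhythm, the computed velocity drops, which is the control signal the user learns to modulate.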
Neuroscience, Issue 29, BCI, EEG, brain-computer interface, BCI2000
Automated, Quantitative Cognitive/Behavioral Screening of Mice: For Genetics, Pharmacology, Animal Cognition and Undergraduate Instruction
Institutions: Rutgers University, Koç University, New York University, Fairfield University.
We describe a high-throughput, high-volume, fully automated, live-in 24/7 behavioral testing system for assessing the effects of genetic and pharmacological manipulations on basic mechanisms of cognition and learning in mice. A standard polypropylene mouse housing tub is connected through an acrylic tube to a standard commercial mouse test box. The test box has 3 hoppers, 2 of which are connected to pellet feeders. All are internally illuminable with an LED and monitored for head entries by infrared (IR) beams. Mice live in the environment, which eliminates handling during screening. They obtain their food during two or more daily feeding periods by performing in operant (instrumental) and Pavlovian (classical) protocols, for which we have written protocol-control software and quasi-real-time data analysis and graphing software. The data analysis and graphing routines are written in a MATLAB-based language created to greatly simplify the analysis of large time-stamped behavioral and physiological event records and to preserve a full data trail from raw data through all intermediate analyses to the published graphs and statistics within a single data structure. The data-analysis code harvests the data several times a day and subjects it to statistical and graphical analyses, which are automatically stored in the "cloud" and on in-lab computers. Thus, the progress of individual mice is visualized and quantified daily. The data-analysis code talks to the protocol-control code, permitting the automated advance from protocol to protocol of individual subjects. The behavioral protocols implemented are matching, autoshaping, timed hopper-switching, risk assessment in timed hopper-switching, impulsivity measurement, and the circadian anticipation of food availability. Open-source protocol-control and data-analysis code makes the addition of new protocols simple.
Eight test environments fit in a 48 in x 24 in x 78 in cabinet; two such cabinets (16 environments) may be controlled by one computer.
Behavior, Issue 84, genetics, cognitive mechanisms, behavioral screening, learning, memory, timing
Easy Measurement of Diffusion Coefficients of EGFP-tagged Plasma Membrane Proteins Using k-Space Image Correlation Spectroscopy
Institutions: Aarhus University, McGill University.
Lateral diffusion and compartmentalization of plasma membrane proteins are tightly regulated in cells; studying these processes will therefore reveal new insights into plasma membrane protein function and regulation. Recently, k-Space Image Correlation Spectroscopy (kICS)1
was developed to enable routine measurements of diffusion coefficients directly from images of fluorescently tagged plasma membrane proteins while avoiding the systematic biases introduced by probe photophysics. Although the theoretical basis for the analysis is complex, the method can be implemented by nonexperts using freely available code to measure diffusion coefficients of proteins. kICS calculates a time correlation function from a fluorescence microscopy image stack after Fourier transformation of each image to reciprocal (k-) space. Subsequently, circular averaging, a natural-logarithm transform, and linear fits to the correlation function yield the diffusion coefficient. This paper provides a step-by-step guide to the image analysis and measurement of diffusion coefficients via kICS.
First, a high frame rate image sequence of a fluorescently labeled plasma membrane protein is acquired using a fluorescence microscope. Then, a region of interest (ROI) avoiding intracellular organelles, moving vesicles or protruding membrane regions is selected. The ROI stack is imported into a freely available code and several defined parameters (see Method section) are set for kICS analysis. The program then generates a "slope of slopes" plot from the k-space time correlation functions, and the diffusion coefficient is calculated from the slope of the plot. Below is a step-by-step kICS procedure to measure the diffusion coefficient of a membrane protein using the renal water channel aquaporin-3 tagged with EGFP as a canonical example.
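The analysis chain above (Fourier transform, circular averaging, log transform, linear fits, then the "slope of slopes") can be demonstrated on a synthetic correlation function, which for free diffusion takes the form φ(k, τ) = exp(-D|k|²τ). This toy pipeline is not the published kICS code; it only shows how the two-stage linear fit recovers D.

```python
import numpy as np

D_true = 0.25                      # hypothetical diffusion coefficient
k2 = np.linspace(0.5, 5.0, 10)     # circularly averaged |k|^2 values
tau = np.arange(1, 21) * 0.05      # lag times

# synthetic kICS time correlation function for free diffusion
phi = np.exp(-np.outer(k2, tau) * D_true)

# stage 1: for each |k|^2, the slope of ln(phi) vs tau equals -D * |k|^2
slopes = np.array([np.polyfit(tau, np.log(row), 1)[0] for row in phi])

# stage 2: the "slope of slopes" -- slope of those slopes vs |k|^2 is -D
D_est = -np.polyfit(k2, slopes, 1)[0]
```

On real data the same two fits are applied to the measured, circularly averaged correlation function rather than to this analytic form.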
Biophysics, Issue 87, Amino Acids, Peptides and Proteins, Computer Programming and Software, Diffusion coefficient, Aquaporin-3, k-Space Image Correlation Spectroscopy, Analysis
High-throughput Image Analysis of Tumor Spheroids: A User-friendly Software Application to Measure the Size of Spheroids Automatically and Accurately
Institutions: Raymond and Beverly Sackler Foundation, New Jersey, Rutgers University, Rutgers University, Institute for Advanced Study, New Jersey.
The increasing number of applications of three-dimensional (3D) tumor spheroids as an in vitro
model for drug discovery requires their adaptation to large-scale screening formats in every step of a drug screen, including large-scale image analysis. Currently there is no ready-to-use, free image analysis software that meets this large-scale format. Most existing methods involve manually drawing the length and width of the imaged 3D spheroids, a tedious and time-consuming process. This study presents a high-throughput image analysis software application, SpheroidSizer, which measures the major and minor axial lengths of imaged 3D tumor spheroids automatically and accurately; calculates the volume of each individual 3D tumor spheroid; and outputs the results in two different spreadsheet forms for easy manipulation in subsequent data analysis. The main advantage of this software is its powerful image analysis application adapted for large numbers of images, providing a high-throughput computation and quality-control workflow. The estimated time to process 1,000 images is about 15 min on a minimally configured laptop, or around 1 min on a multi-core workstation. The graphical user interface (GUI) is also designed for easy quality control, and users can manually override the computer's results. The key method used in this software is adapted from the active contour algorithm, also known as Snakes, which is especially suitable for the images with uneven illumination and noisy backgrounds that often plague automated image processing in high-throughput screens. The complementary "Manual Initialize" and "Hand Draw" tools give SpheroidSizer the flexibility to deal with various types of spheroids and images of diverse quality. This high-throughput image analysis software markedly reduces labor and speeds up the analysis process. Implementing this software helps 3D tumor spheroids become a routine in vitro
model for drug screens in industry and academia.
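The volume calculation from the measured major and minor axial lengths presumably uses an ellipsoid-of-revolution convention; one common choice for tumor spheroids (an assumption here, since the abstract does not give the formula) is V = (π/6)·L·W²:

```python
import math

def spheroid_volume(major, minor):
    """Prolate-ellipsoid volume from full axial lengths (not semi-axes):
    V = (pi/6) * major * minor**2. A common convention for tumor
    spheroids; the formula SpheroidSizer actually uses may differ."""
    return math.pi / 6.0 * major * minor ** 2

# a spheroid measured at 400 x 300 um gives its volume in um^3
v = spheroid_volume(400.0, 300.0)
```

Note that with major == minor the formula reduces to the volume of a sphere of that diameter, a quick sanity check.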
Cancer Biology, Issue 89, computer programming, high-throughput, image analysis, tumor spheroids, 3D, software application, cancer therapy, drug screen, neuroendocrine tumor cell line, BON-1, cancer research
From Voxels to Knowledge: A Practical Guide to the Segmentation of Complex Electron Microscopy 3D-Data
Institutions: Lawrence Berkeley National Laboratory, Lawrence Berkeley National Laboratory, Lawrence Berkeley National Laboratory.
Modern 3D electron microscopy approaches have recently allowed unprecedented insight into the 3D ultrastructural organization of cells and tissues, enabling the visualization of large macromolecular machines, such as adhesion complexes, as well as higher-order structures, such as the cytoskeleton and cellular organelles in their respective cell and tissue context. Given the inherent complexity of cellular volumes, it is essential to first extract the features of interest in order to allow visualization, quantification, and therefore comprehension of their 3D organization. Each data set is defined by distinct characteristics, e.g.
, signal-to-noise ratio, crispness (sharpness) of the data, heterogeneity of its features, crowdedness of features, presence or absence of characteristic shapes that allow for easy identification, and the percentage of the entire volume that a specific region of interest occupies. All these characteristics need to be considered when deciding on which approach to take for segmentation.
The six different 3D ultrastructural data sets presented were obtained by three different imaging approaches: resin-embedded, stained electron tomography, and focused ion beam and serial block face scanning electron microscopy (FIB-SEM and SBF-SEM) of mildly and heavily stained samples, respectively. For these data sets, four different segmentation approaches have been applied: (1) fully manual model building followed solely by visualization of the model, (2) manual tracing segmentation of the data followed by surface rendering, (3) semi-automated approaches followed by surface rendering, or (4) automated custom-designed segmentation algorithms followed by surface rendering and quantitative analysis. Depending on the combination of data set characteristics, it was found that typically one of these four categorical approaches outperforms the others, but depending on the exact sequence of criteria, more than one approach may be successful. Based on these data, we propose a triage scheme that categorizes both objective data set characteristics and subjective personal criteria for the analysis of the different data sets.
Bioengineering, Issue 90, 3D electron microscopy, feature extraction, segmentation, image analysis, reconstruction, manual tracing, thresholding
Cortical Source Analysis of High-Density EEG Recordings in Children
Institutions: UCL Institute of Child Health, University College London.
EEG is traditionally described as a neuroimaging technique with high temporal and low spatial resolution. Recent advances in biophysical modelling and signal processing make it possible to exploit information from other imaging modalities like structural MRI that provide high spatial resolution to overcome this constraint1
. This is especially useful for investigations that require high resolution in the temporal as well as spatial domain. In addition, due to the easy application and low cost of EEG recordings, EEG is often the method of choice when working with populations, such as young children, that do not tolerate functional MRI scans well. However, in order to investigate which neural substrates are involved, anatomical information from structural MRI is still needed. Most EEG analysis packages work with standard head models that are based on adult anatomy. The accuracy of these models when used for children is limited2
, because the composition and spatial configuration of head tissues change dramatically over development3.
In the present paper, we provide an overview of our recent work in utilizing head models based on individual structural MRI scans or age specific head models to reconstruct the cortical generators of high density EEG. This article describes how EEG recordings are acquired, processed, and analyzed with pediatric populations at the London Baby Lab, including laboratory setup, task design, EEG preprocessing, MRI processing, and EEG channel level and source analysis.
Behavior, Issue 88, EEG, electroencephalogram, development, source analysis, pediatric, minimum-norm estimation, cognitive neuroscience, event-related potentials
Multi-electrode Array Recordings of Human Epileptic Postoperative Cortical Tissue
Institutions: CNRS UMR 7241, INSERM U1050, Collège de France, Paris Descartes University, Sorbonne Paris Cité, CEA, Paris Descartes University, Paris Descartes University, La Pitié-Salpêtrière Hospital, AP-HP, Sorbonne and Pierre and Marie Curie University.
Epilepsy, affecting about 1% of the population, comprises a group of neurological disorders characterized by the periodic occurrence of seizures, which disrupt normal brain function. Despite treatment with currently available antiepileptic drugs targeting neuronal functions, one third of patients with epilepsy are pharmacoresistant. In this condition, surgical resection of the brain area generating seizures remains the only alternative treatment. Over the last 10 years, studying human epileptic tissues has contributed to the understanding of new epileptogenic mechanisms. Indeed, these tissues generate spontaneous interictal epileptic discharges as well as pharmacologically induced ictal events, which can be recorded with classical electrophysiology techniques. Remarkably, multi-electrode arrays (MEAs), which are microfabricated devices embedding an array of spatially arranged microelectrodes, provide the unique opportunity to simultaneously stimulate and record field potentials, as well as action potentials of multiple neurons, from different areas of the tissue. Thus, MEA recordings offer an excellent approach to study the spatio-temporal patterns of spontaneous interictal and evoked seizure-like events and the mechanisms underlying seizure onset and propagation. Here we describe how to prepare human cortical slices from surgically resected tissue and how to record interictal and ictal-like events ex vivo with MEAs.
Medicine, Issue 92, electrophysiology, multi-electrode array, human tissue, slice, epilepsy, neocortex
Real-time Electrophysiology: Using Closed-loop Protocols to Probe Neuronal Dynamics and Beyond
Institutions: University of Antwerp.
Experimental neuroscience is witnessing an increased interest in the development and application of novel and often complex, closed-loop protocols, where the stimulus applied depends in real-time on the response of the system. Recent applications range from the implementation of virtual reality systems for studying motor responses both in mice1
and in zebrafish2
, to control of seizures following cortical stroke using optogenetics3
. A key advantage of closed-loop techniques resides in the capability of probing higher dimensional properties that are not directly accessible or that depend on multiple variables, such as neuronal excitability4
and reliability, while at the same time maximizing the experimental throughput. In this contribution and in the context of cellular electrophysiology, we describe how to apply a variety of closed-loop protocols to the study of the response properties of pyramidal cortical neurons, recorded intracellularly with the patch clamp technique in acute brain slices from the somatosensory cortex of juvenile rats. As no commercially available or open source software provides all the features required for efficiently performing the experiments described here, a new software toolbox called LCG5
was developed, whose modular structure maximizes reuse of computer code and facilitates the implementation of novel experimental paradigms. Stimulation waveforms are specified using a compact meta-description and full experimental protocols are described in text-based configuration files. Additionally, LCG has a command-line interface that is suited for repetition of trials and automation of experimental protocols.
Neuroscience, Issue 100, Electrophysiology, cellular neurobiology, dynamic clamp, Active Electrode Compensation, command-line interface, real-time computing, closed-loop, scripted electrophysiology.
Closed-loop Neuro-robotic Experiments to Test Computational Properties of Neuronal Networks
Institutions: Istituto Italiano di Tecnologia.
Information coding in the Central Nervous System (CNS) remains largely unexplored. There is mounting evidence that, even at a very low level, the representation of a given stimulus might depend on context and history. If this is actually the case, bi-directional interactions between the brain (or, if need be, a reduced model of it) and the sensory-motor system can shed light on how the encoding and decoding of information are performed. Here an experimental system is introduced and described in which the activity of a neuronal element (i.e.
, a network of neurons extracted from embryonic mammalian hippocampi) is given context and used to control the movement of an artificial agent, while environmental information is fed back to the culture as a sequence of electrical stimuli. This architecture allows a quick selection of diverse encoding, decoding, and learning algorithms to test different hypotheses on the computational properties of neuronal networks.
Neuroscience, Issue 97, Micro Electrode Arrays (MEA), in vitro cultures, coding, decoding, tetanic stimulation, spike, burst
A Practical Guide to Phylogenetics for Nonexperts
Institutions: The George Washington University.
Many researchers, across incredibly diverse foci, are applying phylogenetics to their research question(s). However, many researchers are new to this topic, which presents inherent problems. Here we compile a practical introduction to phylogenetics for nonexperts. We outline, in a step-by-step manner, a pipeline for generating reliable phylogenies from gene sequence datasets. We begin with a user guide for similarity search tools via online interfaces as well as local executables. Next, we explore programs for generating multiple sequence alignments, followed by protocols for using software to determine best-fit models of evolution. We then outline protocols for reconstructing phylogenetic relationships via maximum likelihood and Bayesian criteria, and finally describe tools for visualizing phylogenetic trees. While this is by no means an exhaustive description of phylogenetic approaches, it provides the reader with practical starting information on key software applications commonly utilized by phylogeneticists. We envision this article serving as a practical training tool for researchers embarking on phylogenetic studies and as an educational resource that can be incorporated into a classroom or teaching lab.
Basic Protocol, Issue 84, phylogenetics, multiple sequence alignments, phylogenetic tree, BLAST executables, basic local alignment search tool, Bayesian models
Polymalic Acid-based Nano Biopolymers for Targeting of Multiple Tumor Markers: An Opportunity for Personalized Medicine?
Institutions: Cedars-Sinai Medical Center.
Tumors of similar grade and morphology often respond differently to the same treatment because of variations in molecular profiling. To account for this diversity, personalized medicine is being developed to silence malignancy-associated genes. Nano drugs fit these needs by targeting tumors and delivering antisense oligonucleotides for gene silencing. Because such drugs are often administered repeatedly, absence of toxicity and a negligible immune response are desirable. In the example presented here, a nano medicine is synthesized from the biodegradable, non-toxic, and non-immunogenic platform polymalic acid by controlled chemical ligation of antisense oligonucleotides and tumor-targeting molecules. The synthesis and treatment are exemplified for human Her2-positive breast cancer using an experimental mouse model, and the approach can be translated to the synthesis and treatment of other tumors.
Chemistry, Issue 88, Cancer treatment, personalized medicine, polymalic acid, nanodrug, biopolymer, targeting, host compatibility, biodegradability
How to Culture, Record and Stimulate Neuronal Networks on Micro-electrode Arrays (MEAs)
Institutions: Emory University School of Medicine, University School of Medicine, Emory University School of Medicine.
For the last century, many neuroscientists around the world have dedicated their lives to understanding how neuronal networks work and why they stop working in various diseases. Studies have included neuropathological observation, fluorescent microscopy with genetic labeling, and intracellular recording in both dissociated neurons and slice preparations. This protocol discusses another technology, which involves growing dissociated neuronal cultures on micro-electrode arrays (also called multi-electrode arrays, MEAs).
There are multiple advantages to using this system over other technologies. Dissociated neuronal cultures on MEAs provide a simplified model in which network activity can be manipulated with electrical stimulation sequences through the array's multiple electrodes. Because the network is small, the impact of stimulation is limited to observable areas, which is not the case in intact preparations. The cells grow in a monolayer, making changes in morphology easy to monitor with various imaging techniques. Finally, cultures on MEAs can survive for over a year in vitro, which removes the clear time limitations inherent in other culturing techniques.1
Our lab and others around the globe are utilizing this technology to ask important questions about neuronal networks. The purpose of this protocol is to provide the necessary information for setting up, caring for, recording from and electrically stimulating cultures on MEAs. In vitro
networks provide a means for asking physiologically relevant questions at the network and cellular levels leading to a better understanding of brain function and dysfunction.
Neuroscience, Issue 39, micro-electrode, multi-electrode, neural, MEA, network, plasticity, spike, stimulation, recording, rat
Assessment of Cerebral Lateralization in Children using Functional Transcranial Doppler Ultrasound (fTCD)
Institutions: University of Oxford.
There are many unanswered questions about cerebral lateralization. In particular, it remains unclear which aspects of language and nonverbal ability are lateralized, whether there are any disadvantages associated with atypical patterns of cerebral lateralization, and whether cerebral lateralization develops with age. In the past, researchers interested in these questions tended to use handedness as a proxy measure for cerebral lateralization, but this is unsatisfactory because handedness is only a weak and indirect indicator of laterality of cognitive functions1
. Other methods, such as fMRI, are expensive for large-scale studies, and not always feasible with children2.
Here we describe the use of functional transcranial Doppler ultrasound (fTCD) as a cost-effective, non-invasive, and reliable method for assessing cerebral lateralization. The procedure involves measuring blood flow in the middle cerebral artery via an ultrasound probe placed just in front of the ear. Our work builds on that of Rune Aaslid, who co-introduced TCD in 1982, and of Stefan Knecht, Michael Deppe, and their colleagues at the University of Münster, who pioneered the use of simultaneous measurements of left and right middle cerebral artery blood flow and devised a method for correcting for heartbeat activity. This made it possible to see a clear increase in left-sided blood flow during language generation, with lateralization agreeing well with that obtained using other methods3.
The middle cerebral artery has a very wide vascular territory (see Figure 1) and the method does not provide useful information about localization within a hemisphere. Our experience suggests it is particularly sensitive to tasks that involve explicit or implicit speech production. The 'gold standard' task is a word generation task (e.g. think of as many words as you can that begin with the letter 'B') 4
, but this is not suitable for young children and others with limited literacy skills. Compared with other brain imaging methods, fTCD is relatively unaffected by movement artefacts from speaking, and so we are able to get a reliable result from tasks that involve describing pictures aloud5,6
. Accordingly, we have developed a child-friendly task that involves looking at video-clips that tell a story, and then describing what was seen.
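A laterality index from the left and right MCA velocity signals can be sketched as the mean left-minus-right difference over the task window. This simplified version omits the heart-cycle correction and epoch normalization of the actual fTCD procedure, and all signals below are synthetic.

```python
import numpy as np

def laterality_index(left, right, task_window):
    """Mean left-minus-right blood flow velocity difference over the
    task window; positive values indicate left lateralization. Omits
    the heart-cycle correction used in the full procedure."""
    diff = np.asarray(left, dtype=float) - np.asarray(right, dtype=float)
    return diff[task_window].mean()

# synthetic velocity envelopes: left MCA rises more during word generation
t = np.arange(300)
active = (t >= 100) & (t < 200)
left = np.where(active, 2.0, 0.0)
right = np.where(active, 0.5, 0.0)
li = laterality_index(left, right, slice(100, 200))
```

Swapping the two signals flips the sign of the index, which is the sense in which it measures lateralization rather than overall activation.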
Neuroscience, Issue 43, functional transcranial Doppler ultrasound, cerebral lateralization, language, child
Multi-electrode Array Recordings of Neuronal Avalanches in Organotypic Cultures
Institutions: National Institute of Mental Health.
The cortex is spontaneously active, even in the absence of any particular input or motor output. During development, this activity is important for the migration and differentiation of cortex cell types and the formation of neuronal connections1
. In the mature animal, ongoing activity reflects the past and present state of an animal, into which sensory stimuli are seamlessly integrated to compute future actions. Thus, a clear understanding of the organization of ongoing, i.e. spontaneous, activity is a prerequisite to understanding cortex function.
Numerous recording techniques have revealed that ongoing activity in cortex comprises many neurons whose individual activities transiently sum to larger events that can be detected in the local field potential (LFP) with extracellular microelectrodes, or in the electroencephalogram (EEG), the magnetoencephalogram (MEG), and the BOLD signal from functional magnetic resonance imaging (fMRI). The LFP is currently the method of choice when studying neuronal population activity with high temporal and spatial resolution at the mesoscopic scale (several thousands of neurons). At an extracellular microelectrode, locally synchronized activities of spatially neighboring neurons result in rapid deflections in the LFP of up to several hundred microvolts. When using an array of microelectrodes, the organization of such deflections can be conveniently monitored in space and time.
Neuronal avalanches describe the scale-invariant spatiotemporal organization of ongoing neuronal activity in the brain2,3
. They are specific to the superficial layers of cortex as established in vitro4,5
, in vivo
in the anesthetized rat 6
, and in the awake monkey7
. Importantly, both theoretical and empirical studies2,8-10
suggest that neuronal avalanches indicate an exquisitely balanced critical state dynamics of cortex that optimizes information transfer and information processing.
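Operationally, an avalanche is commonly defined on binned electrode events as a maximal run of consecutive non-empty time bins bounded by empty bins, its size being the total event count; the scale invariance cited above then appears as a power-law size distribution. A minimal detector (binning and thresholding of the LFP deflections are assumed to have been done already):

```python
def avalanche_sizes(event_counts):
    """Split a binned event train into avalanches: maximal runs of
    consecutive non-empty time bins bounded by empty bins, each
    avalanche's size being its summed event count."""
    sizes, current = [], 0
    for n in event_counts:
        if n > 0:
            current += n
        elif current > 0:
            sizes.append(current)
            current = 0
    if current > 0:          # close an avalanche running to the end
        sizes.append(current)
    return sizes

# events per time bin, pooled across electrodes (toy data)
counts = [0, 2, 3, 0, 0, 1, 0, 4, 1, 1, 0]
sizes = avalanche_sizes(counts)  # -> [5, 1, 6]
```

On real MEA data the choice of bin width matters; it is typically matched to the average inter-event interval before sizes are histogrammed.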
In order to study the mechanisms of neuronal avalanche development, maintenance, and regulation, in vitro
preparations are highly beneficial, as they allow for stable recordings of avalanche activity under precisely controlled conditions. The current protocol describes how to study neuronal avalanches in vitro by taking advantage of superficial layer development in organotypic cortex cultures, i.e. slice cultures, grown on planar, integrated microelectrode arrays (MEA; see also11-14).
Neuroscience, Issue 54, neuronal activity, neuronal avalanches, organotypic culture, slice culture, microelectrode array, electrophysiology, local field potential, extracellular spikes
Enabling High Grayscale Resolution Displays and Accurate Response Time Measurements on Conventional Computers
Institutions: The Ohio State University, University of Southern California, University of Southern California, University of Southern California, The Ohio State University.
Display systems based on conventional computer graphics cards are capable of generating images with 8-bit gray level resolution. However, most experiments in vision research require displays with more than 12 bits of luminance resolution. Several solutions are available. Bit++ 1
and DataPixx 2
use the Digital Visual Interface (DVI) output from graphics cards and high resolution (14 or 16-bit) digital-to-analog converters to drive analog display devices. The VideoSwitcher 3
described here combines analog video signals from the red and blue channels of graphics cards with different weights using a passive resistor network 4
and an active circuit to deliver identical video signals to the three channels of color monitors. The method provides an inexpensive way to enable high-resolution monochromatic displays using conventional graphics cards and analog monitors. It can also provide trigger signals that can be used to mark stimulus onsets, making it easy to synchronize visual displays with physiological recordings or response time measurements.
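The weighted red-blue combination can be modeled as luminance proportional to red + blue/weight, with the weight fixed by the resistor network. A sketch of encoding a high-resolution gray level into the two 8-bit channels follows; the weight of 128 is illustrative, not the device's actual ratio.

```python
def encode_gray(level, weight=128):
    """Split a high-resolution gray level into a coarse red value and a
    fine blue value; the hardware then displays roughly red + blue/weight,
    so levels 0 .. 255*weight are addressable. weight=128 is illustrative."""
    red, blue = divmod(level, weight)
    return red, blue

def decode_luminance(red, blue, weight=128):
    """Luminance produced by the weighted red-blue mix (arbitrary units)."""
    return red + blue / weight

r, b = encode_gray(1000)      # -> (7, 104)
lum = decode_luminance(r, b)  # 7 + 104/128
```

Because the blue channel only contributes a fraction of a red step, the fine values interleave between coarse levels, which is how the extra luminance resolution arises.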
Although computer keyboards and mice are frequently used in measuring response times (RT), the accuracy of these measurements is quite low. The RTbox is a specialized hardware and software solution for accurate RT measurements. Connected to the host computer through a USB connection, the driver of the RTbox is compatible with all conventional operating systems. It uses a microprocessor and high-resolution clock to record the identities and timing of button events, which are buffered until the host computer retrieves them. The recorded button events are not affected by potential timing uncertainties or biases associated with data transmission and processing in the host computer. The asynchronous storage greatly simplifies the design of user programs. Several methods are available to synchronize the clocks of the RTbox and the host computer. The RTbox can also receive external triggers and be used to measure RT with respect to external events.
Both VideoSwitcher and RTbox are available for users to purchase. The relevant information and many demonstration programs can be found at https://lobes.usc.edu/.
Neuroscience, Issue 60, VideoSwitcher, Visual stimulus, Luminance resolution, Contrast, Trigger, RTbox, Response time
High-efficiency, Site-specific Transfection of Adherent Cells with siRNA Using Microelectrode Arrays (MEA)
Institutions: Arizona State University .
The discovery of the RNAi pathway in eukaryotes and the subsequent development of RNAi agents, such as siRNA and shRNA, have provided a potent method for silencing specific genes [1-8] for functional genomics and therapeutics. A major challenge in RNAi-based studies is the delivery of RNAi agents to targeted cells. Traditional non-viral delivery techniques, such as bulk electroporation and chemical transfection methods, often lack the necessary spatial control over delivery and afford poor transfection efficiencies [9-12]. Recent advances in chemical transfection methods, such as cationic lipids, cationic polymers, and nanoparticles, have resulted in highly enhanced transfection efficiencies [13]. However, these techniques still fail to offer the precise spatial control over delivery that can immensely benefit miniaturized high-throughput technologies, single-cell studies, and the investigation of cell-cell interactions.
Recent technological advances in gene delivery have enabled high-throughput transfection of adherent cells [14-23], a majority of which use microscale electroporation. Microscale electroporation offers precise spatio-temporal control over delivery (down to single cells) and has been shown to achieve high efficiencies [19, 24-26]. Additionally, electroporation-based approaches do not require the prolonged incubation (typically 4 hours) with siRNA and DNA complexes that chemical transfection methods demand, and they lead to direct entry of naked siRNA and DNA molecules into the cell cytoplasm. As a consequence, gene expression can be achieved as early as six hours after transfection [27]. Our lab has previously demonstrated the use of microelectrode arrays (MEAs) for site-specific transfection of adherent mammalian cell cultures [17-19]. In the MEA-based approach, delivery of the genetic payload is achieved via localized microscale electroporation of cells. Application of an electric pulse to selected electrodes generates a local electric field that electroporates the cells present in the region of the stimulated electrodes. The independent control of the microelectrodes provides spatial and temporal control over transfection and also enables multiple transfection-based experiments to be performed on the same culture, increasing experimental throughput and reducing culture-to-culture variability.
Here we describe the experimental setup and the protocol for targeted transfection of adherent HeLa cells with a fluorescently tagged scrambled-sequence siRNA using electroporation. The same protocol can also be used for transfection of plasmid vectors. Additionally, the protocol described here can be easily extended to a variety of mammalian cell lines with minor modifications. The commercial availability of MEAs with both pre-defined and custom electrode patterns makes this technique accessible to most research labs with basic cell culture equipment.
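The spatial selectivity described above, where only cells near a pulsed electrode are electroporated, can be illustrated with a toy geometric model. All numbers here (electrode pitch, effective field radius) are hypothetical and not taken from the protocol.

```python
# Toy model of site-specific electroporation on an MEA grid: electrodes are
# addressed independently, and only cells within the local field of a pulsed
# electrode are counted as transfected.

import math

ELECTRODE_PITCH_UM = 200.0   # assumed center-to-center electrode spacing
EFFECTIVE_RADIUS_UM = 60.0   # assumed radius of the local electroporation field

def electrode_position(row, col):
    """Center of the electrode at grid position (row, col), in micrometers."""
    return (col * ELECTRODE_PITCH_UM, row * ELECTRODE_PITCH_UM)

def transfected(cells, pulsed_electrodes):
    """Return the subset of cell positions inside any pulsed electrode's field."""
    hits = []
    for cell in cells:
        for (row, col) in pulsed_electrodes:
            if math.dist(cell, electrode_position(row, col)) <= EFFECTIVE_RADIUS_UM:
                hits.append(cell)
                break
    return hits

cells = [(10.0, 5.0), (210.0, 195.0), (500.0, 500.0)]
# Pulse only electrodes (0, 0) and (1, 1); the third cell is out of range:
assert transfected(cells, [(0, 0), (1, 1)]) == [(10.0, 5.0), (210.0, 195.0)]
```

The per-electrode addressing is what allows several independent transfection conditions to coexist on one culture, as the abstract notes.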
Bioengineering, Issue 67, Genetics, Molecular Biology, Biomedical Engineering, siRNA, transfection, electroporation, microelectrode array, MEA
Intact Histological Characterization of Brain-implanted Microdevices and Surrounding Tissue
Institutions: Purdue University.
Research into the design and utilization of brain-implanted microdevices, such as microelectrode arrays, aims to produce clinically relevant devices that interface chronically with surrounding brain tissue. Tissue surrounding these implants is thought to react to the presence of the devices over time, which includes the formation of an insulating "glial scar" around the devices. However, histological analysis of these tissue changes is typically performed after explanting the device, in a process that can disrupt the morphology of the tissue of interest.
Here we demonstrate a protocol in which cortical-implanted devices are collected intact in surrounding rodent brain tissue. We describe how, once perfused with fixative, brains are removed and sliced in such a way as to avoid explanting devices. We outline fluorescent antibody labeling and optical clearing methods useful for producing an informative, yet thick tissue section. Finally, we demonstrate the mounting and imaging of these tissue sections in order to investigate the biological interface around brain-implanted devices.
Neurobiology, Issue 72, Neuroscience, Biomedical Engineering, Medicine, Central Nervous System, Brain, Neuroglia, Neurons, Immunohistochemistry (IHC), Histocytological Preparation Techniques, Microscopy, Confocal, nondestructive testing, bioengineering (man-machine systems), bionics, histology, brain implants, microelectrode arrays, immunohistochemistry, neuroprosthetics, brain machine interface, microscopy, thick tissue, optical clearing, animal model
Acquiring Fluorescence Time-lapse Movies of Budding Yeast and Analyzing Single-cell Dynamics using GRAFTS
Institutions: Massachusetts Institute of Technology.
Fluorescence time-lapse microscopy has become a powerful tool in the study of many biological processes at the single-cell level. In particular, movies depicting the temporal dependence of gene expression provide insight into the dynamics of its regulation; however, there are many technical challenges to obtaining and analyzing fluorescence movies of single cells. We describe here a simple protocol using a commercially available microfluidic culture device to generate such data, and a MATLAB-based software package with a graphical user interface (GUI) to quantify the fluorescence images. The software segments and tracks cells, enables the user to visually curate errors in the data, and automatically assigns lineage and division times. The GUI further analyzes the time series to produce whole-cell traces as well as their first and second time derivatives. While the software was designed for S. cerevisiae, its modularity and versatility should allow it to serve as a platform for studying other cell types with few modifications.
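The last analysis step, producing first and second time derivatives of a fluorescence trace, can be sketched with central finite differences. This is a minimal illustration of the idea, not the GRAFTS implementation, which also smooths and curates the data.

```python
# Central-difference derivatives of a uniformly sampled fluorescence trace.

def first_derivative(trace, dt):
    """d(trace)/dt by central differences; one-sided at the boundaries."""
    n = len(trace)
    d = [0.0] * n
    d[0] = (trace[1] - trace[0]) / dt
    d[-1] = (trace[-1] - trace[-2]) / dt
    for i in range(1, n - 1):
        d[i] = (trace[i + 1] - trace[i - 1]) / (2.0 * dt)
    return d

def second_derivative(trace, dt):
    """Second time derivative, obtained by differentiating twice."""
    return first_derivative(first_derivative(trace, dt), dt)

# For f(t) = t**2 sampled every dt = 1, df/dt = 2t exactly at interior points:
trace = [float(t * t) for t in range(6)]       # 0, 1, 4, 9, 16, 25
assert first_derivative(trace, 1.0)[1:-1] == [2.0, 4.0, 6.0, 8.0]
```

On noisy single-cell data a smoothing step before differentiation is essential, since differentiation amplifies high-frequency noise.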
Microbiology, Issue 77, Cellular Biology, Molecular Biology, Genetics, Biophysics, Saccharomyces cerevisiae, Microscopy, Fluorescence, Cell Biology, microscopy/fluorescence and time-lapse, budding yeast, gene expression dynamics, segmentation, lineage tracking, image tracking, software, yeast, cells, imaging
Protein WISDOM: A Workbench for In silico De novo Design of BioMolecules
Institutions: Princeton University.
The aim of de novo protein design is to find the amino acid sequences that will fold into a desired 3-dimensional structure with improvements in specific properties, such as binding affinity, agonist or antagonist behavior, or stability, relative to the native sequence. Protein design lies at the center of current advances in drug design and discovery. Not only does protein design provide predictions for potentially useful drug targets, but it also enhances our understanding of the protein folding process and protein-protein interactions. Experimental methods such as directed evolution have shown success in protein design. However, such methods are restricted by the limited sequence space that can be searched tractably. In contrast, computational design strategies allow for the screening of a much larger set of sequences covering a wide variety of properties and functionality. We have developed a range of computational de novo protein design methods capable of tackling several important areas of protein design. These include the design of monomeric proteins for increased stability and of complexes for increased binding affinity.
To disseminate these methods for broader use we present Protein WISDOM (https://www.proteinwisdom.org), a tool that provides automated methods for a variety of protein design problems. Structural templates are submitted to initialize the design process. The first stage of design is an optimization sequence selection stage that aims at improving stability through minimization of potential energy in the sequence space. Selected sequences are then run through a fold specificity stage and a binding affinity stage. A rank-ordered list of the sequences for each step of the process, along with relevant designed structures, provides the user with a comprehensive quantitative assessment of the design. Here we provide the details of each design method, as well as several notable experimental successes attained through the use of the methods.
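The first design stage, searching sequence space for low potential energy, can be caricatured with a stochastic search. This toy sketch is not the Protein WISDOM method (which uses rigorous optimization models); the energy function, move set, and target sequence below are invented purely for illustration.

```python
# Toy Monte Carlo sequence selection: mutate one position at a time and keep
# moves that do not raise a (hypothetical) energy. Here "energy" is simply
# the number of mismatches to an arbitrary reference sequence.

import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def toy_energy(seq, target="MKTAYIAK"):
    """Hypothetical energy: count of positions differing from the target."""
    return sum(a != b for a, b in zip(seq, target))

def monte_carlo_design(length=8, steps=5000, seed=0):
    rng = random.Random(seed)
    seq = [rng.choice(AMINO_ACIDS) for _ in range(length)]
    e = toy_energy(seq)
    for _ in range(steps):
        pos = rng.randrange(length)
        old = seq[pos]
        seq[pos] = rng.choice(AMINO_ACIDS)
        e_new = toy_energy(seq)
        if e_new <= e:            # accept downhill and neutral moves
            e = e_new
        else:
            seq[pos] = old        # reject uphill moves
    return "".join(seq), e

seq, e = monte_carlo_design()
assert e == 0                      # the search reaches the energy minimum
```

Real sequence selection must cope with rugged energy landscapes and astronomically large sequence spaces, which is why the server's subsequent fold-specificity and binding-affinity stages re-rank the candidates.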
Genetics, Issue 77, Molecular Biology, Bioengineering, Biochemistry, Biomedical Engineering, Chemical Engineering, Computational Biology, Genomics, Proteomics, Protein, Protein Binding, Computational Biology, Drug Design, optimization (mathematics), Amino Acids, Peptides, and Proteins, De novo protein and peptide design, Drug design, In silico sequence selection, Optimization, Fold specificity, Binding affinity, sequencing
Test Samples for Optimizing STORM Super-Resolution Microscopy
Institutions: National Physical Laboratory.
STORM is a recently developed super-resolution microscopy technique with up to 10 times better resolution than standard fluorescence microscopy techniques. However, because the image is acquired in a very different way than normal, by building it up molecule-by-molecule, there are some significant challenges for users in trying to optimize their image acquisition. In order to aid this process and gain more insight into how STORM works, we present the preparation of three test samples and the methodology for acquiring and processing STORM super-resolution images with typical resolutions of 30-50 nm. By combining the test samples with the freely available rainSTORM processing software, it is possible to obtain a great deal of information about image quality and resolution. Using these metrics, it is then possible to optimize the imaging procedure from the optics to the sample preparation, dye choice, buffer conditions, and image acquisition settings. We also show examples of some common problems that result in poor image quality, such as lateral drift, where the sample moves during image acquisition, and density-related problems that result in the 'mislocalization' phenomenon.
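Building an image molecule-by-molecule ends with a rendering step: binning the localized emitter coordinates into a pixel grid whose pixel size sets the rendered resolution. The sketch below shows only that final step, under assumed units (nanometers) and an assumed 20 nm pixel; it reproduces none of rainSTORM's quality metrics or fitting.

```python
# Render a STORM-style image as a 2D histogram of localization coordinates.

def render(localizations, pixel_nm=20.0, width_px=10, height_px=10):
    """Return a height_px x width_px image of localization counts per pixel."""
    image = [[0] * width_px for _ in range(height_px)]
    for x_nm, y_nm in localizations:
        col = int(x_nm // pixel_nm)
        row = int(y_nm // pixel_nm)
        if 0 <= row < height_px and 0 <= col < width_px:
            image[row][col] += 1
    return image

# Three localized molecules, two of which fall in the same 20 nm pixel:
locs = [(5.0, 5.0), (12.0, 18.0), (130.0, 45.0)]
img = render(locs)
assert img[0][0] == 2 and img[2][6] == 1
```

Choosing the pixel size illustrates the trade-off the abstract alludes to: pixels much smaller than the localization precision add no information, while overly large pixels throw resolution away.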
Molecular Biology, Issue 79, Genetics, Bioengineering, Biomedical Engineering, Biophysics, Basic Protocols, HeLa Cells, Actin Cytoskeleton, Coated Vesicles, Receptor, Epidermal Growth Factor, Actins, Fluorescence, Endocytosis, Microscopy, STORM, super-resolution microscopy, nanoscopy, cell biology, fluorescence microscopy, test samples, resolution, actin filaments, fiducial markers, epidermal growth factor, cell, imaging
Design, Surface Treatment, Cellular Plating, and Culturing of Modular Neuronal Networks Composed of Functionally Inter-connected Circuits
Institutions: Tel-Aviv University, Istituto Italiano di Tecnologia, Tel-Aviv University, Tel-Aviv University, University of Genova.
The brain operates through the coordinated activation and dynamic communication of neuronal assemblies. A major open question is how a vast repertoire of dynamical motifs, which underlie most diverse brain functions, can emerge out of a fixed topological and modular organization of brain circuits. Compared to in vivo studies of neuronal circuits, which present intrinsic experimental difficulties, in vitro preparations offer far greater possibilities to manipulate and probe the structural, dynamical, and chemical properties of experimental neuronal systems. This work describes an in vitro experimental methodology that allows the growth of modular networks composed of spatially distinct, functionally interconnected neuronal assemblies. The protocol allows control of the two-dimensional (2D) architecture of the neuronal network at different levels of topological complexity.
A desired network patterning can be achieved both on regular cover slips and on substrate-embedded microelectrode arrays. Micromachined structures are embossed on a silicon wafer and used to create biocompatible polymeric stencils, which incorporate the negative features of the desired network architecture. The stencils are placed on the culturing substrates during the surface-coating procedure, in which a molecular layer promoting cellular adhesion is applied. After removal of the stencils, neurons are plated and spontaneously localize to the coated areas. By decreasing the inter-compartment distance, it is possible to obtain either isolated or interconnected neuronal circuits. To promote cell survival, cells are co-cultured with a supporting neuronal network located at the periphery of the culture dish. Electrophysiological and optical recordings of the activity of modular networks, obtained using substrate-embedded microelectrode arrays and calcium imaging respectively, are presented. While each module shows spontaneous global synchronizations, the occurrence of inter-module synchronization is regulated by the density of connections among the circuits.
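Inter-module synchronization of the kind described above can be quantified from paired activity traces, for example by Pearson correlation between modules' calcium signals. This is a generic sketch of such a measure, not the authors' actual analysis pipeline, which the abstract does not specify.

```python
# Pearson correlation between two modules' activity traces: +1 for perfectly
# synchronized modules, near 0 for independent ones.

import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length traces."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Two perfectly synchronized modules versus a weakly related pair:
module_a = [0.0, 1.0, 0.2, 0.9, 0.1]
module_b = [0.0, 1.0, 0.2, 0.9, 0.1]   # fires together with module_a
module_c = [0.5, 0.5, 0.4, 0.5, 0.6]
assert abs(pearson(module_a, module_b) - 1.0) < 1e-9
assert pearson(module_a, module_c) < 0.5
```

In practice, event-based measures (e.g. coincidence of detected network bursts) are often preferred over raw-trace correlation for bursty cultures, but the correlation view captures the basic idea of graded synchronization between modules.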
Neuroscience, Issue 98, In vitro, patterning, PDMS stencils, SU8-2075, silicon wafer, calcium imaging, Micro Electrode Array