To investigate the effect of Biejiajian Pills on the expressions of the signal molecules and target genes of the Wnt signaling pathway in HepG2 cells, and to explore the mechanisms by which Biejiajian Pills suppress the invasiveness of hepatocellular carcinoma.
Recent advances in mobile devices have profoundly changed people's daily lives. In particular, the impact of easy access to information via smartphones has been tremendous. However, the impact of mobile devices on healthcare has been limited. Diagnosis and treatment of diseases are still initiated by the occurrence of symptoms, and technologies and devices that emphasize disease prevention and early detection outside hospitals remain under-developed. Beyond healthcare, mobile devices have not yet been designed to fully benefit people with special needs, such as the elderly and those with certain disabilities, such as blindness. In this paper, an overview of our research on a new wearable computer called eButton is presented. The concepts behind its design and electronic implementation are described, along with several applications of the eButton, including evaluating diet and physical activity, studying sedentary behavior, assisting blind and visually impaired people, and monitoring older adults suffering from dementia.
In this paper, we extend image enhancement techniques based on the retinex theory, which imitates human visual perception of scenes containing high illumination variations. This extension achieves simultaneous dynamic range modification, color consistency, and lightness rendition without multi-scale Gaussian filtering, which tends to produce halo artifacts. The reflection component is analyzed based on the illumination and reflection imaging model. A new prior named the Max Intensity Channel (MIC) is introduced, assuming that the reflections of some points in the scene are very high in at least one color channel. Using this prior, the illumination of the scene is obtained directly by performing a gray-scale closing operation and a fast cross-bilateral filtering on the MIC of the input color image. Consequently, the reflection component of each RGB color channel can be determined from the illumination and reflection imaging model. The proposed algorithm estimates an illumination component that is relatively smooth while maintaining the edge details in different regions. A satisfactory color rendition is achieved for a class of images that do not satisfy the gray-world assumption implicit in the theoretical foundation of the retinex. Experiments are carried out to compare the new method with several spatial- and transform-domain methods. Our results indicate that the new method is superior in enhancement applications, computes faster, and performs better on images with high illumination variations than the other methods. Further comparisons using images from the National Aeronautics and Space Administration and a wearable camera, eButton, have shown the high performance of the new method, with better color restoration and preservation of image details.
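The MIC-based illumination estimate described above can be sketched in a few lines. This is a minimal pure-Python illustration, not the authors' implementation: it computes the per-pixel max channel, applies a gray-scale closing (window max followed by window min), and divides each channel by the result; the fast cross-bilateral smoothing step from the abstract is omitted here, and the window radius `r` and offset `eps` are assumed values.

```python
def max_intensity_channel(rgb):
    # MIC prior: per-pixel maximum over the R, G, B channels.
    return [[max(px) for px in row] for row in rgb]

def gray_closing(img, r=1):
    # Gray-scale closing = dilation (window max) followed by erosion (window min).
    def window_op(im, op):
        h, w = len(im), len(im[0])
        return [[op(im[y][x]
                    for y in range(max(0, i - r), min(h, i + r + 1))
                    for x in range(max(0, j - r), min(w, j + r + 1)))
                 for j in range(w)] for i in range(h)]
    return window_op(window_op(img, max), min)

def reflectance(rgb, eps=1.0):
    # Illumination-reflection model: reflectance = intensity / illumination,
    # per channel; eps avoids division by zero in dark regions.
    L = gray_closing(max_intensity_channel(rgb))
    return [[tuple(c / (L[i][j] + eps) for c in rgb[i][j])
             for j in range(len(rgb[0]))] for i in range(len(rgb))]
```

Because the closing is computed on the max channel, the estimated illumination is never below any single channel's value, so the recovered reflectance stays in [0, 1].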
Inspired by the human 3D visual perception system, we present an obstacle detection and classification method based on the use of Time-of-Flight (ToF) cameras for robotic navigation in unstructured environments. The ToF camera provides 3D sensing by capturing an image along with per-pixel 3D space information. Based on this valuable feature and human knowledge of navigation, the proposed method first removes irrelevant regions, which do not affect the robot's movement, from the scene. In the second step, regions of interest are detected and clustered as possible obstacles using both the 3D information and the intensity image obtained by the ToF camera. Then, a multiple relevance vector machine (RVM) classifier is designed to classify obstacles into four possible classes based on terrain traversability and the geometrical features of the obstacles. Finally, experimental results in various unstructured environments are presented to verify the robustness and performance of the proposed approach. We have found that, compared with existing obstacle recognition methods, the new approach is more accurate and efficient.
Accurate estimation of food portion size is of paramount importance in dietary studies. We have developed a small, chest-worn electronic device called eButton which automatically takes pictures of consumed foods for objective dietary assessment. From the acquired pictures, the food portion size can be calculated semi-automatically with the help of computer software. The aim of the present study is to evaluate the accuracy of food portion sizes (volumes) calculated from eButton pictures.
Dietary assessment is important in health maintenance and intervention in many chronic conditions, such as obesity, diabetes, and cardiovascular disease. However, there is currently a lack of convenient methods for measuring the volume of food (portion size) in real-life settings. We present a computational method to estimate food volume from a single photographic image of food contained in a typical dining plate. First, we calculate the food location with respect to a 3D camera coordinate system using the plate as a scale reference. Then, the food is segmented automatically from the background in the image. Adaptive thresholding and snake modeling are implemented based on several image features, such as color contrast, regional color homogeneity, and curve bending degree. Next, a 3D model representing the general shape of the food (e.g., a cylinder, a sphere, etc.) is selected from a pre-constructed shape model library. The position, orientation, and scale of the selected shape model are determined by registering the projected 3D model to the food contour in the image, where the properties of the reference are used as constraints. Experimental results using various realistically shaped foods with known volumes demonstrated satisfactory performance of our image-based food volume measurement method even when the 3D geometric surface of the food is not completely represented in the input image.
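The plate-as-scale-reference idea above can be illustrated with the simplest shape in the model library, a cylinder. This is only a sketch under assumptions not stated in the abstract (fronto-parallel view, no perspective foreshortening); the actual method registers a projected 3D model to the food contour, which this toy calculation does not attempt.

```python
import math

def cylinder_food_volume(plate_px, plate_cm, food_diam_px, food_height_px):
    # Use the plate's known real diameter as a scale reference (cm per pixel),
    # then evaluate the cylinder volume V = pi * (d/2)^2 * h in real units.
    scale = plate_cm / plate_px
    d = food_diam_px * scale
    h = food_height_px * scale
    return math.pi * (d / 2) ** 2 * h
```

For example, a 20 cm plate spanning 200 pixels gives a scale of 0.1 cm/pixel, so a 50-pixel-wide, 40-pixel-tall cylindrical food item works out to about 78.5 cm³.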
A new method for constructing an accurate disparity space image and performing efficient cost aggregation in stereo matching based on a local affine model is proposed in this paper. The key algorithm includes a new self-adapting dissimilarity measurement used for calculating the matching cost and a local affine model used in the cost aggregation stage. Unlike traditional region-based methods, which try to change the matching window size or to calculate adaptive weights for aggregation, the proposed method focuses on obtaining an efficient and accurate local affine model to aggregate the cost volume while preserving disparity discontinuities. Moreover, the local affine model can be extended to the color space. Experimental results demonstrate that the proposed method provides subpixel-precision disparity maps that compare favorably with some state-of-the-art stereo matching methods.
Propelled by rapid technological advances in smartphones and other mobile devices, indoor navigation for blind and visually impaired individuals has become an active field of research. A reliable positioning and navigation system will reduce the difficulties these individuals face, help them live more independently, and promote their employment. Although much progress has been made, localization of the floor level in a multistory building remains largely an unsolved problem despite its high significance in helping the blind find their way. In this paper, we present a novel approach using a miniature barometer in the form of a low-cost MEMS chip. The relationships among atmospheric pressure, absolute height, and floor location are described, along with a real-time calibration method and a hardware platform design. Our experiments in a twelve-story building have shown the high performance of our approach.
In order to evaluate people's lifestyles for health maintenance, this paper presents a segmentation method based on multi-sensor data recorded by a wearable computer called eButton. This device is capable of continuously recording more than ten hours of data in multimedia form each day. Automatic processing of the recorded data is a significant challenge. We have developed a two-step summarization method to segment large datasets automatically. In the first step, motion sensor signals are utilized to obtain candidate boundaries between different daily activities in the data. Then, visual features are extracted from images to determine the final activity boundaries. We found that simple signal measures, such as the combination of a standard deviation measure of the gyroscope sensor data in the first step and an HSV image histogram feature in the second step, produce satisfactory results in automatic daily life event segmentation. This finding was verified by our experimental results.
Anthropometric measurements, such as the circumferences of the hip, arm, leg, and waist, the waist-to-hip ratio, and the body mass index, are of high significance in obesity and fitness evaluation. In this paper, we present a home-based imaging system capable of conducting anthropometric measurements. Body images are acquired at different angles using a home camera and a simple rotating disk. Advanced image processing algorithms are utilized for 3D body surface reconstruction. A coarse body shape model is first established from segmented body silhouettes. Then, this model is refined through an inter-image consistency maximization process based on an energy function. Our experimental results using both a mannequin surrogate and a real human body validate the feasibility of the proposed system.
To investigate the effect of Biejiajian Pills on the Wnt signaling pathway and the expressions of its inhibitory genes (DKK-1 and FrpHe), and to explore the mechanism underlying the action of Biejiajian Pills in suppressing the invasiveness of hepatocellular carcinoma.
Active contour models are used to extract object boundaries from digital images, but they converge poorly for targets with deep concavities. We propose an improved approach based on existing gradient vector flow methods. The main contributions of this paper are a new algorithm that determines the false part of an active contour with higher accuracy from the global force of the gradient vector flow, and a new algorithm that updates the external force field together with local information from a magnetostatic force. Our method has a semi-dynamic external force field, which is adjusted only when a false active contour exists. Thus, active contours have more chances to approximate complex boundaries, while the computational cost is limited effectively. The new algorithm is tested on irregular shapes and then on real images such as MRI and ultrasound medical data. Experimental results illustrate the efficiency of our method, and its computational complexity is also analyzed.
A novel wildfire segmentation algorithm is proposed based on sample-trained 2D histogram θ-division and the minimum error principle. θ-division methods based on the minimum error principle and the 2D color histogram have been presented recently, but the application of prior knowledge to them has not been explored. For the specific problem of wildfire segmentation, we collect sample images with manually labeled fire pixels. We then define a probability function of erroneous division to evaluate θ-division segmentations, and the optimal angle θ is determined by sample training. Performances in different color channels are compared, and the most suitable channel is selected. To further improve accuracy, a combination approach is presented that couples θ-division with other segmentation methods such as GMM. Our approach is tested on real images, and the experiments demonstrate its efficiency for wildfire segmentation.
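The θ-division rule itself is a linear split of a 2D feature plane. The following sketch only illustrates that division step; the training procedure that chooses θ and the threshold by minimizing the error-division probability over labeled fire pixels is the paper's contribution and is not reproduced here, so `theta` and `t` are assumed, pre-trained inputs.

```python
import math

def theta_division(pixels, theta, t):
    # Classify each point (x, y) in a 2D histogram plane (e.g., a pair of
    # color channel values) by which side of the dividing line
    # x*cos(theta) + y*sin(theta) = t it falls on. True = fire-pixel side.
    c, s = math.cos(theta), math.sin(theta)
    return [x * c + y * s > t for (x, y) in pixels]
```

With θ = 0 the rule degenerates to simple thresholding on the first channel, which shows why optimizing the angle over training samples can outperform fixed per-channel thresholds.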
Measuring food volume (portion size) is a critical component in both clinical and research dietary studies. With the wide availability of cell phones and other camera-ready mobile devices, food pictures can be taken, stored, or transmitted easily to form an image-based dietary record. Although this record enables a more accurate dietary recall, a digital image of food usually cannot be used to estimate portion size directly due to the lack of information about the scale and orientation of the food within the image. The objective of this study is to investigate two novel approaches to provide the missing information, enabling food volume estimation from a single image. Both approaches are based on an elliptical reference pattern, such as the image of a circular pattern (e.g., a circular plate) or a projected elliptical spotlight. Using this reference pattern and image processing techniques, the location and orientation of food objects and their volumes are calculated. Experiments were performed to validate our methods using a variety of objects, including regularly shaped objects and food samples.
A new technique to extract and evaluate physical activity patterns from image sequences captured by a wearable camera is presented in this paper. Unlike standard activity recognition schemes, the video data captured by our device do not include the wearer him/herself. The physical activity of the wearer, such as walking or exercising, is analyzed indirectly through the camera motion extracted from the acquired video frames. Two key tasks, pixel correspondence identification and motion feature extraction, are studied to recognize activity patterns. We utilize a multiscale approach to identify pixel correspondences. When compared with existing methods such as the Good Features detector and the Speeded-Up Robust Features (SURF) detector, our technique is more accurate and computationally efficient. Once the pixel correspondences, which define representative motion vectors, are determined, we build a set of activity pattern features based on motion statistics in each frame. Finally, the physical activity of the person wearing the camera is determined according to the global motion distribution in the video. Our algorithms are tested using different machine learning techniques, such as K-Nearest Neighbors (KNN), Naive Bayes, and the Support Vector Machine (SVM). The results show that many types of physical activities can be recognized from field-acquired real-world video. Our results also indicate that, with specifically designed motion features in the input vectors, different classifiers can be used successfully with similar performances.
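The per-frame motion statistics mentioned above can be sketched as a small feature vector over the frame's motion vectors. This is an illustrative guess at what such statistics might look like (mean magnitude, magnitude spread, and mean direction encoded as cosine/sine components), not the paper's actual feature set; the resulting vectors would then feed a KNN, Naive Bayes, or SVM classifier.

```python
import math

def motion_features(vectors):
    # Summarize a frame's motion vectors (dx, dy) as a 4-dimensional
    # feature: mean magnitude, magnitude standard deviation, and the
    # mean direction represented by averaged cos/sin of vector angles.
    mags = [math.hypot(dx, dy) for dx, dy in vectors]
    angs = [math.atan2(dy, dx) for dx, dy in vectors]
    n = len(vectors)
    mean_mag = sum(mags) / n
    var_mag = sum((m - mean_mag) ** 2 for m in mags) / n
    return [mean_mag, math.sqrt(var_mag),
            sum(math.cos(a) for a in angs) / n,
            sum(math.sin(a) for a in angs) / n]
```

Encoding direction as cos/sin averages avoids the wrap-around discontinuity at ±π that a raw mean angle would suffer from.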
A new image-based activity recognition method for a person wearing a video camera below the neck is presented in this paper. The wearable device is used to capture video data in front of the wearer. Although the wearer never appears in the video, his or her physical activity is analyzed and recognized using the recorded scene changes resulting from the motion of the wearer. Correspondence features are extracted from adjacent frames, and inaccurate matches are removed based on a set of constraints imposed by the camera model. Motion histograms are calculated within each frame, and a new feature called the accumulated motion distribution is derived from the motion statistics of each frame. A Support Vector Machine (SVM) classifier is trained with this feature and used to classify physical activities in different scenes. Our results show that different types of activities can be recognized in low-resolution, field-acquired real-world video.
An automatic detector that finds circular dining plates in chronically recorded images or videos is reported for the study of food intake and obesity. We first detect edges from input images. After a number of processing steps that convert edges into curves, arc filtering and grouping algorithms are applied. Then, convex hulls are identified and the ones that fit the description of ellipses corresponding to dining plates are determined. Our experiments using real-world images indicate that this detector is highly reliable and robust even when the input images contain complex background scenes and the dining plates are severely occluded.
The human rewards network is a complex system spanning both cortical and subcortical regions. While much is known about the functions of the various components of the network, research on the behavior of the network as a whole has been stymied by an inability to detect signals at a sufficiently high temporal resolution from both superficial and deep network components simultaneously. In this paper, we describe the application of magnetoencephalographic (MEG) imaging combined with advanced signal processing techniques to this problem. Using data collected while subjects performed a rewards-related gambling paradigm demonstrated to activate the rewards network, we were able to identify neural signals corresponding to deep network activity. We also show that this signal was not observable prior to filtering. These results suggest that MEG imaging may be a viable tool for the detection of deep neural activity.
We introduce a spatial filtering method in the spherical harmonics domain for constraining magnetoencephalographic (MEG) multichannel measurements to any user-specified spherical region of interest (ROI) inside the head. The method relies on a linear transformation of the signal space separation inner coefficients that represent the MEG signal generated by sources located inside the head. The spatial filtering is achieved effectively by constructing a spherical harmonics basis vector that depends on the center of the targeted ROI and, unlike traditional MEG spatial filtering approaches, does not require any discrete division of the head space into grids. The validity and performance of the method are demonstrated through experiments with both simulated and actual bilateral auditory-evoked data.
A novel method to estimate the 3D location of a circular feature from a 2D image is presented and applied to the problem of objective dietary assessment from images taken by a wearable device. Instead of using a common reference (e.g., a checkerboard card), we use a food container (e.g., a circular plate) as a necessary reference before the volumetric measurement. In this paper, we establish a mathematical model formulating the system involving a camera and a circular object in a 3D space and, based on this model, the food volume is calculated. Our experiments showed that, for 240 pictures of a variety of regular objects and food replicas, the relative error of the image-based volume estimation was less than 10% in 224 pictures.
Food portion size measurement combined with a database of calories and nutrients is important in the study of metabolic disorders such as obesity and diabetes. In this work, we present a convenient and accurate approach to the calculation of food volume by measuring several dimensions using a single 2-D image as the input. This approach does not require the conventional checkerboard-based camera calibration, which is burdensome in practice. The only prior requirements of our approach are: 1) a circular container with a known size, such as a plate, a bowl, or a cup, is present in the image, and 2) the picture is taken under the reasonable assumption that the camera is held level with respect to its left and right sides with its lens tilted down towards the food on the dining table. We show that, under these conditions, our approach provides a closed-form solution to camera calibration, allowing convenient measurement of food portion size using digital pictures.
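A circular container viewed by a level, tilted-down camera projects to an approximate ellipse, and that projection carries calibration information. The sketch below uses a weak-perspective simplification as an assumption (minor/major axis ratio ≈ cosine of the tilt; major axis preserves the true diameter); the paper's actual closed-form solution under a full camera model is not reproduced here.

```python
import math

def plate_calibration(major_px, minor_px, plate_diam_cm):
    # A circular plate of known diameter images as an ellipse. Under a
    # weak-perspective approximation with a level, tilted-down camera:
    #   minor/major axis ratio ~ cos(tilt angle)
    #   major axis ~ true diameter, giving a cm-per-pixel scale.
    tilt = math.acos(minor_px / major_px)   # camera tilt angle (radians)
    scale = plate_diam_cm / major_px        # cm per pixel in the plate plane
    return tilt, scale
```

For example, a 26 cm plate imaged with a 200-pixel major axis and a 100-pixel minor axis implies a tilt of about 60° and a scale of 0.13 cm per pixel, which can then be used to measure food dimensions in the plate plane.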
Inspired by the extraordinary object grabbing ability of certain insects (e.g., a grasshopper), we have developed a novel dry EEG electrode, called the skin screw electrode. Unlike the traditional disc electrode which requires several minutes to install, the installation of the skin screw electrode can be completed within seconds since no skin preparation and electrolyte application are required. Despite the drastic improvement in the installation time, our experiments have demonstrated that the skin screw electrode has a similar impedance value to that of the disc electrode. The skin screw electrode has a wide range of applications, such as clinical EEG diagnosis, epilepsy monitoring, emergency medicine, and home-based human-computer interface.