A Step-by-Step Implementation of DeepBehavior, Deep Learning Toolbox for Automated Behavior Analysis
Summary (February 6th, 2020)
The purpose of this protocol is to utilize pre-built convolutional neural nets to automate behavior tracking and perform detailed behavior analysis. Behavior tracking can be applied to any video data or sequences of images and is generalizable to track any user-defined object.
Transcript
Performing a detailed behavior analysis is crucial to understanding the brain-behavior relationship. One of the best ways to evaluate behavior is through careful observation. However, quantifying the observed behavior is time-consuming and challenging.
Classical methods of behavior analysis are not easily quantifiable and are inherently subjective. Recent developments in deep learning, a branch of the machine learning and artificial intelligence fields, provide opportunities for automated and objective quantification of images and videos. Here we present our recently developed methods that use deep neural networks to perform detailed behavior analysis in rodents and humans.
The main advantages of this technique are its flexibility and its applicability to any imaging data for behavior analysis. The DeepBehavior Toolbox supports single-object identification, multi-object detection, and human pose tracking. We also provide post-processing code in MATLAB for more in-depth kinematic analysis.
Begin by setting up TensorBox. Activate the environment, then clone TensorBox from GitHub and install it, along with its additional dependencies, on the machine. Next, launch the labeling graphical user interface and label at least 600 images from a wide distribution of behavior frames.
To label an image, click the top-left corner of the object of interest and then the bottom-right corner, and make sure that the bounding box captures the entire object. Click next to move to the next frame.
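For reference, TensorBox reads its training labels from a JSON file (labels.json in this protocol) in which each entry pairs an image path with a list of bounding-box rectangles. The following Python sketch shows how such an entry could be assembled; the field names follow the commonly used TensorBox JSON label layout and the paths are placeholders, so check them against the file the labeling GUI actually writes.

```python
import json

# Hypothetical example of one labeled frame in a TensorBox-style JSON label file:
# each entry pairs an image path with the bounding boxes drawn in the GUI
# (x1, y1 = top-left corner; x2, y2 = bottom-right corner, in pixels).
labels = [
    {
        "image_path": "frames/reach_0001.png",  # placeholder path
        "rects": [
            {"x1": 120.0, "y1": 85.0, "x2": 210.0, "y2": 160.0}  # e.g., a paw bounding box
        ],
    }
]

with open("labels.json", "w") as f:
    json.dump(labels, f, indent=2)
```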
To link the training images to a network hyperparameters file, open overfeat_rezoom.json in a text editor and replace the file path under train_idl with the path to labels.json. Then add the same file path under test_idl and save the changes.
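The same edit can also be scripted. Below is a minimal Python sketch, assuming the train_idl and test_idl entries sit under a "data" section of overfeat_rezoom.json as in typical TensorBox hyperparameter files; if your copy stores them at the top level, the fallback in the code handles that.

```python
import json

HYPES_FILE = "overfeat_rezoom.json"
LABELS_PATH = "labels.json"  # path to the labels created with the labeling GUI

with open(HYPES_FILE) as f:
    hypes = json.load(f)

# Point both the training and test entries at the labeled data, as described in
# the protocol. The keys are assumed to live under a "data" section; otherwise
# they are edited at the top level.
section = hypes.get("data", hypes)
section["train_idl"] = LABELS_PATH
section["test_idl"] = LABELS_PATH

with open(HYPES_FILE, "w") as f:
    json.dump(hypes, f, indent=2)
```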
Initiate the training script, which will train for 600,000 iterations and save the resulting trained weights of the convolutional neural network in the output folder. Then perform prediction on new images, and view the outputs of the network as labeled images and as bounding box coordinates. Next, install YOLOv3.
Then label the training data with Yolo_mark by placing the images in the Yolo_mark data folder and labeling them one by one in the graphical user interface. Label approximately 200 images. Next, set up the configuration file.
To modify the configuration file, open yolo-obj.cfg in a text editor and modify the batch, subdivisions, and classes lines.
Then change the filters value in each convolutional layer that precedes a YOLO layer. Download the network weights and place them in the darknet build/x64 folder.
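For standard YOLOv3 configurations, the filters value in each convolutional layer directly before a [yolo] layer is tied to the number of classes: filters = (classes + 5) × 3, with three anchor masks per detection scale. The short Python sketch below just does that arithmetic; the batch and subdivisions values mentioned in the comment are common choices, not values taken from this protocol.

```python
# Helper arithmetic for editing yolo-obj.cfg (YOLOv3).
# The filters value in each convolutional layer immediately before a [yolo] layer
# must equal (classes + 5) * masks, where masks is 3 per detection scale.
def yolo_filters(num_classes: int, masks_per_scale: int = 3) -> int:
    return (num_classes + 5) * masks_per_scale

num_classes = 1  # e.g., a single tracked object class; set to your own class count
print("classes =", num_classes)
print("filters =", yolo_filters(num_classes))  # 18 for one class
# Typical training settings (adjust to your GPU memory): batch = 64, subdivisions = 8
```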
Run the training algorithm, and once it is complete, review the results of the training iterations. To track multiple body parts in a human subject, install OpenPose and use it to process the desired video. The capabilities of the DeepBehavior Toolbox were demonstrated on videos of mice performing a food pellet reaching task.
Their right paws were labeled, and movement was tracked with front- and side-view cameras. After post-processing with camera calibration, 3D trajectories of the reach were obtained. The outputs of YOLOv3 are multiple bounding boxes, because multiple objects can be tracked.
The bounding boxes are drawn around the objects of interest, which can be parts of the body. In OpenPose, the network detected the joint positions, and after post-processing with camera calibration, a 3D model of the subject was created. One critical step not covered in this protocol is ensuring, before beginning, that your device has the appropriate Python versions and dependencies as well as a configured GPU.
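OpenPose can write its detections to one JSON file per video frame when run with its JSON output option. The Python sketch below pulls a single joint's 2D position out of those files; the file naming and key names ("people", "pose_keypoints_2d") follow recent OpenPose releases, and the keypoint index assumes the BODY_25/COCO ordering in which index 4 is the right wrist, so verify both against your installation.

```python
import json
from pathlib import Path

KEYPOINT_INDEX = 4  # right wrist in the BODY_25 / COCO keypoint ordering (verify for your model)
json_dir = Path("openpose_output")  # placeholder: folder of per-frame OpenPose JSON files

trajectory = []  # (x, y, confidence) of the tracked joint, one entry per frame
for frame_file in sorted(json_dir.glob("*_keypoints.json")):
    with open(frame_file) as f:
        frame = json.load(f)
    if not frame["people"]:
        trajectory.append((float("nan"), float("nan"), 0.0))  # no person detected this frame
        continue
    # Keypoints are stored as a flat list [x0, y0, c0, x1, y1, c1, ...] for the first person.
    kp = frame["people"][0]["pose_keypoints_2d"]
    x, y, c = kp[3 * KEYPOINT_INDEX : 3 * KEYPOINT_INDEX + 3]
    trajectory.append((x, y, c))

print(f"extracted {len(trajectory)} frames of joint positions")
```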
After successfully obtaining the tracked behavior from the network, additional post-processing can be done to further analyze the kinematics and patterns of the behavior. While the DeepBehavior Toolbox is applicable to diagnostic approaches in disease models of rodents and in human subjects, it does not provide a direct therapeutic benefit. Use of these techniques as a diagnostic or prognostic tool is under active research within our laboratory.
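Purely as an illustration of what that post-processing can look like (the toolbox itself provides this step in MATLAB), the Python sketch below triangulates matched front- and side-camera detections into a 3D trajectory with OpenCV and computes a simple reach speed; the projection matrices, pixel coordinates, and frame rate are all placeholder assumptions standing in for your calibration and tracking outputs.

```python
import numpy as np
import cv2

# Placeholder 3x4 camera projection matrices that calibration would normally provide.
P_front = np.hstack([np.eye(3), np.zeros((3, 1))])
P_side = np.hstack([np.eye(3), np.array([[-10.0], [0.0], [0.0]])])

# Matched 2D detections of the paw center in each view, shape (2, n_frames).
pts_front = np.array([[320.0, 324.0, 330.0],
                      [240.0, 236.0, 231.0]])
pts_side = np.array([[410.0, 414.0, 419.0],
                     [250.0, 247.0, 243.0]])

# Triangulate to homogeneous 3D coordinates, then normalize -> (n_frames, 3).
points_4d = cv2.triangulatePoints(P_front, P_side, pts_front, pts_side)
trajectory_3d = (points_4d[:3] / points_4d[3]).T

# Simple kinematics: frame-to-frame displacement and speed at an assumed frame rate.
fps = 100.0  # placeholder camera frame rate
displacement = np.diff(trajectory_3d, axis=0)
speed = np.linalg.norm(displacement, axis=1) * fps
print("3D trajectory:\n", trajectory_3d)
print("speed per frame:", speed)
```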
This technique is being used to investigate the neural mechanisms of skilled motor behavior in rodents, and in clinical studies to evaluate motor recovery in patients with neurological diseases.