AMP SCZ Facial expression feature extraction for video interviews
Key Investigators
- Eduardo Castro (IBM Research, USA)
- Kevin Cho (BWH, USA)
- Ofer Pasternak (BWH, USA)
- Guillermo Cecchi (IBM Research, USA)
Presenter location: In-person
Project Description
Put together code that 1) performs facial expression feature extraction for video interviews stored on a data aggregation server, and 2) transfers the extracted features to a local directory. The work builds on existing scripts for facial expression feature extraction and an existing data management tool. This project is part of the AMP SCZ program, an initiative for early detection of risk for schizophrenia (https://www.ampscz.org).
Objective
- Objective 1: Adapt our existing code for facial expression analysis to extract features through a proper video pipeline, including running this task for upcoming videos in the data aggregation server.
- Objective 2: Adapt our data management tool to incorporate the files generated by this pipeline for data transfer.
Approach and Plan
- Discuss how the data management tool (Lochness) retrieves data from the aggregation server.
- Define facial expression features of interest.
- Set up the facial expression analysis code to be run as a proper pipeline.
Progress and Next Steps
Since we could only stay at project week for a couple of days and this was our first face-to-face interaction as a group, we mainly focused on working out project specifics that are difficult to fully grasp over Zoom. The points discussed included:
- How Lochness is currently set up to copy processed information from the aggregation server to the server at Brigham and Women’s Hospital.
- Defined the formatting of the filenames where extracted features will be stored, as well as their path structure.
- Decided to process only videos that were compliant with the Standard Operating Procedures (SOPs).
- Decided to include a log file with the list of frames that were not successfully processed for each video.
- Decided what information not to include in the generated feature extraction CSV files (no dates, filenames, or facial landmarks).
- Decided to enforce a face confidence threshold (faces detected with less than 0.6 confidence would be discarded).
- Decided to include an extra feature with the number of detected faces per frame (sanity check to flag interviews incorrectly recorded in speaker mode, not following SOPs).
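The per-frame decisions above (0.6 confidence threshold, face count per frame, a log of failed frames, and no dates, filenames, or landmarks in the output) could be sketched roughly as follows. This is an illustrative outline only, assuming a hypothetical `detect_faces` callable standing in for the actual detector in face-feats; the real column layout and detector API may differ.

```python
import csv

CONFIDENCE_THRESHOLD = 0.6  # faces detected below this confidence are discarded


def process_video(frames, detect_faces, out_csv_path, log_path):
    """Illustrative per-frame loop applying the decisions above.

    `frames` yields (frame_index, image) pairs; `detect_faces` returns a list
    of dicts with "confidence" and "features" keys. Both are assumptions made
    for this sketch, not the real face-feats interface.
    """
    failed_frames = []
    with open(out_csv_path, "w", newline="") as f:
        writer = csv.writer(f)
        # No dates, filenames, or facial landmarks in the CSV, per the decisions above
        writer.writerow(["frame", "n_faces", "au_features"])
        for idx, image in frames:
            try:
                faces = detect_faces(image)
            except Exception:
                # Keep going, but record the frame for the per-video log file
                failed_frames.append(idx)
                continue
            # Discard low-confidence detections
            faces = [fc for fc in faces if fc["confidence"] >= CONFIDENCE_THRESHOLD]
            # n_faces > 1 acts as a sanity check for interviews recorded
            # in speaker mode, against the SOPs
            writer.writerow([idx, len(faces), [fc["features"] for fc in faces]])
    with open(log_path, "w") as log:
        log.write("\n".join(str(i) for i in failed_frames))
    return failed_frames
```

Keeping the failed-frame log separate from the feature CSV means downstream consumers see only clean rows, while QC can still inspect which frames dropped out.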
We started to incorporate those adjustments in the code and will continue to work on this after project week. Other considerations that will be taken into account for future work are:
- Adjust the code to run in batch mode only on interviews that have not yet been processed.
- Set up a cron job so that the pipeline runs continuously on incoming videos.
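The batch-mode idea above amounts to diffing the video directory against the feature directory on each run. A minimal sketch, assuming a flat directory layout with `.mp4` inputs and per-video `.csv` outputs sharing a basename (the real path structure was defined during project week and may differ):

```python
from pathlib import Path


def pending_videos(video_dir, feature_dir):
    """Return video paths that have no corresponding feature CSV yet.

    The .mp4/.csv naming convention and flat layout are assumptions for
    illustration; the actual pipeline may nest files differently.
    """
    done = {p.stem for p in Path(feature_dir).glob("*.csv")}
    return sorted(p for p in Path(video_dir).glob("*.mp4") if p.stem not in done)
```

A cron entry could then invoke the batch script periodically (e.g. hourly), and because already-processed interviews are skipped, overlapping or repeated runs stay cheap and idempotent.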
Background and References
Facial expression code: https://github.com/ecastrow/face-feats
Data Management tool: https://github.com/AMP-SCZ/lochness