Ha Tran Hong Phan, Ashnil Kumar, David Feng, Michael Fulham, Jinman Kim

We propose an automatic unsupervised cell event detection and classification method, which expands convolutional Long Short-Term Memory (LSTM) neural networks, for cellular events in cell video sequences. Cells in images captured from various biomedical applications usually have different shapes and motility, which pose difficulties for automated event detection in cell videos. Current methods to detect cellular events are based on supervised machine learning and rely on tedious manual annotation from investigators with specific expertise. So that our LSTM network could be trained in an unsupervised manner, we designed it with a branched structure where one branch learns the frequent, regular appearance and movements of objects and the second learns the stochastic events, which occur rarely and without warning in a cell video sequence. We tested our network on a publicly available dataset of densely packed stem cell phase-contrast microscopy images undergoing cell division. This dataset is considered to be more challenging than a dataset with sparse cells. We compared our method to several published supervised methods evaluated on the same dataset and to a supervised LSTM method with a similar design and configuration to our unsupervised method. We used the F1-score, which is a balanced measure of both precision and recall. Our results show that our unsupervised method has a higher or similar F1-score when compared to two fully supervised methods that are based on Hidden Conditional Random Fields (HCRF), and has comparable accuracy to the current best supervised HCRF-based method. Our method was generalizable: after being trained on one video, it could be applied to videos where the cells were in different conditions. The accuracy of our unsupervised method approached that of its supervised counterpart.
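The branched design described above can be illustrated with a minimal sketch: two independent single-channel convolutional LSTM cells share the same frame sequence, one standing in for the branch that models regular appearance and motion, the other for the branch that captures rare, stochastic events. The `ConvLSTMCell` class, kernel sizes, and random untrained weights below are illustrative assumptions only; the abstract does not specify the paper's actual architecture or training procedure.

```python
import numpy as np

def conv2d(x, k):
    # "Same"-padded convolution of a single-channel 2-D array with a small kernel.
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class ConvLSTMCell:
    """Toy single-channel convolutional LSTM cell with untrained random kernels."""
    def __init__(self, ksize=3, seed=0):
        rng = np.random.default_rng(seed)
        # One kernel per gate (input, forget, output, candidate) for x and h.
        self.Wx = {g: rng.normal(0, 0.1, (ksize, ksize)) for g in "ifoc"}
        self.Wh = {g: rng.normal(0, 0.1, (ksize, ksize)) for g in "ifoc"}

    def step(self, x, h, c):
        i = sigmoid(conv2d(x, self.Wx["i"]) + conv2d(h, self.Wh["i"]))
        f = sigmoid(conv2d(x, self.Wx["f"]) + conv2d(h, self.Wh["f"]))
        o = sigmoid(conv2d(x, self.Wx["o"]) + conv2d(h, self.Wh["o"]))
        g = np.tanh(conv2d(x, self.Wx["c"]) + conv2d(h, self.Wh["c"]))
        c = f * c + i * g          # convolutional analogue of the LSTM cell update
        h = o * np.tanh(c)
        return h, c

# Feed the same toy frame sequence into both branches.
rng = np.random.default_rng(42)
frames = [rng.normal(size=(8, 8)) for _ in range(4)]
regular = ConvLSTMCell(seed=0)   # branch for frequent, regular appearance/motion
events = ConvLSTMCell(seed=1)    # branch for rare, stochastic events
h_r = c_r = h_e = c_e = np.zeros((8, 8))
for x in frames:
    h_r, c_r = regular.step(x, h_r, c_r)
    h_e, c_e = events.step(x, h_e, c_e)
```

In a real system the two branches' outputs would be combined and the network trained end to end on unlabeled video; here they merely show that each branch maintains its own spatially structured hidden state over the sequence.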