Publications
Publications in reverse chronological order.
2023
- EngageMe: Assessing Student Engagement in Online Learning Environment Using Neuropsychological Tests. Saumya Yadav, Momin Naushad Siddiqui, and Jainendra Shukla. In Artificial Intelligence in Education: Posters and Late Breaking Results, Workshops and Tutorials, Industry and Innovation Tracks, Practitioners, Doctoral Consortium and Blue Sky, 2023.
In the proposed research, we investigated whether standardized neuropsychological tests commonly used to assess attention can also measure students’ engagement in online learning settings. Accordingly, we enrolled 73 students in three clinically relevant neuropsychological tests assessing three types of attention. Students’ engagement, as evidenced by their facial video, was also annotated by three independent annotators. The manual annotations showed a high level of inter-annotator reliability (Krippendorff’s alpha of 0.864). Further, a correlation of 0.673 (Spearman’s rank correlation) between the manual annotations and the neuropsychological test scores supports construct validity, establishing the test scores as a latent variable for measuring students’ engagement. Finally, using non-intrusive behavioral cues, including facial action unit and eye-gaze data collected via webcam, we propose a machine learning method for engagement analysis in online learning settings, achieving a low mean squared error (0.022). The findings suggest that a neuropsychological-test-based machine learning technique could effectively assess students’ engagement in online education.
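As a rough illustration of the evaluation pipeline this abstract describes, the sketch below computes Spearman’s rank correlation between annotation and test-score vectors and fits a regressor on behavioral features. All data here are synthetic placeholders, and the feature dimensionality and choice of random-forest regressor are illustrative assumptions, not the authors’ exact setup.

```python
# Minimal sketch, assuming placeholder data in place of the paper's corpus.
import numpy as np
from scipy.stats import spearmanr
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

# Stand-ins for per-student neuropsychological test scores and the mean
# manual engagement annotations (both normalized to [0, 1]).
test_scores = rng.uniform(0, 1, size=73)
annotations = np.clip(test_scores + rng.normal(0, 0.15, size=73), 0, 1)

# Construct-validity check: rank correlation between the two measures.
# (Values reflect the synthetic data, not the paper's reported 0.673.)
rho, p = spearmanr(annotations, test_scores)
print(f"Spearman rho = {rho:.3f} (p = {p:.3g})")

# Stand-in behavioral features (e.g., facial action units + eye gaze).
X = rng.normal(size=(73, 20))
y = annotations

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"MSE = {mean_squared_error(y_te, model.predict(X_te)):.4f}")
```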
- SCS-Net: An efficient and practical approach towards Face Mask Detection. Umar Masud, Momin Siddiqui, Mohd. Sadiq, and 1 more author. Procedia Computer Science, Jan 2023.
Much work has been done in the computer vision domain on facial mask detection to curb the spread of coronavirus disease (COVID-19), and preventive measures built on deep learning models have received enormous attention. Yet, with state-of-the-art results approaching perfect accuracy on various models and datasets, two very practical problems remain unaddressed: the deployability of the model in the real world and the crucial case of incorrectly worn masks. To this end, we propose a lightweight deep learning model with just 0.12M parameters, up to a 496-fold reduction compared to some existing models. The novel architecture is designed for practical, real-world deployment. We also augment an existing dataset with a large set of incorrectly masked face images, yielding a more balanced three-class classification problem. A collection of 25,296 synthetically designed incorrect face mask images is provided; this is the first dataset of its kind with such diversity and quantity. The proposed model achieves a competitive accuracy of 95.41% on two-class classification and 95.54% on the extended three-class classification with the fewest parameters among the compared models. We assess the proposed system against various state-of-the-art approaches, and experimental results indicate that our solution is more realistic and rational than many existing works that rely on overly massive models unsuitable for practical deployment.
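To illustrate how a classifier can stay far under a 0.12M-parameter budget, here is a small sketch built from depthwise-separable convolutions, a common parameter-saving block. The architecture, the name TinyMaskNet, and the layer sizes are hypothetical stand-ins for illustration, not the actual SCS-Net.

```python
# Minimal sketch of a parameter-frugal three-class mask classifier.
import torch.nn as nn

class SeparableConv(nn.Module):
    """Depthwise conv followed by a 1x1 pointwise conv: far fewer
    parameters than a standard convolution with the same channel counts."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.depthwise = nn.Conv2d(c_in, c_in, 3, padding=1, groups=c_in, bias=False)
        self.pointwise = nn.Conv2d(c_in, c_out, 1, bias=False)
        self.bn = nn.BatchNorm2d(c_out)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

class TinyMaskNet(nn.Module):
    """Three classes: correctly masked / incorrectly masked / unmasked."""
    def __init__(self, num_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            SeparableConv(16, 32), nn.MaxPool2d(2),
            SeparableConv(32, 64), nn.MaxPool2d(2),
            SeparableConv(64, 128), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = TinyMaskNet()
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e6:.3f}M parameters")  # ~0.013M here, well under 0.12M
```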
2022
- Detecting Human Embryo Cleavage Stages Using YOLO V5 Object Detection Algorithm. Akriti Sharma, Mette H. Stensen, Erwan Delbarre, and 4 more authors. In Nordic Artificial Intelligence Research and Development, Jan 2022.
Assisted reproductive technology (ART) refers to treatments of infertility that involve the handling of eggs, sperm, and embryos. The success of ART procedures depends on several factors, including the quality of the embryo transferred to the woman. Embryo assessment is mostly based on the morphokinetic parameters of development, which include the number of cells at a given time point (indicating the cell stage) and the duration of each cell stage. Many clinics use time-lapse imaging systems for continuous visual inspection of embryo development. However, analyzing time-lapse data still requires embryologists to evaluate the morphokinetic parameters and cleavage patterns, making the assessment subjective. Recently, object detection applied to medical imaging has enabled accurate detection of lesions and other objects of interest. Motivated by this research direction, we propose a methodology to detect and track the cells inside embryos in time-lapse image series. The methodology employs the YOLO v5 object detection technique and annotates the start of each observed cell stage based on the cell count. Our approach can identify cell division, detecting cell cleavage (the start of the next cell stage) accurately up to the 5-cell stage, and it highlights instances of embryo development with abnormal cleavage patterns. On average, the methodology took 8 s to annotate a video frame (20 frames per second), which will not pose any delay for embryologists assessing embryo quality. The results were validated by embryologists, who considered the methodology a useful tool for their clinical practice.
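A minimal sketch of the frame-level idea, cast with the public YOLOv5 hub API: run a detector on each time-lapse frame, count detections of a "cell" class, and record the frame at which the count first increases (a cleavage event). The weights file, video path, and class name are assumptions for illustration; the authors’ trained model is not reproduced here.

```python
# Sketch: cell counting per frame to flag cleavage-stage transitions.
import cv2
import torch

# Hypothetical custom-trained weights; loading via the standard YOLOv5 hub API.
model = torch.hub.load("ultralytics/yolov5", "custom", path="embryo_yolov5.pt")

cap = cv2.VideoCapture("embryo_timelapse.mp4")  # hypothetical input video
stage, frame_idx, transitions = 0, 0, []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    det = model(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))  # YOLOv5 expects RGB
    # Each detection row is [x1, y1, x2, y2, conf, cls]; keep only "cell".
    cells = sum(1 for *_, cls in det.xyxy[0].tolist()
                if det.names[int(cls)] == "cell")
    if cells > stage:  # count increased: start of the next cell stage
        transitions.append((frame_idx, cells))
        stage = cells
    frame_idx += 1
cap.release()
print(transitions)  # [(frame_index, new_cell_count), ...]
```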
- P-243 Automating tracking of cell division for human embryo development in time lapse videos. A. Sharma, R. Kakulavarapu, V. Thambawita, and 5 more authors. Human Reproduction, Jul 2022.
Can tracking cell division and predicting human embryo cleavage stages be automated in time-lapse videos (TLV) using AI object detection methods? We developed software that predicts blastomere count and tracks cell cleavages up to the 4–5-cell stage; it employs the YOLOv5 object detection technique to detect cells.
Embryo morphology plays an important part in determining viability. Parameters such as the number of cells present following fertilization, abnormal cell division (reverse/direct), and the evaluation of cleavage stages correlate with pregnancy rates. However, continuous manual evaluation can be time-consuming, and automation will assist in embryo viability assessment. YOLOv5 has proven to accurately detect objects in videos; it uses mean average precision (mAP) as a metric to quantify the portion of video frames containing the correct count of objects.
Our software uses YOLOv5 to detect the cells present in TLV frames and then marks each cell boundary with a differently colored circular overlay using OpenCV. We trained YOLOv5 to detect three object classes (cell, morula, and blastocyst) using 150 images of different cell stages, morulae, and blastocysts; for the class "cell", mAP was 0.65. The annotated object locations and the YOLOv5 predictions were reviewed by embryologists, and we evaluated the software on TLV from 11 patients.
After YOLOv5 detects cells in the frames of a TLV, the software computes the cell count and assigns each cell a distinct color, which is maintained until that cell divides into daughter cells; the daughter cells are then assigned their own colors. If a frame has a preceding frame, the software calculates each detected cell’s proximity to the cells in the preceding frame and copies the color scheme, provided the proximity is within a threshold. The software outputs the TLV with colored overlays.
In the opening frames of a TLV containing a single cell, the software accurately detected the 1-cell stage (high precision = 0.99, high recall = 0.83, high F1-score = 0.90). We observed some misclassification between 1-cell and morula, possibly because a compacted morula resembles a single cell. The best performance was observed for 2-cell (high precision = 0.91, high recall = 0.98, high F1-score = 0.95). 4-cell was sometimes misclassified as 3- or 5-cell (high precision = 0.88, low recall = 0.59, high F1-score = 0.71); one reason may be that overlap between cells increases with the number of cells. The 3-cell and 5-cell stages were confused with other stages, though cleavage-stage detection was still better than random: 3-cell (average precision = 0.43, high recall = 0.83, average F1-score = 0.49) and 5-cell (average precision = 0.44, average recall = 0.40, average F1-score = 0.40). For cell stages above 5, YOLOv5 detected fewer cells than the actual count, and the software predicted cleavage later than actual by 9–10 frames on average. The proximity threshold used was 0.10 for cell counts below 4 and 0.05 for counts above 4.
In 5 TLV, the overlay color of cells changed abruptly between frames, possibly because YOLOv5, having detected a stage, recorded a lower cell count in consecutive frames before again reporting the correct count. Sometimes the software selected the wrong parent for a daughter cell (an incorrectly colored overlay). Two TLV contained direct and reverse cleavages, and the software could detect both patterns.
Overall, our software can precisely detect cells, cell divisions, and cleavage stages up to the 4-cell stage. We hypothesize that training YOLOv5 on a bigger dataset and including information from several focal planes will enable the software to detect overlapping cells and cleavage stages ≥ 5. Object detection proved pragmatic for ART: tracking cell division with our software will reduce the time spent on manual annotation, ease the prediction of abnormal cleavages, and yield more objective assessments. Qualitative evaluation by embryologists resulted in the overall verdict that the tool is useful and promising for further development.
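To make the proximity-based color propagation concrete, here is a minimal sketch: each detected cell center is matched to the nearest center in the previous frame and inherits its color when the normalized distance is within the threshold (0.10 below the 4-cell stage, per the abstract); otherwise it receives a fresh color. The helper name and the consume-each-parent-once rule are illustrative assumptions, not the authors’ exact implementation.

```python
# Sketch: propagate per-cell colors across frames by nearest-center matching.
import itertools
import math

PALETTE = itertools.cycle([(255, 0, 0), (0, 255, 0), (0, 0, 255), (255, 255, 0)])

def propagate_colours(prev, curr, threshold=0.10):
    """prev: list of ((x, y), colour) from the previous frame;
    curr: list of (x, y) centres in normalized coordinates."""
    prev = list(prev)  # copy; each matched parent is consumed exactly once
    out = []
    for cx, cy in curr:
        best = min(prev, key=lambda p: math.dist(p[0], (cx, cy)), default=None)
        if best is not None and math.dist(best[0], (cx, cy)) <= threshold:
            colour = best[1]        # close to a previous cell: keep its colour
            prev.remove(best)
        else:
            colour = next(PALETTE)  # new daughter cell: assign a fresh colour
        out.append(((cx, cy), colour))
    return out

# Example: a 1-cell frame followed by a 2-cell frame after cleavage.
frame1 = propagate_colours([], [(0.50, 0.50)])
frame2 = propagate_colours(frame1, [(0.45, 0.50), (0.56, 0.50)])
print(frame1, frame2, sep="\n")
```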