Deepwound: Automated Postoperative Wound Assessment and Surgical Site Surveillance through Convolutional Neural Networks
Author(s):
Varun Shenoy; Elizabeth Foster; Lauren Aalami; Bakar Majeed; Oliver Aalami
Background:
The incidence of postoperative wound infection after lower extremity bypass can be as high as 10%-20%. An automated method of diagnosing wound complications would reduce the time and monetary costs borne by hospitals, physicians, insurers, and patients. Algorithmic classification of wound images is challenging because of the variability in the appearance of wound sites. Deep convolutional neural networks (CNNs), a class of artificial neural networks that show great promise in the analysis of visual imagery, may be leveraged to categorize surgical site wounds. We present Deepwound, a multilabel CNN ensemble trained to classify wound images using image pixels and labels as the sole inputs. Mobile devices paired with deep neural networks can provide real-time clinical insight into the state of postoperative wounds. The ubiquity of smartphones makes them an ideal means through which professional-grade wound assessment and triage may be delivered.
Hypothesis:
CNNs can be used to label the state of postoperative wounds and provide a wound infection risk score.
Methods:
Over 1,000 smartphone images of postoperative wounds were collected and individually labeled by three medical experts. The variability of the images and the relatively small dataset necessitated augmentation through several random rotations and translations. Three CNNs were built, each utilizing transfer learning on the VGG-16 architecture, pretrained on over 1.2 million images spanning 1,000 classes from the ImageNet database. In each network, the final layer was replaced with an output layer equipped with a sigmoid activation function, suited to multilabel classification. Deepwound, our final algorithm, is a majority-voting ensemble of these three CNNs, with weights frozen at three different depths for optimal generalization. The models were developed in Python. A mobile app that tracks clinical variables pertinent to postoperative wound progression was built around Deepwound.
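The majority-voting step can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes each member CNN emits a per-label sigmoid probability and that votes are cast at a 0.5 threshold (the threshold value is our assumption).

```python
import numpy as np

def majority_vote(probs):
    """Combine per-label sigmoid outputs from several CNNs by majority vote.

    probs: array-like of shape (n_models, n_labels), each entry in [0, 1].
    Returns a binary vector of shape (n_labels,).
    """
    # Binarize each model's output at 0.5 (assumed threshold)
    votes = (np.asarray(probs) >= 0.5).astype(int)
    # A label is predicted present if more than half the models vote for it
    return (votes.sum(axis=0) > votes.shape[0] / 2).astype(int)

# Three hypothetical ensemble members scoring three labels
preds = majority_vote([[0.9, 0.2, 0.6],
                       [0.8, 0.4, 0.4],
                       [0.3, 0.1, 0.7]])
# preds -> array([1, 0, 1]): labels 1 and 3 win two of three votes
```

An ensemble of members frozen at different depths tends to make partially decorrelated errors, which is what makes the vote more robust than any single member.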
Results:
The area under the receiver operating characteristic curve (ROC AUC) served as our evaluation metric. Deepwound achieved scores of 0.85, 0.92, 0.92, 0.87, 0.93, 0.96, 0.93, 0.90, and 0.92 across our nine labels, including the presence of drainage, fibrinous exudate, granulation tissue, a surgical site infection (SSI), an open wound, staples, steri-strips, and sutures.
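For reference, the per-label ROC AUC used above can be computed directly from its rank interpretation: the probability that a randomly chosen positive example scores higher than a randomly chosen negative one. The sketch below is a self-contained equivalent of library routines such as scikit-learn's `roc_auc_score`, not the evaluation code used in this work.

```python
import numpy as np

def roc_auc(y_true, scores):
    """ROC AUC for one binary label.

    y_true: binary ground-truth labels; scores: model sigmoid outputs.
    AUC = P(score of random positive > score of random negative),
    with ties counted as half.
    """
    y_true = np.asarray(y_true)
    scores = np.asarray(scores, dtype=float)
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    # Compare every positive score against every negative score
    greater = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return greater + 0.5 * ties

# Toy example: two positives, two negatives
auc = roc_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])
# auc -> 0.75 (3 of the 4 positive/negative pairs are ranked correctly)
```

In the multilabel setting, this is evaluated once per label, yielding the nine scores reported above.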
Conclusions:
Through this work we have built a deep learning algorithm, Deepwound, that accurately identifies the presence or absence of common postoperative wound findings from smartphone images.