Machine Learning in Public Health
There are four sub-categories of this topic worth noting.
Identification of Factors & Their Relations to Health Outcome
There are a few examples here. ML can be used to identify the genetic factors that influence disease outcomes. In other news, ML has been used to locate regions most burdened by dengue, a mosquito-transmitted viral infection.
Design of Intervention
Some progress has been made in this area. ML has been used to address depression management, self-efficacy in weight loss, smoking cessation, and personalized nutrition based on glycaemic response.
Prediction of Outcomes
Oh boy, a lot has been done here. Is it me or does this category feel like it’s powered by Kaggle competitions? Below is a small sample of what’s been done.
In 2017, a group of researchers designed a multistep modeling strategy to predict heart-failure readmission rates with an AUC of 0.78 and an accuracy of 83.19% on electronic medical record (EMR)-wide data. This is a step up from the existing predictive models of the time, which had AUCs in the range of 0.6–0.7.
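Since AUC is the headline metric in most of these prediction papers, it's worth being concrete about what it measures: the probability that a randomly chosen positive case gets a higher risk score than a randomly chosen negative case. Here is a minimal stdlib-only sketch (the labels and scores are made-up toy values, not data from the study):

```python
from itertools import product

def auc(labels, scores):
    """Area under the ROC curve via pairwise ranking: the fraction
    of (positive, negative) pairs where the positive case scores
    higher (ties count as half a win)."""
    positives = [s for y, s in zip(labels, scores) if y == 1]
    negatives = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p, n in product(positives, negatives)
    )
    return wins / (len(positives) * len(negatives))

# Toy example: 1 = readmitted within 30 days, 0 = not readmitted.
labels = [0, 0, 1, 1]
risk_scores = [0.1, 0.4, 0.35, 0.8]
print(auc(labels, risk_scores))  # → 0.75
```

An AUC of 0.78 therefore means the model ranks a true readmission above a non-readmission about 78% of the time, which is why it's a meaningful jump over the 0.6–0.7 baselines.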
In a similar vein, in 2018, to bypass the labor-intensive task of extracting curated predictor variables from normalized electronic health record (EHR) data, a group of researchers proposed a representation of patients' entire raw EHR records based on the Fast Healthcare Interoperability Resources (FHIR) standard. They then showed that deep learning models trained on such data could achieve high accuracy on tasks such as predicting in-hospital mortality, 30-day unplanned readmission, prolonged length of stay, and all of a patient's final discharge diagnoses.
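The core idea of "raw EHR records, no hand-curated variables" is to treat a patient's chart as one time-ordered sequence of events that a deep model can consume directly. Here is a hedged sketch of that preprocessing step; the resource types loosely mirror FHIR naming, but the specific entries and codes are hypothetical, not from the paper:

```python
def encode_records(records, vocab=None):
    """Turn timestamped (resource_type, code) EHR entries into an
    integer token sequence, ordered by time. `vocab` maps event
    tokens to ids and grows as new events appear."""
    if vocab is None:
        vocab = {}
    tokens = []
    for timestamp, resource_type, code in sorted(records):
        token = f"{resource_type}:{code}"
        if token not in vocab:
            vocab[token] = len(vocab)
        tokens.append(vocab[token])
    return tokens, vocab

# Hypothetical raw entries, deliberately out of chronological order.
records = [
    ("2012-03-02T10:00", "MedicationRequest", "furosemide"),
    ("2012-03-01T09:30", "Observation", "heart_rate_high"),
    ("2012-03-01T09:31", "Condition", "heart_failure"),
]
tokens, vocab = encode_records(records)
print(tokens)  # ids assigned in time order → [0, 1, 2]
```

A sequence model (the paper used deep architectures such as recurrent networks) then learns outcome predictions straight from these token streams, skipping manual variable curation entirely.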
Also in 2018, a deep convolutional neural network (Inception v3) was trained on whole-slide images from The Cancer Genome Atlas to accurately classify them as lung adenocarcinoma (LUAD), lung squamous cell carcinoma (LUSC), or normal lung tissue, with a whopping AUC of 0.97. According to the authors, this performance is comparable to that of pathologists.
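One practical detail hiding behind "whole-slide images": these slides are gigapixel-scale, far too large to feed a CNN directly, so the standard recipe is to classify small tiles and aggregate the per-tile probabilities into one slide-level call. A minimal sketch of that aggregation step, with made-up tile probabilities standing in for the CNN's outputs:

```python
CLASSES = ["LUAD", "LUSC", "normal"]

def slide_prediction(tile_probs):
    """Average per-tile class probabilities (here hypothetical CNN
    outputs) into one slide-level probability vector, then return
    the argmax class."""
    n = len(tile_probs)
    mean = [sum(p[i] for p in tile_probs) / n for i in range(len(CLASSES))]
    return CLASSES[mean.index(max(mean))], mean

tiles = [
    [0.7, 0.2, 0.1],  # tile 1: looks like adenocarcinoma
    [0.6, 0.3, 0.1],  # tile 2: also LUAD-like
    [0.2, 0.1, 0.7],  # tile 3: a mostly-normal region of the slide
]
label, mean_probs = slide_prediction(tiles)
print(label)  # → LUAD
```

Averaging is only one aggregation choice (a sketch here, not necessarily the paper's exact rule), but it captures why a slide with a few benign regions can still be called cancerous overall.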
Allocation of Resources
Well, Eva, a reinforcement learning system that allocates scarce COVID-19 tests, falls into this category. Similarly, in 2019, Chun-Hao Chang and colleagues presented a deep reinforcement learning (deep RL) model trained on MIMIC-III, a database of over 40,000 patients who stayed in critical care units at the Beth Israel Deaconess Medical Center between 2001 and 2012. For these patients, clinicians run tests to forecast detrimental events, but tests are costly. By learning policies that minimize test costs and maximize predictive gain, the deep RL model reduced the total number of measurements by 31%.
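The actual deep RL policy needs MIMIC-III and a neural network, but the trade-off it optimizes is easy to show with a greedy stand-in: order a test only when its expected predictive gain outweighs its cost. Every number below is a made-up illustration, not a MIMIC-III statistic, and the greedy rule is a simplification of what the learned policy does:

```python
def select_tests(tests, cost_weight=1.0):
    """Greedy stand-in for the RL policy: keep a test only if its
    expected predictive gain exceeds its (weighted) cost.
    `tests` maps test name -> (expected_gain, cost); values are
    hypothetical, for illustration only."""
    return sorted(
        name for name, (gain, cost) in tests.items()
        if gain - cost_weight * cost > 0
    )

tests = {
    "lactate":    (0.30, 0.10),  # cheap and informative -> order it
    "troponin":   (0.05, 0.20),  # costly, little gain -> skip it
    "creatinine": (0.25, 0.05),
}
print(select_tests(tests))  # → ['creatinine', 'lactate']
```

The RL formulation generalizes this one-shot rule to sequences: each day's reward trades off measurement cost against how much the new result improves the forecast, which is how the learned policy cuts measurements by 31% without sacrificing predictive performance.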
Trending AI/ML Article Identified & Digested via Granola by Ramsey Elbasheer; a Machine-Driven RSS Bot