This week, my overall goal was to verify that Gieia’s algorithms accurately score the PHQ-9 (a mental health screen) and the AUDIT (a substance use screen). To do so, I sought to confirm that the diagnoses produced by Gieia’s algorithms were consistent with those obtained using the instruments’ manual scoring guides.
I began by obtaining copies of the PHQ-9 and AUDIT screening tools along with their respective scoring guides. I then scored responses manually: I generated 50 different sets of item responses for each instrument and computed the 50 corresponding composite scores for both the PHQ-9 and the AUDIT by hand. Using each tool’s scoring guide, I translated these composite scores into diagnoses of the severity of mental health and substance use concerns, respectively. Finally, I entered the same 50 response sets for each screening tool into Gieia and recorded the scores and diagnoses it produced.
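The manual scoring procedure can be sketched in Python. The function names are my own, the PHQ-9 severity bands follow the instrument’s published cut points, and the AUDIT risk zones use the commonly cited WHO cut points (which can vary by population and setting):

```python
def score_phq9(responses):
    """Sum the 9 PHQ-9 item responses (each 0-3) into a 0-27 composite."""
    assert len(responses) == 9 and all(0 <= r <= 3 for r in responses)
    return sum(responses)

def phq9_severity(score):
    """Map a PHQ-9 composite score to its published severity bands."""
    if score <= 4:
        return "minimal"
    if score <= 9:
        return "mild"
    if score <= 14:
        return "moderate"
    if score <= 19:
        return "moderately severe"
    return "severe"

def score_audit(responses):
    """Sum the 10 AUDIT item responses (each 0-4) into a 0-40 composite."""
    assert len(responses) == 10 and all(0 <= r <= 4 for r in responses)
    return sum(responses)

def audit_risk(score):
    """Map an AUDIT composite score to the commonly cited WHO risk zones."""
    if score <= 7:
        return "low risk"
    if score <= 15:
        return "hazardous"
    if score <= 19:
        return "harmful"
    return "possible dependence"
```

For example, a response set of all 1s yields a PHQ-9 composite of 9 ("mild") and an AUDIT composite of 10 ("hazardous"), matching what the scoring guides produce by hand.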
During the following week, I will compare the composite scores and diagnoses produced by Gieia against those computed manually with the PHQ-9 and AUDIT scoring guides. However, while recording the manually computed and Gieia-produced scores for the PHQ-9 and AUDIT, I began an initial comparison. I was surprised to find that while Gieia’s PHQ-9 scores differed from the manually computed scores, its AUDIT scores largely matched the manual ones.
As I investigated the discrepancy between Gieia’s PHQ-9 scores and the manually computed PHQ-9 scores, I found a distinct pattern: Gieia’s scores were consistently seven points higher than the manual scores. I look forward to analyzing this further next week in conjunction with the PHQ-9’s scoring guide and Gieia’s algorithms.
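A simple way to confirm this kind of pattern across all 50 recorded pairs is to check whether a single constant difference explains every pair. This sketch assumes the manual and Gieia scores are stored in parallel lists; the helper name and data layout are my own, not part of Gieia:

```python
def constant_offset(manual_scores, gieia_scores):
    """Return the single constant difference (Gieia minus manual) if one
    offset explains every score pair; otherwise return None."""
    assert len(manual_scores) == len(gieia_scores)
    diffs = {g - m for m, g in zip(manual_scores, gieia_scores)}
    return diffs.pop() if len(diffs) == 1 else None
```

Applied to hypothetical pairs like manual scores [3, 11, 20] against Gieia scores [10, 18, 27], the helper returns 7, the uniform offset observed here; if the differences were inconsistent, it would return None instead.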