About this episode
A Facebook algorithm that scans posts for suicidal thoughts; an app that monitors the speed of keyboard strokes for signs of depression; a computer program that analyzes our facial expressions and tone of voice when we FaceTime. These are a few of the thousands of algorithms tracking our mental health that some experts say could revolutionize how we diagnose and treat mental illness. They argue that our 24/7 use of digital devices is generating a goldmine of information about our mental state, and that this data must be made accessible to mental health practitioners if psychiatric medicine is to operate as a scientific discipline in the 21st century. Instagram posts, text logs, Google searches, and GPS data — not psychiatrists’ observations and intuitions drawn from conversation — offer the detail and time-stamped precision needed to generate tailored, effective treatments for the millions of individuals who desperately need help in the post-pandemic world.
Critics say the problems with this big-data approach go far beyond the obvious privacy issues that come with outsourcing mental health monitoring to digital monopolies like Google and Apple. In their view, the push for mental health algorithms reflects a reductive view of human emotion that undermines the strengths of a traditionally human-centred field. Diagnosis based on dialogue between two individuals, grounded in intuition and empathy, will always surpass machine intelligence at drawing out the personal histories that explain trauma and point toward helpful treatment. Turning to machines to address the mental health crisis, they argue, is nothing more than a quick fix for the deeply under-resourced health systems of today's world.