A piece in The Atlantic highlighting Helen Nissenbaum’s approach to privacy has been whipping around the twittersphere over the past couple of days. The breathless tone of the piece is a little off-putting, but the content, at first glance, is intriguing:
Nissenbaum argues that the real problem "is the inappropriateness of the flow of information due to the mediation of technology." In her scheme, there are senders and receivers of messages, who communicate different types of information with very specific expectations of how it will be used. Privacy violations occur not when too much data accumulates or people can't direct it, but when one of the receivers or transmission principles change. The key academic term is "context-relative informational norms." Bust a norm and people get upset.
However, after reading this piece (and, admittedly, not having read Nissenbaum’s academic papers), I find that the contention that this is the first and last word on the question of context-sensitive privacy and sharing — “What you tell your bank, you might not tell your doctor. What you tell your friend, you might not tell your father-in-law.” — rings hollow (as it has for the Wall Street Journal Ideas Market blog as well).
A whole 'nother issue is whether norms have any lasting value: How long before today's privacy norms — even assuming there are some shared norms in this arena — are replaced by tomorrow's? (On a related note, even the status of evidence-based medicine as a gold standard for guiding clinical practice has been questioned; contrarians hold that personalized medicine for an individual may require approaches that run counter to EBM as proven out over a population.)
Facebook and Google+ tout their context-sensitive sharing tools, which allow for limited sharing of posts with segmented audiences, and most of us understand that we barter personal data for the “free” services they provide; this barter usually benefits us as individuals as well — we get better-targeted messages online as a result. I would certainly prefer to see Facebook and Google+, along with other sites and services, be a little more transparent about their use of personal data. At least some folks out in the wild are pretty sophisticated about their wants and needs when it comes to health care social media privacy and security, and I’m just not sure that we need a new paradigm fueled by jargon from the ivory tower — though perhaps further inquiry would lead me to conclude otherwise.
In the health care and health care social media context, we all need to be aware of our own needs and desires concerning the sharing of personal information, and of the ways in which that information is shared and used — and re-shared and re-used — by the platforms and repositories we rely on. Armed with this knowledge, we can work to establish our own context-sensitive norms and to ensure that they are honored.
Many users of social media tools for health care purposes have already internalized context-relative informational norms that must be layered on top of the existing privacy and security concerns unique to the health care arena. To those who have not: the HealthBlawger hopes that this post will prompt you to avail yourselves of the many resources available, including other health care social media privacy and security content here on HealthBlawg and the Mayo Clinic Center for Social Media (disclosure: I sit on its external advisory board), among many others — please share any favorites in the comments. These resources should help folks fine-tune individual and institutional approaches to the use of these powerful tools.
David Harlow
The Harlow Group LLC
Health Care Law and Consulting