I am the Online Community Builder for New Tactics in Human Rights, a project of the Center for Victims of Torture. We offer human rights practitioners a platform for peer-to-peer exchange on tactics – sharing new ideas, challenges and experiences. We are beginning a process to improve our online platform, which has motivated me to reach out to my network of peers for advice on the best way to ensure an appropriate level of responsibility when it comes to the privacy of our online community members.
As I reflect on New Tactics’ ability to bring human rights practitioners together online in our community of practice, and on our efforts to ensure the privacy of those practitioners, I find that questions remain about how to strike a balance between protecting privacy and building community. In other words, how do we balance our responsibility to protect our members with our mission to connect them?
As an online service provider, we have a level of responsibility to protect the privacy of our online community members. The Electronic Frontier Foundation has done a great job of documenting best practices around the legal and technical responsibilities (logging practices, data retention, etc.) of online service providers in protecting the privacy of their users. I wonder if others who manage or have managed online environments can go a step further and brainstorm best practices for balancing privacy with community?
To start this conversation, I see a number of areas of tension between privacy best practices and online community engagement best practices. Here are a few examples:
Anonymity vs. verification of identity (and the issue of trust) — If you have an exclusive online community, you probably verify the members, which builds a level of trust around who you (as the user) are sharing information with. If it is not exclusive, you probably allow users to be anonymous (for their privacy). This anonymity could potentially be a barrier to building trust among the community members (how do you know who you’re talking to?). This lack of trust could make it difficult to reach your community objectives (peer-to-peer exchange of experiences and ideas). I realize that there is a spectrum here, with complete anonymity at one end and verified identity at the other, but I would still welcome your thoughts on where to meet in the middle.
Email notifications vs. raised suspicion — You may allow your users to be anonymous on the website, but if they receive email notifications, they could be connected to your organization if the notifications are found in the user’s inbox (which in some cases is enough to raise suspicion, especially for human rights activists). Allowing users to receive notifications is an important component of a successful online community. How do you balance these two things? If the user is given the choice to receive email notifications from the online community platform, how do we know when we have successfully communicated the risk to the user? How do we know when we have really achieved informed consent?
Users encouraged to share information vs. users warned of the risk at each step — How do we encourage users to share information in order to create a strong community while at the same time ensuring that they understand the risk? We want users to share information (on their profile, in discussions, etc.) — but we don’t want them to share too much. What is the responsibility of the online community manager in articulating and managing this balance?
Collecting useful analytics vs. blocking third parties from collecting user information – Online community managers love analyzing data about their online community, and they really love tools like Google Analytics that can help them do it! However, if you use third parties (like Google) to track your users, it is impossible to guarantee that their data will not be used in undesired ways. Is it ever OK to use third parties to collect data on your users? Could we notify each user that they can disable cookies to avoid these third parties on our website? One approach is to use self-hosted analytics software, like Piwik (hat tip to Samir Nassar for this recommendation).
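One concrete way self-hosted analytics can reduce risk is to anonymize visitor IP addresses before they are ever written to disk, so that stored records cannot identify an individual member. The sketch below shows the general masking technique (Piwik offers a setting in this spirit); the function name and the choice of how many octets to keep are illustrative assumptions, not a description of any particular tool's internals.

```python
# Illustrative sketch: mask the low-order bytes of a visitor's IP
# address before logging it, so analytics data cannot identify an
# individual. Function name and defaults are hypothetical.
import ipaddress

def anonymize_ip(ip_string, keep_octets=2):
    """Zero out the low-order bytes of an IPv4 address (or the
    host portion of an IPv6 address) before storage."""
    ip = ipaddress.ip_address(ip_string)
    packed = bytearray(ip.packed)
    if ip.version == 4:
        # Keep only the first `keep_octets` octets of the IPv4 address.
        for i in range(keep_octets, 4):
            packed[i] = 0
        return str(ipaddress.IPv4Address(bytes(packed)))
    # For IPv6, keep only the first 48 bits (a common network prefix).
    for i in range(6, 16):
        packed[i] = 0
    return str(ipaddress.IPv6Address(bytes(packed)))

print(anonymize_ip("203.0.113.42"))  # prints 203.0.0.0
```

The trade-off is deliberate: with the last two octets zeroed you can still see rough geography and traffic volume, but you can no longer single out a returning visitor by address.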
The key, of course, is to really understand the risks that your users face when they participate in your online community, and to balance that risk with the goals of the community. But how do we know what the risk is? And how do we create a process in which our online community can help us make these decisions?
I would love to hear how you have balanced the need for privacy for your online community with the need for trust and engagement within your online community. Please share your thoughts and ideas on the challenges I have listed, or share other challenges that you are facing regarding balancing privacy with community.
These are not new issues, as I’m sure you’re aware. The way these things were managed on CompuServe and other such closed environments was to have layers – and many Web forums now emulate this by withholding some material from people who lack a set number of postings (a common scheme has steps at 25 posts and 100 posts). For example:

- Have areas that are open to new members, and others that are only open to members who have been around for a while and have demonstrated that they are trustworthy.
- Have a clear set of criteria for unacceptable behavior, and be prepared to warn, then jettison, people who violate them.
- Make sure posters understand the difference between the open and closed sections.
- Remind people that even in the closed sections material can be copied and archived by people they don’t know (while making sure those pages don’t have social network sharing buttons, easy email functions, etc.).
- Collect analytics on the open pages but not the closed ones.
- Allow email notifications from the open sections but not the closed ones.

And so on.
That also to some extent solves the anonymity problem – people can build a reputation for themselves by posting in the open areas. However, you need moderators who are good at spotting fakers.