I report, write and design news things. I get a lot of feedback from readers. Some is good. Some is bad. Some is threatening, completely insane or both.

At my publication, the emails and phone numbers of writers are attached to our content, so we occasionally get some quality one-on-one time with our readers. That kind of feedback can be valuable and enriching. Other times, the interaction is frustrating, and getting some people off the phone without being completely rude is nigh impossible. Then again, for those who don’t like talking to people, the news media might be the wrong industry to work in.

It’s the other kind of feedback that’s created a real problem, though, one that’s been with us for a while and is entirely born of the digital age: online comments.

Those two words fill many a digital editor with dread and amusement alike, having become simultaneously a joke and a major concern in newsrooms across the world.

The tone of many comment threads ranges from passive-aggressive to outright hostile and deranged, veering into conspiracy theories, all the things we can thank or blame President Obama for, and how every topic — no matter how seemingly benign — touches on political, cultural, racial, gender and other divisions.

None of this is news to anyone, particularly those in the media.

We have frequently had to shut down comment threads that descended into personal attacks, threats and disgusting, bigoted language, only to have the same argument migrate from a political story and continue on stories somewhere like the Home and Garden section. Some kinds of stories — particularly those involving crime or race — simply have comments disabled from the get-go. And whenever we shut down comments on a controversial story, the emails sent to content listservs can compare us to Hitler, proving again and again the validity of Godwin’s Law.1

The other law that frequently applies is Poe’s Law,2 where extreme views and parodies of those who actually hold such views can seem indistinguishable, devolving discourse into a mush pile of smirks and rage.

This plague of trolls, hatemongers and other assorted habitually angry readers has prompted some major publications like Popular Science3 and Bloomberg4 to kill comments altogether, while the Huffington Post5 and others vanquished anonymity in discussion sections. Some have chosen simply not to engage with the problem: Vox and The Verge launched without in-house comments at all. Newsrooms across the country are navel-gazing over what to do about this daily problem on their websites and social media pages.

And therein lies a struggle, as the free press balances open dialogue against the public incitement of hatred by commenters on its digital turf. Journalism, the only profession explicitly protected by the First Amendment, has to walk a delicate line between protecting the spirit of its sister rights and protecting itself and its readers from a groundswell of hateful and oftentimes personal commentary.

The issue might be “resolved” across the pond, where the European Court of Human Rights ruled in summer 2015 that news websites can be fined for the content of their comments sections.6 But in the United States, that would be a dicey solution.

Of course, First Amendment rights pertain to the government’s ability to make and enforce laws and don’t necessarily apply to private entities. But for journalism, which is viewed as a public trust — aka the Fourth Estate — there does seem to be an implicit expectation of free and open communications that run both ways between media entities and the citizenry they claim to serve.

In a culture full of safe spaces and growing sensitivity to and awareness of oppressed and marginalized communities, it’s easy to forget that vitriol can be a constructive part of public discourse. Pathos, ethos and logos are each important parts of making an argument, with different impacts on different audiences, even if arguments built on them can sometimes descend into ad hominem attacks.

That’s not an excuse for racism or bigotry or threats or illegal content, of course. Those things we could do without. But even trolls and the harshest of pundits can make a point.

Those looking to strike a compromise that keeps comments sections while reducing the poison found therein have advocated axing anonymous comments7, which about 25 percent of people say they have posted.8 Research does show that people are more likely to say horrible things about individuals and groups when they can hide behind the shroud of an anonymous screen name.9 Surely, if they had to stand by their comments with their real names and perhaps even their photos, discussions would moderate themselves.

That logic seems sound. But upon entering the fray of the endless digital rage war, the proposition starts to lose steam. One has to look no further than Facebook in any given hour to see people who have absolutely no problem making the same kinds of dreadful commentary under fully identifying credentials. It has led media organizations, even those that have abandoned native commenting threads, to hire stringers or assign editors to constantly police Facebook comments — an exhausting and repetitive exercise.

There have long been tensions between the advocacy of free speech and a social desire to crack down on hate speech. Every time the Westboro Baptist Church shows up at a soldier’s funeral, there are those vocally wishing its presence weren’t legal. SCOTUS hasn’t outlined restrictions on hate speech as it has for defamation, fighting words, incitement, obscenity and other First Amendment exceptions. Part of that is rooted in the difficulty of defining exactly what hate speech is.

So where do we draw the line? When it comes to discussion communities, at one extreme sit sites like 4chan, with nearly complete anonymity and few posting rules. At the other extreme are mainstream news websites that have disabled comments entirely. And when it comes to trolls themselves, there are those merrily playing devil’s advocate, trying to push discussions and reveal people’s attitudes and kneejerk reactions — and then there are the racists, bigots and assorted crazy people spewing the threats and hatred that promote a chilling effect on constructive discussion.

Is there a middle ground?

I don’t know for certain, and people smarter than I am may eventually come up with some good solutions. But a couple of things drawn from my own experience cross my mind:

1.) For several of my childhood years, I was a moderator for gaming-related bulletin boards — yeah, good ol’ BBS, a long-forgotten yet still-present throwback compared to today’s social media landscape. Boards like those I worked for, and sites like Something Awful — a granddaddy of online communities — knew that digital communities thrived best when those who cared the most about them were allowed to tend them. The best and most constructive commenters were promoted to community moderators and given the power to police threads and wield their banhammers with discernment. They knew who the real destructive elements in the community were and who were merely harmless trolls poking the bear. They had a vested interest in keeping communities going and actively shepherding constructive, on-topic commentary. Some — including myself — even got paid a little something, depending upon the website.

I have seen this model tried at various new-media websites but lack the data to know how well it worked.

Is it a model that can be adopted in a world of Disqus, Facebook-driven comments and other social media platforms? It’s hard to say; it might require some adjustments to commenting platforms, and it’s perhaps better suited to websites that have retained in-house commenting systems.

But I’m interested to know whether it’s worth a try to combat a growing problem of toxic commentary while preserving the constructive, open discussions vital for our democracy. A rough sketch of what such a tiered model might look like follows.
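As a purely illustrative example, here is a minimal Python sketch of that tiered community-moderation model. Everything in it (the role names, the promotion thresholds, the functions) is a hypothetical stand-in of my own, not the API of Disqus, Facebook or any real commenting platform.

```python
from dataclasses import dataclass
from enum import IntEnum


class Role(IntEnum):
    READER = 0
    COMMENTER = 1
    MODERATOR = 2  # trusted community member with thread-policing powers
    STAFF = 3


@dataclass
class User:
    name: str
    role: Role = Role.COMMENTER
    flags_upheld: int = 0       # user's abuse flags that editors agreed with
    comments_removed: int = 0   # user's own comments removed for abuse


def can_moderate(user: User) -> bool:
    """Moderators and staff may hide comments and ban repeat offenders."""
    return user.role >= Role.MODERATOR


def eligible_for_promotion(user: User) -> bool:
    """Promote constructive commenters: a track record of accurate abuse
    flags and a clean record of their own (thresholds here are made up)."""
    return (
        user.role == Role.COMMENTER
        and user.flags_upheld >= 10
        and user.comments_removed == 0
    )


def promote(user: User) -> None:
    """Bestow the banhammer on a commenter who has earned community trust."""
    if eligible_for_promotion(user):
        user.role = Role.MODERATOR
```

The point of the sketch is the incentive structure, not the code itself: moderation powers are earned by demonstrated care for the community rather than handed out by the newsroom, which is what made the old BBS model work.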

2.) I’ve also kicked around the potential value of installing a quiz at the end of each article that would-be commenters must answer before posting.

It’s been clear over the years that the worst comments I get on my articles come from people who have not read a single word of the piece beyond perhaps the headline. Could a basic quiz about facts from the story act both as a filter against those who just want to incite hatred and as a brief cooling-down period before people write anything?

Many comment threads seem to be drowning in ill-conceived hot takes and staggering misconceptions about the subject matter, which cause emotions and words to run wild. Maybe being required to take a moment to breathe and ponder before posting could help? A rough sketch of the idea follows.
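To make the idea concrete, here is a minimal sketch of such a quiz gate, again in Python and again purely hypothetical: the questions, the cooldown and the function names are all assumptions of mine, not features of any real CMS.

```python
import time

# Fact questions drawn from the (hypothetical) article being commented on.
QUIZ = [
    ("Which agency issued the report?", "EPA"),
    ("How many council members voted in favor?", "5"),
]
COOLDOWN_SECONDS = 60  # a breather between opening the article and posting


def may_comment(answers: list[str], article_opened_at: float) -> bool:
    """Gate a would-be commenter: require correct answers to the fact quiz
    and a minimal amount of time spent on the article before posting."""
    if time.time() - article_opened_at < COOLDOWN_SECONDS:
        return False
    if len(answers) != len(QUIZ):
        return False
    return all(
        given.strip().lower() == expected.lower()
        for given, (_question, expected) in zip(answers, QUIZ)
    )


# Example: a reader who opened the article two minutes ago and answered
# both questions correctly would be allowed through.
print(may_comment(["epa", "5"], time.time() - 120))  # True
```

The quiz filters for readers who actually engaged with the piece, while the cooldown supplies the pause-and-breathe moment; either mechanism could stand alone.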

These suggestions could be completely off base, but might be ways to mitigate at least some of the hatred while preserving online discourse surrounding the news.

Further discussion welcomed. Feel free to leave a comment.

  1. “Godwin’s law.” Wikipedia. Link

  2. “Poe’s law.” Wikipedia. Link

  3. LaBarre, Suzanne. “Why we’re shutting off our comments.” Popular Science. September 24, 2013. Link

  4. Bode, Karl. “Bloomberg Latest To Kill Comments Because Really, Who Gives A Damn About Localized User Communities?” TechDirt. February 2, 2015. Link

  5. Soni, Jimmy. “The Reason HuffPost Is Ending Anonymous Accounts.” The Huffington Post. October 26, 2013. Link

  6. Vaas, Lisa. “News sites can be held responsible for user comments.” Naked Security. June 18, 2015. Link

  7. Wallsten, Kevin, and Tarsi, Melinda. “It’s time to end anonymous comments sections.” The Washington Post. August 19, 2014. Link

  8. Beaujon, Andrew. “25% of people have posted anonymous comments, Pew finds.” Poynter. September 5, 2013. [Link](http://www.poynter.org/news/mediawire/222912/25-of-people-have-posted-anonymous-comments-pew-finds/0)

  9. Santana, Arthur D. “Virtuous or Vitriolic: The effect of anonymity on civility in online newspaper reader comment boards.” Journalism Practice. Vol. 8, Iss. 1, 2014. Link