The rise of fear-based social media like Nextdoor, Citizen, and now Amazon’s Neighbors

Published on June 20, 2019

Nextdoor bills itself as the “world’s largest social network for the neighborhood,” where you can ask for nearby restaurant recommendations, buy used furniture, or report a stolen bike. In practice, its “crime and safety” section has been a hotbed for racial stereotyping that’s forced the company to rewrite its software and policies.

Citizen — whose previous incarnation, called Vigilante, appeared to encourage users to stop crimes in progress — sends users 9-1-1 alerts for crimes happening nearby. It also lets users livestream footage they record of the crime scene, “chat with other Citizen users as situations develop” and “build out your Inner Circle of family and friends to create your own personal safety network, and receive alerts whenever they’re close to danger.”

Now Amazon has thrown its hat in the ring — with Ring. It recently advertised an editorial position that would coordinate news coverage on crime, specifically based around its Ring video doorbell and Neighbors, its attendant social media app. Neighbors alerts users to local crime news from “unconfirmed sources” and is full of Amazon Ring videos of people stealing Amazon packages and “suspicious” brown people on porches. “Neighbors is more than an app, it’s the power of your community coming together to keep you safe and informed,” it boasts.

Nextdoor was the ninth most-downloaded lifestyle app in the US on iPhones at the end of April, according to App Annie, a mobile data and analytics provider; that’s up from No. 27 a year ago in the social networking category. (Nextdoor changed its app category from social to lifestyle on April 30; on April 29 it was ranked 14th in social, according to App Annie.) Amazon Ring’s Neighbors is the 36th most-downloaded social app. When it launched last year, it was 115th. Citizen, which considers itself a news app, was the seventh most-downloaded news app on iOS at the end of April, up from ninth last year and 29th in 2017.

Apps can fuel a vicious cycle of fear and violence

These apps have become popular because of — and have aggravated — the false sense that danger is on the rise. Americans seem to think crime is getting worse, according to data from both Gallup and Pew Research Center. In fact, crime has fallen steeply in the last 25 years according to both the FBI and the Bureau of Justice Statistics.

Of course, unjustified fear, nosy neighbors, and the neighborhood watch are nothing new. But the proliferation of smart homes and smart devices is putting tools like cameras and sensors in doorbells, porches, and hallways across America.

And as with most things in technology, the information these devices gather is easier to report and share than it used to be, and it reaches a wider audience.

These apps foment fear around crime, which feeds into existing biases and racism and largely reinforces stereotypes around skin color, according to David Ewoldsen, professor of media and information at Michigan State University.

“There’s very deep research saying if we hear about or read a crime story, we’re much more likely to identify a black person than a white person [as the perpetrator],” Ewoldsen said, regardless of who actually committed the crime.

As Steven Renderos, senior campaigns director at the Center for Media Justice, put it, “These apps are not the definitive guides to crime in a neighborhood — they are merely a reflection of people’s own biases, which criminalize people of color, the unhoused, and other marginalized communities.”

Examples abound of racism on these types of apps, usually in the form of who is identified as criminal.

A recent Motherboard article found that the majority of people posted as “suspicious” on Neighbors in a gentrified Brooklyn neighborhood were people of color.

Nextdoor has been plagued by this sort of stereotyping.

Citizen is full of comments speculating on the race of people in 9-1-1 alerts.

While being called “suspicious” isn’t in itself immediately harmful, the repercussions of that designation can be. People of color are not only more likely to be presumed criminals, they are also more likely to be arrested, abused, or killed by law enforcement, which in turn reinforces the idea that they were criminals in the first place.

“These apps can lead to actual contact between people of color and the police, leading to arrests, incarceration and other violent interactions that build on biased policing practices by law enforcement agencies across the country,” Renderos said. “And in the digital age, as police departments shift towards ‘data-driven policing’ programs, the data generated from these interactions, including 9-1-1 calls and arrests, are part of the historic crime data often used by predictive policing algorithms. So the biases baked into the decisions around who is suspicious and who is arrested for a crime end up informing future policing priorities and continuing the cycle of discrimination.”
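
To make that feedback loop concrete, here is a toy simulation in Python. It is a sketch of the dynamic Renderos describes, not any real predictive-policing system: two areas have the same underlying crime rate, but one is reported on twice as often, and patrols are allocated in proportion to past recorded incidents. Every name and number in it (TRUE_CRIME_RATE, REPORTING_BIAS, the ten-year horizon) is invented for illustration.

```python
# Toy model of a predictive-policing feedback loop: biased reports feed the
# "historic crime data," which steers future patrols, which generate more
# recorded incidents in the same places. Purely hypothetical; this is not
# any vendor's actual algorithm, and every parameter is made up.
import random

random.seed(42)

TRUE_CRIME_RATE = 0.05                  # identical underlying rate in both areas
REPORTING_BIAS = {"A": 1.0, "B": 2.0}   # area B gets reported on twice as often
TOTAL_PATROLS = 100

history = {"A": 1, "B": 1}              # pseudo-counts of recorded incidents

for year in range(10):
    # "Predictive" step: allocate patrols in proportion to past records.
    total = history["A"] + history["B"]
    patrols = {area: TOTAL_PATROLS * history[area] // total for area in history}

    # Incidents only get recorded where patrols and reports concentrate.
    for area in history:
        exposure = int(patrols[area] * REPORTING_BIAS[area])
        recorded = sum(random.random() < TRUE_CRIME_RATE for _ in range(exposure))
        history[area] += recorded

    print(f"year {year}: patrols={patrols}, cumulative records={history}")
```

Run it and the over-reported area steadily accumulates both records and patrols, even though the two areas are equally safe: the bias in the inputs becomes the “data” that justifies the next round of attention.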

Apps didn’t create bias or unfair policing, but they can exacerbate it

“To me, the danger with these apps is it puts the power in the hands of the individual to decide who does and doesn’t belong in a community,” Renderos said. “That increases the potential for communities of color to come in contact with police. Those types of interactions have wielded deadly results in the past.

“Look what happened to Trayvon Martin. George Zimmerman was the watchdog. He saw someone who looked out of place and decided to do something about it.”

These apps can also be psychologically detrimental to the people who use them.

It’s natural for people to want to know more about the world around them in order to decrease their uncertainty and increase their ability to cope with danger, Ewoldsen said, so they turn to these apps.

“You go on because you’re afraid and you want to feel more competent, but now you’re seeing crime you didn’t know about,” Ewoldsen said. “The long-term implication is heightened fear and less of a sense of competence. ... It’s a negative spiral.”

“Focusing on these things you’re interpreting as danger can change your perception of your overall safety,” Pamela Rutledge, director of the Media Psychology Research Center, told Recode. “Essentially you’re elevating your stress level. There’s buckets of research that talks about the dangers of stress, from high blood pressure to decreased mental health.”

These apps are particularly scary because the crime they discuss is nearby, within your neighborhood or ZIP code.

“Because it’s so close, my guess is it has a bigger impact on fear,” Ewoldsen said.

Why is this happening now?

Technology has essentially enabled people to do what they always wished they could: know what’s going on and where the danger is. Security cameras and their associated apps — like smart devices in general — are getting better and cheaper, and they’re finding their way into more and more people’s homes.

Entertainment devices like smart TVs and streaming devices are the biggest segment of smart device sales, but smart security devices are a close second, according to data from research firm IDC. The smart security segment’s annualized growth rate is expected to be nearly 30 percent for the next three years.
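
For a sense of scale, an annualized rate compounds: growing at roughly 30 percent a year for three years more than doubles the segment. A quick back-of-the-envelope check, treating IDC’s “nearly 30 percent” as a round 0.30:

```python
# Compound a ~30% annualized growth rate over three years.
rate, years = 0.30, 3
multiplier = (1 + rate) ** years
print(f"{multiplier:.2f}x")  # prints "2.20x": more than double in three years
```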

As with all new technology, we’re struggling to use it correctly.

“When anything is new, we have a hard time figuring out how to use it,” Rutledge said. “We jump in the deep end of the pool and slowly walk to a place that makes sense.”

But why would we use something that plays on demonstrably false fears and has so many negative side effects? Some say: evolution.

“We are preparing ourselves to understand the nature of our environment to increase our chances of survival,” Rutledge told Recode. “Our instinct is to get as much information as possible to figure out what’s a danger.

“Wandering around on the savanna, it was much more important to know where tigers are than flowers,” she added.

So even if you’re statistically safe, the instinct is to look for what could go wrong.

“You might know that only four out of 10,000 people get congenital heart disease,” Rutledge explained. “But if you’ve been one of the four, that’s not reassuring. Similarly, if in your neighborhood you’re aware of things happening, the fact that crime is down 20 percent is not going to cut the mustard.”

The issue is compounded by the media, Ewoldsen said.

“If you see more coverage of crime you think it’s more of an issue, even if real-world statistics say it isn’t,” Ewoldsen said.

And all this is happening at a very contentious point in time, both politically and socially.

“Some of this has to do with the general level of discord and lack of comfort societally right now,” Rutledge said.

As Ewoldsen put it, “The president screaming about crime all the time — creating a fake crisis at the border and saying immigrants are stealing jobs, that Mexico and other countries are sending criminals — is reinforcing the idea that crime is going through the roof.”

The rise of fear-based social media apps might also have to do with the decline of local news. Cuts to and closings of local newspapers over the past few decades have led to news deserts: areas that no longer have reporters to cover goings-on that aren’t on a national scale.

For better and usually worse, social media has stepped in to fill the void.

“It’s about how people are exposed to news today,” Renderos said. “Social media is increasingly where people identify their source of news.”

These local social and news apps, with their air of authority and neighborhood-watch ethos, can seem like a good alternative. It’s not as though local news was immune to fear-based coverage, but new technology can amplify that type of information.

What do we do about it?

Ewoldsen argues it’s a matter of media literacy and how people choose to consume media.

“We need to pay attention and to be more mindful in our consumption of the news,” he said.

That means realizing which posts on these apps are and aren’t relevant to you, perhaps by decreasing the radius within which these apps display crime or by turning off notifications. People also need to be aware of how their own biases and those of their fellow app users could skew reporting and the reaction to that reporting.

The responsibility also falls on the makers of the technology.

“It would help if they eliminated all unverified reports,” Rutledge said. In practice, however, this would mean having to verify crimes before they could be posted, which would be very difficult if not impossible.

“It would help if it would say ‘these are three important geolocated things and these things aren’t important — these aren’t threats. Here are some tips.’”

Cynics might say that these companies are only trying to sell more doorbell cameras and wireless security devices and to encourage more app downloads, so breeding fear is in their best interest. Rutledge, however, thinks the companies should take the long view.

“A sustainable company will want to think about the long-term well-being of their customers,” Rutledge said. “Always going for a quick buck is not going to make you a sustainable company.”

 

Source: IlluminAge AgeWise with Recode and Vox