Could Apple’s Child Safety Feature Work Against You? New Research Shows Warnings May Increase Risk Sharing


By Bennett Bertenthal, Apu Kapadia & Kurt Hugenberg

Bloomington: Apple’s plan to deploy tools to limit the spread of child sexual abuse material has garnered praise from some privacy and security experts, as well as from child protection advocacy groups. It has also provoked an uproar over potential privacy violations.

These concerns have overshadowed another, even more troubling issue that has received very little attention: Apple’s new feature relies on warning messages that research shows can backfire.

One of these new features adds a parental control option to Messages that blurs sexually explicit images. The assumption is that parental monitoring will discourage children from viewing or sending sexually explicit photos, but that assumption is highly questionable.

We are two psychologists and a computer scientist, and we have conducted extensive research into why people share risky images online. Our recent research shows that warnings about privacy on social media do not reduce photo sharing or increase concern about privacy. In fact, such warnings, including the child safety notices Apple plans to introduce, can increase rather than reduce risky photo sharing.

Apple Child Safety Features

Apple announced on August 5, 2021 that it plans to introduce new child safety features in three areas. The first, relatively uncontroversial feature is that Apple’s search app and virtual assistant Siri will provide parents and children with resources and help if they come across potentially dangerous material.

The second feature will scan images on users’ devices that are also stored in iCloud Photos, looking for matches against a database of child sexual abuse images provided by the National Center for Missing and Exploited Children and other child safety organizations. Once a threshold number of matches is reached, Apple manually reviews each machine-flagged match to confirm the photo’s content, then disables the user’s account and sends a report to the center. This feature has sparked considerable controversy.

The third feature adds a parental control option to Messages, Apple’s texting app, that blurs sexually explicit images when children attempt to view them. It also warns children about the content, points them to helpful resources, and assures them that it is okay not to view the photo. If the child is 12 or under, parents receive a notification if the child views or shares a flagged photo.

There has been little public discussion of this feature, perhaps because conventional wisdom dictates that parental controls are necessary and effective. This is not always the case, however, and such warnings can backfire.

When warnings backfire

In general, most people avoid risky sharing, but it is important to reduce the sharing that does occur. An analysis of 39 studies found that 12% of young people had forwarded a sext, a sexually explicit image or video, without consent, and that 8.4% had had a sext of themselves forwarded without consent. Warnings may seem like an appropriate way to curb such sharing, but contrary to expectations, we have found that warnings about privacy violations often backfire.

In a series of experiments, we tried to reduce the likelihood that people would share embarrassing or degrading photos on social media by reminding them to consider the privacy and safety of others. Across several studies, we tested different reminders about the consequences of sharing photos, similar to the warnings in Apple’s new child safety tools.

Remarkably, our research has repeatedly revealed paradoxical effects. Participants who received even a warning as simple as a prompt to consider the privacy of others were more likely to share photos than participants who received no warning. When we began this research, we were confident these privacy measures would reduce risky photo sharing; they did not.

The results have been consistent since our first two studies showed the warnings backfiring. We have since observed this effect repeatedly and have found that several factors, such as a person’s style of humor or their experience sharing photos on social media, influence both their willingness to share photos and how they react to warnings.

While it is not clear why the warnings backfire, one possibility is that people’s privacy concerns are dampened because they underestimate the risks of sharing. Another possibility is reactance, the tendency of seemingly unnecessary rules or prompts to provoke the opposite of the intended effect. Just as forbidden fruit tastes sweeter, constant reminders about privacy may make risky photo sharing more appealing.

Will Apple’s warnings work?

Some children may be more inclined to send or receive sexually explicit photos after receiving a warning from Apple. There are many reasons this behavior could occur, ranging from curiosity, since teens often learn about sex from their peers, to defiance of parental authority and concerns about reputation, such as appearing cool by daring to share risky photos. At a point in life when risk-taking tends to peak, it is not hard to see how teens might treat an Apple warning as a badge of honor rather than a genuine cause for concern.

Apple announced on September 3, 2021 that it was delaying the rollout of these new CSAM tools because of concerns raised by the privacy and security community. The company plans to take additional time over the coming months to gather feedback and make improvements before releasing these child safety features.

This plan is not sufficient, however, without also determining whether Apple’s new features will have the desired effect on children’s behavior. We encourage Apple to work with researchers to ensure that its new tools reduce, rather than encourage, problematic photo sharing.

