
FAQ: Are Apple's anti-child abuse features a privacy threat or a necessary evil?

Apple has been committed to user privacy for years. For many people that is precisely a reason to choose the iPhone: Apple promises not to look at your messages or photos, and not to track your location. The newly announced measures to protect children seem to conflict with this, and there is a lot of criticism of them, not only from users. Security experts also say that with these measures Apple is opening the door to a far-reaching invasion of users' privacy. But how does the new system actually work? And what are the risks for your own private photos, for example of your children? In this FAQ about Apple's upcoming CSAM detection we hope to shed some light by presenting the facts while also covering the criticism.

  1. What exactly will Apple do?
  2. What are other companies doing?
  3. Is this new, or has it been happening for some time at Apple?
  4. Does this affect all users?
  5. Is Apple going to scan all my photos?
  6. Why is this potentially dangerous?
  7. Does Apple see all my photos in the library?
  8. Are my children's photos safe?
  9. Is this system foolproof?
  10. Can I turn this off?
  11. Criticism and privacy
  12. Backdoor
  13. What are the implications for the future?

    #1 What exactly will Apple do?

    Apple has officially announced several measures, to be introduced in the United States, to combat child abuse and the distribution of abuse imagery. There are three different measures. First, Apple will apply CSAM detection to users' iCloud Photo Library in the US.

    CSAM stands for Child Sexual Abuse Material.

    In essence, Apple will compare photos against a database of known child abuse images. Apple does not look at what is in your photos (so it does not scan for the amount of nudity in a picture), but performs a calculation on each image. The outcome of that calculation (a hash) is compared with the hashes of known child abuse photos, on which the same calculation has been performed. If a hash matches, this can be reported to the US authorities. Google uses a similar hash-based method.
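
    To make the principle concrete, here is a deliberately simplified sketch in Python. It is not Apple's implementation: Apple uses a perceptual hash (NeuralHash) plus cryptographic protections, while this toy example simply takes a SHA-256 hash of the file bytes. The point it illustrates is that only fingerprints are compared, never the image content itself; the hash in the example database is a made-up placeholder.

```python
import hashlib

# Hypothetical database of fingerprints (hex digests) of known CSAM images.
# The value below is a placeholder, not a real entry.
KNOWN_CSAM_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def fingerprint(path: str) -> str:
    """Compute a fingerprint of the raw image bytes (toy stand-in for NeuralHash)."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def matches_known_material(path: str) -> bool:
    """True only if the photo's fingerprint appears in the known-hash database."""
    return fingerprint(path) in KNOWN_CSAM_HASHES
```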

    Other measures include increased security in iMessage, with children under 13 being warned if they are sent a potentially offensive photo. This can be turned off. The third measure is that Siri and Spotlight provide additional information to protect parents and children.

    #2 What are other companies doing?

    Google has been scanning files for CSAM content for some time, and the same goes for Twitter, Microsoft and Facebook. The main difference is that they scan content online, on their own servers, while Apple promises to do the matching locally on the device. PhotoDNA, a hash-based method developed by Microsoft, has been in use since 2013.

    #3 Is this new, or has it been happening for a while at Apple?

    It's good to realize that Apple has already been scanning for CSAM content for some time, at least since 2019. In that year, Apple updated its privacy policy to emphasize that uploaded content could be scanned for “potentially illegal content, including material that sexually exploits children”.

    Apple's head of privacy Jane Horvath confirmed again in 2020 that Apple uses screening technology to detect CSAM images. If such material is found, the account is disabled.

    The newly announced measures mainly make clear that Apple will keep scaling up these activities and wants users to be more aware that it scans for CSAM. According to some experts, the current criticism is therefore a storm in a teacup, because everyone could have known it was already happening.

    Furthermore, your local photo library has been scanned for much longer, for example to recognize faces, animals, landscapes and the like in photos. However, this happens locally and is not reported to Apple or agencies.

    #4 Does this affect all users?

    First of all, it's good to know that all three measures will initially be introduced only in the US. Apple will therefore not (yet) roll them out to European users, but that does not mean it won't do so in the long run.

    The measures also do not take effect immediately: they are part of iOS 15, iPadOS 15 and macOS Monterey, updates that will be released in the fall of 2021.

    CSAM detection only happens if you use iCloud Photos. If you store your photos only on your device, without syncing them to the cloud, Apple does not apply the detection.

    #5 Will Apple scan all my photos?

    CSAM detection works locally, on your device. Apple checks whether the hash of a photo matches the hash of a known CSAM image, and that matching is done entirely on the device, not in the cloud. Apple therefore does not send your photos to a server.

    So the iPhone does not analyze the content of photos on your device and does not look for naked body parts, skin-colored surfaces or faces of minors; it subjects every photo to a calculation and only compares the outcome. The term ‘scanning’ is therefore somewhat misleading. Apple's own technical documentation explains how this works in detail, including its use of Private Set Intersection.
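
    For readers who want an intuition for Private Set Intersection, the sketch below is a deliberately simplified, insecure toy, not Apple's protocol (Apple's construction additionally uses blinded NeuralHashes, safety vouchers and threshold secret sharing). It only illustrates the core trick of commutative blinding: two parties can learn how many hashed items they share without revealing the rest of their sets. All names and sample values (photo-A, known-bad-1, the demo prime) are made up for illustration.

```python
import hashlib
import secrets

# Demo modulus: the Mersenne prime 2**127 - 1. Far too small for real
# cryptography, but enough to show the commutative-blinding idea.
P = 2**127 - 1

def h(value: bytes) -> int:
    """Hash an item into the multiplicative group mod P."""
    return int.from_bytes(hashlib.sha256(value).digest(), "big") % P

def blind(x: int, secret_exp: int) -> int:
    """Apply a secret blinding exponent: x**secret mod P."""
    return pow(x, secret_exp, P)

# Device side: hashes of the user's photos, blinded with a device secret.
device_secret = secrets.randbelow(P - 2) + 1
device_hashes = [h(b"photo-A"), h(b"photo-B"), h(b"photo-C")]
device_blinded = [blind(x, device_secret) for x in device_hashes]

# Server side: hashes of known images, blinded with a server secret.
server_secret = secrets.randbelow(P - 2) + 1
server_hashes = [h(b"photo-B"), h(b"known-bad-1")]
server_blinded = [blind(x, server_secret) for x in server_hashes]

# Each side now blinds the *other* side's already-blinded values. Because
# exponentiation commutes, (x**a)**b == (x**b)**a (mod P), so doubly-blinded
# values are equal exactly when the underlying hashes are equal.
double_from_device = {blind(v, server_secret) for v in device_blinded}
double_from_server = {blind(v, device_secret) for v in server_blinded}

shared = double_from_device & double_from_server
print(f"Items in common: {len(shared)}")  # should print 1 (photo-B)
```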

    When photos that children share via iMessage are analyzed, the content is checked for possible nudity. This is separate from CSAM detection and can be disabled.

    #6 Why is this potentially dangerous?

    Most users will of course welcome measures against child pornography. But under the banner of ‘child protection’, a system is being introduced that can also be abused in other ways. If Apple can scan for CSAM content, the door is open to detecting all kinds of other content.

    If the system works well, it is a small step to also detect other photos and compare them with a database of images of well-known political activists, protest rallies, action posters and the like. Users trust that Apple will make ethical choices and refuse to cooperate, but Apple could bend under pressure from repressive regimes. Organizations such as the EFF are therefore opposed and speak of a “backdoor to your private life”.

    #7 Does Apple see all my photos in the library?

    Because matching against the CSAM database happens only on the device itself, Apple does not see which photos are in your library. Apple also uses cryptographic technology that determines whether there is a match without revealing what is in the photos; even when a match is found, Apple cannot see the photo. A single match will not immediately set off any alarm.

    Only once the number of matches with CSAM content passes a certain threshold does Apple take action: the flagged matches are then reviewed and, if confirmed, shared with the authorities.
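
    The sketch below mimics only that reporting logic, assuming a hypothetical match counter. In Apple's actual design the threshold is enforced cryptographically (threshold secret sharing), so below the threshold the matches cannot even be decrypted; the threshold value here is purely illustrative.

```python
# Hypothetical threshold; chosen so that a handful of false positives on a
# normal photo library can never trigger a report.
MATCH_THRESHOLD = 30

def should_escalate(match_count: int, threshold: int = MATCH_THRESHOLD) -> bool:
    """Escalate to human review only once the number of matches passes the threshold."""
    return match_count >= threshold

# A single (possibly false-positive) match triggers nothing:
assert should_escalate(1) is False
assert should_escalate(MATCH_THRESHOLD) is True
```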

    #8 Are the pictures of my children safe?

    Yes. Many users are now afraid that innocent pictures of their children will be viewed by Apple or labeled as child pornography. Because the system compares hashes of proven child abuse photos, the chance that an innocent photo of your child in the bath exactly matches known child pornography is extremely small. That could really only happen if you had published the photo yourself, on Facebook and the like, after which it was further distributed in child abuse circles. Photos that you handle with care will not end up in the database of proven CSAM photos.

    Apple says the chance of an account being incorrectly flagged is one in a trillion per year. To put that in perspective: even with a billion iCloud accounts, that would come down to roughly one wrongly flagged account every thousand years. Still, although Apple will do everything to prevent it, in theory a photo could end up in the database incorrectly.

    #9 Is this system foolproof?

    No. As that margin of error shows, the system is not completely foolproof. A false positive match can occur, and that can have far-reaching consequences for a wrongly accused user. Part of the criticism focuses on exactly this point: Apple claims to protect users' privacy, yet at the same time it could wrongly mark photos from your library as CSAM.

    #10 Can I turn this off?

    As mentioned at the beginning, Apple will initially only introduce this in the US. Apple has confirmed that CSAM detection will not be applied if iCloud Photos is disabled. So if you don't want hashes of photos on your device to be matched with hashes from a proven CSAM database, you should disable iCloud Photo Library.
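
    For reference: iCloud Photos can be switched off on the device itself. At the time of writing the toggle can be found roughly under Settings > [your name] > iCloud > Photos > iCloud Photos (the exact wording differs per iOS version), and keep in mind that turning it off also stops syncing your photo library across your devices.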

    As for the analysis of images shared via iMessage, this works on an opt-in basis. Parents can enable it for their children so that they are warned about potentially sensitive images. If you, as a parent, do not want photos in iMessage to be analyzed, you can simply leave it disabled.

    #11 Apple stands for privacy, right? Isn't that being violated here?

    There is a lot of criticism of these measures from experts. Specialists at the Center for Democracy & Technology warn of the consequences of such a system and say it weakens users' privacy. iMessage is end-to-end encrypted by default, but that protection is watered down when Apple analyzes the photos being sent.

    #12 Didn't Apple say it would never build backdoors?

    The new measures do indeed seem to conflict with earlier statements by Apple. Apple has said it never wants to build backdoors into the security of iOS, the iPhone and the iPad, because they could be abused, and it has stated that it has never built backdoors for China. In an earlier case, Apple literally said the following:

    Apple has never built a backdoor of any kind into iOS, or otherwise made the data stored on the iPhone or iCloud more technically accessible to any foreign government.

    According to privacy experts, that is exactly what is happening with the new measures. “The mechanism that allows Apple to scan messages in iMessage is not an alternative to a backdoor. It is a backdoor,” the experts say.

    #13 What are the consequences for the future?

    Apple now uses this system to identify potential child abuse photos, but this could be just the first step. For years, governments have wanted companies like Apple to build in ways to intercept encrypted messages, for example. Apple has resisted this for years (the infamous backdoors), but now seems to give in to some extent, albeit for a noble purpose: preventing the distribution of child pornography. Yet Apple's own argument has always been that a backdoor, once built, can have far-reaching consequences: give an inch and they will take a mile.

    Governments could (in theory) demand ever more from Apple and other tech companies; that is the fear among critics and security experts. It would open the door to a form of surveillance, especially in countries where governments keep strict control over their citizens. Time will tell how this develops.
