When Photos Learn Faces: Microsoft’s New AI Feature and What It Means for Privacy

Facial recognition in photo libraries isn’t new. It already helps people organize albums in Google Photos and group faces in Apple Photos. In October 2025, Microsoft introduced a similar capability in OneDrive, a cloud platform used by millions to store and sync their personal files. The launch quickly raised questions about how much control users actually have over their own data.

Inside Microsoft’s new feature

The new feature uses AI to organize photos by person, automatically suggesting groups of similar faces under a new “People” section. It’s designed to make photo search and organization faster within OneDrive. However, privacy researchers and digital rights advocates criticized the rollout for its unclear data-retention practices and for giving users limited control over how facial information is processed.

According to Microsoft’s documentation, users can manage this option under Privacy & Permissions → People. However, the company restricts the toggle itself: the feature can be turned off only three times per year. When Proton Drive highlighted this limitation, many users and privacy researchers asked why such a rule exists, and what happens to data that has already been processed when the setting is toggled.

Microsoft has not clarified whether the facial data is deleted, retained, or re-indexed when users reactivate the feature. The uncertainty around those details has sparked discussion about transparency and user consent in automated photo analysis.

Why this matters

Facial recognition depends on biometric data – one of the most sensitive types of personal information. When a cloud service scans photos to detect faces, it must temporarily process and label that data, even if the results stay private to the account. In practice, this means the platform’s algorithms still analyze every image you upload.

The ability to disable or delete such processing is crucial for user autonomy. But when limits are placed on how often a feature can be turned off, it creates friction for people who want to control their own digital footprint. The OneDrive case illustrates how convenience features can quietly expand the surface of data collection.

5 tips to stay EXTRA SAFE on OneDrive with limited controls

The current OneDrive policy leaves a clear gap in privacy control. With limits on how often facial recognition can be disabled, users don’t have a reliable way to stop the system from reprocessing or retaining visual data once it’s uploaded. Still, several choices can help reduce how much of that data becomes visible to automated systems.

  • Store selectively. Keep personal or sensitive photos, such as IDs, documents, or family images, on local drives rather than in cloud folders that use AI to analyze content.

  • Avoid automatic syncing. Disable camera roll or folder backups if you don’t want every photo on your phone to be uploaded to OneDrive automatically. Upload only what you need, when you decide.

  • Encrypt before uploading. If you store files in OneDrive, protect them first with local encryption tools like VeraCrypt or Cryptomator. Encrypted files reach Microsoft’s systems as unreadable ciphertext; at most, file names and sizes remain visible (see the sketch after this list).

  • Follow updates closely. Microsoft may adjust or expand its AI features over time. Review OneDrive’s privacy documentation periodically, especially after major app updates, to understand how photo analysis, storage, or retention policies evolve.

  • Send files only through safe communication channels. If you need to send photos, IDs, or private files without using cloud storage, choose a tool like EXTRA SAFE that uses end-to-end encryption and doesn’t retain copies on servers. It lets you share content directly between devices, outside any cloud storage system.
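To make the “encrypt before uploading” idea concrete, here is a minimal sketch in Python using the cryptography package (a lightweight stand-in for full tools like VeraCrypt or Cryptomator). It encrypts a photo locally and places only the ciphertext into a OneDrive-synced folder; the folder path, key-file location, and file names below are hypothetical placeholders, not anything prescribed by Microsoft.

```python
# encrypt_then_sync.py -- a minimal sketch of "encrypt before uploading".
# Assumes Python 3 with the cryptography package (pip install cryptography).
# All paths here are placeholders; adjust them to your own setup.
from pathlib import Path

from cryptography.fernet import Fernet

KEY_FILE = Path.home() / ".photo_key"           # keep this OUTSIDE any synced folder
ONEDRIVE = Path.home() / "OneDrive" / "Vault"   # the folder the OneDrive client syncs


def load_or_create_key() -> bytes:
    """Reuse an existing key, or generate one on first run."""
    if KEY_FILE.exists():
        return KEY_FILE.read_bytes()
    key = Fernet.generate_key()
    KEY_FILE.write_bytes(key)
    return key


def encrypt_into_onedrive(photo: Path) -> Path:
    """Encrypt a local photo and place only the ciphertext in the synced folder."""
    fernet = Fernet(load_or_create_key())
    ciphertext = fernet.encrypt(photo.read_bytes())
    ONEDRIVE.mkdir(parents=True, exist_ok=True)
    target = ONEDRIVE / (photo.name + ".enc")
    target.write_bytes(ciphertext)
    return target


if __name__ == "__main__":
    # Example: encrypt a single photo before it ever reaches the cloud.
    encrypted = encrypt_into_onedrive(Path("family_photo.jpg"))
    print(f"Synced as opaque ciphertext: {encrypted}")
```

Because only ciphertext ever lands in the synced folder, face grouping has nothing meaningful to analyze. The trade-off: OneDrive’s previews and photo search stop working for those files, and losing the key file means losing the photos, so back the key up separately.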

Download the EXTRA SAFE app for iOS and Android for free. Prefer a browser version? Visit extrasafe.chat