Here’s how Apple’s AI model tries to keep your data private

Apple unveiled Apple Intelligence, a suite designed to bring generative AI features to its devices, including rewriting emails, summarizing notifications, and creating custom emojis. At its WWDC 2024 event, Apple emphasized how these features are built with privacy in mind, combining on-device processing with selective cloud use. The company says the underlying models are trained on public and licensed material, not on users' private data.

The core of Apple Intelligence relies on Apple's own foundation models, optimized for power efficiency and local execution on Apple devices. This means AI tasks like transcribing calls and organizing schedules are done swiftly on-device. For more complex requests, however, the system leverages cloud servers. Rather than training a separate model per feature, Apple fine-tunes its base model with small task-specific adapters for jobs such as proofreading and summarizing.
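Apple has not published the exact format of these adapters, but the description matches the widely used low-rank adapter (LoRA-style) idea: a frozen, shared base weight plus a small trainable matrix pair per task. A minimal sketch, with all shapes and names hypothetical:

```python
import numpy as np

# Hypothetical illustration of a low-rank adapter on one projection layer.
# W is the frozen base weight shared by every feature; (A, B) is a tiny
# task-specific adapter, e.g. one pair for proofreading, another for summarizing.
rng = np.random.default_rng(0)
d_in, d_out, rank = 64, 64, 4

W = rng.normal(size=(d_in, d_out))          # frozen base weight (shared)
A = rng.normal(size=(d_in, rank)) * 0.01    # task-specific adapter factor
B = np.zeros((rank, d_out))                 # zero-init: adapter starts as a no-op

def forward(x, use_adapter=True):
    """Base projection, optionally adjusted by the low-rank adapter."""
    y = x @ W
    if use_adapter:
        y = y + x @ A @ B                   # low-rank update, few extra parameters
    return y

x = rng.normal(size=(1, d_in))
```

The appeal is storage: swapping a few small `(A, B)` pairs per task is far cheaper than keeping multiple full models on a phone.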

To handle the demands of these features efficiently, Apple has incorporated techniques like speculative decoding, context pruning, and grouped-query attention, leveraging the capabilities of Apple Silicon's Neural Engine. Only Macs and iPads with M-series chips, plus the iPhone 15 Pro and Pro Max, support these advanced AI features, reminiscent of advancements seen in Windows devices equipped with NPUs.
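Of those techniques, speculative decoding is the easiest to show in miniature: a cheap draft model guesses several tokens ahead, and the larger target model verifies the guesses in one pass, keeping the prefix it agrees with. The toy "models" below are deterministic stand-ins, not anything Apple has described:

```python
# Hedged sketch of speculative decoding with hypothetical toy models.
def draft_model(ctx):
    # Cheap guesser: next token is last token + 1 (mod 10).
    return (ctx[-1] + 1) % 10

def target_model(ctx):
    # "Accurate" model: same rule, except after a 5 it emits 0.
    return 0 if ctx[-1] == 5 else (ctx[-1] + 1) % 10

def speculative_step(ctx, k=4):
    """Draft k tokens, then keep the prefix the target model agrees with."""
    proposed, c = [], list(ctx)
    for _ in range(k):
        t = draft_model(c)
        proposed.append(t)
        c.append(t)
    accepted, c = [], list(ctx)
    for t in proposed:
        expected = target_model(c)
        if t == expected:
            accepted.append(t)
            c.append(t)
        else:
            accepted.append(expected)     # target's correction ends the step
            break
    else:
        accepted.append(target_model(c))  # bonus token when all drafts match
    return ctx + accepted
```

The output always matches what the target model alone would produce; the draft model just lets several tokens be confirmed per expensive target pass, which is why it helps on power-constrained hardware.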

A significant innovation is Private Cloud Compute (PCC), Apple's specialized cloud server system designed with robust security measures. Requests sent to PCC are encrypted end-to-end, so data remains protected in transit. PCC servers lack persistent storage, and the software they run is verified before devices will talk to them, keeping user data private even during cloud processing.

One open question remains: which requests will trigger cloud processing? An orchestration layer in Apple Intelligence decides whether a task can run locally or requires cloud resources. Additionally, Apple's revamped Siri may use ChatGPT for certain queries, offloading privacy considerations to OpenAI when users make specific requests.
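Apple has not detailed the routing logic, but the described behavior amounts to a three-way split. This hypothetical router (task names and heuristics are assumptions, not Apple's implementation) illustrates the shape of the decision:

```python
# Hedged sketch: route a request on-device, to Private Cloud Compute, or to
# ChatGPT (only with explicit user consent), per the behavior Apple described.
def route(task, needs_world_knowledge=False, user_approved_chatgpt=False):
    on_device_tasks = {"summarize_notification", "proofread", "transcribe_call"}
    if needs_world_knowledge:
        # Broad queries Siri may hand off to ChatGPT, but only if the user agrees.
        return "chatgpt" if user_approved_chatgpt else "declined"
    if task in on_device_tasks:
        return "on_device"
    return "private_cloud_compute"    # complex tasks needing more compute
```

Notably, the consent gate sits before any handoff: in Apple's description, Siri asks the user before sending a query to OpenAI.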

Apple’s approach aims to balance privacy with high-quality AI experiences, a move that contrasts with how Google and Microsoft handle AI processing. The real-world performance of Apple Intelligence will be evaluated once it becomes available later this year.
