You can read about their entire stack; it's not just belief, they are opening it to researchers:
https://security.apple.com/blog/private-cloud-compute/
It's extremely impressive.
That's why I say: if it works in practice the way they present it in theory, I'm fine with it.
Even if they used half the techniques they described in this post, it would already be a huge privacy improvement over the way people are using AI today.
I use my own LLMs on ollama.ai. 100% private and open source.
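For anyone curious, here's a minimal sketch of what that looks like in practice: Ollama serves a local HTTP API on localhost:11434, so prompts and responses never leave your machine. The model name ("llama3") is just an assumption; use whatever model you've pulled.

```python
# Minimal sketch: prompt a local model through Ollama's HTTP API.
# Assumes the Ollama daemon is running on its default port (11434)
# and that a model (here "llama3", an assumption) has been pulled.
import json
import urllib.request

def ask_local_llm(prompt: str, model: str = "llama3") -> str:
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # one JSON response instead of a token stream
    }).encode("utf-8")
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    # Everything stays on your own machine: nothing is sent to a third party.
    print(ask_local_llm("Summarize why local inference is private."))
```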
Again, they may open their stack for use (APIs or whatever), but can you actually audit the code?
Did @NVK steal Will's nsec?