If it’s all in the cloud - yes. If some or most of it runs locally - still yes, with a caveat: hardware acceleration can make it run both faster and at a lower battery cost, the same way every computer has a dedicated graphics chip, whether as a separate expansion card or a module integrated into the CPU. Yes, you technically can do all that math on the CPU - it’s called software rendering - but rendering done with a purpose-built chip is so much better. The same logic applies to “AI”.
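To make the analogy concrete: on Android, the usual way to tap that dedicated silicon is a delegate. Here’s a minimal sketch, assuming a TensorFlow Lite model bundled with the app (the file name and tensor shapes are made up for illustration), showing the same inference run on the CPU vs. handed to the NNAPI delegate, which schedules supported ops onto the NPU/DSP/GPU:

```kotlin
import org.tensorflow.lite.Interpreter
import org.tensorflow.lite.nnapi.NnApiDelegate
import java.io.File

fun main() {
    val modelFile = File("model.tflite") // hypothetical model path

    // "Software rendering" case: everything runs on CPU threads.
    val cpuInterpreter = Interpreter(modelFile, Interpreter.Options().setNumThreads(4))

    // Hardware-accelerated case: NNAPI routes supported ops to the
    // dedicated accelerator if one exists, falling back to CPU otherwise.
    val delegate = NnApiDelegate()
    val npuInterpreter = Interpreter(modelFile, Interpreter.Options().addDelegate(delegate))

    // Same model, same input; only the execution backend differs.
    // Shapes are hypothetical - they must match the actual model.
    val input = Array(1) { FloatArray(128) }
    val output = Array(1) { FloatArray(10) }
    cpuInterpreter.run(input, output)
    npuInterpreter.run(input, output)

    // Interpreters and delegates hold native resources, so close them.
    cpuInterpreter.close()
    npuInterpreter.close()
    delegate.close()
}
```

That’s the whole point of the delegate design, and it’s exactly the GPU analogy: the model doesn’t change, only where the math executes.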
I get what you mean, but there’s currently nothing planned or in the works to run local AI on phones, and they’re still way too demanding. Phones can’t even handle a 7B model if we’re talking about RAM usage alone.
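For rough scale: the weights alone take parameter count times bytes per parameter. A back-of-the-envelope sketch (illustrative numbers only, ignoring activations, KV cache, and runtime overhead):

```kotlin
// GiB needed just to hold the weights of a model at a given precision.
fun weightGiB(params: Double, bytesPerParam: Double): Double =
    params * bytesPerParam / (1024.0 * 1024.0 * 1024.0)

fun main() {
    val sevenB = 7e9
    println("fp16 (2 bytes/param) : %.1f GiB".format(weightGiB(sevenB, 2.0)))  // ~13.0
    println("int8 (1 byte/param)  : %.1f GiB".format(weightGiB(sevenB, 1.0)))  // ~6.5
    println("int4 (0.5 byte/param): %.1f GiB".format(weightGiB(sevenB, 0.5)))  // ~3.3
}
```

Even quantized down to 4 bits, ~3.3 GiB of weights is a huge slice of the 8 GB of RAM a typical flagship ships with, before the OS and other apps take their share.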
Futureproofing, I guess, but also I’m sure there are things other than language models that can benefit from a dedicated processing unit, like photo processing, smaller models, Lens, etc.
You don’t need to buy a new phone for this; it’s all software.
Actually, speech-to-text is done locally on newer Pixel devices. So are audio recognition, camera processing, and a lot of other AI features.
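Android even exposes this directly to apps. A minimal sketch (assuming the RECORD_AUDIO permission is already granted) of requesting speech recognition that runs on-device:

```kotlin
import android.content.Context
import android.content.Intent
import android.os.Build
import android.speech.RecognitionListener
import android.speech.RecognizerIntent
import android.speech.SpeechRecognizer

// Sketch: prefer fully on-device speech recognition where the platform
// supports it (API 31+), otherwise fall back to the default recognizer
// with an offline hint.
fun startLocalSpeechToText(context: Context, listener: RecognitionListener): SpeechRecognizer {
    val recognizer =
        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.S &&
            SpeechRecognizer.isOnDeviceRecognitionAvailable(context)
        ) {
            // Runs entirely on-device; no audio leaves the phone.
            SpeechRecognizer.createOnDeviceSpeechRecognizer(context)
        } else {
            SpeechRecognizer.createSpeechRecognizer(context)
        }
    recognizer.setRecognitionListener(listener)

    val intent = Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH).apply {
        putExtra(
            RecognizerIntent.EXTRA_LANGUAGE_MODEL,
            RecognizerIntent.LANGUAGE_MODEL_FREE_FORM
        )
        // API 23+ hint: do the recognition offline if possible.
        putExtra(RecognizerIntent.EXTRA_PREFER_OFFLINE, true)
    }
    recognizer.startListening(intent)
    return recognizer
}
```

On Pixels the on-device path is backed by the shipped recognition model, which is what makes dictation work in airplane mode.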
AI isn’t just ChatGPT, and basically every device from the last 4 years has a dedicated AI chip.