When it comes to language translation, the trend seems to be that the biggest ML models do best. Meta's AI lab, which arguably has the best translation models, found that training one giant neural network to translate between any pair of its ~1k supported languages performs much better than a model trained for a single language pair (like English <-> Spanish only). Anyway, this is all to say that maybe the tradeoff between translation quality and on-device compute has been the blocker?