“LLMs can only complete sentences” is true of base models, but fine-tuning for instruction following with RLHF has been a thing for 3-4 years at this point. I’m talking about reading news with commands like “What’s the summary of this article?” or “Alright, save that for later and go to the next one.” You don’t necessarily have to use LLMs, but so far they seem like the easiest way to understand relatively complex commands and call functions.
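For illustration, the function-calling side can be as simple as mapping the model’s structured output onto handlers. This is just a sketch of the dispatch step, with made-up tool names and a hard-coded JSON string standing in for what an instruction-tuned model would actually emit:

```python
import json

# Toy news-reader state; article bodies are placeholders.
ARTICLES = ["Article A body...", "Article B body..."]
state = {"index": 0, "saved": []}

def summarize_current():
    # A real app would feed the article text back to the model here.
    return f"Summary of: {ARTICLES[state['index']][:20]}"

def save_and_next():
    state["saved"].append(state["index"])
    state["index"] += 1
    return f"Saved. Now on article {state['index']}"

# Registry the model's tool calls are resolved against.
TOOLS = {"summarize_current": summarize_current, "save_and_next": save_and_next}

def dispatch(model_output: str) -> str:
    """Parse a JSON tool call emitted by the model and run the handler."""
    call = json.loads(model_output)
    return TOOLS[call["name"]]()

# In practice the model would produce this JSON in response to
# "Alright, save that for later and go to the next one."
print(dispatch('{"name": "save_and_next"}'))  # Saved. Now on article 1
```

The hard part, recognizing the intent behind free-form speech, is exactly what the LLM handles; everything after that is ordinary plumbing like the above.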