 #Amazon has released details on its Alexa #LLM, which will use its constant surveillance data to "personalize" the model. Like #Google, they're moving away from wake words toward triggering Alexa contextually - whenever the assistant "thinks" it should be responding - which of course requires continual processing of speech for its content, not just listening for a single word.
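To make the distinction concrete, here's a toy sketch of why contextual triggering implies content surveillance in a way wake-word triggering doesn't. Everything here is an illustrative assumption of mine - the function names and the cue heuristic are made up, not Amazon's implementation:

```python
# A wake-word trigger only needs to spot one token; a "contextual" trigger
# has to interpret the meaning of *everything* said. The cue list below is a
# toy stand-in for a model deciding the assistant "should" respond.

WAKE_WORD = "alexa"

def wake_word_trigger(transcript: str) -> bool:
    # Checks for a single keyword; no analysis of the content needed.
    return WAKE_WORD in transcript.lower().split()

def contextual_trigger(transcript: str) -> bool:
    # Must process the full content of the speech, wake word or not.
    cues = ("what time", "remind me", "play some")
    return any(cue in transcript.lower() for cue in cues)

utterances = [
    "alexa what time is it",     # wake word present
    "remind me to call mom",     # no wake word, triggered on content alone
    "we should get dinner soon", # neither, but still had to be analyzed
]
for u in utterances:
    print(u, "->", wake_word_trigger(u), contextual_trigger(u))
```

The asymmetry is the point: the second utterance triggers only because the system read its content, and the third proves every utterance gets read.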

The consumer page suggests user data is "training" the model, but the developer page describes exactly the augmented-LLM, iterative-generation process grounded in a personal knowledge graph that Microsoft, Facebook, and Google all describe as the next step in LLM tech.

https://developer.amazon.com/en-US/blogs/alexa/alexa-skills-kit/2023/09/alexa-llm-fall-devices-services-sep-2023
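For anyone unfamiliar with the pattern, the grounded, iterative generation loop looks roughly like this. This is a minimal sketch under my own assumptions - the toy graph, the stub model, and all names are hypothetical, not anything from Amazon's developer docs:

```python
# Sketch of the "augmented LLM" pattern: iterative generation grounded in a
# personal knowledge graph. The model asks for facts; the loop retrieves
# them from the graph and re-prompts until an answer comes out.

# Toy personal knowledge graph: (subject, relation) -> object.
personal_kg = {
    ("user", "calendar_today"): "dentist at 3pm",
    ("user", "favorite_artist"): "Nina Simone",
}

def kg_lookup(subject, relation):
    """Retrieve a grounding fact from the personal knowledge graph."""
    return personal_kg.get((subject, relation))

def stub_llm(prompt, facts):
    """Stand-in for the language model: returns either a request for a
    missing fact or a final answer grounded in the retrieved facts."""
    if ("user", "calendar_today") not in facts:
        return {"need": ("user", "calendar_today")}
    return {"answer": f"You have: {facts[('user', 'calendar_today')]}"}

def answer(prompt, max_rounds=3):
    """Iterative generation loop: generate, retrieve, re-prompt."""
    facts = {}
    for _ in range(max_rounds):
        out = stub_llm(prompt, facts)
        if "answer" in out:
            return out["answer"]
        subj, rel = out["need"]
        facts[(subj, rel)] = kg_lookup(subj, rel)
    return "Sorry, I couldn't ground that."

print(answer("What's on my schedule?"))
```

Note what the loop requires to work at all: a continuously maintained graph of your life for the model to ground itself in. That's the surveillance dependency, not an add-on.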

We can no longer think of LLMs on their own when we consider these technologies; that era was brief and has passed. I've been waving my arms up and down about this since ChatGPT was released: criticisms of LLMs that stop short at their current form, arguing about whether the language models themselves can "understand" language, miss the bigger picture of what they are intended for. These are surveillance technologies that act as interfaces to knowledge graphs and external services, putting a human voice on whole-life surveillance.

https://jon-e.net/surveillance-graphs/#the-near-future-of-surveillance-capitalism-knowledge-graphs-get-chatbots

#SurveillanceGraphs

https://neuromatch.social/system/media_attachments/files/111/156/283/172/381/510/original/6e068e3dab67d719.jpg

https://neuromatch.social/system/media_attachments/files/111/156/283/227/717/432/original/95e24f091882c224.jpg

https://neuromatch.social/system/media_attachments/files/111/156/283/315/772/081/original/396b334e57989e53.jpg

https://neuromatch.social/system/media_attachments/files/111/156/283/387/702/826/original/db6654fb47cf9928.jpg