Just saw an insanely consistent new method (like 3 days old) for creating an animated, persistent character from a prompt and a reference image.
i.e., take a picture of me, and then you can animate me in video doing whatever you want, just by pose-tracking someone else.
ChatGPT has barely been public knowledge for a year, and I've watched maybe 10 major step improvements in image diffusion animation in that time. Literally 6 months ago it was a breakthrough to get anime-level consistency in video. The Corridor Digital guys helped develop a method for it. Now I know of 3-4 major text-to-video models, and I've used a couple of really interesting ones that handle short 1-2 second movements. But in a matter of weeks the game is changing again.
Then on top of that, I just found an open source tool that lets me run any LLM I want locally, serve it as a server, and point any program that uses the OpenAI API at my local instance instead, because the API is interchangeable.
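For anyone curious what the redirect actually looks like: here's a minimal sketch, assuming a local server (something like llama.cpp's server or LM Studio) exposing an OpenAI-compatible endpoint on localhost. The port, model name, and API key below are placeholders, not from any specific tool; match them to whatever your local server reports.

```python
from openai import OpenAI

# Point the standard OpenAI client at the local server instead of api.openai.com.
# base_url and model are assumptions for illustration; use your server's values.
client = OpenAI(
    base_url="http://localhost:8080/v1",  # local OpenAI-compatible endpoint
    api_key="not-needed-locally",         # many local servers ignore the key
)

response = client.chat.completions.create(
    model="local-model",  # placeholder; use the model name your server exposes
    messages=[{"role": "user", "content": "Say hello from my own hardware."}],
)
print(response.choices[0].message.content)
```

For programs you can't edit, the same trick usually works without touching code: the official openai Python client (v1+) reads an OPENAI_BASE_URL environment variable, and many apps expose a base-URL setting for the same reason.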
In like 3 days I have a million things to explore.