Just watched an insanely consistent new methodology (like 3 days old) for creating an animated, persistent character from a prompt and a reference image. i.e. take a picture of me, and then you can animate a video of me doing whatever just by pose-tracking someone else. ChatGPT has barely been public knowledge for a year, and I’ve watched maybe 10 major step improvements in image diffusion animation. Literally 6 months ago it was a breakthrough to have anime-level consistency in video; the Corridor Digital guys helped develop a method for it. Now I know of 3-4 major text-to-video models, and I’ve used a couple of really interesting ones that work on small 1-2 second movements. But in a matter of weeks the game is changing again. Then on top of that, I just found an open source tool that lets me run any LLM I want locally, serve it as a server, and use it with any program built on the OpenAI API by simply redirecting it to my local instance, because the API is interchangeable. In like 3 days I have a million things to explore.
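The local-server redirect trick is just that the local tool speaks the same HTTP API as OpenAI, so you point the base URL at localhost. A minimal sketch of what such a request looks like, assuming a local OpenAI-compatible server (e.g. llama.cpp's server, LM Studio, or Ollama's OpenAI endpoint) listening at `http://localhost:8080/v1` (the host, port, and model name here are assumptions, not from the post):

```python
import json
import urllib.request

# Any OpenAI-compatible local server works here; the URL is an assumption.
BASE_URL = "http://localhost:8080/v1"

def build_chat_request(prompt, model="local-model"):
    """Build the same JSON payload the OpenAI chat API expects.

    Because the request shape is identical, any program written against
    the OpenAI API keeps working once its base URL is redirected to the
    local server.
    """
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Actually sending it requires the local server to be running:
# with urllib.request.urlopen(build_chat_request("Hello")) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

Official OpenAI client libraries also accept a configurable base URL, so the same redirect works without hand-building requests.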
Shit is moving so fast it’s insane. A little scary if I’m honest.
Link???
https://youtu.be/8PCn5hLKNu4?si=ZTEkrnod5NyPGnZn
That whole actors’ guild strike is going to look real dumb in about 18 months when actors are no longer required.
In 3 years I fully bet that I’ll be able to write an entirely different storyline for “The Force Awakens” and “The Last Jedi” and use the same actors to make a completely different film, with basically no crew at all, or, for the most difficult sections, with at most 2-3 people to shoot or animate with. … And I hate how badly they blew the opportunity with those films, so much that I might actually do it. Then I’ll probably tackle “The Last Airbender” afterward 🤣
I’d watch it. Redo Solo while you’re at it lol