We shall see. Some applications I am excited about, like fixing or extending shots, but the fully generative stuff is certainly a bit scary. I am sure some commercials will just be done generatively, and that will likely take some work away from editors like me.
This is an interesting article from Emma Roth at The Verge. Adobe’s technical previews are amazing, and I hope this one hits the mainstream Creative Cloud sooner rather than later, as we could all use Project Sound Lift.
Not unexpected but still disgusting. Using people’s personal photos to train AI. Just disgusting. Zuckerberg should be paying us for the money he makes off of us.
From Alex Baker at DIYPhotography. This is about MGIE, or Multimodal Large Language Model Guided Image Editing, which enables text-based image editing. It is currently an open source project, but adding this to Siri on an iPhone could be very powerful, and a huge deal for AI on the iPhone.
There is some amazing stuff there, but the generative AI certainly scares me a bit. I mean, who will shoot stock footage when you can just generate new shots? And since Adobe’s generative AI is trained on stock footage, will we see diminishing returns and higher costs (at the least in electricity and processing power, but also in the subscriptions to the various AI models that every editor will need access to)?
I can see many DR spots wanting AI-generated B-roll and wanting the editor to foot the subscription bill, so they will basically be getting free B-roll.
In some ways I am excited, but still, it is crazy what is going to be happening soon. And what if your internet goes down? After seeing a piece on using Sora to make a short, and just how badly it responded to filmmaking terms, it is clearly not going to be the panacea we think it is.
Very interesting, as I have been fascinated with what a light field camera could do, and I bet better AI will resurrect light field cameras in some way at some point.