Late last night I got into the Adobe Firefly Generative AI Beta and started playing around

This week Adobe announced their new Firefly Generative AI Beta, which you can sign up for. I signed up on the first day and got in around 8 PM last night. I was playing on my iPad last night (which was fun), though the download button seems a little funky on iOS and it didn't download everything that I asked it to. So I played some more on my Mac today.

There are currently two active modules, Text to Image and Text Effects.

My name as a text effect with humpback whales, with the effect set to loose on the characters. You can see the "Created by Firefly Beta, not for commercial use" watermark in the corner.
And my mom’s name done in orchids
And my wife's name with a tight Steampunk effect.

The type effects are very cool and fast. My biggest complaint is that it can't do one plant or animal per letter; it uses them more as textures than as objects. But still, I will be using this quite a lot for certain things.

And as for Text to Image, you certainly have to get the prompt right.

For "1960s psychedelic guitar playing man with beard onstage at Woodstock," it went for a modern and disturbing image.

Fairly impressive hands, though those sunglasses are odd, and I'm not sure about the hair in some.

I then tightened it to "psychedelic guitar playing man with beard on stage at Woodstock in 1968" and things got more interesting.

Love the first and 3rd. I then went for graphic instead of photo with the same prompt.

The 2nd one is much more interesting this way. The 3rd has eyes like Floyd from the Muppets, or maybe the Grateful Dead, no idea, but creepy. And the 4th is pretty damn fantastic, though strangely it's a 7-string guitar with only 4 strings.

And then went for an art variation.

These are all pretty damn amazing!

And I do love the controls for image refinement.

And I tried to do some imagery for my short film THE MISADVENTURES OF BEAR, and while I do love what it came up with, it certainly isn't listening to the "dark hallway at night" part of the prompt. It is a hallway, but it is certainly lit, and I can't seem to get it to go dark. The first 4 are art, and then photo afterwards.

And the next are photo.

And then I stayed on photo but tried to go darker, setting color and tone to cool and lighting to low lighting.

And then I changed the prompt to "down a hallway in the dark" to see if that would do it. It didn't, but I do like the clothing variations it added.

So wow, a really powerful tool that I will keep playing with, but it doesn't seem to know how to do low-light or unlit images, or day for night.

Tangent releases a Beta of its Warp Engine for color grading panel mapping

Tangent Beta Program – Tangent : Tangent

This new engine in the beta allows you to do custom mapping not only for DaVinci, but for other programs that don't natively support the panel, though you must go in and program the positions on screen, as it is basically faking mouse moves. So it's not full customization (that isn't possible unless Blackmagic allows it, which is highly unlikely).

And everyone will need to program it for their own monitor setup, which is a pain, but once programmed it will work.
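To make the "faking mouse moves" idea concrete, here is a minimal, hypothetical sketch: each panel control gets assigned an on-screen coordinate, and an encoder delta is translated into a synthetic mouse drag at that spot. The control names, coordinates, and gain below are all illustrative assumptions, not Tangent's actual mapping format.

```python
# Hypothetical sketch of panel-to-mouse mapping, not Tangent's actual engine.
# Each control name maps to the on-screen position of the slider it drives,
# which is why every user must program positions for their own monitor setup.
CONTROL_MAP = {
    "lift_wheel":  (420, 880),    # screen x, y (illustrative values)
    "gamma_wheel": (960, 880),
    "gain_wheel":  (1500, 880),
}

GAIN = 2  # pixels of mouse travel per encoder tick (illustrative)

def drag_for(control, ticks):
    """Translate an encoder delta into a mouse drag: (start_x, start_y, dx, dy)."""
    x, y = CONTROL_MAP[control]
    # Vertical drag: positive ticks push the on-screen slider up (negative dy).
    return (x, y, 0, -ticks * GAIN)

# A real implementation would then issue this drag with an OS-level mouse
# API at those coordinates; the app being controlled just sees mouse input.
print(drag_for("gamma_wheel", 5))
```

This also shows why the approach breaks if windows move or the monitor layout changes: the coordinates are absolute, so the mapping has to be reprogrammed per setup.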

Adobe’s AI art generation tool Firefly enters Beta

And you can currently sign up for the beta on the Firefly site right here.

AI has made such an explosion lately, especially Midjourney, which I so want to play with but don't really want to pay for just to play. This Adobe version seems ideal for me as someone who already subscribes to Creative Cloud, and it generates elements that can be used within Adobe products.

And the Text Effect seems really amazing.

I really want to see one of these where you can feed your own images in and get variations out (I do believe Midjourney does that).

No Film School on how you should use ProRAW vs. ProRes on an iPhone

James DeRuvo from No Film School has a great article on ProRAW for photos and ProRes for video on the iPhone.

The one thing that doesn't get touched on, which always drives me nuts about the iPhone, is not only that the video frame rate tends to drift, but also that there is a 24 fps option but not 23.976. I know 24 is film, but most people shoot everything for video, which is 23.976 in the US, and most of Apple's stuff should be for TV.
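For context on why that missing rate matters: "23.976" is shorthand for the NTSC-derived rate 24000/1001 fps, so true 24 fps footage conformed to it slowly drifts against a broadcast timeline. A quick back-of-the-envelope check of the drift:

```python
from fractions import Fraction

# "23.976" is really the NTSC-derived rate 24000/1001 fps.
NTSC_24 = Fraction(24000, 1001)

# An hour of true 24 fps footage, played back at 24000/1001 fps,
# runs this many seconds longer than an hour:
frames = 24 * 3600                  # frames in one hour at true 24 fps
drift = frames / NTSC_24 - 3600     # extra seconds at the NTSC rate
print(float(drift))                 # 3.6 seconds per hour
```

That 3.6 seconds per hour is exactly why a camera that only offers true 24 is a headache for anyone cutting against 23.976 video and synced audio.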

Got an e-mail from Maxon: Red Giant's PluralEyes is to enter Limited Maintenance Mode

Got this in an e-mail from Maxon: basically, PluralEyes from Maxon (formerly Red Giant) is no longer being developed.

Now this was a revolutionary program that would sync your audio to video (if it had reference audio), and it was the first of its kind. Now that it is going away, keep in mind that every NLE has waveform audio syncing built in, so it hasn't been necessary for years. But it is still a little sad to see it go.
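The waveform-matching idea behind this kind of sync can be sketched with cross-correlation: find the lag at which the camera's scratch audio best lines up with the external recorder's track. This is the general technique, not PluralEyes' actual algorithm, and the function name is just illustrative:

```python
import numpy as np

def sync_offset(reference, target, sample_rate=48000):
    """Estimate how many seconds `target` lags `reference` by
    finding the peak of their cross-correlation."""
    corr = np.correlate(target, reference, mode="full")
    lag = int(np.argmax(corr)) - (len(reference) - 1)
    return lag / sample_rate

# Example: the same audio, delayed by 480 samples (0.01 s at 48 kHz).
rng = np.random.default_rng(0)
reference = rng.standard_normal(4800)                # 0.1 s of "scratch" audio
target = np.concatenate([np.zeros(480), reference])  # recorder track, 480 samples late
print(sync_offset(reference, target))  # 0.01
```

Shift one clip by the reported offset and the waveforms line up, which is essentially what the built-in sync in modern NLEs does for you.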

Still, let's hope this means Maxon has more time to develop new tools for its plug-in suites.