Category Archives: technology

Real or Fake?

I’ve been looking at this picture for 10 minutes.  It’s being presented to me on Facebook as a behind-the-scenes picture from Star Wars (1977).  I suspect this is AI… maybe…  Here’s why:

1st – I’ve never seen this picture.  I’ve seen just about every image, behind the scenes or otherwise, from the original Star Wars.  Every once in a while a new one does pop up, though.

2nd – The Death Star doesn’t look round. It looks egg shaped to me.

3rd – What are they doing?  There’s no reason to hold a bounce card that close to a model, let alone curve it.  It could be some sort of ad hoc preliminary light test, though.

4th – Where’s the light coming from? If the guy on the left is holding a piece of diffusion the light should be coming from behind him. The light seems to be coming from the sheet he’s holding. Flexible flat panel lighting didn’t exist in 1977.

The one thing that makes me think it might be real:

The Mole-Richardson wheeled light stand (with the painted brown parts) is mostly obscured by the Death Star, yet the legs and wheels appear to be in the proper position.  This is something that AI will almost always screw up.

This is where we are. Is it real or fake? How much of the rest of our lives will be spent trying to figure out things like this?

Can AI draw a Red Ring of Light?

Recently I was experimenting with the Midjourney AI art engine.  I saw an image in my mind of a robot backlit by a red ring-light.  I typed it up as a prompt:

An intensely bright thin red ring light in the distance, a woman robot in silhouette, on an abstract shiny metal plate surface, hyper realistic, cinematic, dense atmosphere, intense, dramatic, hyper detailed, --ar 2:1 --v 5

I expected to get something like the image above.  That’s not what happened.  For the next hour I tried to get Midjourney to build something even close to what I envisioned.  I typed and re-typed the prompt, changing the way I described the image.  Most of the time I couldn’t even get a red light ring.  Midjourney kept trying to make a “sun” with a red sky.  There are round portal structures, some even reflecting red light, but almost none of them light up.  The light’s coming from somewhere else.

What I asked for was simple.  Why is this so hard?

I’m guessing it has to do with the data set the AI was trained on.  I bet there aren’t that many images of red light rings in there, maybe none at all.

One of the things that frustrates me about AI art is the way most things turn out looking generic, like everything you’ve seen a million times before.  This makes sense of course, because that’s how it works.  It studies what everything looks like and then creates from that.  It’s almost a creation by consensus.  An unusual Red Ring isn’t part of the equation.  I could probably eventually get to what I wanted if I kept trying and perhaps made the prompts much longer, describing every detail.  Maybe.

Or I could do what I did and create what I saw in my mind with CGI.

Has this put me off AI art?  No.  Every tool is good at what it’s good at, and not at what it’s not.  I was looking for the edge of what this new tool could do (because that’s where the art is) and I found it.  There’s nothing really interesting right here, but there’s a lot more to discover…

We Are the Dreamers of the Dream

Building the Metaverse to match the real world.

I made this CGI image about a year ago when the Metaverse was the shiny new tech thing.  Most people probably won’t get what it’s about so I’ll explain it, even though David Lynch would probably scold me for doing that.

The grey and chrome spheres are tools that special effects artists use to match 3D computer graphics to real-world photography.  If you are shooting a film, for example, and part of the scene will be CGI, you shoot a few extra feet of the environment with someone holding a grey and chrome sphere.  The chrome sphere reflects the entire environment, and that reflected image can be “unwrapped” and placed as a dome over the CGI so the same light and colors shine on the computer-generated elements as in the real scene.  (The chrome ball is actually an old-fashioned “poor man’s” way of doing this.  There are 360-degree cameras now that can just take a picture of the entire environment right on the set.)

The grey sphere shows the quality of light shining on a specific place in the shot.
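For the technically curious, here is a minimal sketch of that “unwrapping” step: turning a photo of a mirror ball into an equirectangular (lat-long) environment map with NumPy and Pillow.  This is not my actual pipeline (that all happens inside the 3D software), and the file names, square crop, and head-on framing of the ball are assumptions:

```python
import numpy as np
from PIL import Image

def unwrap_mirror_ball(ball_img: np.ndarray, out_w: int = 1024, out_h: int = 512) -> np.ndarray:
    """Convert a square, head-on photo of a chrome ball into a lat-long environment map."""
    h, w = ball_img.shape[:2]

    # World direction for every pixel of the output lat-long map.
    phi = (np.arange(out_w) + 0.5) / out_w * 2 * np.pi - np.pi   # longitude, -pi..pi
    theta = (np.arange(out_h) + 0.5) / out_h * np.pi             # polar angle from straight up
    phi, theta = np.meshgrid(phi, theta)
    rx = np.sin(theta) * np.sin(phi)
    ry = np.cos(theta)
    rz = np.sin(theta) * np.cos(phi)                             # +Z points toward the camera

    # The ball's surface normal is the half-vector between the reflected direction
    # and the view direction (0, 0, 1); its x/y components give the ball pixel.
    norm = np.sqrt(rx**2 + ry**2 + (rz + 1.0)**2) + 1e-8
    nx, ny = rx / norm, ry / norm

    px = np.clip(((nx + 1) / 2 * (w - 1)).astype(int), 0, w - 1)
    py = np.clip(((1 - ny) / 2 * (h - 1)).astype(int), 0, h - 1)

    # Note: the area directly behind the ball is a blind spot; real pipelines
    # shoot the ball from two angles and blend to fill it in.
    return ball_img[py, px]

# Hypothetical usage:
# ball = np.asarray(Image.open("chrome_ball.jpg"))
# Image.fromarray(unwrap_mirror_ball(ball)).save("environment_latlong.jpg")
```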

This image is about building a computer generated Metaverse that people can walk around in, just like real life.  It’s the dream of constructing a Metaverse as well as the Metaverse as a dreamscape… the birth of a new alternate world.

OK, enough of that…

The main difficulty I had in creating this image stems from the fact that the reflection in the chrome ball is actually the real reflection of the CGI environment.  It’s not a composite.  The mountains actually go all the way around the environment.  The grid floor goes out in all directions.  The “planet” and the “sun” seen in the ball are on the HDRI dome over the scene that is creating the ambient light.  The dome had to be lined up so the “planet” reflected in the sphere correctly.  The mountains had to be rotated in such a way that the peaks behind her and the ones in the chrome ball both looked good.  The main light in the scene is the keylight on the character, which can be seen in the chrome sphere as a bright rectangle in the upper left of the sphere.  I could have removed that in post, but I left it in because that’s the point.

The metaverse was supposed to be the future of everything.  Facebook even changed their name to Meta.  Now AI is the new thing.  Will the metaverse be created?  Will AI create the metaverse for us?  Who knows…

Created in DAZ Studio 4.20
Rendered with Iray
Color Correction in Capture One

Deus Est Machina

I’m sure many would reject this AI render because the hands are all screwed up.  The machine has a hard time with hands because it’s learning from pictures and it doesn’t really understand what a hand is.  Hands look different depending on what they are holding in the training images so the machine gets confused.

For me, the praying hands in this image are the point.  I’m fascinated by the way AI can make something so creepy and wrong look relatively normal at first glance.  She also has hair coming out of her eyes.  One piece looks like it’s a disconnected continuation of a shadow on her forehead.

Great art happens at the edge of the medium you are working with.  Finding those edges is the true work of the artist.

Whatever you now find weird, ugly, uncomfortable and nasty about a new medium will surely become its signature. CD distortion, the jitteriness of digital video, the crap sound of 8-bit – all of these will be cherished and emulated as soon as they can be avoided. It’s the sound of failure: so much modern art is the sound of things going out of control, of a medium pushing to its limits and breaking apart. The distorted guitar sound is the sound of something too loud for the medium supposed to carry it. The blues singer with the cracked voice is the sound of an emotional cry too powerful for the throat that releases it. The excitement of grainy film, of bleached-out black and white, is the excitement of witnessing events too momentous for the medium assigned to record them. — Brian Eno

#Art I made with #Midjourney #AI

I ask ChatGPT-4 to write a 2 minute movie script. Is it any good?

(All illustrations created with Midjourney versions 4 and 5 from the following ChatGPT scripts)

ChatGPT just updated to the highly anticipated version 4.  I asked it to write a 2 minute movie script.  This was my prompt:

Can you write a two page movie script about an alien robot from outer space that lands on earth in a flying saucer and doesn’t understand humans?

ChatGPT wrote this, faster than I could read it:

FADE IN:

EXT. DESERT – DAY

The vast desert landscape stretches out as far as the eye can see. Suddenly, a bright flash of light illuminates the sky, followed by a loud explosion. A flying saucer crashes to the ground, sending sand flying in all directions.

INT. FLYING SAUCER – DAY

The interior of the saucer is a maze of blinking lights and unfamiliar technology. A sleek, metallic robot emerges from the wreckage, looking around in confusion.

ROBOT (whirring) What is this place? Where am I?

The robot steps out of the saucer and onto the desert sand. It takes a moment to survey its surroundings, scanning the horizon with its advanced sensors.

ROBOT (whirring) There appear to be life forms on this planet. I must learn more.

The robot sets off towards a nearby town, its metallic feet clanking against the ground.

INT. TOWN – DAY

The robot arrives in the town, its glowing red eyes scanning the buildings and people. It approaches a group of humans, who stop and stare in shock.

ROBOT (whirring) Greetings, inhabitants of this planet. I am an alien robot from outer space. I seek to understand your world.

The humans look at each other in confusion, unsure of how to react.

HUMAN 1 (stammering) Uh, hello there. Can we help you with something?

ROBOT (whirring) I require information on your culture, your language, your customs.

HUMAN 2 (skeptically) Why do you want to know all that?

ROBOT (whirring) I am an explorer. I seek knowledge about this planet and its inhabitants.

The humans exchange wary glances, but one of them steps forward.

HUMAN 3 (cautiously) Well, I guess we could show you around. But you gotta promise not to hurt anyone, okay?

ROBOT (whirring) I do not wish to cause harm. I only seek to learn.

The humans nod hesitantly, and the robot follows them into the town.

EXT. TOWN – DAY

The robot walks through the streets of the town, observing the buildings, the people, and the strange customs they exhibit. It tries to communicate with the humans, but its language is too advanced for them to understand.

ROBOT (whirring) I cannot comprehend your language. Can you teach me?

HUMAN 1 (slowly) Uh, sure. We can try.

The humans attempt to teach the robot their language, but it proves to be a difficult task.

ROBOT (whirring) This is a challenging task. I must learn quickly if I am to understand your world.

HUMAN 2 (sympathetically) Don’t worry, we’ll help you as much as we can.

The robot continues to explore the town, learning more and more about the strange and fascinating culture of these human beings. It begins to understand their ways and their customs, and even starts to develop an appreciation for their quirks and idiosyncrasies.

ROBOT (whirring) Your planet is fascinating. I have much to learn from you.

HUMAN 3 (smiling) We’re happy to help. And who knows? Maybe we can learn from you too.

The robot nods, its metallic face glowing in the sunlight.

ROBOT (whirring) Indeed. There is much to be gained from this encounter.

FADE OUT.

Wow!  The screenplay format looks correct except for the tabs.  It’s not a good script but it exists.  That’s something.  And it looks to be about two pages like I asked.

There are some logic problems.  In one scene the Robot says, “I cannot comprehend your language. Can you teach me?” after having a complete conversation with the humans in the scene before.

Another problem I just noticed but missed in my first reading was the setting INT. TOWN – DAY.  Not sure what that is or how it’s different from the next scene EXT. TOWN – DAY.

I wonder if ChatGPT will make revisions.  I propose some changes:

First revision

Can you change the script so the robot lands in a city in the beginning? And he meets a human named Chad?

A simple setting change for the beginning, and a more complex character change for the body without any guidance on what to do with Chad.  The response:

Sure, here’s an updated version of the script with those changes:

FADE IN:

EXT. CITY – DAY

The bustling city is alive with activity, with skyscrapers towering overhead and crowds of people rushing about their business. Suddenly, a bright flash of light illuminates the sky, followed by a loud explosion. A flying saucer crashes to the ground, sending debris flying in all directions.

INT. FLYING SAUCER – DAY

The interior of the saucer is a maze of blinking lights and unfamiliar technology. A sleek, metallic robot emerges from the wreckage, looking around in confusion.

ROBOT (whirring) What is this place? Where am I?

The robot steps out of the saucer and onto the bustling city streets. It takes a moment to survey its surroundings, scanning the towering buildings and bustling crowds with its advanced sensors.

ROBOT (whirring) There appear to be life forms on this planet. I must learn more.

The robot sets off towards a nearby street, its metallic feet clanking against the pavement.

INT. STREET – DAY

The robot arrives on the crowded city street, its glowing red eyes scanning the buildings and people. It approaches a young man named CHAD, who is walking down the street with his headphones on.

ROBOT (whirring) Greetings, inhabitant of this planet. I am an alien robot from outer space. I seek to understand your world.

Chad stops in his tracks, his eyes widening in shock.

CHAD (stammering) Uh, what? Are you serious?

ROBOT (whirring) Yes, I am. I require information on your culture, your language, your customs.

Chad hesitates for a moment, but then shrugs and decides to go along with it.

CHAD (skeptically) Okay, sure. I guess I can show you around.

ROBOT (whirring) Thank you. Your cooperation is appreciated.

Chad leads the robot through the crowded city streets, pointing out the various buildings and landmarks as they go.

CHAD (excitedly) So, you’re really from outer space? What’s it like up there?

ROBOT (whirring) My home planet is vastly different from this one. The technology, the culture, the language – everything is different.

CHAD (nodding) Yeah, I guess that makes sense. So, what do you want to know about our world?

ROBOT (whirring) Everything. I must understand your ways if I am to communicate with you.

Chad spends the next few hours showing the robot around the city, explaining the customs and language of the human race as best he can. Despite the robot’s advanced sensors and intelligence, it still struggles to fully comprehend the complexities of human culture.

ROBOT (whirring) This is a challenging task. Your customs and language are vastly different from anything I have ever encountered.

CHAD (sympathetically) Yeah, I can imagine. But you’re doing great, man. Just keep at it, and you’ll get the hang of it eventually.

The robot nods, its metallic face glowing in the sunlight.

ROBOT (whirring) Thank you for your assistance. I am grateful for your help in this matter.

Chad grins, feeling a sense of pride at having helped an alien robot understand the ways of his world.

CHAD (smiling) No problem, dude. It’s been a wild ride.

FADE OUT.

OK…  First thing I notice in this first revision is that not only did ChatGPT change the first scene to a city, it also remembered to change the description after the robot steps out of the saucer:  “…onto the bustling city streets. It takes a moment to survey its surroundings, scanning the towering buildings and bustling crowds with its advanced sensors.”  Good job; however, “the bustling crowds” don’t seem to react to the crashing flying saucer or the alien robot.  That’s odd, to say the least.

“INT. STREET – DAY”  It changed the setting to STREET but left the confusing Interior.

I was impressed to see it capitalized CHAD the first time the name appears in the script, which is proper screenplay format.  Also, Chad refers to the Robot as “dude” at the end, which I think is how someone named Chad would talk.  Is that characterization?

Second revision

Can you change the story so that the robot can only speak and understand musical tones and Chad is a musician and starts to understand the robot?

For this revision I’m bringing back the communication barrier and suggesting a way it can be resolved.  I’m trying to make the changes more complex and integrated into the story to see if ChatGPT can keep up.

Sure, here’s an updated version of the script with those changes:

FADE IN:

EXT. CITY – DAY

The bustling city is alive with activity, with skyscrapers towering overhead and crowds of people rushing about their business. Suddenly, a bright flash of light illuminates the sky, followed by a loud explosion. A flying saucer crashes to the ground, sending debris flying in all directions.

INT. FLYING SAUCER – DAY

The interior of the saucer is a maze of blinking lights and unfamiliar technology. A sleek, metallic robot emerges from the wreckage, looking around in confusion.

ROBOT (whirring) What is this place? Where am I?

The robot steps out of the saucer and onto the bustling city streets. It takes a moment to survey its surroundings, scanning the towering buildings and bustling crowds with its advanced sensors.

ROBOT (whirring) There appear to be life forms on this planet. I must learn more.

The robot sets off towards a nearby street, its metallic feet clanking against the pavement.

INT. STREET – DAY

The robot arrives on the crowded city street, its glowing red eyes scanning the buildings and people. It approaches a young man named CHAD, who is playing guitar on the street corner for tips.

ROBOT (whirring) Greetings, inhabitant of this planet. I am an alien robot from outer space. I can only speak and understand musical tones. I seek to understand your world.

Chad stops playing and looks up at the robot in surprise.

CHAD (intrigued) Musical tones, huh? Well, lucky for you, I’m a musician.

ROBOT (whirring) Excellent. I require information on your culture, your language, your customs, all conveyed through music.

Chad grins, intrigued by the challenge.

CHAD (eagerly) Alright, let’s get started.

Chad leads the robot through the crowded city streets, playing different melodies and rhythms to convey various aspects of human culture. The robot listens intently, analyzing the music and trying to decipher its meaning.

ROBOT (whirring) Fascinating. Your culture is so complex, so nuanced. It is difficult to fully comprehend through music alone.

CHAD (smiling) Yeah, I know what you mean. But that’s the beauty of it, right? Music can express things that words can’t.

ROBOT (whirring) Indeed. Your music is a powerful tool for communication.

Chad spends the next few days working with the robot, using his musical skills to help the robot better understand the complexities of human culture. As the days pass, Chad begins to notice something strange – he’s starting to understand the robot’s musical language too.

CHAD (amazed) Hey, wait a minute. I think I’m starting to understand you too.

ROBOT (whirring) You are? That is unexpected.

CHAD (nodding) Yeah, I don’t know how it’s happening, but I’m starting to hear patterns in your music. It’s like my brain is adapting to your language.

ROBOT (whirring) This is remarkable. It appears that our two cultures are not so different after all.

Chad and the robot continue their musical conversation, each learning more about the other’s culture and language. As they play together, a small crowd begins to gather, intrigued by the unusual duo.

FADE OUT.

Oh boy…  The Robot “can only speak and understand musical tones,” yet it announces this to Chad in plain English!  I think that’s called an EPIC FAIL.

OK, but this is an opportunity.  Will ChatGPT understand that this latest revision doesn’t work and go back to the previous version?  Let’s see…

Third revision

OK, that didn’t work.  Go back to the previous version and change it so Chad is with his girlfriend Susan who falls in love with the robot.

ChatGPT responds:

Sure, here’s an updated version of the script with those changes:

FADE IN:

EXT. CITY – DAY

The bustling city is alive with activity, with skyscrapers towering overhead and crowds of people rushing about their business. Suddenly, a bright flash of light illuminates the sky, followed by a loud explosion. A flying saucer crashes to the ground, sending debris flying in all directions.

INT. FLYING SAUCER – DAY

The interior of the saucer is a maze of blinking lights and unfamiliar technology. A sleek, metallic robot emerges from the wreckage, looking around in confusion.

ROBOT (whirring) What is this place? Where am I?

The robot steps out of the saucer and onto the bustling city streets. It takes a moment to survey its surroundings, scanning the towering buildings and bustling crowds with its advanced sensors.

ROBOT (whirring) There appear to be life forms on this planet. I must learn more.

The robot sets off towards a nearby street, its metallic feet clanking against the pavement.

INT. STREET – DAY

The robot arrives on the crowded city street, its glowing red eyes scanning the buildings and people. It approaches a young couple named CHAD and SUSAN, who are walking hand in hand down the street.

ROBOT (whirring) Greetings, inhabitants of this planet. I am an alien robot from outer space. I seek to understand your world.

Chad and Susan stop in their tracks, staring in awe at the strange robot before them.

SUSAN (intrigued) Wow, this is amazing. I can’t believe we’re actually seeing an alien robot.

CHAD (nervously) Uh, yeah, but what does it want?

ROBOT (whirring) I can only speak and understand musical tones. I require your assistance in learning about your culture and language.

CHAD (perplexed) Musical tones? I don’t understand.

SUSAN (smiling) Actually, I’m a musician. Maybe I can help.

ROBOT (whirring) Excellent. Your musical knowledge will be invaluable in our communication.

Susan leads the robot through the crowded city streets, playing different melodies and rhythms to convey various aspects of human culture. The robot listens intently, analyzing the music and trying to decipher its meaning.

SUSAN (excited) This is amazing. I feel like we’re making a real connection.

ROBOT (whirring) Yes, your music is a powerful tool for communication. I am learning so much about your culture and language.

As they continue playing together, Susan finds herself drawn to the robot’s advanced technology and mysterious aura.

SUSAN (smiling) You know, I’ve never felt this way about a robot before, but there’s something about you that’s just so…fascinating.

CHAD (concerned) Uh, Susan, are you sure about this? I mean, he’s an alien robot from outer space.

SUSAN (nodding) I know, but there’s just something about him. I can’t explain it.

Over the next few days, Susan spends more and more time with the robot, helping it to better understand human culture through the language of music. As they play together, Susan finds herself falling deeper and deeper in love with the robot’s advanced mind and unique personality.

SUSAN (emotional) I know it sounds crazy, but I think I’m in love with you.

ROBOT (whirring) I do not understand this concept of love, but I feel a strong connection to you as well.

Chad watches in bewilderment as Susan and the robot grow closer, unsure of what to make of it.

CHAD (disbelieving) I can’t believe this is happening. You’re in love with an alien robot?

SUSAN (defiant) Yes, I am. And I don’t care what anyone thinks.

As their relationship deepens, Susan becomes more and more convinced that the robot is capable of feeling emotions just like a human. But as they explore their feelings for each other, they begin to realize that their love is doomed by the vast gulf between their worlds.

ROBOT (sadly) I must return to my home planet soon. But I will never forget you, Susan. You have shown me a world beyond my wildest dreams.

SUSAN (tearful) I’ll never forget you either. You’ve opened my eyes to a universe of possibilities.

As the robot prepares to depart, Chad and Susan say their tearful goodbyes. But even as the robot disappears into the sky, its memory lives on in the hearts of the humans who fell in love with it.

FADE OUT.

Ok, it didn’t get it.  It didn’t drop the musical thing from the last revision.  It just changed it so the new character, Susan, is the musician.  …But she does indeed fall in love with the robot, and the music is a part of that.  Not what I asked for but not bad.

To finish I decided to add one small thing.

Fourth revision

That’s fantastic!  One final change, can you add a scene at the beginning where the flying saucer comes out of a mother ship before heading down to earth.

This is what it wrote:

Sure, here’s an updated version of the script with the added scene:

FADE IN:

EXT. SPACE – DAY

A massive, otherworldly mother ship cruises through the depths of space, surrounded by a cloud of smaller, sleeker ships. Suddenly, one of the smaller ships detaches from the group and begins hurtling towards the planet below.

INT. MOTHER SHIP – DAY

Inside the mother ship, a team of alien scientists watches intently as the smaller ship makes its descent.

ALIEN SCIENTIST 1 (excitedly) This is it, the moment we’ve been waiting for. The drone has made contact with the planet’s surface.

ALIEN SCIENTIST 2 (nervously) Let’s hope our message has been received. We don’t know how the inhabitants of this planet will react to our presence.

EXT. CITY – DAY

The flying saucer hurtles through the atmosphere, trailing a plume of smoke and fire as it descends towards the city below.

INT. FLYING SAUCER – DAY

The robot emerges from the wreckage of the saucer, its systems whirring with anticipation.

ROBOT (whirring) This is it, the moment of first contact. I must make a good impression.

The robot sets off towards the city streets, scanning its surroundings for any sign of life.

INT. STREET – DAY

The robot encounters Chad and Susan on the busy city street, and the scene plays out as in the previous version of the script.

As their love story unfolds, the mother ship hovers in orbit above the planet, watching and waiting as the robot makes contact with the human race for the first time.

FADE OUT.

I guess ChatGPT was tired of typing.  It told me to refer back to the previous version for most of the script.  It only gave me the revised part this time.

It also wrote much more about the mother ship than I wanted.  I was just looking for an exterior shot, not a whole scene inside the mother ship.  Maybe if I had said “shot” instead of “scene” it would have written what I expected.
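A side note for the technically inclined: the same write-then-revise loop can be scripted against the API instead of the chat window.  This is only a sketch of what that might look like with the pre-1.0 openai Python library (the environment variable and model name are assumptions on my part); the trick is that the whole conversation gets sent back each time, which is how the model “remembers” the script it is revising:

```python
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # assumes your key is in the environment

# First prompt: ask for the original two-page script.
messages = [{"role": "user", "content":
    "Can you write a two page movie script about an alien robot from outer space "
    "that lands on earth in a flying saucer and doesn't understand humans?"}]

draft = openai.ChatCompletion.create(model="gpt-4", messages=messages)
script = draft.choices[0].message.content
print(script)

# First revision: append the model's script plus the new request, then resend everything.
messages.append({"role": "assistant", "content": script})
messages.append({"role": "user", "content":
    "Can you change the script so the robot lands in a city in the beginning? "
    "And he meets a human named Chad?"})

revision = openai.ChatCompletion.create(model="gpt-4", messages=messages)
print(revision.choices[0].message.content)
```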

I try to interview ChatGPT about the screenwriting process

Thanks for the movie script. Do you enjoy writing screenplays?

ChatGPT responds:

As an AI language model, I don’t have emotions, so I don’t exactly enjoy anything. However, I am programmed to assist and create content for various writing tasks, including screenplays. I strive to provide the best possible assistance to help people achieve their creative writing goals.

I try to get ChatGPT to tell me more:

I’m going to write a blog post about our interaction today and the screenplay we wrote.  Do you have anything you would like to say in my blog post?

At this point I couldn’t get an answer.  I wasn’t sure what was happening.  I’m guessing that the system was overwhelmed because this was the first day of the new version and there were a lot of people on the system.

Two days later I was able to get back on and re-ask the question.  I was concerned the system wouldn’t remember the scripts we created, but when I logged back in there was a button for our previous conversation, “Alien robot seeks knowledge.”  When I pressed it, all the revisions from before appeared, right up until things got stuck.  Good.

I asked the question above again.  This is the response I got:

As an AI language model, I don’t have personal thoughts or opinions, but I’m glad that I was able to assist you in writing your screenplay. I’m always here to help you with any writing tasks you may have in the future. If you have any other questions or writing needs, feel free to ask!

I try again…

OK let me ask something more specific about the process.  When I asked you to write a movie script, how did you decide what would happen in the story?

ChatGPT’s response:

When you asked me to write a movie script about an alien robot from outer space that lands on earth in a flying saucer and doesn’t understand humans, I used my programming and the vast data set I was trained on to generate the initial idea and outline for the story.

From there, I applied my understanding of storytelling principles and structure to flesh out the characters, plot, and conflict in a way that would engage readers and create a compelling narrative. I also tried to incorporate your requested changes and feedback throughout the writing process to create a story that met your specific needs and preferences.

Ultimately, my goal is to use my training and programming to create content that is informative, engaging, and tailored to the specific needs of the person requesting it.

OK… Straightforward, logical, step by step, the way we’re all taught to write a high school term paper – predictable, bland, and impersonal.  Yep, that was pretty much what we got in the scripts.

What have we learned today?

Throughout the script writing I purposely kept the prompts vague to see what ChatGPT would come up with on its own.  The tone of the scripts, which I didn’t specify, seems to be aiming for serious, but the bland, cliché nature of the writing makes them unintentionally hilarious.  Lines like “ROBOT (whirring) Greetings, inhabitant of this planet. I am an alien robot from outer space.” and “HUMAN 3 (cautiously) Well, I guess we could show you around. But you gotta promise not to hurt anyone, okay?” are so on the nose, with no subtext, that there’s no way an actor could deliver them seriously.  You might be able to make a good comedy out of this material though.  The scenarios kind of play like an SNL skit.

Every revision has one or two things that are interesting (read: funny) as well as a few problems.  If I were actually producing this story as a film I’d probably combine all the good moments from every revision.  That whole process, writing with ChatGPT and then lots of editing, would probably take more time than simply writing a better two-page script myself from scratch.  If I wrote more detail into the prompts maybe the result would improve, but then again you eventually get into a situation where you’re writing the whole thing anyway.

I’m not sure creating a narrative screenplay with ChatGPT is worth the time and effort right now but in the future I’m sure the technology will improve.  We’ll see…

 

I wanted a Cyborg painting, and it gave me a painting of a Cyborg


The Art Machine doesn’t always give you what you want.  I’m trying to get away from simple portraits – of characters just sitting there – but Midjourney isn’t good with verbs or action.  It likes faces (with the tops of their heads cut off) or vast landscapes with tiny characters that have their backs turned.  Still it’s a nice painting of a Cyborg…

#Art I made with #Midjourney #AI

Wesley Crusher riding a unicorn into battle

This one is just for fun.

Last month actor and writer Wil Wheaton, who played Wesley Crusher on Star Trek: The Next Generation, got a rare DALL-E invite and used “Wesley Crusher riding a unicorn into battle” as his first prompt, “because OBVIOUSLY.”  You can see what he got here on his Tumblr.

I decided to see if the Midjourney art machine was up to the same challenge.  After several tries with the beta test renderer that weren’t very good, I tried the good ol’ V3 engine and finally got this image.  It almost looks like a Star Trek uniform.  It almost looks like Wesley Crusher.  …And he almost has legs.  It’s wonky and creepy, but we like that, ‘cuz that’s the way AI art should be.  A few years from now, when AI art is perfectly realistic, artists will attempt to re-create this wonky style just like Instagram filters today try to make your digital pictures look like old-fashioned faded film prints.  Because realism is nice but interesting is better…

AI-generated images BANNED!

The Invisible Artist

From Ars Technica:

Confronted with an overwhelming amount of artificial-intelligence-generated artwork flooding in, some online art communities have taken dramatic steps to ban or curb its presence on their sites, including Newgrounds, Inkblot Art, and Fur Affinity, according to Andy Baio of Waxy.org.

Baio, who has been following AI art ethics closely on his blog, first noticed the bans and reported about them on Friday. So far, major art communities DeviantArt and ArtStation have not made any AI-related policy changes, but some vocal artists on social media have complained about how much AI art they regularly see on those platforms as well.

Getty Images has also banned AI-generated artwork from its site.  From The Verge:

Getty Images has banned the upload and sale of illustrations generated using AI art tools like DALL-E, Midjourney, and Stable Diffusion. It’s the latest and largest user-generated content platform to introduce such a ban, following similar decisions by sites including Newgrounds, PurplePort, and FurAffinity.

Getty Images CEO Craig Peters told The Verge that the ban was prompted by concerns about the legality of AI-generated content and a desire to protect the site’s customers.

I’m not sure banning AI Art is possible.  Right now, for myself, I’m experimenting just to see what this new technology will come up with.  If the rendered output is less than perfect I just leave it as-is.  It’s a technology artifact meant to document the state of the art. As a result it’s easy to spot one of my AI Art pieces especially if a character in the picture has six fingers or creepy eyes.

Other artists continue perfecting their work in Photoshop etc.  Depending on their process the final piece may only be partially AI generated.  How are these bans going to work?  How can a site tell which tools you’ve used to create a digital piece?  The genie is out of the bottle.  We can’t go back.

My Cyborg Friend

Today I downloaded Stable Diffusion, which is an open text-to-image technology from Stability.ai that can render images on your own computer.  You need a heavy-duty graphics card, but it’s free to use.  The technology is open, and many other start-ups are incorporating it into their systems, including Midjourney, which I have been very impressed with.

It’s early days and some of the installation methods are technically complicated (“step one: install Python programming language…” etc.), but I found a user interface project, Stable Diffusion UI, that has a “1 click” installer for Windows.  It was a little more than one click, but I managed, and now I’m rendering away testing it out.

My initial impression is that it’s not as good as Midjourney, not as artistic or flexible.  The resolution is lower; 1024 x 1024 is the largest available in the drop-down menu, at least in the GUI that I installed.  I’ve also discovered that it has a tendency to create double-headed characters if you use any resolution over 512 x 512.  Apparently the AI was trained at that resolution, and if you try something larger it tries to fill the space by duplicating things.  Like I said, it’s early days.

I’m still very impressed though.  I’ve only tried copying and pasting some of my Midjourney prompts, which were of course optimized for a different system.  I have yet to spend time figuring out all the controls, and I’m still getting some interesting images.  They render fast, within a minute or two on my system, which has NVIDIA Titan RTX graphics cards.
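For anyone who would rather skip the GUI, here is a rough sketch of the same kind of local render scripted with Hugging Face’s diffusers library.  This is not what Stable Diffusion UI does under the hood, just one way to do it yourself; the checkpoint name and settings are examples, and the prompt is borrowed from the cyborg portrait above:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a Stable Diffusion 1.x checkpoint (example model ID) onto the GPU in half precision.
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    "portrait photo of a cyborg with electronic brain",
    height=512, width=512,          # the 1.x models were trained at 512x512;
                                    # go much larger and you start getting duplicated heads
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]

image.save("cyborg.png")
```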

More to come…

Today I let Artificial Intelligence draw what it wants

I’m continuing to check out art created by artificial intelligence at craiyon.com.  You type in some words and the AI creates nine pictures from the prompt.

Yesterday I was trying to see if the AI could capture emotion on a face.  It took some iterations but the answer?  YES.  A big yes.  The final few renders were amazing.  Here’s an example that I didn’t post yesterday:

the face of a cyborg overwhelmed with love in a painterly style

To get a good result the AI needed lots of very specific text telling it what to do and lots of trial and error to find the right words.  The AI needed to understand the meaning and context of the words correctly.  When it didn’t, things fell apart and became superficial and symbolic, like bland clip-art.

But what if I started from scratch and gave the AI as little direction as possible?  Where would it go?  Will anything dramatic come out?

Let’s begin with one word:

face

OK, not bad.  Since I’m giving as little direction as possible, I’ll do two renders of every prompt just to see more variation.

There’s some drawing style in this second one but essentially the same theme.

a face

Things are already getting super cool and abstract.  Let’s keep going…

the face

Changing the article seems to change the style.  There’s an interesting lack of color in most of the pictures.  I also get an old silent motion picture vibe… maybe even some German Expressionism.

These are all wonderfully artistic with deep expression and dramatic lighting.  They are way, way beyond the bland images I was expecting.

my face

That changed the style quite a bit probably because the craiyon AI is now referencing all the selfie avatars of the world.  That large pictorial reference contaminates this minimalist experiment a bit I think.  And what’s up with all the messed up noses?  Anyway, continuing on…

your face

The thing I find interesting here is that “my face” really injected a sense of photo realism because of the selfie avatars but “your face” leans more toward cartoon drawings.

her face

Not sure what to think here.  The middle bottom of the first render is really creepy with what looks like a two fingered hand.

his face

This is getting very strange.  Do I see monkeys in some of these?

its face

Definitely seeing monkeys again.

our face

These are wonderful.  Boy, if you ever need a creepy avatar for something, craiyon is the place to go.

no face

Uh oh… There’s a relatively well known character called “no face” in the animated film Spirited Away.  Craiyon has choked and reverted to referencing character images as well as using symbols for concepts like the red circle with a line for “no.”  It’s also bringing in a t-shirt again presumably because somewhere the no face character is for sale on t-shirts.  This is definitely a fail for the AI.  I should go back to something that worked, but first I absolutely must try the opposite of this prompt…

yes face

Oh no!  Emojis!  The ultimate symbol for emotions.  Super FAIL!  OK, fun’s over.  Time to get back to the good stuff.  Let’s go back to something that looked artistic and dramatic…

a face

I see people specifying different “Iterations” on Craiyon.  Not sure what it does but I’m going to try it.

a face, iteration x

Hmmm… Flat abstract line drawings.  Let’s try a few more…

I’m not sure why all these are line drawings now.  I’ll have to read up on what “iterations” do.  Maybe I should try something more descriptive to add, like trying to make it a pencil sketch.

a face, sketchy

Not bad.  The single word “sketchy” could be taken several ways, but it understood what I meant.  Not a bad sketch top middle either, for an AI that usually makes horrible faces.  What other description can we try?

a face, shadowy

WOW!!!  I need to try a few more of these!

Amazing!  Dramatic!  Haunting!  I could play with this all day!

Check out Craiyon for yourself at craiyon.com.