Tag Archives: female

Successful Power Up

I wasn’t trying to make comic book art but that’s what the AI gave me. This is one of the problems with AI art right now: so much of it is arbitrary. You can get something pretty good, but if you want any sort of control, it’s a roll of the dice. Roll the dice enough times and you may get close. The subject of this image is close to what I wanted but the style is not. Here’s the exact prompt:

Massive machines build a beautiful metallic woman robot in a glass tube with men around controlling the machines, In the style of a 1930s SciFi pulp magazine cover, hyper detailed

It took over a hundred tries to get something close, and then I did twenty more variations of this image, which all came out essentially the same. I just picked the best variation and here it is…

#Art I made with #Midjourney #AI

Red Ring

The vision in my head:

An intensely bright thin red ring light in the distance, a woman robot in silhouette, on an abstract shiny metal plate surface, hyper realistic, cinematic, dense atmosphere

To manifest that vision I created a floor plane with a metal shader and another black plane as the backdrop.  A simple torus primitive served as the red ring light.

I wanted the red ring to frame the figure and at first I tried placing it way in the distance, but I found that it dipped below the floor plane and I wanted to see the full ring.  Moving it closer and scaling it down created the same composition with the added bonus of shining a strong rim light onto the robot figure.
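(A quick back-of-the-envelope check of why that works, with made-up numbers rather than the actual scene measurements: the apparent size of the ring only depends on the ratio of its radius to its distance, so shrinking it and pulling it closer by the same factor keeps the framing identical.)

import math

# Hypothetical numbers, not the real scene: a big far ring and a small near ring
# with the same radius-to-distance ratio fill the same angle of view.
far_ring = math.degrees(2 * math.atan(100 / 400))   # radius 100, distance 400
near_ring = math.degrees(2 * math.atan(25 / 100))   # a quarter the size, a quarter the distance
print(far_ring, near_ring)  # both ~28.1 degrees -- same framing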

The shapes in the background are part of an abstract model that actually goes all the way around the landscape.  I found a part I liked and buried it in the fog to create a little texture in the background.

I spent quite some time adjusting the surfaces on the floor and the robot.  The entire scene is lit by the red ring.  The only other light is in the eyes.  The color and reflectivity of the surfaces really determined the quality of the image.  I wanted the floor to be shiny and reflective but not blown out.  I also wanted the robot to be in shadow.  A chrome or white surface didn’t work so the robot is actually shiny metallic dark grey and black.

Ultimately the original render was quite dark.  I felt the quality of the light was more important than the brightness.  I could always brighten it up later.

It took about seven and a half hours to render because of the fog and the dim light.  (Bright light renders a lot faster in Iray.)  I also rendered it at 14400 x 7200 so I could print it four feet wide and hang it on the wall if I really wanted to.  I’m crazy that way…
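(For anyone checking the print math, assuming a typical 300 PPI photo print:)

width_px, height_px = 14400, 7200
print_ppi = 300                          # a common photo-print resolution
print(width_px / print_ppi / 12, "ft")   # 4.0 -- four feet wide
print(height_px / print_ppi / 12, "ft")  # 2.0 -- two feet tall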

This screenshot shows the brightness of the original render just as it finished baking after seven and a half hours.

When color correcting I brought the brightness up quite a bit while trying to maintain the quality of the light. The problem was that all the detail was in the red channel, since all the light was pure red. This left even a brightened image still dark. This is all the brightness I could get out of the original color correction:

Later I went back and tried a few other things to brighten it up more. I figured if I could get the ring to blow out (overexpose) and become white, it would still look OK and be much brighter. My color correction software, Capture One, is quite good, and of course that means it doesn’t clip the high end even if you push it quite far. I tried all sorts of crazy things, experimenting, just to see what the software could do.
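(To see why a pure red image reads so dark, and why letting it clip toward white brightens it so much, here’s a quick back-of-the-envelope check in Python using the standard Rec. 709 luma weights. This is just my own illustration of the math, nothing to do with how Capture One works internally.)

import numpy as np

# Rec. 709 luma weights: perceived brightness = 0.2126*R + 0.7152*G + 0.0722*B
weights = np.array([0.2126, 0.7152, 0.0722])
pure_red = np.array([1.0, 0.0, 0.0])      # red channel maxed out, nothing in G or B
toward_white = np.array([1.0, 0.8, 0.8])  # red blown out so G and B start to fill in
print(weights @ pure_red)      # ~0.21 -- even maximum red reads as fairly dark
print(weights @ toward_white)  # ~0.84 -- much brighter once it clips toward white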

When I was screwing around with a black and white version I hit upon something…

I found that if I messed with the top end of the LUMA channel on the CURVES tool and the top end of the RGB channel on the LEVELS tool, they interacted and did exactly what I wanted, blowing out the top of the red channel. (The LUMA channel in the Curves tool somehow only adjusts contrast without changing saturation. It’s not the same as adjusting the full RGB.)

You can see the way I set both tools at the bottom of this screenshot:

This brightened up the entire image quite a bit, and it’s how I created the final color correction.

Created in DAZ Studio 4.21
Rendered with Iray
Color Correction in Capture One

Music in the Metaverse

I think my CGI images tend to look better when I have something in close-up. It avoids the “medium shot of a character just standing there” that I struggle with. For this piece I started with an extreme close-up and added cool robot eyes and dramatic flowing hair.

I also wanted a graphic background, something flat, technical.  I have an ongoing issue with backgrounds.  I get creatively stuck and I don’t know what to put back there.  I end up trying scores of different things and nothing works.

What I ended up using here was actually a huge Tron-like cityscape. The shapes and lines are building-sized structures seen from the top. This is what the cityscape looks like normally.

The entire environment is standing on its side waaaaaay far away. I turned different elements on and off depending on what looked good. It ended up being a real hassle having the background so far away though. Making adjustments took a long time. (I went back and figured it out. It’s 1.8 miles away! …or about 3 kilometers.) I should have scaled down the whole thing and moved it closer.

I named it Music in the Metaverse because the graphic lines in the background ended up looking similar to a music staff.

Created in DAZ Studio 4.20
Rendered with Iray
Color Correction in Capture One

Can AI draw a Red Ring of Light?

Recently I was experimenting with the Midjourney AI art engine.  I saw an image in my mind of a robot backlit by a red ring-light.  I typed it up as a prompt:

An intensely bright thin red ring light in the distance, a woman robot in silhouette, on an abstract shiny metal plate surface, hyper realistic, cinematic, dense atmosphere, intense, dramatic, hyper detailed, --ar 2:1 --v 5

I expected to get something like the image above. That’s not what happened. For the next hour I tried to get Midjourney to build something even close to what I envisioned. I typed and re-typed the prompt, changing the way I described the image. Most of the time I couldn’t even get a red light ring. Midjourney kept trying to make a “sun” with a red sky. There are round portal structures, some even reflecting red light, but almost none of them light up. The light’s coming from somewhere else.

What I asked for was simple.  Why is this so hard?

I’m guessing it has to do with the data set the AI was trained on.  I bet there aren’t that many images of red light rings in there, maybe none at all.

One of the things that frustrates me about AI art is the way most things turn out looking generic, like everything you’ve seen a million times before. This makes sense of course, because that’s how it works. It studies what everything looks like and then creates from that. It’s almost a creation by consensus. An unusual Red Ring isn’t part of the equation. I could probably get to what I wanted eventually if I kept trying, and perhaps made the prompts much longer, describing every detail. Maybe.

Or I could do what I did and create what I saw in my mind with CGI.

Has this put me off AI art? No. Every tool is good at what it’s good at, and not at what it’s not. I was looking for the edge of what this new tool could do (because that’s where the art is) and I found it. There’s nothing really interesting right here, but there’s a lot more to discover…

Robotic Romance – A Colossal Story of Forbidden Affection

This is another image where I tried to get something in the style of a 1930s SciFi pulp magazine cover. It looks like the ’30s, especially with the orange background, but I think the robot looks more ’70s. I like it anyway…

#Art I made with #Midjourney #AI

God is a Machine

This picture is the first image in a two-part series. The second image, Deus Est Machina (which is the same title in Latin), was posted previously.

Deformed hands and tangled fingers have become an iconic symbol of AI art. The art machine knows what things look like (most of the time) but it doesn’t know what they are. It starts making a girl with braided hair, then somewhere along the way… does it change its mind? …or does it never really know what it’s trying to make? The result is almost an optical illusion. It looks correct at first glance but on closer inspection something isn’t quite right.

#Art I made with #Midjourney #AI

We Are the Dreamers of the Dream

Building the Metaverse to match the real world.

I made this CGI image about a year ago when the Metaverse was the shiny new tech thing.  Most people probably won’t get what it’s about so I’ll explain it, even though David Lynch would probably scold me for doing that.

The grey and chrome spheres are tools that special effects artists use to match 3D computer graphics to real-world photography. If you are shooting a film, for example, and part of the scene will be CGI, you shoot a few extra feet of the environment with someone holding a grey and a chrome sphere. The chrome sphere reflects the entire environment, and that reflected image can be “unwrapped” and placed as a dome over the CGI so the same light and colors shine on the computer-generated elements as in the real scene. (The chrome ball is actually an old-fashioned “poor man’s” way of doing this. There are 360-degree cameras now that can just take a picture of the entire environment right on the set.)
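If you’re curious what that “unwrapping” step actually does, here’s a rough sketch of the idea in Python with numpy. This is just my own illustration, not how any particular VFX package does it; the filenames are made up, and it assumes the chrome ball exactly fills a square photo shot from far enough away to treat the view as orthographic.

import numpy as np
from PIL import Image

# Load the square chrome-ball photo (hypothetical filename).
ball = np.asarray(Image.open("chrome_ball.jpg"), dtype=np.float32) / 255.0
h, w = ball.shape[:2]
out_w, out_h = 2048, 1024

# Longitude and latitude for every pixel of the output lat-long panorama.
lon = (np.arange(out_w) + 0.5) / out_w * 2 * np.pi - np.pi
lat = np.pi / 2 - (np.arange(out_h) + 0.5) / out_h * np.pi
lon, lat = np.meshgrid(lon, lat)

# The direction each panorama pixel looks in (the camera sits on the +z axis).
dx = np.cos(lat) * np.sin(lon)
dy = np.sin(lat)
dz = np.cos(lat) * np.cos(lon)

# A mirror ball reflects direction D at the spot where its surface normal is the
# half-vector between D and the direction back toward the camera, so that
# normal's x and y components are exactly the coordinates on the ball photo.
m = np.sqrt(dx**2 + dy**2 + (dz + 1.0)**2) + 1e-8
px = ((dx / m + 1.0) * 0.5 * (w - 1)).round().astype(int)
py = ((1.0 - (dy / m + 1.0) * 0.5) * (h - 1)).round().astype(int)

pano = (np.clip(ball[py, px], 0, 1) * 255).astype(np.uint8)
Image.fromarray(pano).save("environment_latlong.png")

A real pipeline would use a bracketed HDR exposure stack instead of a single 8-bit photo, but the geometry of the unwrap is the same.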

The grey sphere shows the quality of light shining on a specific place in the shot.

This image is about building a computer generated Metaverse that people can walk around in, just like real life.  It’s the dream of constructing a Metaverse as well as the Metaverse as a dreamscape… the birth of a new alternate world.

OK, enough of that…

The main difficulty I had in creating this image stems from the fact that the reflection in the chrome ball is actually the real reflection of the CGI environment. It’s not a composite. The mountains actually go all the way around the environment. The grid floor goes out in all directions. The “planet” and the “sun” seen in the ball are on the HDRI dome over the scene that is creating the ambient light. The dome had to be lined up so the “planet” reflected in the sphere correctly. The mountains had to be rotated in such a way that the peaks behind her and the ones in the chrome ball both looked good. The main light in the scene is the keylight on the character, which can be seen in the chrome sphere as a bright rectangle in its upper left. I could have removed that in post but I left it in because that’s the point.

The Metaverse was supposed to be the future of everything. Facebook even changed its name to Meta. Now AI is the new thing. Will the Metaverse be created? Will AI create the Metaverse for us? Who knows…

Created in DAZ Studio 4.20
Rendered with Iray
Color Correction in Capture One