Taming a Monster: How to Use AI Art and Midjourney to Create Composites

A.I. Art - What the Heck is That???

In case you missed our last writeup on the future of AI, here’s a refresher.

This image of a fish was completely generated by a machine.
What that means is I told the machine I wanted a fish with a certain aesthetic and the machine drew me one.
The technology is incredible, but maybe a bit misleading.
Although it is called Artificial Intelligence, I found it devoid of any sentience or reason, so you don’t have to worry about fighting an evil computer anytime soon.

Basically, it’s very fancy programming that can interpret your prompts and generate an image based on similar images. (I imagine the image tags and descriptions across the internet help to populate this.)

This image used A.I.

For this image, I used artificial intelligence and an old photo from years ago to create a magical space walk.
My wife enjoys a video game called “No Man’s Sky,” and I figured we could use AI to put her there!

How We Did It

We started with multiple images. The goal was to make conscious decisions and generate something that would fit the aesthetic I wanted to create. My goal for this composite was to place my subject, Arianna, in a city high-rise window with a cyberpunk city in the background. This project took about 10-15 minutes to complete in Photoshop; generating the images took longer.


Step 1: Generate the Source Image

Information Overload

This image might make it seem like I just quickly typed in what I was looking for and got a result, but honestly, I had to learn to speak the language of the machine. And even then, I still needed a few iterations and some Photoshop. Let’s take a closer look at that machine language.


Basic Structure of Commands

Parameters are options that change how the images generate.

A full /imagine command might contain several things: an image URL, image weights, an algorithm version, and other switches. Parameters belong at the end of the prompt. We will get into the nitty-gritty of how to maximize results with this formula in a later blog; for now, let’s cover the basics.

"Switches" in this context means controls passed to the bot using a "--" parameter. For instance, the command /imagine hi there --w 448 has a text prompt, and a parameter for the width, using the "--w" instruction.


Size

Width and Height

--w Width of image. Works better as a multiple of 64 (or 128 for --hd)

--h Height of image. Works better as a multiple of 64 (or 128 for --hd)

--w and --h values above 512 are unstable and may cause errors.
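For instance, a command that sticks to multiples of 64 and stays under that 512 limit might look like this (the subject is just a placeholder):

/imagine prompt: foggy harbor at sunrise --w 448 --h 320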

Aspect Ratio

--aspect or --ar

Sets a desired aspect ratio, instead of manually setting height and width with --h and --w.

Try --ar 9:16, for example, to get a 9:16 aspect ratio (~256x448).
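In a full command, that might look something like this (again, the prompt text is only a stand-in):

/imagine prompt: portrait in a high-rise window, cyberpunk skyline behind --ar 9:16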

Shortcuts

These "shortcuts" are commands that do the same as the forms following the ":" in the list below.

For example, if you were to type:

/imagine prompt: vibrant california poppies --wallpaper

It would be the same as typing the longer form:

/imagine prompt: vibrant california poppies --w 1920 --h 1024 --hd

Shortcut equivalences:

--wallpaper: --w 1920 --h 1024 --hd

--sl: --w 320 --h 256

--ml: --w 448 --h 320

--ll: --w 768 --h 512 --hd

--sp: --w 256 --h 320

--mp: --w 320 --h 448

--lp: --w 512 --h 768 --hd


Algorithm Modifiers

Version 1

--version 1 or --v 1 uses the original Midjourney algorithm (more abstract, sometimes better for macro or textures). --v 1 corresponds to the button in /settings.

Version 2

--version 2 or --v 2 uses the Midjourney algorithm that was in use before July 25th, 2022. --v 2 corresponds to the button in /settings.

Version 3

--version 3 or --v 3 uses the current default Midjourney algorithm. --v 3 corresponds to the button in /settings.
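To force a specific algorithm for a single prompt, you add the version switch like any other parameter. For example (the prompt text is only illustrative):

/imagine prompt: macro texture of weathered copper --v 1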

High Definition

--hd Uses a different algorithm that’s potentially better for larger images, but with less consistent compositions.
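Since --hd pairs with larger sizes, a high-definition request might look like this (prompt text is a placeholder):

/imagine prompt: sprawling cyberpunk skyline at dusk --w 1920 --h 1024 --hd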

Prompt Modifiers

--no

--no Negative prompting (e.g., --no plants would try to remove plants). This is like giving it a weight of -0.5.
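In practice, you tack it onto the end of the prompt like any other switch (the scene here is just an example):

/imagine prompt: abandoned city street at dawn --no people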

Detail Modifiers

--stop

--stop Stop the generation at an earlier percentage. Must be between 10-100.
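For example, stopping a render halfway through might look like this (illustrative prompt):

/imagine prompt: misty forest silhouette --stop 50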

--uplight

--uplight Use "lighter" upscaler for upscales. Light results are closer to the original image with less detail added during upscale. --uplight corresponds to the button in /settings.

Regular upscale (left) vs Light Upscale (right)
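Because --uplight is just another switch, you can add it to a prompt you plan to upscale (the prompt below is only a stand-in):

/imagine prompt: glass skyscraper interior, soft window light --uplight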


Quality

--quality <number> or --q <number> sets how much rendering time you want to spend. The default is 1. Higher values take more time and cost more.

Specifying a quality of .5 will reduce your cost and image quality.
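So a quick, half-quality draft might look like this (prompt text is illustrative):

/imagine prompt: rough concept of a cyberpunk street market --q 0.5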

So now it’s time to put it all together in Photoshop. I won’t go into compositing here, as that could be an entire blog in and of itself, but I hope this gives you the knowledge you need to move forward with making the art you love to make.

Don’t get hung up on how, and live in the beauty of what is.

Til Next Time...

Don’t miss the next blog: subscribe to our newsletter today!
