Generative Image AI: A Picture of B2B Marketing to Come

Authored by Brian Surguine

In our previous post, we wrote about how B2B marketers can leverage the new generation of AI tools. If you’re not already experimenting with AI tools such as ChatGPT, Jasper, or Copy.ai, you need to get started right away. In this post, we’re going to focus specifically on generative image AI tools and how B2B marketers can use them.

The AI revolution is here. AI is already being used in newsrooms, to design spaceship components, and even to resurrect the voices of long-dead Beatles members to make new music. Here at Cascade Insights, we’ve started adopting AI tools like ChatGPT into functions ranging from content creation to marketing research.

While large language model (LLM) AI tools such as ChatGPT and Copy.ai have been relatively easy to incorporate into B2B marketing workflows, generative image AI has been harder to figure out. Yes, tools such as Midjourney and Stable Diffusion are capable of producing stunning images, but a cursory use of these tools is unlikely to produce anything that will convince a B2B audience.

Unlike LLM-based AI tools, which can generate coherent and meaningful content from a simple text prompt, generative image AI requires time and experimentation to obtain desired results. And until generative image AI makes a step forward in usability, only those who figure out how to write the right prompts will maximize these tools’ potential for B2B marketing.

Why is Generative Image AI Hard to Use?

If a picture is worth a thousand words, you should expect to use that many to get a computer to give you one usable picture — at least in the beginning.

One of the main difficulties of generative image AI is that it can feel opaque in use. Text-based AI tools provide instant feedback on your prompts, so it feels like you’re having a conversation. In contrast, you can’t see the decisions a generative image AI is making while it works. So, if you enter a simple prompt like “a boy riding a bike,” generative image AI will likely spit out a good image, but you’ll have no idea how it made its stylistic decisions.

However, if you provide more detail in a prompt, such as art style, time period, or even specific artists, generative image AI will produce more predictable results. Thus, the output of generative image AI depends in part on the creativity of the person using it, but you’ll still need to spend time honing your prompts to get the results you desire.
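
To make that contrast concrete, here’s a minimal sketch of a vague versus a detailed prompt using the open-source diffusers library with Stable Diffusion. (Midjourney has no public API, so Stable Diffusion stands in here; the model checkpoint and prompts are illustrative choices, not recommendations.)

```python
# pip install diffusers transformers accelerate torch
import torch
from diffusers import StableDiffusionPipeline

# Load a publicly available Stable Diffusion checkpoint (illustrative choice).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# A vague prompt: the model makes every stylistic decision for you.
pipe("a boy riding a bike").images[0].save("vague.png")

# A detailed prompt: style, era, and lighting are pinned down, so the
# results are far more predictable from one run to the next.
pipe(
    "a boy riding a bike down a tree-lined street, 1950s Americana, "
    "warm golden-hour light, watercolor illustration"
).images[0].save("detailed.png")
```

The same principle applies in Midjourney’s Discord interface; only the delivery mechanism differs.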

Computers Don’t Understand Images Well

Computers are (still) not very good at recognizing images. While LLMs let computers draw statistical relationships between words and phrases to derive meaning, image-recognition models still regularly mistake lifeboats for Scottish terriers.

Computer models cannot predict the content of an image with the same certainty as that of text. Part of the problem is that there are relatively few high-quality images that can legally be used to train models, especially for less common objects. For example, if you ask a computer to create an image of a giraffe, you might get a field instead — because all the pictures of giraffes used to train the model featured giraffes in fields. So, the computer associates the prompt “giraffe” with an image of a field.
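
You can see this text-image association at work directly. Models like Stable Diffusion rely on a CLIP-style text encoder, and CLIP itself is easy to poke at. Here’s a minimal sketch using the Hugging Face transformers library (“photo.jpg” and the candidate labels are placeholders):

```python
# pip install transformers pillow torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Score how well each caption matches the image ("photo.jpg" is a placeholder).
image = Image.open("photo.jpg")
labels = ["a giraffe", "an empty grassy field", "a lifeboat", "a Scottish terrier"]

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
probs = model(**inputs).logits_per_image.softmax(dim=1)[0]

for label, prob in zip(labels, probs):
    print(f"{label}: {prob.item():.1%}")
```

If every giraffe photo in the training data also showed a field, “a giraffe” and “an empty grassy field” will score suspiciously close together — which is exactly the association problem described above.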

That said, these models are improving rapidly, so these problems will diminish as generative image AI continues to evolve.

Computer, Draw Me a Handshake

While writing this piece, I experimented with Midjourney, a popular generative image AI tool. I asked Midjourney to generate an image of a handshake between two business people at a meeting. I thought this would be a relatively simple image for Midjourney to create, and I wanted to see how its version differed from all the stock images I’d seen before.

This is the first set of images I got back:

A stock image of a business handshake

Not bad, but the lighting wasn’t great, and Midjourney was clearly having trouble counting fingers. So, I honed my prompt and tried again:

A well-lit photorealistic business handshake

A little better, but those hands still look weird. One more, but this time I’ll spell out how many fingers each hand should have:

A photo of a handshake between two people at a corporate business meeting, 4 fingers and one thumb on each hand.

This is… not good.

At this point, I visited the Midjourney subreddit to see what prompts other people were entering to obtain successful images. I noticed that a lot of successful prompts included camera information such as a focal length and a camera model. Since I used to be a professional photographer, I knew exactly what to do:

A photo of a handshake between two people at a corporate business meeting, photographed with a Canon 5D and 50mm lens

Much better. Getting the prompt right went a long way towards having Midjourney produce an image I could use. But I wanted to see something different, so I modified the prompt slightly again and got this:

A photo of a handshake between two people at a corporate business meeting, one person is Black, one person is White, photographed with a Canon 5D and 50mm lens

As you can see, generating what I thought was a simple image turned out to be not so simple after all.

The key difference here was understanding what prompt Midjourney needed to see to produce the images I wanted. It was a struggle at first, but with help and time, I got a usable result in the end. Still, it’s obvious I have a lot more practicing to do.
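
If you’d rather iterate systematically than retype prompts by hand, a few lines of Python can enumerate the variations for you. This is just a convenience sketch; the subjects, styles, and camera specs below echo the experiments above, and the printed prompts can be pasted straight into Midjourney:

```python
from itertools import product

subjects = ["a handshake between two people at a corporate business meeting"]
styles = ["photorealistic", "colorful cartoon style"]
cameras = [
    "photographed with a Canon 5D and 50mm lens",
    "photographed with an 85mm portrait lens",
]

# Print every subject/style/camera combination as a ready-to-paste prompt.
for subject, style, camera in product(subjects, styles, cameras):
    print(f"A photo of {subject}, {style}, {camera}")
```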

But Why Didn’t You Just Use Google?

Because everyone else uses Google. I wanted to see if I could create something more interesting or attention-grabbing in less time than it would take to search for a good stock image.

Here’s another image from an earlier experiment. It’s not perfect, but as a starting point, it works really well:

A business meeting in a colorful cartoon style

There’s no way I would have been able to find an image as interesting as this with a Google search.

How to Use Generative Image AI in B2B Marketing

Stock images

It’s really difficult to find a good stock image that isn’t frivolous or overused. So instead of spending all that time poring over endless tabs filled with stock images, why not try generative image AI? You might get something more interesting and unique that way.

A colorful, Impressionist painting of cloud computing

Illustrations

Generative image AI is unlikely to produce a ready-to-use illustration on the first try; it tends to get minor details wrong, like the shape of someone’s head or the position of a chair. But, as an idea generator for illustrations of your own, generative image AI is peerless.

An illustration of a hybrid WFH and office work situation. (Can you spot the mistakes?)

Textures

Similar to stock images, generative image AI can produce more interesting textures than what you might find browsing Google. And, you can tailor the color scheme to match your organization’s visual identity.

A biomorphic texture in navy blue

Icons

Icons generated by generative image AI will not pass a UX designer’s sniff test. But, as with illustrations, generative image AI can be used as a rapid prototyping tool to quickly test lots of different ideas, styles, and color schemes.

Icons for a tech website painted by Piet Mondrian

Video (But Not Quite Yet)

This use case may not be ready for primetime, but we may be on the verge of being able to produce interesting new videos using generative image AI and AI voice generators. Here, some well-loved fictional characters are reimagined in an unexpected way:

YouTube video

How We’re Using Generative Image AI

Here at Cascade Insights, we’re already using generative image AI. The header image for this blog was generated with Midjourney and Photoshop.

We’re also starting to use generative image AI for persona images. Here’s a sample of what Midjourney was able to do:

Persona images generated with Midjourney

We’re also thinking about how we might use generative image AI in our presentation decks, using images to enhance and accent the stories we tell through our marketing research.

Practice Now for Perfection Later

We’re already feeling the impacts of text-based AI in B2B marketing, and generative image AI is surely not far behind. To ride the crest of this wave, the key skill to learn is how to write prompts to get usable results. Then, practice, practice, practice.

You don’t have to work only with Midjourney, as I did. Other highly regarded generative image AI tools, such as Stable Diffusion and DALL·E, are well worth trying.

Additionally, communities like the Midjourney subreddit are full of example prompts and techniques that can help you get the most out of generative image AI.

Different generative image AI tools behave slightly differently, so try a few of them and find out which ones you’re comfortable with. And, until the AI landscape stabilizes, stay abreast of new tools and techniques and keep finding ways to make them work for you.
