COLUMN: Building or breaking trust: Independent garden centers and AI

Casey Schmidt from Colonial Gardens shares examples of how the lazy application of AI can erode garden centers' status as experts and damage their authenticity, but also where it can give opportunities to build better relationships with customers.

Photo © Seventyfour/Adobestock

Among all the ways that we can claim to live in interesting times, the advent and proliferation of AI certainly has to be one. In the course of the last two years or so, generative AI has infiltrated every corner of our digital world. 

Generative artificial intelligence is a tool that takes a user’s prompt for text, images or video and produces an output from a deep-learning model trained on millions of data points gathered from across the internet.

College students are using it to write their essays. Marketers are using it to generate content at a faster and faster rate. Bad actors are using it as a way to scam folks out of their data and money. The democratization of this amazing tool gives us incredible opportunities for growth — and the potential to spell our own doom — if we don’t learn to use it correctly. 

Frankly, retailers and our customers are not being given the guidelines to understand this ever-changing toolkit. We don’t have a universal set of best practices that help us use these tools and understand their limitations. In a world where AI promises to be a game-changer, we should be careful not to see all the world’s problems as a nail to AI’s hammer.

As the world transforms around us, independent garden centers should be thoughtful about how we use these revolutionary tools to our advantage without compromising what makes us special. AI is inevitable; how we use it is not.

In this article, I want to share some examples of how the lazy application of AI can erode our status as experts and damage our authenticity, but also where it can give us opportunities to build better relationships with our customers.

Farming out expertise

I find a customer at our fertilizer shelves, deeply engrossed in a search for a strange array of fertilizers and ingredients. He eventually tells me that his list was generated by an AI chatbot — a paid one, he assures me, as if this makes it a more reliable source on gardening. 

My initial instinct: AI has never grown a plant.

The second thought: Yet. AI has not grown a plant yet. 

Generative chatbots are replacing Google searches. Instead of sorting through multiple websites for information they find most useful and trustworthy, people are trusting ChatGPT to sift through that data and give them the most accurate answer. But accurate in what way? If ChatGPT has never grown a plant, what sources is it prioritizing in relaying information to the customer?

I ended up looking at the product mix the AI had recommended and substituting a few similar products to cut out duplicates so that the customer spent less money. The customer may have entered his growing requirements, but the AI had not considered his budget or the practicality of purchasing so many products.

My critics may point out that the customer did not ask a specific enough question. My response would be: Who is training us to ask the right questions to get the right answer from ChatGPT?

Last year during a marketing webinar, an expert recommended using ChatGPT or an equivalent to help write blogs for company websites. Constantly chasing SEO and running thin on time, I gave it a shot.

The first blog we produced was about succulent care and came out looking good enough. I edited it to better adhere to our product mix and best practices, but it served its function and cut my writing time down. I took a break from it over the last year as some ethical concerns popped up (more on that later).

This spring, again short on time, I asked ChatGPT to come up with descriptions of each of our 25 varieties of Japanese maples and their sizes within a specific word count. It dutifully complied.

I cross-checked the first type of tree for accuracy. ChatGPT got the size incredibly wrong. I looked at the others. The accuracy varied wildly. 

It turns out that that’s not terribly uncommon. ChatGPT is getting better, but one article cites the accuracy rate of ChatGPT (as of June 2025) as about 87.7%. Digging deeper into that article, ChatGPT is most accurate in English with specific prompts on topics with information available prior to its cutoff learning date.

Most sources available cite ChatGPT-4o as having a cutoff date of June 2024 at the latest. As of this writing, that’s over a year old. While that date is good enough for most casual blogs, it will always be behind on the latest science and the newest varieties of plants.

Another problem you may encounter is that your chatbot probably won’t tell you if it doesn’t know the answer to the question. It will make something up or “hallucinate.”

High-profile cases include the Chicago Sun-Times, whose list of recommended summer reads included books that did not exist. The U.S. Department of Health and Human Services published a paper that included sources that never existed. These absolute screw-ups deeply undermine the trust the public might have in the integrity of these organizations and their processes.

While our industry has lower stakes than HHS, mistakes could still have consequences. AI could recommend a poisonous plant as edible. It could inaccurately recommend plants as drought tolerant, resulting in customers investing in plants that will die. It could misdiagnose a plant disease or offer an ineffective treatment.

When we make these mistakes, we can own them. When you farm out your expertise to a computer, you risk undermining your own authority as an expert.

So, when we use AI to write our blogs or social media descriptions, how are we ensuring the accuracy of its output? When we don’t know the information it prioritizes, and we know that it makes mistakes, how are we ensuring that our content reflects accurate and up-to-date information?

Even if we don’t use AI, we should be asking ourselves these same questions. How are we keeping ourselves up to date with the latest science and trials? What do we use as a trusted source? By keeping these questions top of mind, we can better advise our customers on proven practices and start steering them away from misleading horticultural folklore.

Authenticity and trust

In 2024, after months of speculation that something was wrong with Catherine, Princess of Wales, the British royal family released a photo. The photo featured Prince William, Princess Kate and their children smiling for the camera for Mother’s Day. The message: All is well.

Keen observers began to pick the photo apart, noting inconsistencies in the tile work, the door frame, the spaces between the zipper teeth and the angle of a child’s arm. If the photo had not been entirely fabricated by a generative AI image tool, it certainly seemed to have been altered by one.

Suddenly, this photo intended to prove Princess Kate was fine served only to confirm that something was terribly wrong and that the public was being lied to. Indeed, the royal family eventually revealed that Princess Kate was battling cancer.

While you may argue that the royal family has the right to navigate health challenges in private, they further fractured an already fragile trust with their community — and the world — by not only denying that there was a problem, but by falsifying the proof.

Strangely, I have a similar gut reaction every time I receive an email from one of my tropical plant vendors with an AI-generated cover photo.

On the one hand, I get it. We do not have the time to stage a bunch of plants around Santa’s sled for a simple marketing email.

On the other hand, what are we accomplishing? Anyone glancing at the photo for more than one second will notice that the palm transforms into a Monstera, that stems don’t connect to leaves, and that the bows on the presents look strangely alien.

As the customer, I am not buying an AI-generated plant. I am buying a real plant directly from this vendor. Yet the first photo in an email marketing the vendor’s products isn’t a picture of those products at all.

Do our customers have the same reaction when we use AI image generators with even less sophistication than the royal family? I have seen garden centers use generative AI images in much the same way, particularly to advertise seminars.

Again, I get it. Many of my photos from our seminars include the inevitably messy background of a greenhouse with dusty fertilizer bags. The planters that the customers make have not grown into their full glory as they will in a few months.

But how do we serve our own cause of creating real experiences for our customers by serving them up obviously fake images? Are our customers convinced by glossy, "uncanny valley" Santas and backgrounds that do not feature our own stores or our own staff? How does this imagery set us apart on their social media feeds, where AI images run rampant?

I think that AI imagery can be used in editing, but we should be very thoughtful about how we apply it. We can use it to remove a messy background, for example, with little compromise to a true representation of what we’re presenting. We should be using it to present the best version of ourselves without setting unrealistic expectations. The customer should see the image and be inspired without being disappointed when they show up at our door.

Building trust while busting misinformation

It started a few years ago with a picture of a black bleeding heart. Several customers called asking if we had it in stock. Unfamiliar with such a thing, we searched and found a single photo, likely from Tumblr, of a Photoshopped arch of black Dicentra blooms.

The scary part was not the photo itself or that customers thought it was real; the original image was likely made as a piece of art. The danger was that the photo was now circulating on multiple marketplace sites, advertising seeds for the black bleeding heart.

This phenomenon exploded in spring 2024. Before the royal family released their photo, the first iterations of AI-generated photos began to flood social media feeds. To most anyone under the age of 35, that first round of photos was clearly fake. We grew up watching CGI animation, and we know it when we see it. The AI-generated plants looked like creatures out of Star Wars rather than real plants. The people in the images had too many teeth and strange hands.

Still, we started to receive many calls from people asking about fake plants, like the "Cat’s Eye Dazzle." Older generations, particularly those less married to the internet, had a much harder time discerning what was real, even at those early stages. The posts seemed benign, originating on banal-sounding Facebook pages like “Creative Gardening.” It wasn’t until you started looking at the comments that you found the scammers hawking seeds.

When hearing about a new plant, our staff whipped out their phones to see if we carried something similar. 

“I’m sorry. That plant isn’t real. It’s AI.”

Some callers were simply disappointed. One had bought seeds already. 

Since the days of those Star Wars plants, the AI images have become closer to reality. On a weekly basis, we encounter customers looking for red hostas. The latest person I informed said, “Wow, you must think I’m a sucker for believing it.”

“I absolutely do not.” 

If glow-in-the-dark petunias exist, how could a red hosta seem unreasonable?

After I informed one caller that the plant he was looking for was not real, he responded with something along the lines of, “Thank you for telling me. All of the other garden centers I called said that they didn’t have it in stock. I’m going to shop with you since you know what you’re talking about.”

That call was clarifying for me. Debunking these photos is not just the right thing to do; it’s good for business. Your customers are coming to you as a trusted source. Maybe it’s frowned upon to pull out your phone to Google something, but I’d rather give accurate information than assume a fake plant is just a variety I’ve never heard of.

I shudder to think of all the people who did not have an IGC to call to ask about a plant before they were scammed. We should be part of their community. We should be a trusted source of information. We should be doubling down on our experience and expertise on our subjects. We should be reaching out to our customers to inform them of this danger and remind them that we are invested in them as much as they should be invested in us.

Being consistent with our values

Amid the furious production of articles about how AI will change the world, one underreported aspect is the sheer amount of energy and resources it takes to power these systems. 

I mentioned earlier that I took a break from AI for ethical reasons. I spent five years of my career in conservation education, talking to folks about the impacts of climate change and the challenges facing endangered species. Since returning to my family’s garden center, we have continued to make strides forward in sustainability, including installing solar panels.

Sustainability is part of our brand now and part of our value to our community. Knowing that, I can’t justify using ChatGPT to produce a simple blog post when I will still need to take the time to edit and fact-check it. The amount of water and energy used goes against the values I hold and that we’ve built into our business. 

An answer from ChatGPT uses roughly 10 times the energy of a Google search. How does the use of AI align with our messaging? Is there an element of hypocrisy in selling a rain barrel to help conserve water while using AI to write the product description or social media post? While our communities may not yet feel the impact of AI’s exploding power and water consumption, many eventually will, and some already do.

Go touch grass

There is a phrase on the internet used to tell people that they’re disconnected from reality: “Go touch grass.” While it’s usually used to discredit someone in an argument, it’s actually good advice. Go outside. Touch grass. Play in the dirt. Look at your plants. You are part of the world — a real, living world outside of the screen.

We are blessed in this industry that our products and mission are rooted in caring for our earth, and by extension, our community. When someone needs to “touch grass,” they should first think to come to us for authentic, real-world experiences. We are literally and figuratively selling grass to touch.

We shouldn’t be cheapening our brand by marrying it to the language and imagery of AI for the sake of convenience. We should be doubling down on authenticity. I think that while AI can certainly be used to professionalize our work online, we should be careful with how much it becomes part of our identity. 

Maybe AI will run us all out of a job. Maybe we’ll all live in a virtually driven world. But maybe the opposite will happen.

Perhaps the endless train of AI slop on the internet will drive people to seek out analog experiences, the exact kind of retail experience we specialize in. Maybe people will intentionally move to “touching grass,” immersing themselves in nature and creating peaceful, meaningful spaces with plants.

In a world where that is still possible, let’s be the ones to offer the alternative — by bringing our own authenticity and creativity, rather than relying on a computer to do it for us.

Casey Schmidt is a conservation educator at independent garden center Colonial Gardens in Phoenixville, Pennsylvania.