AI Buzzwords, Demystified: What They Really Mean for Ecommerce Teams
From GenAI to agentic AI to multimodal models, AI buzzwords are everywhere, and ecommerce teams are expected to keep up. But with so much hype and so little clarity, it’s hard to separate what’s truly transformational from what’s just vendor buzz.
Join Constructor and AWS as we unpack four of the most talked-about AI terms that retailers hear from vendors, analysts, and media, but rarely see explained in ways that connect to real ecommerce value.
If you’re looking to use AI technology to create more enjoyable online shopping experiences and drive revenue, not just any vendor will do.
Constructor is the only product discovery platform that’s built from the ground up with advanced AI — no legacy keyword or vector engines in sight.
That’s why we’ve never lost an A/B test when it comes to helping enterprise ecommerce customers hit their most critical KPIs.
Watch the webinar
Watch the full webinar recording with transcript:
Speakers
Achieve Your 2025 Ecommerce Goals
Ditch legacy keyword engines and vector-centric platforms, and hit your revenue goals faster
Say goodbye to:
- Zero-results searches
- Shoppers rage-quitting your site
- Labor-intensive synonym and redirect setup (plus hours manually boosting and slotting items)
- Reformulated & frustrated searches
- Lost sales opportunities and negative impact on customer lifetime value
Constructor:
- Hyper-personalized search results driven by real-time, onsite shopping behavior
- AI-driven search results optimized for the business KPIs that you define
- Search learnings that inform holistic discovery across your entire site
- Advanced searchandising functionality and a merchandising dashboard, off the shelf
- The only AI-first ecommerce search solution specializing in enterprise commerce
We help enterprise ecommerce brands get ahead, and stay ahead
AI-native product discovery that’s a win-win for companies and customers.
Constructor’s platform was built by data scientists and engineers who wanted to create an advanced shopping experience that accurately displays the most attractive products to every individual.
Test our platform risk-free and see how we can prove tangible ROI in 2-4 weeks, with Constructor’s Proof Schedule.
Our Native Commerce Core™️ (which fuels our entire product discovery experience) learns, adapts, and evolves with every user interaction to optimize your results across all key KPIs — like revenue, conversions, and profit.
Built on proprietary algorithms, large language models, and advanced transformers, the Native Commerce Core™️ is our secret sauce that makes Constructor's AI shopping assistant exceptional.
Take our platform for a free test drive with the Constructor Proof Schedule
AI Buzzwords Demystified
Welcome and Introductions
Nate Roy: Welcome everyone and thank you for joining. Today's session is called AI Buzzwords Demystified. Our goal is to cut through the noise and clarify what terms like foundation models, generative AI, agentic systems, and MCP actually mean for e-commerce teams. These concepts are popping up everywhere, but they're often used loosely depending on where you're hearing them.
We're here to break them down in practical terms — what they do, where the value is, and how to separate signal from the hype. We'll move through each topic, share some real-world applications, and hopefully leave you with a clearer sense of what matters most as you evaluate AI for your organization.
Before we dive in, just a quick note on housekeeping. If you have questions during the session, please drop them into the Q&A pod on your screen — please drop them there instead of in the chat. We'll save time at the end to answer as many as we can. And yes, this session is being recorded, so if you have to leave early or want to share it with a team member later on, you'll receive a link to the recording in a follow-up email.
Today we're joined by two really great speakers. The first is David Dorf, Global Head of Retail Industry Solutions at AWS. We also have Eli Finkelshteyn, CEO and co-founder of Constructor. They'll be guiding us through what these AI terms really mean and how they're being applied across the retail landscape. David, would you mind giving a quick intro?
David Dorf: Thanks, Nate. David Dorf — I look after solutions at AWS for retail, CPG, and restaurants. I've been in the industry an awfully long time. I don't want to admit exactly how long.
Nate Roy: And Eli, over to you as well.
Eli Finkelshteyn: I'm Eli Finkelshteyn, the co-founder and CEO of Constructor. My background before this was data science. As you can probably imagine, this is a really exciting time for me to be alive and participating in the industry and being able to geek out on this stuff. Thank you folks for joining us and geeking out on it with us.
Nate Roy: Appreciate both of you being here. My name is Nate Roy, Director of Brand and Content here at Constructor. Really excited to dive into the conversation. Let's get things going.
Foundation Models: How They've Transformed E-Commerce
Nate Roy: The first aspect of AI we wanted to dive into today is foundation models — large pre-trained AI models that serve as a base for a wide variety of applications. Things like search, personalization, customer service, and even content creation. How have foundation models transformed the way e-commerce platforms understand complex customer queries compared to the era before LLMs? Eli, if you wouldn't mind taking this one.
Eli Finkelshteyn: I'll give a bit of background and then try to keep it as practical as I can. I think Google came up with this example in the first place, but I really liked it so I'm going to use it.
The biggest difference between foundational models and what came beforehand — largely traditional vector search — is that you could figure out related ideas with vector search, but you couldn't really understand relationships of words to other words within a certain sentence or body of text.
Here's an example: if somebody searched for "butter," with traditional vector search and embeddings, you could probably figure out that something nearby to that concept is margarine and maybe return it. At the same time, you might also return some other things that are nearby to the concept that aren't as relevant.
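The butter example can be sketched in a few lines. The tiny hand-made vectors below are purely illustrative stand-ins for a real embedding model (which would use hundreds of learned dimensions), but they show the mechanic: cosine similarity surfaces "margarine" as a close neighbor of "butter", while also pulling in loosely related items like "cheese".

```python
import math

# Hand-made 3-d "embeddings" (dims roughly: dairy-ness, spreadability, produce-ness).
# Invented for illustration only -- not from a real embedding model.
embeddings = {
    "butter":    [0.9, 0.8, 0.1],
    "margarine": [0.7, 0.9, 0.1],
    "cheese":    [0.9, 0.2, 0.1],
    "apples":    [0.1, 0.0, 0.9],
}

def cosine(a, b):
    """Cosine similarity: dot product divided by the product of magnitudes."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Rank every other item by similarity to the "butter" vector.
query = embeddings["butter"]
ranked = sorted(
    ((name, cosine(query, vec)) for name, vec in embeddings.items() if name != "butter"),
    key=lambda pair: pair[1],
    reverse=True,
)
for name, score in ranked:
    print(f"{name}: {score:.2f}")
```

With these toy numbers, "margarine" ranks first and "apples" last, but "cheese" also scores fairly high, which is exactly the "nearby but not as relevant" problem Eli describes.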
What foundational models let you do is understand the relationships that words have to each other, which is really important to how humans understand meaning. If you search for something like "find me a flight from Boston to London" — it's really important which city comes after "from" and which comes after "to." If you switch those two around, it completely changes the meaning. A flight from Boston to London is very different from a flight from London to Boston.
With traditional keyword matching, or even traditional vector search, you wouldn't have a very good way of teaching the system that those are two different things. Foundational models, which are based on transformers — that's what the T in ChatGPT stands for — were the first thing that really consistently let us do that.
The way the average person might see this: when they're speaking to something like ChatGPT, it feels like it really understands them. The words it gives back look a lot like what a human would produce. That's possible because it has this really good understanding from the foundation model of those relationships between words. It opened up really cool new possibilities both in search and discovery and across the entire world around us.
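The flight example can be made concrete with a bag-of-words demonstration. A common pre-transformer shortcut was to represent a query as the average of its word vectors, and under that scheme "flight from Boston to London" and "flight from London to Boston" collapse to the identical vector, so the system literally cannot tell the two trips apart. The word vectors here are invented for illustration.

```python
# Toy word vectors; values invented for illustration (dyadic fractions so
# floating-point sums are exact regardless of word order).
word_vecs = {
    "flight": [0.25, 1.0],
    "from":   [0.5, 0.125],
    "to":     [0.5, 0.25],
    "boston": [1.0, 0.25],
    "london": [0.25, 0.75],
}

def mean_pool(sentence):
    """Average the word vectors -- a bag-of-words representation
    that throws away word order entirely."""
    words = sentence.lower().split()
    dims = len(next(iter(word_vecs.values())))
    return [sum(word_vecs[w][d] for w in words) / len(words) for d in range(dims)]

a = mean_pool("flight from Boston to London")
b = mean_pool("flight from London to Boston")
print(a == b)  # True: opposite trips, identical representation
```

A transformer, by contrast, attends to each word's position and neighbors, so "from Boston" and "to Boston" produce different internal representations.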
Nate Roy: Super helpful. David, anything you would add from your side?
David Dorf: I think it's all about context — understanding context from one word to another, and that's especially helpful in product search and trying to understand the types of products you're looking for. That context allows foundation models to provide better answers that are more targeted toward the intent of the shopper.
Domain-Specific Foundation Models
Nate Roy: That segues nicely into the next topic. There's this concept of domain-specific foundation models being used in product discovery and recommendations. What makes those more powerful than general-purpose ones? Eli, I'll pass this to you.
Eli Finkelshteyn: This is something we're personally very excited about at Constructor. The general foundational model — I'd say two things about it. One, it knows something about everything. You can throw whatever at it. If you're interfacing with AWS Bedrock, for example, it's got a lot of interfaces to different models. Claude is another good example — you can throw whatever at it and it'll figure a good deal of it out. It's trained on just a ton of general information about the world.
Domain-specific models, on the flip side, are trained on a very specific set of data that's very valuable to a certain domain. Their goal isn't to take whatever you throw at them and figure it out — it's to solve problems within a very specific domain.
We've been very meticulous about collecting a ton of e-commerce clickstream data. This is what we've traditionally used to train our algorithms — basically using it for reinforcement learning. You can have examples of things that you show to a user within a given context, maybe a search query or a set of recommendations on checkout or in an email, and then see which of those things they give positive reinforcement on. They click on it, they add it to cart, they say, "This is great, this is something I want to buy." And which things do they give negative reinforcement on — they scroll right past it, maybe they close the email.
You can use that clickstream data to build a domain-specific model specifically for e-commerce. You're training it in a similar but different way to how most foundational models are trained. Most are trained — and I'm very much oversimplifying — trying to give you the next best word back. After every word, it figures out what's the next best word, remembering all the context beforehand.
But the underlying transformers can help you predict the next best anything. They can help you understand relationships between not just words but between any sort of patterns. For us, this was really interesting with clickstream data because that same thing you can do with words — predicting next best word — with clickstream data you can predict next best action.
Given everything I know about a user, a particular industry, seasonality, a particular time — what can I show next that's most likely to delight that shopper? Maybe somebody has bought organic milk and some organic bread. Based on that, when they're searching for strawberries, there's a good chance the system will surface organic strawberries. Similarly, if you go into something more general like the frozen foods aisle, maybe based on what they just shopped, you'd show frozen vegetables that also happen to be organic.
This domain-specific model creates possibilities that weren't possible before. They're not possible for the vast majority of companies out there because you need a ton of data to train on. But if you have that data, you can create a model that's really special. And this isn't just theoretical — we've A/B tested this extensively. We don't really use traditional vector search anymore. We do everything on top of transformers, on top of these domain-specific models, because they perform so much better at increasing conversions and revenue.
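The "next best action" idea can be pictured with a drastically simplified sketch: treat each shopper session as an ordered sequence of item interactions, learn which items tend to follow which, and recommend whatever most often followed the current context. The sessions and items below are invented, and a 1-step transition count stands in for what would really be a transformer sequence model trained on billions of clickstream events.

```python
from collections import Counter, defaultdict

# Toy clickstream: each session is an ordered list of items a shopper
# engaged with (clicked / added to cart). Invented for illustration.
sessions = [
    ["organic milk", "organic bread", "organic strawberries"],
    ["organic milk", "organic strawberries"],
    ["organic milk", "organic strawberries", "organic yogurt"],
    ["milk", "strawberries"],
]

# "Training": count which item tends to follow each item.
transitions = defaultdict(Counter)
for session in sessions:
    for current, nxt in zip(session, session[1:]):
        transitions[current][nxt] += 1

def next_best_action(current_item):
    """Predict the item most likely to delight the shopper next,
    given only the current item (a 1-step Markov approximation)."""
    followers = transitions[current_item]
    return followers.most_common(1)[0][0] if followers else None

print(next_best_action("organic milk"))
```

Here shoppers who engaged with "organic milk" most often went on to "organic strawberries", so that is what gets surfaced next, mirroring Eli's organic-shopper example.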
Training vs. Inference: What Do These Terms Mean?
Nate Roy: One of the concepts you mentioned a few times is training. David, maybe you could help define that term — what does training actually mean, and what's the difference between that and inference?
David Dorf: These foundational models are trained on what we call the corpus of data from the internet — just tons and tons of data pulled into these models so they're general purpose. General-purpose models are great at answering lots of different general questions. But as Eli was talking about, for retail and shopping we really want domain-specific models that understand product catalogs. So you want to train your model to have that focus.
By having better training on a particular focus like a product catalog, not only are you giving better answers, but you're also reducing the cost because it's a much smaller memory footprint. One of the big things that allowed foundation models and their training to take off was the GPU. When you're training a model, it takes a lot of parallel threads to pull in all that information and populate your neural network. A typical CPU has a few threads, but it's not nearly enough. A graphics processing unit used for games has tons of these threads, so we typically use GPUs for that sort of training.

However, Nvidia has the most popular ones and they're in high demand, so they're very expensive and hard to get. A lot of companies have started building their own chips purpose-built for AI training. In the case of Amazon, we have AWS Trainium, a purpose-built chip that helps train these models much faster.
Once you've got a model trained, you then need to ask it questions and get answers — that's called inference. We use similar chips for multi-threaded inference, and we have an inference-specific one called Inferentia, which is optimized for AI models. It's really important to address the entire stack — the chips, the models themselves, the infrastructure on top to scale them. It's a large ecosystem required to actually execute on these use cases.
Key Takeaway: Don't Build Your Own Foundation Model
Nate Roy: If you had to give one piece of advice to someone on an e-commerce team, what's the number one thing they need to understand about foundation models? David, we'll start with you.
David Dorf: The number one thing is you don't want to start from scratch and build your own. There are ones out there that you should really rely on. There are general-purpose ones like Llama or Anthropic's Claude. There are task-specific ones that you may have done additional training on to serve a particular task. Then there are domain-specific ones. Find one that works for your particular use case.
Nate Roy: Eli, do you agree?
Eli Finkelshteyn: I'd second that. Unless you've got a really good reason — and maybe in those situations, like David was talking about, there are chips you can use to build it — but if there's something already built by somebody that has a ton of data that you can just use, don't do science projects.
It's actually kind of funny, this used to be an engineering interview question I would ask when interviewing data scientists — just to see if they're going to go off and do science projects on stuff they don't need something as heavy for, or if they're going to use tools that already exist so they can focus their energy on where they'll get the best bang for their buck. Should you build your own foundational model? 99% of the time the answer is no unless you've got a really good reason.
Generative AI: What Are E-Commerce Companies Looking For?
Nate Roy: The next concept we wanted to chat about is generative AI. For folks less familiar, Gen AI is artificial intelligence that can create new content — text, images, code, even music or video — based on patterns it has learned from existing data. This is one of those concepts that's been particularly buzzy, and I do get the sense sometimes that a little bit of the novelty is starting to wear off. What are e-commerce companies looking for now? David, I'll start with you.
David Dorf: It's funny you mentioned "buzzy." I was looking at Gartner's hype cycles, and generative AI is kind of in this trough of disillusionment now, where everyone was excited that it was going to solve all these different problems and they're finding out there are some limitations. At the same time, agentic AI is at the peak of inflated expectations — that's the new thing people are really putting a lot behind.
What I've seen over the last year or so is a real focus on ROI and proven impact. We spent a lot of time doing experimentation and figuring out what works. We've got a lot of information around that now. Retailers are really looking to say, "How do I get the most out of this? How is it really going to help my business?"
I think there are two really big goals. First is improving the customer experience — better product pages, better search, answering questions, increasing the confidence a shopper has in making a purchase. And then the second is making employees more productive.
A couple of examples: I've had a customer doing catalog enrichment, using generative AI to look at product images and make sure attributes are pulled out into the catalog, which actually improves search abilities. They got around 90% accuracy, which led to about a 50% labor savings in getting new products added to the catalog. Then something you don't normally think about — another customer used intelligent document processing and saved $4 million a year processing customs paperwork. All those products you're selling on your website, some come through customs from different countries with a lot of paperwork. Behind the scenes, generative AI can make employees significantly more productive. People are looking at ROI closely now and they're focused on both that customer experience and employee productivity.
Content Creation and the Gap Between "Cool" and "Production-Ready"
Nate Roy: One of the other use cases is how Gen AI is improving the speed and quality of content creation for e-commerce. How has that enabled e-commerce teams to move faster? Eli, maybe I'll have you start.
Eli Finkelshteyn: It's also a little tied to the previous question. I think there's this interesting gulf — and this is maybe a lot of where the trough of disillusionment comes from — between what's good enough to look cool versus what's good enough to productionize and put your name and your brand on.
If we're using some of the numbers in David's example, maybe 90% accuracy is good enough to look cool. You're like, "Wow, this thing can do something I've never seen before." But if 90% of the attributes you're generating are correct and 10% are incorrect, is that good enough to put live? I'd argue it isn't. If one out of 10 product attributes you're creating is just wrong because the model is hallucinating, somebody at your company is probably going to get unhappy with you.
There's this interesting problem of figuring out where those marks are and what it takes to get from one to the other. MIT did a study a few years ago — when it was humans tagging, they got about 96.4% of things right. Which sounds great, but it's still about one out of every 28 things wrong. That's kind of what humans do. So probably what's good enough for generative AI to get to production is something within that same range.
Then the question becomes: can you get there with a general foundational model? Or do you need something customized — maybe a general model with some RAG examples, maybe reinforcement learning, multi-level LLMs? Teaching it not just on a general "generating attributes" problem, but on that specific attribute: this is specifically what a good attribute looks like, this is specifically what a bad one looks like.
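One of the customization patterns Eli mentions, steering a general model with retrieved domain-specific examples, can be sketched as: find the labeled examples most similar to the new product and pack them into a few-shot prompt. The products, attributes, and word-overlap similarity below are invented placeholders; a production system would use embedding search and an actual LLM call.

```python
# Labeled examples of good attribute extraction (invented for illustration).
labeled = [
    ("Merino wool crew-neck sweater", {"material": "merino wool", "neckline": "crew"}),
    ("Slim-fit stretch denim jeans", {"fit": "slim", "material": "denim"}),
    ("Waterproof hiking boot, ankle height", {"use": "hiking", "feature": "waterproof"}),
]

def similarity(a, b):
    """Crude stand-in for embedding search: Jaccard overlap of words."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def build_prompt(new_product, k=2):
    """Assemble a few-shot prompt from the k most similar labeled examples,
    then ask the model to do the same for the new product."""
    best = sorted(labeled, key=lambda ex: similarity(ex[0], new_product), reverse=True)[:k]
    lines = ["Extract product attributes as key: value pairs."]
    for title, attrs in best:
        lines.append(f"Product: {title}")
        lines.append("Attributes: " + ", ".join(f"{key}: {val}" for key, val in attrs.items()))
    lines.append(f"Product: {new_product}")
    lines.append("Attributes:")
    return "\n".join(lines)

prompt = build_prompt("Chunky wool sweater with crew neck")
print(prompt)  # This prompt would then be sent to an LLM for completion.
```

The point of the retrieval step is that the model sees concrete examples of what a good attribute looks like for products like this one, rather than relying on its general world knowledge alone.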
There's also this interesting philosophical question: how high should the bar for "good enough to go to production" be, and is it the same for AI versus humans? Think about self-driving cars — people are generally less okay with AI causing a crash than a human causing one. Humans cause crashes every day and most don't make the news. If an AI causes a crash, that is news. So deciding what "good enough" looks like for each use case is this hairy, interesting philosophical problem.
Beyond Text: Multimodal AI Capabilities
Nate Roy: I think what's interesting about what you just said is that people tend to focus on text-based capabilities, but there are actually much broader use cases. David, I know we were talking about tools like Nova Reel and Canvas. How do things like that showcase the breadth of Gen AI's capabilities?
David Dorf: There are a lot of multimodal LLMs out there today that can combine text and images and things like that. They're fantastic when your use case requires that. But in some cases, your use case requires just text or just an image or voice. You can be more cost-effective using a particular model trained for that specific task.
We have a couple in the AWS stable. One is called Nova Sonic — it's voice-enabled AI. If you've ever used Alexa Plus, you've seen it. We're also looking at using it for drive-throughs in restaurants, and you can use it for interactive picking in a warehouse. We have Nova Canvas, which is for images. A lot of retailers use it to manipulate the background of product images — if they want a beach-themed product page, they can take all the images and put a beach in the background. We also use Canvas for virtual try-on: upload a picture of yourself and a picture of a sweater, and it'll combine them to look really realistic, so you have a better idea of what you'd look like in that sweater.
That's great for increasing shopper confidence and clicking the buy button. But it's also great for reducing returns on the back end. And then Nova Reel is great for short product videos to spice up a product detail page or use in advertisements. We had a customer that used videos and images to create impactful product listings and got a 45% increase in impressions, just because humans are attracted to video and great images.
Eli Finkelshteyn: What David's talking about here is really exciting for us at Constructor. As these capabilities get good enough to put on your site, they're creating so much more content for each PDP than was ever available before.
Now you've got this interesting decision: which of those things do you show and for which user? If I can create a hundred different angles of a person wearing that sweater, do I show each one to each individual user? Maybe I can AI-generate different models. Which model is a given user going to be more attracted to when they see that sweater?
You can do the same for text generation. Maybe I'm the sort of person who wants something described humbly. Maybe somebody else wants it described in an aspirational, optimistic way. Do you need to show both of us the same description? If AI gets really good at describing things in both of those ways and maybe a hundred others, that's a really cool new personalization problem we didn't have before. It's the wild west, but it's a really exciting time to be in the industry.
David Dorf: I can imagine having almost a personal website where all the product pages have been tailored to my likes and interests. A little bit in the future, but I could certainly see that.
Key Takeaway for Generative AI: Experiment Early
Nate Roy: You've both done a great job highlighting the wide scope of what generative AI is capable of now and in the future. What's the biggest takeaway for folks in the audience? Eli, we'll start with you.
Eli Finkelshteyn: Probably the biggest thing to me is just experimenting early. I liken this to the early internet. Maybe not all of the things it was capable of were general-public-ready immediately, but the people that started to get good at it — Amazon is a really good example, a very early e-commerce retailer — that probably had a part to play in how big a company it is now. Even if it doesn't feel like some of this stuff is the exact yellow brick road yet, it's probably worth it to start experimenting. Especially with new interfaces, especially with some of the generative capabilities like generating attributes and content. Because it's a muscle you're going to need. It's hard to imagine a future where that's not a muscle you need to succeed.
David Dorf: I agree. I think retailers should enumerate a bunch of use cases they think would apply, build some prototypes to prove out viability, and then project ROI on each. For the ones that are going to give you the best return, accelerate those and work on them right away. For the ones that maybe don't seem viable today, set them aside — things are moving fast, and in a couple months technology might have advanced enough to address them.
Agentic AI: What Does It Really Mean?
Nate Roy: I think the next topic we wanted to dive into is agentic AI — a super interesting concept and probably the hottest, buzziest topic right now. For folks not familiar, these are systems that can take actions toward goals — like finding specific items, answering product questions, or even tweaking your e-commerce storefront — without requiring constant human input. What does it really mean for AI to be agentic, and how do we separate true agentic AI from the generative AI use cases we discussed? David, I'll hand it over to you.
David Dorf: You mentioned generative AI is about generating content; agents are about acting. There are a lot of things out there you could call co-pilots or assistants that are helping you do things — that's more generative AI. Agents are much more autonomous. They go off and do things on their own.
I have a litmus test for an agent. It's three things: it has to be semi-autonomous, if not autonomous. It has to have some type of reasoning ability — actually breaking larger problems down into smaller problems and solving them. And it typically has access to tools or data.
For example, if you asked an agent, "I need to set an initial price for this new product I want to put on my site," it could look at your forecast and see what the cold-start demand might be. It could look at competitor websites to see what prices are offered on similar items. It might look at your pricing guidelines to determine what margins you're targeting. It goes off and figures out all that information on its own using tools — getting your demand forecast, scraping competitor sites, looking at documents that explain your brand's pricing rules. Then it uses its reasoning ability to determine the right price.
The potential is enormous. We can do all sorts of process automation, get answers more quickly, do tasks that aren't interesting to humans that happen over and over again. Agents looking at supply chains is a great idea too. But really, to be agentic, it's about autonomy, reasoning, and use of tools.
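David's pricing example maps directly onto his three-part litmus test, and a minimal sketch makes the loop visible: the agent autonomously calls tools (demand forecast, competitor prices, pricing guidelines), then applies a reasoning step to reach a decision. Every tool here is a stub returning invented numbers, and hard-coded logic stands in for the LLM-driven reasoning a real agent would use.

```python
# Stubbed "tools" -- in a real agent these would call live systems.
def get_demand_forecast(sku):
    return {"expected_weekly_units": 120}           # invented number

def get_competitor_prices(sku):
    return [24.99, 27.50, 26.00]                    # invented numbers

def get_pricing_guidelines():
    return {"min_margin": 0.30, "unit_cost": 15.0}  # invented numbers

def price_new_product(sku):
    """Agent loop in miniature: gather facts via tools, then reason to a price."""
    forecast = get_demand_forecast(sku)       # gathered for context; a fuller
    competitors = get_competitor_prices(sku)  # reasoning step would weigh it
    rules = get_pricing_guidelines()

    # Reasoning step: undercut the average competitor slightly,
    # but never violate the margin floor from the guidelines.
    target = sum(competitors) / len(competitors) * 0.98
    floor = rules["unit_cost"] / (1 - rules["min_margin"])
    return round(max(target, floor), 2)

print(price_new_product("SKU-123"))
```

Autonomy (it runs end to end with no human input), reasoning (it breaks "set a price" into sub-questions and combines the answers), and tools (the three stubbed lookups): that is the litmus test in code.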
Agentic AI in Retail: New Interfaces for Product Discovery
Nate Roy: Now that the audience has a better understanding of what makes agentic different, Eli, can you share some examples of how agentic AI is being deployed in retail?
Eli Finkelshteyn: I could talk about this for the rest of the webinar, but I'll try to be quick. One of the things that gets me excited within product discovery is new interfaces. Agentic AI is creating the capability of making some of the first consistently valuable new interfaces we've seen within discovery in a long time.
For those of us that have been in the space a long time, we had search and browse for a very long time. Then we had the innovation of recommendations — first around the late 90s, maybe early 2000s — and now the vast majority of e-commerce websites have recommendations. But there haven't been a lot of broadly adopted innovations with consistent interfaces since then, and agentic AI is finally starting to change that.
One thing we're starting to see much more regularly is AI shopping agents. Something you can give general open-text information to — even stuff that website might know nothing about. Maybe I could tell it I'm going camping for the first time on Mount Whitney. The website has no idea what Mount Whitney is, but if it's got access to the outside world, it can reason about it. It figures out the location, that it's cold right now, maybe you really want a lot of supplies. Then based on knowing what products that website sells, it gives you pretty good product recommendations, maybe also some relevant content.
Another good example is a product insights agent. You might think of this as generating frequently asked questions and also allowing shoppers to ask their own questions. If you shop online often, you're probably starting to see these interfaces on PDPs with pre-generated questions based on reviews, what people are asking across the internet, product manuals, and things like that.
With the AI shopping agent, we can already see that people convert on it significantly more than they do even on search, where search already converts significantly more than browse and recommendations. The product insights agent is the one I'm personally most excited about. For the people who engage with it — and the engagement isn't small; I've seen it reach 10% of shoppers — I've seen conversion rates increase not by single digits, not double digits, but triple digits.
For any other new interface that anyone has tried to introduce within discovery, I've never seen anything like that. It makes sense when you step back and think about it — this is actually getting people to buy with confidence. Questions that were keeping them from buying are now being answered. Triple-digit increases are just not something we're used to in the product discovery space. It's really exciting to see that something like that is possible and AI is making it happen.
Inbound, Outbound, and On-Site Agents
Nate Roy: Those are two really good examples of on-site agents. David, we had talked about the concepts of inbound and outbound agents as well. Would you be able to define those?
David Dorf: This whole notion of external shopping agents is very interesting. Companies like Perplexity, Amazon, and Google are creating shopping agents that can go out on websites and shop on behalf of a shopper. From the perspective of a web shop owner, I call those inbound agents. That agent is coming onto your site to buy things on behalf of another customer.
Retailers need to be thinking about this as the way of the future. Do they want to optimize their site so agents can navigate better and make purchases, or do they want to block those agents because they want people on their site? That's a big conversation happening today. In fact, I just saw recently that Shopify decided to block shopping agents coming onto their site.
By the same token, there are outbound agents. If a customer is on your site looking for a particular item and you don't stock it, a lot of times what we used to do was create a marketplace with third-party sellers. But another option is to go off your site onto other partner sites to buy that item on behalf of your customer. Just like Perplexity or Amazon Buy For Me can go out and buy something, you could do that from your site as well — an outbound agent that uses the shopper's context to buy on their behalf.
And then there are on-site agents — the ones Eli was talking about that help customers discover and gain confidence in buying a particular product. I think this is actually the most powerful. If we think about retail over the years, people are used to going into a store and asking for advice. It's not a one-and-done. What's great about on-site agents is it's a conversation. "I want to build a deck. What tools and supplies do I need?" "How big do you want your deck to be?" "What sort of wood would you like?" "Do you have any tools already?" You can have a back-and-forth to figure out exactly what you need.
Retailers need to sit back and think about their strategy for inbound agents, outbound agents, and on-site agents. It's moving really fast and retailers are going to need to react.
Key Takeaway for Agentic AI: Protect Your Digital Presence
Nate Roy: We covered quite a bit on agentic AI. Eli, what's probably the number one takeaway folks should walk away with?
Eli Finkelshteyn: It's hard to distill it into just one. A lot of the stuff David was saying is really interesting, and deciding how you're going to deal with agents coming from off-site is something important. But the number one takeaway right now, because I think for retailers and brands this is probably the most important thing in this new world: you need to make sure that your digital presence is a place your shoppers want to come to.
They can now theoretically go and buy via Perplexity, Google, Amazon, and others. The number one thing — and this has been true for a long time but is going to become especially difficult — is making sure that despite all those options, you're giving shoppers a really good reason not to go shop for your products via those other places, but to go specifically to your digital properties: your app, your website, maybe your store and digital interfaces within that store.
That's where a lot of the innovation within agentic AI comes in. Can you make agents that are really valuable to users — something that has access to all of your reviews, product manuals, and product-specific information that you create and that other people don't — and then answer questions that users wouldn't be able to get answered nearly as well anywhere else?
As you start to think about shopping experiences that agentic AI can create, personalization within them becomes especially valuable when you've got loyalty programs and things like that. But overall, it's about using all of this to make sure that in this new world you don't become disintermediated. Make sure there's a really good reason for your shoppers to keep coming back. Agentic AI is either something that can really help you do that, or if you ignore it, it's going to make that problem a lot harder.
David Dorf: There are lots of uses of agents in retail, not just on the shopper side but in merchandising and supply chain. But I will echo what Eli said. Retailers need to think about this: on your site, if you have humans and you have shopping agents, how are you going to handle both? What happens to your ads, your loyalty program, your promotions? These are all things we need to think about.
MCP: Model Context Protocol Explained
Nate Roy: Our last concept is very tightly related to agentic AI. We're going to talk about MCP, which stands for Model Context Protocol. Basically, MCP is designed to provide contextual grounding for AI models, especially LLMs. It enables structured, dynamic communication between the model and an external system or environment, giving the model a clearer understanding of the task, user intent, and data. David, could you break this down in layman's terms?
David Dorf: In layman's terms, it just means that an agent needs access to some data and maybe some tools to figure something out, to solve its problem. MCP is a way to help discover and then invoke those different tools.
Using Eli's camping example: if I say, "I'm going to be camping on Mount Whitney," one thing an agent might do is first figure out the location of Mount Whitney — maybe it goes to Google and finds the actual location. Then it might say, given that location, what's the weather? So maybe it goes to weather.com and gets that information. Now it knows the weather, so it can help figure out whether you need a zero-degree sleeping bag or a 40-degree sleeping bag. MCP is a standard put forth by Anthropic, and there's a board that oversees it. It helps you discover and then use tools for agents.
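The discover-then-invoke loop David describes can be modeled in a few lines of plain Python. This is an illustrative sketch of the pattern only, not the real MCP wire protocol or SDK; the tool names and stubbed return values are all hypothetical:

```python
# Minimal model of MCP-style tool discovery and invocation.
# A real MCP client would fetch this registry from a server over the
# protocol; here it's an in-memory dict with stubbed data.

TOOLS = {
    "get_location": {
        "description": "Resolve a place name to coordinates",
        "fn": lambda place: {"lat": 36.5785, "lon": -118.2923},  # stubbed
    },
    "get_weather": {
        "description": "Fetch a forecast for coordinates",
        "fn": lambda lat, lon: {"low_f": 25},  # stubbed nightly low
    },
}

def discover_tools():
    """Step 1: the agent asks what tools exist (MCP's list-tools step)."""
    return {name: t["description"] for name, t in TOOLS.items()}

def invoke(name, **kwargs):
    """Step 2: the agent calls a chosen tool by name (MCP's call-tool step)."""
    return TOOLS[name]["fn"](**kwargs)

# The agent chains tools: resolve the location, fetch the weather there,
# then use the result to answer the shopper's real question.
loc = invoke("get_location", place="Mount Whitney")
forecast = invoke("get_weather", lat=loc["lat"], lon=loc["lon"])
bag = "zero-degree" if forecast["low_f"] < 30 else "40-degree"
print(bag)  # the nightly low decides which sleeping bag to recommend
```

The point of the sketch is the two-phase shape: the agent first enumerates what capabilities are available, then composes calls to them, which is what lets one agent work against many different servers without custom integration for each.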
MCP and the Future of External Shopping Agents
Nate Roy: Unfortunately, since we only have a couple minutes left, I'm going to skip to final takeaways on MCP. Eli, what's the big takeaway?
Eli Finkelshteyn: MCP is important for a lot of reasons, but for the folks on here, it's about how you handle agents that are going to be coming to your website.
For something coming to your website, do you want to just make it crawl for products, or do you want to put a little more intelligence behind it? Let's say you're willing to have Perplexity shop on your website on behalf of people. You can either just say, "Perplexity, go figure it out" — maybe it'll use a cached version of your products, maybe it won't have access to live inventory. It definitely won't have access to personalization.
What you can do via MCP is put something on the other side behind that MCP server. It might say, "I'm going to give you access to live inventory to make sure you don't recommend something that's no longer in stock. I'm going to give you access to live product catalogs." Maybe there's another agent on the other side figuring out, "I've seen this agent before. I know how to personalize to it because I know that agent's owner typically buys organic things, or typically buys things in a certain size."
MCP is one of the most important tools for making sure that if you're going to allow outside agents to come, they have a great experience and are more likely to buy from your site. There are other protocols for this, like A2A, but MCP is going to be one of the really interesting ones for how you allow that personalization data on your own site to get off-site and give access to those outside agents.
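The retailer's side of what Eli describes — tools that give a visiting shopping agent live inventory plus a light personalization signal — might look roughly like this. It's a plain-Python sketch of the shape, not a real MCP server; the SKUs, agent IDs, and preference data are invented for illustration:

```python
# Sketch of MCP-style tools a retailer could expose to inbound agents.
# All data here is made up; a real server would back these with the
# live catalog, inventory system, and loyalty/preference store.

INVENTORY = {
    "sku-123": {"name": "Organic trail mix", "in_stock": True},
    "sku-456": {"name": "Trail mix", "in_stock": False},
}

AGENT_PROFILES = {"perplexity-shopper-42": {"prefers_organic": True}}

def check_inventory(sku):
    """Tool: live stock check, so the agent never recommends a sold-out item."""
    item = INVENTORY.get(sku)
    return bool(item and item["in_stock"])

def recommend(agent_id):
    """Tool: rank in-stock items, nudged by what we know about this agent's owner."""
    profile = AGENT_PROFILES.get(agent_id, {})
    items = [i for i in INVENTORY.values() if i["in_stock"]]
    if profile.get("prefers_organic"):
        # Float organic items to the front for owners who prefer them.
        items.sort(key=lambda i: "organic" not in i["name"].lower())
    return [i["name"] for i in items]

print(check_inventory("sku-456"))        # out of stock, so don't surface it
print(recommend("perplexity-shopper-42"))
```

Compared with letting an agent crawl a cached copy of the catalog, tools like these keep the agent's answers current and let the retailer apply its own personalization even when the "shopper" is another piece of software.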
Nate Roy: The idea of agents personalizing to other agents is absolutely wild. Just crazy to think that's the world we're living in. I think we'll dive into Q&A at this point.
Audience Q&A: Where to Start with AI
Nate Roy: There's a really good question here that ties a lot of this together. If someone were to pick maybe one of these aspects of AI to start with and really get it right, where's the best place to start for ecommerce companies? David, we'll start with you.
David Dorf: I think there's a lot of low-hanging fruit with generative AI. There are a lot of things retailers can do that are relatively straightforward to increase employee productivity. Agents are a little more complicated and could be a second area. But today, I would focus on understanding what's available in foundation models, how you want to access those models in a consistent way, and then look at the use cases. The ones I see being used the most are catalog enrichment and product content generation for your PDPs. Those are really great use cases.
Nate Roy: I like that. Eli, any other use cases folks should consider as starting points?
Eli Finkelshteyn: The one David mentioned is really interesting. I'll add one more that I'm most excited about. It's some of those agents we talked about, but specifically versions of them with — I don't know if "prepackaged" is the right word — a prepackaged interface, an interface that we kind of already know works.
That wasn't something around one or two years ago. But at this point, that product insights agent we discussed — you can see the same interface for it on a good number of websites now. I think every single PDP on Amazon has it. I think Walmart might have it too. Some of our customers have it. Similarly, with the AI shopping agent, you're starting to see consistent interfaces for that as well.
Those are becoming low-hanging fruit because at this point other companies have already experimented with it, already proven it out. We already know it works. So now it's less a question of "can you figure it out" and more "are you going to just sit there and lose money on not having it when we already know it works, while the companies that do have it are gaining?" And it's not just money — it's share of customers. If customers know they want that stuff and can get it at one website but not another, the longer you wait, the more harmful it is.
Build vs. Buy: What Should Retailers Do?
Nate Roy: This one I think is really important and comes up a lot. How do you know when to build versus buy when it comes to AI? David?
David Dorf: In my humble opinion, unless you have a really unique situation that you want to build for, you should almost always be buying. As Eli said, a lot of these problems have been solved in a repeatable manner. Go out and look at the landscape to find the best fit for you. It's just expensive to build. If you can find something you can get away with buying, I'd much prefer that.
Eli Finkelshteyn: I can't agree with that more. I've had that exact conversation with a bunch of engineers at our company lately. I get it from the engineering perspective — it's fun to build. I would love to spend time and play around with some of this stuff and build it from scratch. But in terms of how fast the world is moving and how quickly companies are specializing these things and creating something really awesome for specific use cases, just buy it. Sign a short contract. That way you're not married to it. If you don't like it, you can go buy something else or build something else later.
But if you're building from scratch and you've got two engineers working on it as a science project while some AI company has a hundred working on the exact same problem, you're probably not making the right decision. You're going to end up with something that costs more to build, you're not going to want to maintain it, and it's going to be worse than what somebody with 100 engineers built.
Closing
Nate Roy: Well, that hour flew by. Thank you both. I feel like every time I talk to both of you, I get a little bit smarter. Appreciate everything you shared today, and hopefully the audience is walking away feeling the same.
For folks in the audience that just spent the last hour with us, thank you so much for your time. If you'd like to review the recording, you'll see that come through in your inbox. And if you have any other questions, definitely don't hesitate to reach out. We'd love to chat with you more. Thanks, everyone.
Eli Finkelshteyn: Thank you for being here, David.
David Dorf: You bet.