Cars.com Chief Product Officer on infusing product development with data: An interview with product expert Matthew Crawford

Jun 26, 2023

In this interview, you’ll hear from one of Fuzy’s executive advisors, a ‘Jack of all trades’ and master of…well, all of them, when it comes to product management. Matthew Crawford is the chief product officer at Cars.com, and his history includes building products from the ground up at startups, managing products in education technology and B2C companies, and running product teams in enterprise B2B IT software firms.

Prior to joining Cars.com, Matthew served as a Vice President at Sysco LABS where he led all aspects of commercial technology for the company, including digital transformation, marketplace, eCommerce, and sales and marketing technology, as well as heading the Sysco Foods technology innovation division.

Fuzy loves comparing notes with Matthew because he chases interesting problems no matter where they lead––across sectors, industries, and disciplines. We sat down with this product design, research, and management expert to learn about his approach to product development at Cars.com and to tap into his process for data-based decision-making.  

What was the interesting problem that led you to Cars.com?


The company’s most prominent focus is the automotive marketplace. We are currently on a journey of moving from a business model that effectively puts newspaper classified ads on the internet to building advanced technology solutions that facilitate the car shopping transaction. We’re tackling how to create a common infrastructure for demystifying the car buying process and making it a more omnichannel experience. (And if you've bought a car, you know that's a problem that needs to be solved.)

What are some key signals, data, or factors that you’ve looked at to inform how you modernize the Cars.com product?


In this space, we do a lot of primary research to understand the consideration set for how people make decisions around buying cars and where they have friction and frustration points. I also did some of my own secondary research to learn what the experience would be like as a customer. I saw that the automotive market is much like other massive industries in the US (like housing, for example). It’s big and fragmented; you have a multitude of options; but effectively there are just a few primary marketplaces.

I can use travel as a corollary: You might have loyalty to an airline, so if you can book on that airline, you do. Or you may be very price sensitive and use Kayak or Google Flights. Then you repeat the same thing for a hotel or Airbnb; then you repeat the same thing for a rental car. In automotive, it's very similar. 

So, we have this ongoing phenomenon of decreasing brand loyalty. With the automotive space, people used to feel like, “Chevy (or Ford), ride or die.” That's not how it is anymore. People have broadened their selection criteria to be much more about what they can afford, what's reliable and safe, and what reflects their experience and their constraints versus brand loyalty.

Then within that construct, you also have an evolution of so many aspects of the market, from technology, to sales, to distribution. EVs are becoming popular, but people aren't well educated on them. Tesla is trying to forge a new direct-to-consumer model. Carvana is aiming to be an end-to-end digital solution for buying and selling. All these trends make the process pretty hard on consumers––it's confusing and no one's effectively solving it. 

This whole industry is getting shaken up and moved around and we have to figure out how to modernize intelligently. So that's what we're looking at to help make decisions about how to drive our roadmap––especially asking ourselves what we can do to build enough consumer utility to get people to engage in our product repeatedly versus just being one of many sites that they look at in the purchase cycle. To that end, we’ve made acquisitions around FinTech (like a Nerd Wallet for cars) and trade-in or valuation. Both of those things are effectively designed to acknowledge the fact that consumers are looking for a much more transparent and holistic experience across the entire purchase cycle. 

Because we want to become a destination for buying rather than one point along the journey, the approach to product metrics is different. Leads are important, of course, but even more so now it's attributable transactions, owning the consumer lifecycle, and engaging the OEM and the dealer as well. And even though that’s a different way of thinking, it introduces a common problem from a product standpoint: We have a ton of metrics that you can be looking at at any point in time, you can slice and dice them in a thousand different ways, and you have more ideas than you're able to execute to address them.

A lot of what I'm focusing on is prioritizing what we do and don’t do so that we actually move a few things forward that materially benefit the company. We're here, we need to get there. What's the fastest path? What are the things we need to measure along that continuum in order to optimize? And we need data to convince us why it matters. Fuzy is interesting because it productizes that process––showing the things you aren't looking at that you should be, providing the narrative around insights and helping people attribute significance to certain events.

Describe the challenges you’ve seen in mining insights from product data.


One of the common pitfalls in tracking and measuring data is looking too narrowly at a certain set of metrics, so all you can see is whether those specific numbers go up or down. But there are moments in time and pieces of data that are extremely valuable that product teams can completely overlook because they're focused on a set of inherited metrics; or they don't know how to analyze the impact of a launch with the right metrics. If the only indicator you have is, “We ran a campaign,” a hundred outcomes could have occurred that you don't know about. 

First, there’s the butterfly effect: All this other stuff around one metric might be changing. Secondly, you might be missing what’s three steps ahead of that metric, which is really what you need to manipulate in order to achieve a downstream event. 

Teams can become narrowly focused in a good way: “I'm trying to optimize my part of the product and drive these outcomes.” But that can sometimes be at the expense of what other teams are doing, and you have to wonder if you are moving the needle from the largest outcome perspective. You never want to unknowingly optimize a little cog that's actually irrelevant, just because it’s your job to make that number go up and to the right. The thought of having a tool manage this problem for you is an incredibly valuable idea.  

What do you think are the factors that drive myopia in determining the set of metrics that you're interested in? 

There are two aspects of this: intentional and unintentional. From the intentional standpoint, you can focus on certain metrics largely by habit, because somebody made decisions at some point in time 10 years ago, and you continue the pattern due to inertia. I think that’s actually the number one cause. Next would be technical complexity, limitations, and infrastructure. It takes a lot to effectively tool, instrument, measure, and store data to derive insight. So sometimes the answer is, if you’re doing something, do it faster. If the lift to instrument tooling is too significant, the cost can be outweighed by the benefit of just acting and measuring later. The third thing would be lack of maturity within teams to understand what you should be thinking about and measuring. So often the reason is, “I was just told to build this.” It's not usually laziness or malice or not wanting to be data-driven. 

I think the unintentional causes are whether or not you have a process-honoring culture and whether this is even a topic that is brought up within the business. The human factors here can’t be overstated, because a lot of times the myopic decisions about metrics end up being what you get measured on––people get bonused on that stuff and that’s how you get perverse incentives. It's complicated!

Where do you see current data analytics solutions falling short? 

If you go into Tableau or Mixpanel, it’s all just dashboards. It's a lot of data and visualizations, and you’re left to draw conclusions or manipulate the data as you like. But it’s typically prescriptive: “You told me you care about this funnel, so I will make it easy to visualize that, then you slice and dice it.”

This can breed the inertia I was talking about, making it hard for teams to ever discover really important stuff. It’s very inefficient to have extra resources noodling through data in their free time. Ideally, you have a system helping with that. Because occasionally there's probably one of those other things that's really meaningful and might change the way that you think about a lot of stuff. “What is your priority? Should you have a team dedicated to optimizing that? Are you completely missing a segment of your population that you didn't even know existed?”

I think existing tools fall short on helping you expand the context in which you're evaluating performance. And so you just get deeper and deeper and deeper in the filter. They don't help you know when you've reached a point where it's meaningless. What I see in Fuzy is the identification of pathways or correlated events. You can see the thing happening around the thing, explore that and then get back to where you were. Creating efficiency within those cycles is huge, and I think that's a struggle for a lot of the tools. 

Then you consider something like Heap, which took the other path by capturing everything. And teams love the idea of that because they think they won't miss anything. But then you basically have a data warehouse of stuff that you have to sift through.

All these tools started as either an upfront prescriptive model or gather-and-sift, so many tools are stuck in that structure––that’s also where the technology was at the time. As the capability evolves, the intelligence of the tools can evolve alongside it.

What considerations do you factor in when thinking about whether to invest in new headcount or a tool to build out product science capabilities?  


Part of this goes back to maturity. There's a point at which you tip over into having a proprietary process, stack, and method of answering questions and you’re wedded to that. It's hard to add a new tool to that. Usually the way that happens is a leadership role changes and they bring in their process and stack. You really have to be compelling to change that or have a very unique wedge into that. I tend to think that it's usually someone with data expertise introducing this thought process to the organization in a structured way––someone who cares enough to do it. If you don't have any type of data or analytics team, then it’s likely to be a combination of product and engineering knowing that it's a problem, but not knowing how to solve it. The risk-reward of the business case is all about ease of startup, ease of implementation. You don't have to be a data guru and expert to derive meaning from this; it's pretty turnkey and you're not going to hurt yourself. 

The second piece is, if you do have those people, then it's understanding the value prop that this adds to your efficiency and decision making. I look at any type of tool against its efficiency impact. Can it make one person 20% more efficient or three people 10% more efficient? Do I even have to hire the next person because we choose tooling that's lifted everybody up? Maybe you defer that specialized hire because you are able to get enough insight off the ground by adding the new tool.

What do you think would be a PM’s enhanced capacity with a tool like Fuzy? What more could they transform or achieve?

Especially in smaller companies, product managers get questions all the time around what they’re prioritizing. “Why isn't my thing at the top of the list?” A tool like Fuzy lends objective credibility to what the product team is saying in a lightweight way. Ideally it would show stakeholders why certain decisions would be impactful and make it very simple to visualize that. Justifying decisions is a seriously time-consuming task that product teams collectively spend a lot of hours on. If this can help with that and get the decision-making flywheel turning faster, that's worth so much money. 

For more perspective on product development that incorporates data science into decision-making, stay tuned to the Fuzy blog. Next up, we’ll hear from some heavy hitters at Indeed with tons of advice on how to demystify the process of moving from data to insights.