Ep. 29 | Finding the Customer State of Mind

Have you channeled the customer’s state of mind in your testing experiments? If you have ever grouped all your mobile visitors, or your product page viewers, or your site exiters together for an experiment, then you may have missed the customer’s state of mind completely. Host Allison Hartsoe interviews experimentation expert Brooks Bell, CEO of Brooks Bell, who shares the five pains and gains of the customer’s state of mind. Brooks also includes good starting places for experimentation and several nuggets of well-earned wisdom.


Allison Hartsoe – 00:04 – This is the Customer Equity Accelerator, a weekly show for marketing executives who need to accelerate customer-centric thinking and digital maturity. I’m your host, Allison Hartsoe of Ambition Data. This show features innovative guests who share quick wins on how to improve your bottom line while creating happier, more valuable customers. Ready to accelerate? Let’s go!

Welcome, everyone. Today the show is about getting into the customer state of mind, and to help me discuss this topic is Brooks Bell. Brooks is the founder and CEO of Brooks Bell Inc., an experimentation consultancy. Brooks, welcome to the show.

Brooks Bell – 00:04 – Glad to be here.

Allison Hartsoe – 00:49 – So I first want to call out that you call yourself an experimentation consultancy and a lot of people throw around the words testing and optimization. Can you just talk a little bit about what you mean by an experimentation consultancy?

Brooks Bell – 01:05 – Well, we’ve been around for almost 15 years, and believe me, we have used all of the above: A/B testing, experimentation, optimization. The reason we call ourselves an experimentation consultancy is that optimization is a great word because it implies value creation, which of course we like; we like showing the value of what we’re doing, and optimization does that. But optimization applies to so many things in your life. There’s SEO, search engine optimization, but you can optimize things far beyond the data on your website and your marketing. So I don’t think it really describes exactly what we do. Actually, our mission statement is to make every day better through optimization, purposefully using the idea of optimization and test-and-learn iterative improvement to improve our lives,

Brooks Bell – 02:05 – but optimization doesn’t really describe what we do and the specific value that we create. We think experimentation is a little more descriptive of what we do. Experimentation is about being scientific: measuring things, and typically splitting traffic into different variables to answer your questions or make your decisions.
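To make “splitting traffic into different variables” concrete, here is a minimal sketch of the statistics behind a typical A/B split: a two-proportion z-test on made-up numbers. This is a generic illustration in Python, not a description of Brooks Bell’s own tooling.

```python
import math

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided z-test for a difference in conversion rates.

    conv_a / conv_b are conversion counts; n_a / n_b are visitors per variant.
    Returns the z statistic and a two-sided p-value.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)            # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical test: control converts 500/10,000, challenger 560/10,000.
z, p = two_proportion_z_test(500, 10_000, 560, 10_000)
print(f"z = {z:.2f}, p = {p:.3f}")  # call it significant if p < 0.05
```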

Allison Hartsoe – 02:30 – Okay, got it. So what does the team specifically do at Brooks Bell? You’ve touched on a couple of things about experimentation. Do you have a particular set of things that you typically hit on?

Brooks Bell – 02:43 – Yeah, so we’re an experimentation consultancy, and there are five broad offerings that encapsulate that. The big thing we’re trying to do more of, generally speaking, is help our clients build a world-class experimentation program. In many cases that means helping them get a centralized team, a center of excellence, structured and built out; helping them create a process and build a methodology to test with; helping them have the right technology stack that will enable it; looking more broadly at how to scale an experimentation team; and then ultimately impacting their culture to be more of a data-driven culture, so they’re using experimentation to drive every decision.

Allison Hartsoe – 03:36 – I was wondering if you were going to say culture, because that’s just such a big part of the testing and experimentation mindset. I’m so glad you pulled that out. So tell us a little bit more about your background. How did you get to this point of creating culture change in organizations through testing, the right technology, the right methodologies, and of course the processes and excellence?

Brooks Bell – 04:10 – Well, I’ve been doing this for almost 15 years now, long before there really was much technology in this space. In college, I started a website company, and I was literally designing websites for a local dentist in North Carolina. I had a degree in psychology, and my approach to design was slightly more psychology-based. Around that time, the largest technology powerhouse was America Online, and they had a completely closed technology environment where they had literally written their own engineering language. They had their own data, their own analytics platform. Their data was clean and connected and powerful, so they had the right environment to build an absolutely incredible testing culture.

Brooks Bell – 05:08 – So I’m sure you remember, in the early 2000s, you would get spammed with all those CD-ROMs.

Allison Hartsoe – 05:17 – That’s right. When people built these artworks out of the CDs. Yeah.

Brooks Bell – 05:21 – And the reason they kept getting zanier and zanier is that they were testing them, and there were no rules, no brand rules whatsoever. You could literally put a pink flamingo on one, and no creative director would stop you. So they did the same thing online with popups and pretty much every digital effort they had; they would test it all. So what limited them was creative. They would go around finding tiny firms like mine, and they would show you the control, and they would say, there are no rules; here’s $5,000, design us five new popups. You can literally design anything you want, but it must beat the control. If it doesn’t beat the control, we’re not going to hire you again, and if it does, we’ll send a little bit more work your way. So they gave me that little challenge in 2003.

Brooks Bell – 06:17 – Miraculously, one of my first five popups did, in fact, beat the control, and they gave me another project, and I beat the control again. And I fell in love with A/B testing. I thought, this is what the Internet is built for: you have all this data, and every website has some kind of purpose, some measurable action. I could see how the smallest changes in creative would really drive value and make a difference, and tying it into psychology gave me this direct connection to the end customer, in a way that, you know, brand and all that stuff never really did. That was what really drove my passion for A/B testing. So we launched Brooks Bell in the latter half of 2003 to focus on A/B testing, primarily with AOL, and became one of their top three agencies over the next three years.

Brooks Bell – 07:18 – What’s amazing, though, is that testing actually destroyed the brand. I mean, they were so performance-oriented, and there’s a lot of bad behavior that leads to. I also got to see the dark side of testing. One day in 2006, they laid off 5,000 people, all of whom were my clients, and AOL dropped from 85% of my revenue to 15%. Over the next several years we diversified our client base and started to realize that no one else was testing; nobody had the data or the culture or the process or anything like that at all. It became clear that we needed to shift our focus away from just the creative piece toward helping companies see the value of experimentation and get the same culture and data fidelity that we saw in those early days at AOL.

Allison Hartsoe – 08:15 – That’s amazing. What a fantastic story. Congratulations on 15 years. That’s almost unheard of in this space. That’s fantastic.

Brooks Bell – 08:25 – Yeah. And I think only now is the technology and the data actually ready for companies to benefit from this. I mean, it took so much longer than I expected.

Allison Hartsoe – 08:35 – I agree. I think that’s just mind-blowing right there, how far ahead AOL was; we certainly don’t give them credit for that, and how much longer it took everybody else to get the memo. But I think things are moving more quickly now. So when we think about the customer mindset: obviously on this show, when we talk about the customer, we’re really trying to tie it to equity, to actual dollar value. It seems obvious, perhaps, that you should get into the customer mindset, but tell us a little bit more about what it really means to do that in today’s data-driven world.

Brooks Bell – 09:16 – Experimentation is such a powerful weapon, or weapon may not be the right word, but it’s a powerful asset. Whoever owns experimentation, it’s kind of like having this little ball of firepower. If you have data to back up your point, any data, it gives you power. But experimentation gives you far more power as an individual, because now you have statistically significant results and you can win almost every argument. Experimentation often starts where someone wants to have that power, so they bring it in and start using it to drive their own agenda within an organization, but it really should not stop there. What happens next is that the rest of the organization starts to discover it, and it starts to be used to drive not just an individual’s agenda but the company’s agenda. So I think of this, actually, starting in the individual’s hands.

Brooks Bell – 10:17 – I think of it as a series of orbits. At the center is the individual, and the company is orbiting around that individual. Experimentation can start to align the agendas of both the individual and the company once you start thinking about driving revenue, or whatever the company’s strategic goals are. The third orbit, though, the last ring, is the customer, and what I think is the true power of experimentation is to ultimately use it not just to drive the individual’s and the company’s agendas, but to drive the customer’s agenda. It’s one of the few approaches where all three of those can be in harmony, but it’s also the hardest leap to make, to get to the customer’s agenda.

Allison Hartsoe – 11:04 – We talk about the maturity curve a lot, and it’s very similar in that you’re starting with these very tactical things, and then you kind of align on the department, then you align around the company, but you finally start thinking about the customer later. Because, you know, where is the customer department, right? There’s a customer service department, but the customer spans the organization across so many pieces that it’s incredibly hard for companies to do that, so I can see why you put it in the third orbit.

Brooks Bell – 11:34 – It’s really hard. I mean, it’s not just this conceptual idea of being customer-centric; you have to completely rethink your metrics, the type of data you collect, how teams are organized, how teams are incented, how you communicate, and also your core values: the types of decisions executives make that truly attempt to put customers above revenue, with the idea that in the long term the two will align.

Allison Hartsoe – 12:02 – And that’s really the key, isn’t it: long-term thinking versus short-term thinking. How do you translate that into testing?

Brooks Bell – 12:10 – It’s not easy. With testing, if you’re looking at a one-month cycle for a given test, and you want a statistically significant success metric within that month, you’re limited to fairly short-term metrics. But you can also come back and do cohort analysis, so you can go back and rethink whether or not that test actually did win on your longer-term metrics, and we highly recommend doing that as much as you can. That is the downside of testing: it does encourage more short-term thinking, so the team needs to be pretty diligent to have a series of metrics, the test success metric, but then also to use your intuition about whether or not that was actually a good idea. One example I have of that is a test for a financial services company.

Brooks Bell – 13:05 – We were looking at trying to drive up mortgage applications by optimizing a landing page. There were a couple of possible metrics: you could try to get preapproved online, but they really wanted you to call into their call center, and the call center would then handle the application, which would lead to application starts and then, ultimately, application completes. They did not have the technology at that time to connect a call to an application complete, so the success metric was calls. We redesigned the landing page, added some really important content like the mortgage rate and a mortgage calculator, and made it a much more effective landing page. But the call rate went down 30%. Roll this out? It’s an epic fail, right, if calls are your success metric.

Brooks Bell – 13:57 – But when you think about it, when we added the rate to the landing page, the reason people didn’t call is that they were calling to get the rate, and now they didn’t need to. Of course, we couldn’t actually measure whether they applied through a different channel or whether they left that experience more or less satisfied. That is a case where it was just the wrong success metric, too short-term, and we probably should have insisted that we not even run that test, because it kept everyone on the wrong path for the customer without a way to measure a customer-centric success metric.
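As an aside, here is a minimal sketch of the cohort re-analysis Brooks recommends above: re-scoring a finished test against a longer-term metric. The column names and the 90-day revenue metric are illustrative assumptions, not the firm’s actual data model.

```python
import pandas as pd

# One row per visitor who entered the test: the variant they saw, whether
# they converted in-test, and their revenue 90 days later (hypothetical).
users = pd.DataFrame({
    "variant":     ["control", "control", "challenger", "challenger"],
    "converted":   [1, 0, 1, 1],
    "revenue_90d": [120.0, 0.0, 40.0, 35.0],
})

summary = users.groupby("variant").agg(
    short_term_cr=("converted", "mean"),    # the original success metric
    long_term_rev=("revenue_90d", "mean"),  # the metric that tells the real story
)
print(summary)  # a "winner" on conversion rate can still lose on 90-day revenue
```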

Allison Hartsoe – 14:31 – That’s a perfect example of a short-term-driven KPI that is full of confounding factors and is not long-term. Now, I imagine when you’re running tests you run into that kind of complication all the time, where you’re trying to get more detail than just the quantitative input. Can you talk a little bit about what the best tests have and how you pull that customer mindset through?

Brooks Bell – 14:59 – There are a few different ways we think about that. One is at the very beginning, when you are deciding what to test: having both quantitative data and qualitative data, and bringing those together, is your best bet for getting the customer view right at the start. If you only focus on quantitative data, you’re more likely to make short-term decisions and focus on the wrong things, which makes the problem worse. If you also have qualitative data to supplement it, it helps you get back into the mindset of your customer. So we have a methodology where we try to collect as much research as we can at the beginning of every test and bring together a cross-functional team, not just the analyst and the developer and the business owner, but also the designer and the researcher, to go through a series of questions: what are the customer problems, who are the customers, what are they likely to care about, how much information do they have? And then we also use some behavioral economics to help drive ideas about what might solve these problems.

Brooks Bell – 16:08 – We’ve taken a lot of different theories and summarized them in five buckets that we call the pains and gains. The first is anxiety: could the customer be feeling anxious, and if so, what are some things we could test to see whether we can reduce that anxiety? Another is mental effort: how much mental effort does this actually take today, and should it be high or low? For a high-consideration decision, you need a lot of information. Do you want to get a mortgage? Clearly that’s a very important financial decision; you want a lot of information, but you also want to figure out what’s the right information in the right order. For lower-consideration decisions, you want less mental effort. Then there’s money: how much is money a factor? And then there’s value.

Brooks Bell – 16:50 – What’s the value proposition? Does the money match the value? How well are we communicating the value? And the last is time: is time a driver in this case? Going back to the financial example, filling out a mortgage application is going to take you a lot of time, and you need to collect a lot of sensitive information, like your social security number and maybe some checking account numbers, that kind of thing. So help give people awareness of time: is it going to take them 20 minutes, or is it going to take them two minutes? That kind of thing, realizing that it’s a factor in their state of mind.
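The five buckets lend themselves to a simple ideation checklist. Here is a minimal sketch: the bucket names come from the episode, but the question wording and the helper function are paraphrased illustrations.

```python
# The five pains and gains, phrased as prompts for a test-ideation session.
PAINS_AND_GAINS = {
    "anxiety":       "Could the customer be feeling anxious here? What would reduce it?",
    "mental_effort": "How much effort does this take? Is that right for the decision?",
    "money":         "How much is money a factor at this moment?",
    "value":         "Does the value match the money? Are we communicating it well?",
    "time":          "Is time a driver? Does the customer know how long this takes?",
}

def ideation_prompts(experience: str) -> list[str]:
    """Turn the framework into concrete questions for one customer experience."""
    return [f"[{bucket}] {experience}: {q}" for bucket, q in PAINS_AND_GAINS.items()]

for prompt in ideation_prompts("mortgage application landing page"):
    print(prompt)
```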

Allison Hartsoe – 17:25 – That’s a fantastic framework. I love that, but I have to ask this question about pulling all these different people together. It strikes me as a little bit of cat herding. How do you get the person who’s in charge of research to agree or come together with the person who’s in charge of UX, and how does it not become a battle of opinions?

Brooks Bell – 17:49 – Well, the way we structure it is that there’s usually a leader of this group, and they collect the research, send it out, and collect pitches from everybody else. Everybody positions their idea as a pitch to the core team, and then they go through essentially a prioritization technique: how much impact will this have, what’s the level of effort, and what’s the business value? They do it together, so by the end of these ideation sessions the team has collectively ranked these ideas, and we haven’t thrown any out; we’ve just ranked which ones are easiest and highest value. So there is consensus. What’s great about these sessions is that you get 10 to 20 good ideas out of each one, they come out ranked, and everyone is on the same page. Then you don’t need to come back together for another few months, because you’ve got a bunch of ideas to roll with in the meantime.
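Here is a minimal sketch of that impact/effort/value ranking. The 1-to-5 scales and the scoring formula are assumptions; the episode names the three criteria but not how they are combined.

```python
from dataclasses import dataclass

@dataclass
class TestIdea:
    name: str
    impact: int          # expected lift, 1 (low) to 5 (high)
    effort: int          # build cost, 1 (easy) to 5 (hard)
    business_value: int  # strategic importance, 1 to 5

    @property
    def score(self) -> float:
        # Favor high impact and business value; penalize effort.
        return (self.impact + self.business_value) / self.effort

ideas = [
    TestIdea("Show mortgage rate on landing page", impact=4, effort=2, business_value=5),
    TestIdea("Add progress bar to application",    impact=3, effort=4, business_value=3),
]
# Rank the ideas; nothing is thrown out, everything stays on the list.
for idea in sorted(ideas, key=lambda i: i.score, reverse=True):
    print(f"{idea.score:.1f}  {idea.name}")
```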

Allison Hartsoe – 17:49 – Oh, that’s…

Allison Hartsoe – 18:48 – …sufficient. You only have to do this once. It reminds me a lot of audits, you know, technical audits, where you figure out all the different things that need to be captured and then you say, well, how hard is it to do this, ranking them by level of effort. So it’s very familiar from the technical side, but a brilliant way to bring it together from, perhaps, the psychological side, or from the testing side. I love that. Now, I also enjoyed your banking example, your mortgage example. Are there other examples that you have around testing? You must have billions, or maybe not billions.

Brooks Bell – 19:24 – We do have lots of examples; it kind of depends on what we’re trying to get across. But it’s actually harder than you would think to have tons of examples with experimentation. We actually just launched a tool last month called Illuminate, a testing repository, precisely because we were finding it very difficult to keep getting good examples. Good examples are what everyone wants to hear, and case studies are what help you increase your expertise in experimentation. As I build an experimentation consultancy, I want all of my people to be experts, and the way I do that is by making sure they all hear these amazing examples and case studies from our clients, so they can learn from each other and from the work each of the individual teams is doing. It’s the same thing for our clients: when they’re building an experimentation program, how do they change

Brooks Bell – 20:18 – the culture? It’s through really great case studies and examples that light people’s imaginations up. And it’s really hard to do that, because most of the time the folks who are running the tests are a small group, highly technical and tactical, and a lot of data emerges from them. They only look at: did it win, and how much value did this create? They’re looking at really technical details: the success metric, the segments, the secondary metrics. All that stuff is complicated and takes a really analytical mind to get. As soon as you step outside that small group, all that data is just totally overwhelming, and there is no real story being communicated.

Allison Hartsoe – 20:18 – So the data isn’t…

Brooks Bell – 21:11 – Transferable, yeah. The data is killing the story. I have seen this firsthand in my own company. I’m a couple of steps away from my core client teams, and of course I want to hear all of what we’re doing and what we’re learning, and it’s been much harder than I expected. The solution is not just saving all your tests; it’s saving them and also digging into the insight: what is the insight, and is it transferable? If it’s not transferable, it’s not actually that interesting. And a transferable insight shouldn’t be so high-level that it applies to everyone. If someone says, oh, we learned that everyone likes saving money: okay, that’s obvious. I didn’t learn anything new, and it didn’t change my mind about anything. If someone says, oh, the blue button won because it had high contrast against the white:

Brooks Bell – 22:06 – okay, that’s also not transferable, because it’s tied directly to just that site, or just that button, or just that page. What we’ve seen is that the most transferable insights are not so high-level that they’re uninteresting, and not so low-level that they mean nothing outside their context; they have to do with the state of mind of the customer for that brand. A good example is Lowe’s. They’re not one of our clients, but one of my data scientists used to work there, and he said that one of the most powerful insights the experimentation team developed a couple of years ago was that at Lowe’s, people shop for projects, not for products.

Brooks Bell – 22:53 – And that came out of deep analysis, looking at patterns over and over again. I mean, Lowe’s was structured by department: you have your wood department and your cabinet department, and they don’t really talk, and there are different suppliers. But once they realized that’s not how customers think (people don’t go buy wood for wood’s sake; they’re buying it to build a deck, along with all the other things they need to build that deck), it totally changed how they structure their stores, how they train their salespeople, and how they organize their site.

Allison Hartsoe – 23:26 – So is it common that when you find that kind of brilliant, transferable insight, it’s a little bit of a head smack?

Brooks Bell – 23:35 – Those are the best ones. Yes, that’s what you’re looking for, the head-smack-level insights. That’s when the testing program can drive true change. The reason most experimentation programs don’t go there is that we’re trying to be scientific in our efforts, so there’s a lot of hesitancy to ever suggest something the data didn’t fully prove. No individual test is going to say, oh, people want to shop for projects versus products; all it shows you is a bunch of outcomes. What you have to do is get in the habit of asking why: why did we get this outcome, and what are all the possible explanations? You need a way to save those explanations and help people have the courage to start documenting these potential explanations, which might be totally wrong or might be totally right,

Brooks Bell – 24:36 – but you need to get people to start asking why, writing it down, and getting the courage to go there. We have a whole methodology around this, an insight framework that we also just rolled out, and part of the product we rolled out is a way to save all of these; we call them customer theories, and then insights. But it starts with having the courage to step away from the data just a little bit and start putting out ideas for what may have caused the result, and specifically ideas around the customer mindset.

Allison Hartsoe – 25:10 – Would you say that’s kind of like a marriage between big data and little data? You know, things where you don’t necessarily have an overwhelming tidal wave of data as proof, but through a couple of points or case studies you suddenly realize an insight.

Brooks Bell – 25:25 – I think there’s still a lot of room for human intuition to be at the table. Not everything can be addressed just through the brute force of having so much data. Of course machine learning can be helpful and give you some ideas, but I think that’s still an imperfect method. You need to combine your skills and think deeply. There’s just no way around thinking in this industry; we’ve got to think about why we think this is happening. There’s still room for creativity, and it needs your intuition: what really makes sense?

Allison Hartsoe – 26:09 – That’s awesome. I love that. Earlier you were talking about behavioral economics, and you talked about the five pains and gains, and I wondered: when you’re looking for things that are transferable, those deep insights, are they naturally pinned to one of them? Is it that the better the insight, the more of the behavioral economics areas it knocks down? Is there a connection between the two?

Brooks Bell – 26:38 – Between insight and behavioral economics?

Allison Hartsoe – 26:39 – Between the five pains and gains and whether something is transferable.

Brooks Bell – 26:48 – I think what’s helpful about the pains and gains is that they can give you some ideas, once you’re down a certain path, for essentially using tests to research: are people feeling anxious, how much mental effort are they expending? It puts you in that framework of the customer mindset at the beginning of the test. Looking for something transferable usually happens at the end of the test. And actually, there’s a concept we have rolled out called the why hypothesis. In the beginning, it’s a little bit of an evolution of a hypothesis. You know what most hypotheses look like: at the foundation of a hypothesis is an if-then statement. If we do X, then Y will happen. And within testing, it’s usually: if we do X, Y will go up.

Brooks Bell – 27:44 – I don’t really love that, because we’ve usually already said what we’re going to change, and we’ve already said what the success metric is. In many cases a hypothesis is just combining those two things: if I change this thing, I expect the success metric to go up. So you’re not adding anything new to the conversation with a standard hypothesis.

Allison Hartsoe – 28:05 – And not thinking deeply.

Brooks Bell – 28:08 – Right, you’re not thinking deeply. You haven’t really asked: okay, what am I really trying to learn about the customer? Am I trying to learn if they’re anxious? Am I trying to learn how much information they need to make this decision? That’s not really captured in a standard if-then statement. So with a why hypothesis, we want to add the why we’re doing this. The way we phrase it is: if this wins, if changing X wins, then here is why we think it won, or here’s what we think we’ve learned about the customer, depending on the type of test we’re doing. That gives us the opportunity to incorporate a little more of the behavioral economics up front, and then on the back side of the test we get far more data than just that one thing; it’s a much richer picture.

Brooks Bell – 29:03 – So we might have learned that customers are anxious for this reason, or we think they are. But now, what if it didn’t win? What if it’s flat? What if it loses? What did we learn then? And we have segments: what if it won on mobile but not on desktop? What do we think then? So now we have this much richer set of customer theories we can start to produce, going way beyond just the initial hypothesis.
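Here is a minimal sketch of what a “why hypothesis” record might look like, with room for the customer theories that flat, losing, or segment-specific outcomes generate. The field names are illustrative; the episode does not describe Illuminate’s actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class WhyHypothesis:
    change: str                   # "if we change X..."
    success_metric: str           # "...then this metric will go up"
    why_if_it_wins: str           # what a win would teach us about the customer
    customer_theories: list[str] = field(default_factory=list)

    def record_outcome(self, outcome: str, theory: str) -> None:
        """Attach a possible explanation for a win, loss, flat, or segment split."""
        self.customer_theories.append(f"{outcome}: {theory}")

h = WhyHypothesis(
    change="add the mortgage rate to the landing page",
    success_metric="application starts",
    why_if_it_wins="customers were anxious about rates and needed them up front",
)
h.record_outcome(
    "won on mobile, flat on desktop",
    "desktop visitors may already be comparing rates in other tabs",
)
print(h.customer_theories)
```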

Allison Hartsoe – 29:31 – I mean, that sounds like a tree that’s branching out, and every time you find another spin or another dimension, it kicks off another test. So it’s like a cascading effect, right? Is that what it should be internally?

Brooks Bell – 29:45 – Yes, it should be driving towards iteration. This is very much a model founded on iteration, and not every testing program has the right culture to support such an iterative approach. But if you want to build a customer-centric organization, this is the path towards it. If you only care about this quarter’s revenue, it will be difficult to get the buy-in for this type of approach.

Allison Hartsoe – 30:14 – I love it, I love it. So we’ve talked a little bit about the beginning and the end of the test. Let’s say I love the experimentation idea and I want to go kick it off internally. What is the right order of operations for me to take these ideas and implement them in my company?

Brooks Bell – 30:33 – Well, it starts with choosing the right success metric, and that’s one of the most important decisions you’ll ever make. If you choose something too short-term, like calls, it sets you up to go down a non-customer-centric path. If you set up something too long-term, like NPS, which is very customer-centric, you will never have a winning test; they’ll always be flat. So find that balance: the right success metric with a reasonable test setup to support it. You start with the success metric. Then you collect as much qualitative and quantitative data as you can, and you get the team involved, so you have multiple perspectives to really enrich the test idea. Then you start to tell the story up front of what we’re going to learn about the customer, and then you run the test,

Brooks Bell – 31:33 – and you finish, you get all the data, you get the group back again, do an insight session, get everyone talking about why and then you build that case study with an with a narrative and the story but includes what? They’re not one our loss, but you do that extra step after the finished test has finished to come back and tell the story of the test and make that kind of the overall testing process.

Allison Hartsoe – 31:57 – Perfect, perfect. That sounds ideal. So if people want to reach out to you to ask more questions about Illuminate or behavioral economics or other things, how can they get in touch with you?

Brooks Bell – 32:12 – Well, we’re still kind of old school. They can email me at brooks@brooksbell.com, and I’ll answer them or hand them off to the right experts to help them out. We have an Instagram account, I think it’s Brooks Bell Inc., and Twitter, same thing, @BrooksBellinc. They can reach us on any of those channels.

Allison Hartsoe – 32:33 – I have to say I’m old school too. Email is still the best way to reach me, or text.

Brooks Bell – 32:42 – Call me, leave me a voicemail. You can do that, actually.

Allison Hartsoe – 32:48 – Talk to me. That’s great. All right, well, let’s summarize a little bit of what we hit on. First, we talked about why you should care about finding the customer mindset, and there are really a lot of different dimensions here, but what I liked was that you hit on the different orbits. In today’s data-driven world, you’ve got lots of different things moving around, but the orbit of solving for the individual stakeholder’s agenda first, then moving up to the company’s goals, and then moving into the broader customer’s agenda seems like a very logical path. Ideally, I personally would like to see everybody starting at the customer agenda, but that’s just not reality; you’ve got a lot of forces working against that, so get it going by at least solving for an immediate challenge. Then we talked a little bit about the impact. We really hit on short-term versus long-term thinking here, and there were a couple of times when I was dying to interrupt with CLV as one of our classic long-term metrics, and I do think that’s pretty powerful.

Allison Hartsoe – 33:52 – I appreciate what you said about NPS being something that takes a long time to come through, and in many cases short-term and long-term are relative, right? But the example you gave about the mortgage company driving to the KPI of call rate, which was not only short-term but confounded with all sorts of information about why people call that wasn’t coming through that measure, was a really great example of short-term versus long-term thinking. Then you went further and talked about the customer mindset, which is based on what we hear: who are they, and how can we infer what it is they need? This is where you talked about those five buckets of pains and gains, the behavioral economics of anxiety, mental effort, money, value prop, and time, and particularly why that’s valuable at the beginning of the test, where you put the hypothesis together.

Allison Hartsoe – 34:52 – You know, if we change X, then Y will happen: that’s a great place to start, but then add in why you think it’s valuable to the customer, why we think we’re going to learn something interesting. And then you paired that with the end-of-test aspect, that data kills the story. What you’re looking for is that transferable insight, that head smack, and that’s also where the why hypothesis drives the story for you, to think a little more deeply about the customer. What did we learn? How do these complexities in customer thinking impact the different things we’re doing as an organization? And then, of course, the iteration of tests. I’ve talked a lot there. So, Brooks, is there anything that I missed? Did you want to feed in more detail?

Brooks Bell – 35:39 – No, I think you really summarized that super well.

Allison Hartsoe – 35:45 – That’s my journalism degree and my background coming in. I just loved your story and everything we talked about. I oftentimes think I know a fair amount about testing, but boy, the way that you pick up the sense of the customer and load that in is really elegant and well done. So I highly encourage folks to reach out to Brooks Bell and her company if you are looking to do a testing program. As always, everything we talked about will be at ambitiondata.com/podcast. Brooks, thanks for joining us today.

Brooks Bell – 36:19 – Yes. Thank you so much, Allison.

Allison Hartsoe – 36:23 – Remember, everyone, when you use your data effectively, you can build customer equity, especially through testing. This is not magic; it’s just a very specific journey that you can follow to get results. Thank you for joining today’s show. This is Allison. Just a few things before you head out. Every Friday I put together a short bulleted list of three to five things I’ve seen that represent customer equity signal, not noise, and believe me, there’s a lot of noise out there. I call this email The Signal. Things I include could be smart tools I’ve run across, articles I’ve shared, cool statistics, or people and companies I think are doing amazing work building customer equity. If you’d like to receive this nugget of goodness each week, you can sign up at ambitiondata.com, and you’ll get the very next one. I hope you enjoy The Signal. See you next week on the Customer Equity Accelerator.
