Ep. 69 | How to Set Up a Testing Program

How should you set up a testing program and ensure it gets off to a good start? It’s more complicated than you might think. This week Loren Hadley, VP of Customer Optimization at Ambition Data, joins Allison Hartsoe in the Accelerator to discuss just that. Loren shares his stories running optimization strategy for one of the largest consumer brands out there. He believes whether your test is a success or a failure, there’s always something to be learned.

Please help us spread the word about building your business’ customer equity through effective customer analytics. Rate and review the podcast on Apple Podcasts, Stitcher, Google Play, Alexa’s TuneIn, iHeartRadio or Spotify. And do tell us what you think by writing Allison at info@ambitiondata.com or ambitiondata.com. Thanks for listening! Tell a friend!


Allison Hartsoe: 00:01 This is the Customer Equity Accelerator. If you are a marketing executive who wants to deliver bottom line impact by identifying and connecting with revenue generating customers, then this is the show for you. I’m your host, Allison Hartsoe, CEO of Ambition Data. Each week I bring you the leaders behind the customer centric revolution who share their expert advice. Are you ready to accelerate? Then let’s go. Welcome everyone. Today’s show is about how to set up a testing program, and to help me discuss this topic is Loren Hadley. Loren is the VP of Optimization at Ambition Data. Loren, welcome to the show.

Loren Hadley: 00:45 Hey, Allison. Thanks for having me on today.

Allison Hartsoe: 00:47 Loren, would you tell us a little bit about your background? Obviously we’ve worked together, and for a very long time, but maybe give the folks a little bit about your skill set and background.

Loren Hadley: 00:58 I started out my digital career in user experience, print and design, and I was quickly drawn to analytics because I was fascinated by understanding how people were actually interacting with the things that my team and I were putting together. I’ve gravitated to the point where that’s my sole focus now, and really a lot of my work has been around optimization from that point forward. So we’ve done everything from deep dives to try to understand why and how things are working, setting up testing programs to build a culture of iterative testing and learning, and creating reports that actually provide clear data to people so they can make decisions during the course of their business operation. With that, we’ve had the opportunity to work with a variety of companies, from small-scale startups to Fortune 100 companies, and it’s been a really interesting ride.

Allison Hartsoe: 01:48 Nice. It’s surprising to me when people say, oh, analytics, isn’t it just like you munge the data together and it’s there. And I think a similar assumption happens with testing strategies, where if I just kind of turn on the tool, I’ll have a test, I press the trigger, it’s an A/B test, and I get an answer. So is there really a philosophy to a testing program? Why should I care about having a particular testing program?

Loren Hadley: 02:19 I think most of the tools will happily tell you that if you buy their tool, they’ll solve all your problems, and there are a lot of great tools out there. But to riff on the movie A Million Ways to Die in the West, there are a thousand ways to fail in your test. There are a lot of different things that you can do wrong. Getting a test into a tool is definitely important in the sense that it allows you to deliver that experience and target who’s going to get it. But there are pitfalls from a technical perspective, from a creative and design perspective, and from a test planning perspective that can all steer you off track from a good test. So it’s really critical that you think it through and have a process in place to have consistent and solid testing. Testing is very powerful, but it’s also expensive, at least in terms of resources and opportunity cost. So it’s not something that you want to do lightly. You want to make sure that you’re picking out the right things, that you’re focusing in the right areas, and that you’re putting your energy into solving the problems that are really going to move the needle for your business.

Allison Hartsoe: 03:19 So I have heard that some people do a test and it’s the red button or the blue button, and that’s more or less how they frame their tests. Are you saying that by putting together a more robust process, you start to get away from those really simple tests that might not move the needle at all?

Loren Hadley: 03:40 I think you get away from focusing on the places that aren’t really going to improve the customer experience. Testing should be based on ideas that are coming from your team and from the data around the friction points that are causing problems for people. And it should be focused in areas of the overall process where you’re really going to be able to impact the overall performance of your program. Say you take it to a page that might be somebody’s pet project, and they really want to do some testing on it, but you discover that a very small portion of traffic reaches it and it really isn’t part of your conversion path. You can do the test, but even if it’s a fantastic win, it’s likely not to make any serious difference for the company. And yet you’ve just invested the same amount of resources that you might have if you were testing something, say, in your cart or checkout process that might add $1 million plus to your bottom line. So you need to really think about where you’re going to test and how you’re going to test.

Allison Hartsoe: 04:35 So along those lines, what should the process be if I’m going to think through it and try to develop a more robust angle to find the right places to test?

Loren Hadley: 04:46 So I think testing really needs to start out with analytics. As you dive in and do some analysis of the site, oftentimes there will be pretty obvious places where people are stumbling, where you’re losing customers out of the funnel, and those are the places that you want to then try to solve for. The analytics is oftentimes going to give you the quantitative aspects of it. You might need to do some work with your UX professionals, you might need to gather some voice of the customer to understand what factors are actually impacting that, and then design an experience that would improve that piece. Figure out, what’s the one thing that I think is going to be the most critical factor here, the thing that’s going to make this experience better? And then you set up your test around that.
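To make that concrete, here is a minimal sketch of the kind of funnel read Loren describes: compute step-to-step conversion and flag the largest drop-off as a candidate area to test. The step names and visitor counts are hypothetical examples, not data from any real program.

```python
# Minimal sketch of a funnel drop-off read: where do we lose the most people?
# Step names and visitor counts are hypothetical.

funnel = [
    ("product_wall", 100_000),
    ("product_detail", 42_000),
    ("add_to_cart", 9_500),
    ("checkout", 6_800),
    ("purchase", 5_100),
]

drops = []
for (step, visitors), (next_step, next_visitors) in zip(funnel, funnel[1:]):
    rate = next_visitors / visitors
    drops.append((step, next_step, rate))
    print(f"{step} -> {next_step}: {rate:.1%} continue")

worst = min(drops, key=lambda d: d[2])
print(f"Biggest drop-off: {worst[0]} -> {worst[1]} ({worst[2]:.1%}), a candidate area to test")
```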

Allison Hartsoe: 05:27 And I imagine at some point you’re probably thinking through, do I even have enough volume to test? Does that start at the beginning with the analytics, or does that come along later?

Loren Hadley: 05:36 So typically, I would propose a program where you would come up with your hypotheses, and those could be generated by your team or by other folks within the company. Oftentimes the best programs have test ideas coming in from a wide variety of areas that are then evaluated, and the best ones are prioritized in. That may come from your UX team, it might come from your e-commerce team, or it might come from somebody completely outside who just happens to have an opinion or a view from using the site themselves. Once you’ve got those hypotheses in place, then you need to do a quick evaluation. Does this test make sense? If it matches strategically with where you want to go, and if it’s in a place that’s going to potentially move the needle for your company, then it passes the strategic test.

Loren Hadley: 06:18 Second piece, do we have enough traffic? Can we get to statistical significance within a reasonable amount of time? Typically, capping a test at six weeks is a really good practice. If you go beyond six weeks, you start to have problems with longitudinal issues, and you’re tying up your site traffic in that particular test for an extended period of time, which limits what else you can do. So you need to first identify whether you can actually run the test and whether there’s a reasonable assumption that you could reach statistical significance on it if it proves out. Then you need to make sure that you can technically do it and that you actually have the resources to create the assets that are needed. As I mentioned, there are a thousand ways to fail in your test.
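As a rough illustration of that “do we have enough traffic” check, here is a small Python sketch using the standard two-proportion sample-size approximation. The baseline conversion rate, minimum detectable lift, and daily traffic figures are made-up inputs, and a real feasibility check would follow whatever statistics your testing tool uses.

```python
# Rough feasibility check: can this test plausibly reach significance inside ~6 weeks?
# Baseline rate, detectable lift, and daily traffic are hypothetical inputs.
from math import ceil
from statistics import NormalDist

def visitors_per_variant(baseline_rate, min_lift, alpha=0.05, power=0.80):
    """Standard two-proportion sample-size approximation."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + min_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    n = ((z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
          + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2) / (p2 - p1) ** 2
    return ceil(n)

n = visitors_per_variant(baseline_rate=0.03, min_lift=0.10)  # detect a 10% relative lift
daily_visitors_per_variant = 4_000                           # hypothetical traffic entering the test
weeks = (n / daily_visitors_per_variant) / 7
print(f"~{n:,} visitors per variant, roughly {weeks:.1f} weeks; beyond 6 weeks means rethink the test")
```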

Loren Hadley: 07:00 We’ve seen instances where people have proposed a test only to discover, after it was starting to get set up, that UX already had a roadmap going in a completely different direction, so regardless of whether the test wins or loses, it’s not going to get implemented. We’ve had instances where things are off brand, so you come up with this great idea, and then you learn from someone further up the food chain that, nope, there’s no way you can do that kind of a button because it violates our corporate brand approach. So there are a lot of different pieces that need to go into that. You need to think about, is it technically feasible? Can we actually create and deliver the experience that we want to deliver? And if we can, can we do it in a way, through the testing program, that’s not going to have negative impacts, for example, a flicker or some sort of a strange feel that breaks the rest of the site flow?

Loren Hadley: 07:42 The site always works one way, but then in this one instance, suddenly menus work completely differently. Those are some things that you want to consider from that perspective. Once you’ve got that in place and everyone has signed off that they’re going to be putting resources in and working on this as part of the team, then you can really prioritize and decide, based on the level of effort and the potential outcome, where do we want to place our bets? Where do we want to actually prioritize each of these different test options that are available? Once you’ve got tests in the queue, the next step is the development piece. Do we have the assets that we need for the graphic portion of it? Do we have it programmed in? Do we know exactly who we’re going to be targeting and how we’re going to be targeting those people as part of our traffic?

Loren Hadley: 08:21 Is the analytics in place? Are we tracking the changes so that we can actually read out what’s happening there? Then there’s a thorough QA process to make sure it all works, and that’s the point where you have those final sign-offs: yes, this is on brand; yes, this experience is being delivered well; yes, we can measure it, and everything is good to go. Then the test can go live, you run it, and you monitor it, and the final portion is really the analysis phase. That’s where you need to dig in and really understand what’s happening. A lot of times people are using tools, and the tool will give you a thumbs up or thumbs down: did you reach statistical significance on a single factor? What we’ve typically found is that there are oftentimes a lot of confounding factors. You may have an unexpected positive lift somewhere. You may have a positive lift from the exact thing that you’re looking at in the test, but there may be negative consequences in other ways, and so doing a thorough analysis of the test afterwards helps you really understand what’s happening and really learn from those pieces.
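Here is a hedged sketch of what that readout can look like beyond a single thumbs-up or thumbs-down: check the primary metric and a couple of secondary “guardrail” metrics for the kind of hidden negative consequences Loren mentions. All of the counts below are invented for illustration.

```python
# Sketch of reading out a test beyond one thumbs-up/thumbs-down:
# check the primary metric AND secondary "guardrail" metrics for side effects.
# All counts are hypothetical.
from statistics import NormalDist

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test on conversion rates; returns (relative lift, p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return (p_b - p_a) / p_a, p_value

metrics = {
    # metric: (control conversions, control n, variant conversions, variant n)
    "overall_purchase":  (1_520, 50_000, 1_665, 50_000),   # primary metric
    "footwear_purchase": (  910, 50_000, 1_050, 50_000),   # guardrail
    "apparel_purchase":  (  610, 50_000,   540, 50_000),   # guardrail (watch for a hidden loss)
}

for name, (ca, na, cb, nb) in metrics.items():
    lift, p = two_proportion_z(ca, na, cb, nb)
    print(f"{name}: lift {lift:+.1%}, p={p:.3f}")
```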

Allison Hartsoe: 09:15 Well, this is a pretty tight process. What I heard you say was there are really about seven steps, from the hypothesis to the evaluation of whether the test makes sense, and I think that’s an incredibly important piece. I can’t imagine how many times you must have run up against people who are like, yeah, let’s do this test, and then UX is doing something completely different that makes the test just negligible. And then the third one is, based on the level of effort, what’s the priority? The fourth is the development piece, the fifth is QA, sixth is the test goes live and you monitor it, and then seven is the analysis. And I imagine a couple of things. When you monitor it, you’re probably also monitoring customer service a little bit to see if people are getting weird experiences. Is that right?

Loren Hadley: 09:59 Yeah, we’re definitely taking a look at it, and if we see early indications that a particular test version is performing poorly, we would want to dig into that more deeply and decide, hey, do we want to carry this particular piece on, or do we have enough information to say that’s not a good experience, and then drop that portion of it? So there are definitely pieces that go into that. Jumping back half a step, I would add an eighth piece to your summary of the process, and that is: how do you institutionalize the knowledge? How do you pull in those learnings, those wins? It’s one thing to say we’ve won the test, let’s make this change, but it’s another to make sure that down the road people know what happened, how it works, and what the hypothesis was about what actually drove those changes, so that future decisions and knowledge of how your customers are interacting with your site become a broader asset to your company, and people understand what’s working and what’s not.

Allison Hartsoe: 10:50 I think that’s incredibly important, especially as you start to run more tests. And I’m going to hypothesize that since you said six weeks, if I take 52 and divide it by six, I can basically run eight and a half tests linearly. But maybe you start running that way, and eventually you’re running multiple tests on top of each other. How do you deal with that?

Loren Hadley: 11:12 Sure. What you’re going to want to look at is, if you’re at high volume, you’re probably going to need someone who is in a program or project management mode to keep all the balls in the air at the same time and make sure that you’re not losing things, where everything’s ready except you can’t get to it for another six weeks. To keep all those pieces going, you’re going to need to do some analysis around what your traffic flow looks like and how you can split it out. There are a variety of ways to split the traffic, but for example, you could be doing testing on your product pages and doing testing in the cart at the same time. You could be splitting out your traffic so that 10% of your population is getting test A and 10% is getting test B, assuming that you’ve got enough traffic flowing through each of those experiences to be able to reach statistical significance.
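For illustration only, here is roughly what a deterministic split into concurrent test buckets can look like. The test names and percentages are hypothetical, and in practice a testing tool handles this assignment; the sketch just shows the mechanics of hashing a visitor ID into a stable bucket.

```python
# Sketch of a deterministic traffic split for concurrent tests.
# Test names and percentages are illustrative; a real testing tool does this for you.
import hashlib

TESTS = [
    ("product_page_test_A", 0.10),   # 10% of traffic
    ("cart_test_B",         0.10),   # next 10% of traffic
    # remaining 80% sees the default experience
]

def assign(visitor_id: str) -> str:
    # Hash to a stable number in [0, 1) so a visitor always lands in the same bucket.
    digest = hashlib.sha256(visitor_id.encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF
    cumulative = 0.0
    for test_name, share in TESTS:
        cumulative += share
        if bucket < cumulative:
            return test_name
    return "default_experience"

print(assign("visitor-12345"))  # the same visitor ID always returns the same assignment
```

The deterministic hash is what keeps a returning visitor in the same experience across sessions, which matters for reading the test cleanly.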

Allison Hartsoe: 11:56 Can you tell us a little bit about maybe an actual test that went through, and how the process flowed in this particular framework?

Loren Hadley: 12:03 Sure. I’ll start off with a somewhat unexpected one that highlights the longitudinal issue. There was a test that was put together to improve the product merchandising, and hopefully the overall sales, for a particular line of sporting goods. The test was set up, it was off and running and doing a really great job of taking off, and then all of a sudden it stopped working. The test continued to run, but the wins, the drive towards a statistically significant improvement, stopped, and performance started to drop back off, and in the end we did not reach statistical significance. So when we got to digging in and analyzing it, what we discovered is that the business is very seasonal, and there was a definite peak in traffic as parents and athletes were going out and replacing their gear, getting ready for the new season. But then it hits a point where everybody has bought what they need for the season, and the sales drop off to a background level again. And it just happened that when this particular test was set up, it started out during the course of that peak, which was great.

Loren Hadley: 13:02 That’s probably the best time to test that particular piece, when you really have a lot of traffic volume and people are interested in the product. But because it did not reach significance before traffic dropped off the peak, suddenly the buying behavior completely changed, and the performance of the test completely changed. So we were left with: this is probably a good idea, but if we want to test it, we need to plan the testing timeframe taking that seasonality into account and make sure we start earlier, so that we can actually get the statistical significance from that portion of time when people are really active.

Allison Hartsoe: 13:37 Yeah, it’s like tax season, right? There’s a peak at a certain point of the year, and after that peak, if you start your test after April 15th, you won’t get the same kind of information that you would have gotten during the run-up to tax day.

Loren Hadley: 13:49 Exactly. Another piece that you could look at is what’s actually causing the result. Again, back to the point: if you do a test and you simply take your hypothesis and whether the answer was yes or no, you might not get the full picture. In this instance, there was an executive who felt that they could really improve sales by driving people through more of the brand story before they started shopping: we really want to tell you why we’re great, why you should love us, and why you should be excited about our products. And there were a lot of people who were interested in that approach. What the test ended up being was that they rerouted people on their way to shopping through a page with a whole bunch of information that touted the latest and greatest product features. The idea was that you were going to expose people to the brand messaging and really improve that, which would then ultimately improve shopping.

Loren Hadley: 14:37 The people that were actually on the e-commerce side, who were less focused on the brand, were tearing their hair out, but it just happened to be one of those HiPPO situations. And so they ended up running the test, and lo and behold, it showed up as a win. It actually did improve sales, and that caused a lot of head scratching. So when we started to take it apart from an analytical perspective, what we found is that driving people through that page didn’t necessarily lead to anybody reading the content. We found that very few people actually scrolled down, nobody engaged with it, and there was no material change in how much of that content was actually consumed. But what people did do, immediately when they hit the page, was flock to the navigation, and it just so happened that the navigation on that page was a stripped-down and simplified version.

Loren Hadley: 15:21 It let you go basically from the top of the pyramid. If you started out on the main portion of the site, you had to choose a category first, for example, do I want basketball shoes, or running shoes, or training shoes? And that gets to be very confusing. Even within the company, not everyone was able to tell you whether a particular shoe was a training shoe or a running shoe. It just happened to be a reflection of how that particular organization was put together. By allowing people to start at the very top, you could say things like, hey, I’m really interested in a black sneaker for all-around wear, and then you could find your way down from that point and quickly get to the options you were looking for, as opposed to starting out by diving into a deeper category, where I see part of the black sneaker set, then I go to a different category and see another part of the black sneaker set, and I get to a third category and see a third. So the real takeaway here was that simplifying the navigation could be a real win and actually create a significant increase in overall conversions.

Allison Hartsoe: 16:18 And it had nothing to do with the brand message.

Loren Hadley: 16:20 It had nothing to do with the brand message. It was purely a side effect. So the decision was that the brand messaging piece actually failed. It didn’t get people to engage more with that content, and it was disrupting the flow in some ways. There was actually a negative impact on some clothing sales, because for those shoppers the page was just a bump in the road that steered people away and possibly distracted them from what they were doing, and it didn’t provide the same navigation benefit. What came out in the end was that they could go back and simplify their overall main site navigation, make the process easier for people to shop, and that led to millions of dollars worth of benefit for that company. And it was all a side effect. If you had simply looked at it and said, hey, running people through this side page to get the brand messaging was a win, you would have come away with really the wrong story and missed how you could really improve the overall process.

Allison Hartsoe: 17:11 So does that encourage us to sometimes put a wild test out there, something that’s completely different, wacky, something that we don’t think is going to win in order to see if there are some learnings that can be pulled from it?

Loren Hadley: 17:24 I think if you’re going to do that, you would really need to think about what those learnings are. There should be a point of focus for any test, and just doing something wacky on the off chance that it might create something good is probably not the direction most companies would be comfortable going. But it also means that what you really need to do is understand what caused the change. You’re trying to solve for a single variable, but websites are complex, people are complex, and so any change you make may end up having other consequences. So really my takeaway from this would be: make sure you have a clear picture of what your test actually did, beyond just, it improved our conversion rate, or it got more people to see a product detail page.

Allison Hartsoe: 18:04 And I’m also going to assume here that because this was largely around search, even though the measurement was e-commerce, you could probably apply the same thing to a non-e-commerce site.

Loren Hadley: 18:15 Yeah. In fact, with most tests, if you’re trying to solve strictly for conversion, oftentimes you’ve got the wrong hypothesis. If I’m trying to make a correction or an improvement on my product wall, I’ll probably want to look at whether that does, in fact, impact the overall conversion rate. But really, a product wall is trying to get people to engage with a product and move on to a product detail page. So the hypothesis should really be, hey, if I fix this, I’m going to get more people to actually engage with my product detail pages, which may down the line impact conversion, but there are a lot of steps between the product wall and being able to check out. The same thing goes for non-e-commerce sites. If you are trying to enhance the experience or improve the behavior, then focusing your test around what you’re actually trying to drive to is most important, and it should be something that’s a direct outcome of that particular page.

Loren Hadley: 19:05 So if I’m on an informational page and you want to move me to a lead generation page, for example, then it makes a lot of sense to ask, hey, can I get more people over to that lead generation page, and vice versa. So as you build your hypothesis, the key is to think about what behavior you’re trying to influence, and that behavior should be something that’s directly related to what or where you’re testing. Are we getting people routed more effectively? Are we getting people to take that next step in the journey? Those are the things that you want to look at. When you do your analysis, you’ll look more broadly, but your hypothesis should stay focused. Expecting a product wall to improve CLV, or expecting an information page to improve your lead conversion, is probably a bit of a stretch. So keep your focus on what your change is actually trying to drive.

Allison Hartsoe: 19:52 Yeah, that makes sense. And really, it gets back to understanding the behavior in the first place, or at least framing the behavior up in the first place, so that you’re not just looking at page consumption, which is one of the things that drives me nuts. A lot of people will set up a test around just the number of people looking at a page, or the number of people coming in from a device to a page, and that completely ignores someone’s intent when they come through, which I think muddies the waters.

Loren Hadley: 20:18 I just think of it as, we’re digital anthropologists. People are leaving their footprints out there. They have purposes and needs that they’re trying to fulfill, whether that’s entertainment, whether that’s trying to get a new pair of sneakers. Our job is to try to discern what those people are trying to do and how effective we’re being at helping them do it. And testing is one way that you can sort that out, where we start to understand how making changes to that experience changes those users’ behavior and hopefully improves their overall experience.

Allison Hartsoe: 20:48 Oh, that’s perfect. I love it. So let’s talk a little bit about some of the dangers that people run into when they’re trying to set up a testing program. I think one thing we touched on earlier, which really aligned with the second story you told, is the highest paid person’s opinion. You run a test, you get an interesting finding, but in this case it seems like the way that you institutionalize the knowledge is really critical, because the way the test was set up didn’t reflect the learning. So how can you, or maybe is that a danger in the testing program, making the knowledge stick?

Loren Hadley: 21:24 Yeah, I think every test that goes in should have a clear hypothesis: what is it that you think is going to happen, and ultimately what are you going to do with it if it proves to be true? Going back to getting away from the HiPPO approach, if you have a clear governance process, then you’ve got a good way to assess all those different tests, put them on a level playing field, and see which ones are going to make the most sense, so you can vote on the ones that you think can actually deliver the biggest wins. For example, if your company is really small, it may just be that there are one or two people who weigh in on it. If your company is larger, if there are more politics involved, then you might have a committee made up of a director or executive from the testing side, someone from the user side, the UX side, and perhaps someone from the business side, all involved in that process.

Loren Hadley: 22:11 Then you get the chance to evaluate it from a variety of different perspectives and come to a consensus on what makes the most sense for the organization. It also gives you some air cover in case you’ve got that HiPPO who’s putting out an idea: if, when everything is weighed out, the committee doesn’t do it, it doesn’t point the finger at a single person and say, oh, that’s just because George doesn’t like my idea. It’s laid out, and you’ve got some good groundwork for why those decisions were made and why it’s in the business’s best interest to go in that direction. So that’s the governance standpoint. On the institutionalization side, getting the information out to the organization after a test can be challenging. It’s usually pretty easy to get it out there immediately, but how do you keep it?

Loren Hadley: 22:53 You need to have some sort of a process or plan in place for summarizing that information and storing the results in an easy way for people to see what’s already been tested. It’s surprising the number of times the same test gets brought up: we ran a test, and six months later somebody proposes virtually the exact same test that was already run. Unless you’ve got reasons to believe that something has drastically changed in the way that your customer base is working or the way your site is working, chances are you probably don’t want to spend a lot of resources trying to run that test a second or third time, unless there’s a really specific reason behind it. A lot of times the reason is just, oh, I didn’t know you already did that. And so if you can surface that, so that those people can either see it before they make a proposal or be given the results of the test to see if that satisfies the underlying need, then you’re in a good spot. And that’s an area that I think falls off for a lot of people. The test is done, the analysis is done, it’s marked a win or a fail, and then it languishes away somewhere. And while it’s theoretically in the tribal memory, it’s relegated to the back of somebody’s mind, as opposed to actually being a useful working document that you can come back to and learn from.

Allison Hartsoe: 23:56 So are you suggesting that when you institutionalize that knowledge, you store it all in one place, or you put it in a wiki? What’s effective in that respect? Or is just a spreadsheet enough?

Loren Hadley: 24:09 It’s going to depend upon your organization, your level of sophistication and size. But something like a wiki or a SharePoint site, or even just a Dropbox repository that is structured in a way that you can find things, is a good way to make sure that you archive that material. A lot of the work in trying to keep things top of mind is just socializing the testing program, making sure people are aware of those wins, and then having an easy spot to get to. Again, that might be the wiki, or it might be a summary sheet in the Dropbox, something that helps you understand what tests have been run on different parts and areas of the site. Then you can quickly understand whether we’ve already done something like that, or whether this test should be an iteration on it: well, we tested this, but my idea was a little different, here’s why, and I think this is going to make the difference. You can start to learn and iterate based on what you already know, as opposed to just running the test, forgetting what happened, and then having to go back there again.

Allison Hartsoe: 25:02 Right, I’m going to give a call out to our friends over at Insight Rocket here. If there’s a company that already uses Tableau and they need the ability to tell a story around it, I think that tool is particularly good for leveraging all of the information and putting it into a format that makes it easy to understand and digest. I think it’s just a good general practice, whether you’re doing testing or whether you’re just trying to understand different analyses that you’ve run; somewhere, that tribal knowledge has to live.

Loren Hadley: 25:31 Agreed. I’ve seen Insight Rocket used really effectively. In some cases the output is in Tableau, in others it might be a PowerPoint deck or spreadsheets, just depending on how your organization approaches it. But yeah, I think that’s a critical piece, and Insight Rocket is a great tool for that.

Allison Hartsoe: 25:47 Okay, so let’s say I’m convinced and I want to do a testing program. What should I think about first, second, third? Should I just go through the checklist, or are there some other areas or softer things I need to clear before I could actually get a testing program off the ground?

Loren Hadley: 26:02 I think the first thing that you probably need to do is really start getting your process into place so that there is a plan and people know what’s going to happen. Do a landscape analysis: go in and really take a look at your site and the traffic and how various pieces of the site influence your ultimate outcomes, be that lead generation, be that e-commerce conversions. Understanding what really plays a part lets you go through and size up opportunities so you can say, hey, if we got a good win off of this, it would be worth x million dollars. It’s not directly ROI, but it allows you to size things up with a perspective on what the total outcome could be. So I guess I would say do your analysis up front, build that out, and get it socialized.

Loren Hadley: 26:46 Make sure that your process is in place when you really get things kicked off, and then I think you’re at a good spot to start working through it. It’s an iterative process, just like the testing and learning itself. As you develop the program, your organization is going to become more adept at it, and you’re going to get more people who are interested in and involved in what’s going on. You may find that your initial ideas are all coming out of your development, design, and UX teams, and that’s fine, that’s great in the beginning. But you may find that people from customer service, people who are in the stores, and fans who are using the site and giving you feedback all come up with some really great ideas to spark good tests. So being able to evolve that over time, socialize it, and get more and more of the organization involved in that test-and-learn is really critical to being able to adapt and use the full resources of the organization.
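One way to picture the opportunity sizing Loren mentions as part of the landscape analysis is a back-of-the-envelope calculation like the sketch below. Every input is a hypothetical example; the point is simply to rank test ideas by what a plausible win could be worth, not to predict results.

```python
# Sketch of opportunity sizing: roughly what would a win in this area be worth annually?
# All inputs are hypothetical illustrations.

annual_sessions_reaching_area = 2_400_000   # traffic that actually flows through the area
current_conversion_rate       = 0.028       # baseline conversion from that point forward
average_order_value           = 85.00       # dollars
assumed_relative_lift         = 0.05        # a plausible 5% relative improvement, not a prediction

incremental_orders = annual_sessions_reaching_area * current_conversion_rate * assumed_relative_lift
incremental_revenue = incremental_orders * average_order_value
print(f"~{incremental_orders:,.0f} extra orders, ~${incremental_revenue:,.0f} potential annual impact")
```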

Allison Hartsoe: 27:36 Yeah, I love what you’re saying, because I think that’s one of the beauties of testing, and it’s one of the markers that separates companies that are less mature in customer centricity from companies that are more mature. The ones that are more mature inevitably have a testing program running, because they need to listen to the customers and understand what they’re asking for, and oftentimes that becomes the hypotheses for different tests, because when we look at data, we can’t get inside someone’s head. We get close, but we’re not actually in there, and the tests help us understand. And so that cultural component, I think, is a really interesting pivot that happens, from a company that might be a little bit static, a little bit slow in using data, to one that gets more excited about using data through the testing paradigm, to one that’s really hammering the tests. It’s almost like a canary in the coal mine. If I were looking for a job in analytics, the first thing I would look at is how good the testing program is, to understand how much they’re going to push my skills.

Loren Hadley: 28:39 Yeah. I think that’s an excellent point.

Allison Hartsoe: 28:41 I want to ask you one more question about testing. In analytics, there are so many different areas that you can focus on. What is it that attracted you particularly to testing? Was there some kind of love behind it that made you say, hey, instead of focusing on tracking or instead of focusing on SEO, I really love testing? What is it that really drew you to this particular aspect?

Loren Hadley: 29:07 So I think, as I alluded to earlier, the whole optimization piece is about trying to understand how people are using the site. How can you make it better for them, more effective? And if it’s better for them, it’s better for your company. However, if you don’t have a testing program in place, your analytics are only going to let you see what’s happening in the wild. You can see what’s happening in a given situation and try to discern how to improve it. But testing lets you set up those opportunities to say, what if this, what if that? You can then compare two different pieces, and it gives you a much stronger tool to really focus in on what’s making the difference. And so I see that as being a really powerful addition to any optimization person’s toolset.

Allison Hartsoe: 29:48 Yeah, it is a good piece. All right, Loren, if people want to reach out to you, how can they get in touch?

Loren Hadley: 29:53 Sure. You can find me at loren@ambitiondata.com or on LinkedIn at Loren Hadley, and I think we’re going to go ahead and put together a spreadsheet that will run you through what our process checklist looks like as well.

Allison Hartsoe: 30:05 Excellent, so we’ll link to that in the show notes so that people can download it, follow the process, and get a little head start. As always, links to everything we discussed are at ambitiondata.com/podcast. Loren, thanks for joining us today. It’s always a pleasure to hear your comments about testing and the things that people should be looking out for.

Loren Hadley: 30:24 Thanks. It’s been a pleasure.

Allison Hartsoe: 30:26 Remember, when you use data effectively, you can build customer equity. It is not magic. It’s just a very specific journey of testing and evolving that you can follow to get results. Thank you for joining today’s show. This is your host, Allison Hartsoe, and I have two gifts for you. First, I’ve written a guide for the customer-centric CMO, which contains some of the best ideas from this podcast, and you can receive it right now. Simply text ambitiondata, one word, to three one nine nine six, and after you get that white paper, you’ll have the option for the second gift, which is to receive The Signal. Once a month, I put together a list of three to five things I’ve seen that represent customer equity signal, not noise, and believe me, there’s a lot of noise out there. Things I include could be smart tools I’ve run across, articles I’ve shared, cool statistics, or people and companies I think are making amazing progress as they build customer equity. I hope you enjoy the CMO guide and The Signal. See you next week on the Customer Equity Accelerator.
