Every agency owner knows this pain: the constant pressure to deliver consistent results across different channels, the endless status updates, and the gnawing feeling that there must be a better way to run things.
In this episode recap of Social Pulse: Agency Edition, powered by Agorapulse, Hal Smith, founder of H Street Digital and architect of a 90-day sprint system, shows exactly how he's turned traditionally messy processes like media buying, creative testing, and attribution into a systematic, repeatable approach that agencies everywhere can learn from.
[Listen to the full episode below, or read on for the highlights of Social Pulse: Agency Edition, powered by Agorapulse. Try Agorapulse for free today.]
What led you to realize that agencies needed a more structured approach to client success?
Hal Smith: Yeah, I appreciate that question. One interesting point you've already highlighted there, with more people or more clients, is looking at it from a purely business perspective, right? How do you scale, and how do you scale effectively?
The key question mark there, especially around more clients, is that a lot of the time adding clients is hedging against churn, knowing that existing clients might leave, right? The other assumption baked in is that by bringing on more people or more clients, things will go better at that higher scale, which is never a given.
So it's important to build out systems and processes that allow you to scale, so that when you take on new people or new clients, things incrementally get better. And, back to the first point, the key thing is: how do you make sure your existing clients don't churn?
What we've identified with this 90-day sprint system we've developed is a good strategy we can implement with our clients in 90-day chunks, so they know exactly what's coming, and there's a lot of clarity around the most important action items we need to be working on with the client over those next 90 days.
When we have those 90 days mapped out, it's pretty clear what we're going to be doing. The client is confident in what we're doing. We have a good strategy mapped out, as well as the underlying tactics we're going to execute on, and it just makes the flow of work much easier. What made me realize this is that, in starting H Street, we did the typical thing you do when you start: you just take on any business you can get.
And for us, that meant taking on a lot of smaller direct-to-consumer e-commerce businesses, companies doing between $1 million and $5 million in revenue. The important thing is that companies at that size are very chaotic. Generally, it's a small team, it's heavily founder-led, and that founder is extremely busy and doesn't have time to think.
So what happens is the team starts trying things randomly. It's a lot of "throw it against the wall, see what sticks": a lot of disparate tactics they try every week or every month, then drop and replace with something new. And there's a bit of amnesia going on, where they try something, it doesn't work, they forget about it, and six months later they try it again, it doesn't work, and they forget about it again.
So what we identified is that this is a messy space, and you do need to think about what a good playbook looks like for these types of companies, so they can scale effectively, gradually, and incrementally, with continued compounding success for these brands. That's the first important thing.
The second thing we found, especially with these 90-day sprint plans, is that 90 days is just the ideal timeframe for people to work within, and that's something I learned from reading a lot of business-scaling books. The key one that opened my eyes to this is a book called Traction, which lays out EOS, the Entrepreneurial Operating System. The key idea there is the 90-day "rocks": understanding that any business, to scale successfully, generally works in 90-day sprint cycles, because that's the longest amount of time people can hold in their heads and stay clear about what they need to be doing to scale.
And two, it's also about the longest amount of time you can work on more complex tasks that involve multiple people and multiple things falling into place. So we wanted to organize the chaos, put it in a framework that was understandable and digestible, and apply it across what we consider the most important performance levers for our clients.
And with that, we realized that for clients at this size, media buying, creative, offer strategy, landing page optimization, and the reporting and attribution infrastructure were the most important things to take a brand from $1 million to $5 million, and, we eventually found, from $5 million to $10 million.
So we realized we needed to create the sprint cycles in 90 days, and then figure out a structure for working across media buying, creative, CRO, et cetera, within those 90-day sprints as well. That was the genesis behind this. The through line is that any scaling business, any startup, should generally follow some type of scaling methodology, wherever you're applying it in the business. Most business owners at this stage go with a business operating system like EOS. We effectively took parts of the EOS model, as well as other business operating systems, and applied them specifically to performance media buying.
What are some of the key components in that framework that make it work for you and your clients?
Hal Smith: Yeah, for sure. And I think that's a key thing too: expectation setting. Fundamentally, it's that we have SMART goals. Each key objective we want to work on is set in a SMART goal format.
And the key thing with the acronym SMART is that the M stands for Measurable. So we commit ourselves to a specific metric we're trying to hit with whatever action item, optimization, campaign, or deliverable we're running. We always set a metric, and then we try to hit that metric at the end of the period.
The second piece is that we've identified that certain optimizations and deliverables take 30 days, while others might take a whole 90 days. So within this SMART goal framework, we have SMART goals for 30 days, for 60 days, and for 90 days. And then we break them out for each of our service delivery lines.
So we have it on the media buying side, the CRO side, and the creative side, as well as reporting and analytics. Each of those has its own SMART goals for each of those periods. We set that at the beginning of the sprint, and then for the next 90 days we're working against those goals.
What that allows us to do is remove a lot of the noise in the relationship, because what we fell into early on, to your point about clients' expectations around performance, was spinning our wheels, chasing random tactical things, hoping they would improve performance, and adjusting on a week-to-week basis. We learned that for some actions you take, you don't actually see the results in a week; you might see them over 90 days. So we tried to figure out what things we should be doing, over what time horizons we would see the results, and what metrics we should be tracking against to make sure we succeeded in driving that initiative.
So the fundamental structure is: there are SMART goals, the SMART goals are set by different time horizons, and they're set by different service lines.
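To make that structure concrete, here's a minimal sketch of how a sprint plan organized this way might be represented. The field names and sample goals are hypothetical illustrations, not H Street's actual tooling:

```python
from dataclasses import dataclass

@dataclass
class SmartGoal:
    service_line: str  # "media buying", "creative", "CRO", or "reporting"
    horizon_days: int  # 30, 60, or 90
    metric: str        # the Measurable part of SMART
    target: float      # the number committed to at sprint kickoff

# A sprint is the full grid: each service line gets goals per time horizon.
sprint_plan = [
    SmartGoal("media buying", 30, "CAC ($)", 45.0),
    SmartGoal("creative", 60, "CTR (%)", 1.8),
    SmartGoal("CRO", 90, "landing page conversion rate (%)", 3.5),
    SmartGoal("reporting", 30, "attribution coverage (%)", 95.0),
]

def goals_due(plan: list[SmartGoal], day: int) -> list[SmartGoal]:
    """Pull the goals that come due at a 30/60/90-day checkpoint."""
    return [g for g in plan if g.horizon_days == day]

for goal in goals_due(sprint_plan, 30):
    print(f"{goal.service_line}: hit {goal.target} on {goal.metric}")
```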
Hal Smith: I think I’ll take this piece by piece.
So, on media buying, what generally happens depends on how high-touch the media buyers are. Some agencies and media buyers will only look at an account once a month. They'll do some gradual optimizations, or write up a couple of bullet points about it and send that to the client. That's it.
Others, I would say, effectively overmanage the accounts they're in, making small optimizations every single day. That creates a lot of noise, and it's basically busywork, but the clients feel good that someone's in the account messing with things every day.
But the key thing with media buying is that that's a lot of effort, a lot of energy, and a lot of time spent on ineffective optimizations whose impact you aren't actually quantifying. Most agencies lean toward the monthly optimization cycle, which is still pretty low impact, and it's typically not validated. There's no goal they're trying to focus on. Generally, agencies work against a ROAS goal or a target CAC goal, but they're not tying their optimizations to that goal and saying, "If I do X, we will see an improvement in CAC in 30 days by Y percent."
Generally, it’s just “Hey, we’re trying all these different things and, oh, look, the CAC improved.”
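As a back-of-the-envelope illustration of what committing to a number looks like (all figures here are hypothetical): CAC is just spend divided by new customers acquired, so a goal like "improve CAC by 10 percent in 30 days" is directly checkable at the end of the window.

```python
# Hypothetical numbers: tie an optimization to a concrete CAC target.
baseline_spend = 50_000      # ad spend over the last 30 days ($)
baseline_customers = 1_000   # new customers acquired in that window
baseline_cac = baseline_spend / baseline_customers    # $50.00

target_cac = baseline_cac * (1 - 0.10)                # "improve CAC by 10%": $45.00

# At the end of the 30-day window, check the goal against actuals.
actual_spend, actual_customers = 52_000, 1_200
actual_cac = actual_spend / actual_customers          # ~$43.33
print(f"target ${target_cac:.2f}, actual ${actual_cac:.2f}, met: {actual_cac <= target_cac}")
```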
On the creative side, good creative agencies do build good testing frameworks. And so what they typically do is they do research around the personas, what creative angles they want to test, what formats they want to test, and then they create a creative testing roadmap.
And generally, that roadmap follows a monthly cadence, but it's typically not in conjunction with media buying. The creative team or creative agency wants to go test its concepts, and that's typically in conflict with the media buying team. What we've tried to do with this solution is integrate media buying and creative, and then tie both of those to the overarching goals for 30, 60, and 90 days.
And then CRO. I think what's most different about CRO today is that dedicated CRO agencies typically work across the entire website, very isolated from the creative and media buying sides. What we focus on is specific landing page and product page conversion rate optimization, tied to whatever we're doing from a creative testing perspective as well as media buying. It's all integrated: we're trying to run CRO tests that improve conversion rates, which then improve the way our Meta campaigns or Google Ads campaigns run.
And we try to fit those tests into these same timeframes, so it's all happening at the same time. Then, on the other side, where it's just landing page optimization for CRO, a lot of times those types of CRO agencies just make good-looking product pages, but there's no validation of impact.
And we, obviously, in creating this attribution and reporting infrastructure, make sure we have statistically significant results on these product pages, so we're actually improving the CAC or scaling spend at the target CAC our client might have.
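Hal doesn't spell out the statistics here, but one standard way to validate a landing or product page test like this is a two-proportion z-test on conversion rates. A self-contained sketch with made-up numbers:

```python
from math import sqrt, erfc

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test: is variant B's conversion rate really different from A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)               # pooled rate under the null
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return z, erfc(abs(z) / sqrt(2))                       # (z statistic, two-sided p-value)

# Hypothetical test: control product page vs. redesigned variant.
z, p = two_proportion_z_test(conv_a=300, n_a=10_000, conv_b=380, n_b=10_000)
print(f"z = {z:.2f}, p = {p:.4f}, significant at 5%: {p < 0.05}")
```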
What do you think the first step is, then, for an agency if they want to start moving towards this kind of a structured approach?
Hal Smith: So what we did is create a test tracker that we implemented for creative, media buying, and then CRO, and we basically created this index.
We logged every single test we ran, and we did that for about six months. At that point we were still doing things in a more disjointed way, but we were logging every single test, and along with each test, its impact. Generally, it always comes down to improving our customer acquisition cost, or allowing us to scale spend at the target customer acquisition cost, or CAC.
What we were then able to do is aggregate those learnings and figure out the most important things we should be doing, which allowed us to better prioritize what goes into these 30-, 60-, and 90-day plans. So the first thing was just collecting the data on what these optimizations do.
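A minimal version of that kind of test tracker could be as simple as a log you aggregate later. The fields and figures here are illustrative, not H Street's actual index:

```python
from collections import defaultdict

# Each entry logs one test and its measured impact on CAC (hypothetical data).
test_log = [
    {"channel": "media buying", "test": "broad targeting consolidation", "cac_change_pct": -8.0},
    {"channel": "creative", "test": "UGC testimonial hook", "cac_change_pct": -12.5},
    {"channel": "creative", "test": "static product carousel", "cac_change_pct": 2.0},
    {"channel": "CRO", "test": "above-the-fold offer callout", "cac_change_pct": -5.5},
]

# Aggregate months of logged tests to see which levers actually move CAC.
by_channel = defaultdict(list)
for entry in test_log:
    by_channel[entry["channel"]].append(entry["cac_change_pct"])

# Sort so the biggest average CAC improvements (most negative) come first.
for channel, changes in sorted(by_channel.items(), key=lambda kv: sum(kv[1]) / len(kv[1])):
    avg = sum(changes) / len(changes)
    print(f"{channel}: avg CAC change {avg:+.1f}% across {len(changes)} tests")
```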
The second thing we did was a lot of research and training on different prioritization models. One is the ICE scoring model (Impact, Confidence, Ease): we run our optimization recommendations, the things going into these plans, through that model to make sure each one is a worthwhile venture. We also did training around the Eisenhower Matrix, to surface things that weren't necessarily urgent but that we knew were high impact, and we used the MoSCoW method. And then we did training and research on scaling frameworks like EOS, as well as the Lean Startup. But the first step was collecting data.
The second step was researching prioritization models and scaling methodologies.
And then the third step was looking at both of those, identifying what we believed to be the truly high-impact things that took relatively low effort to implement, and building out a framework for that.
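The ICE model itself is straightforward to operationalize: score each candidate initiative, say 1 to 10, on Impact, Confidence, and Ease, then rank by the product. A sketch with hypothetical initiatives and scores:

```python
# ICE scoring: rank candidate optimizations by Impact x Confidence x Ease.
candidates = [
    {"name": "new customer offer test", "impact": 9, "confidence": 7, "ease": 6},
    {"name": "landing page headline test", "impact": 6, "confidence": 8, "ease": 9},
    {"name": "full attribution rebuild", "impact": 8, "confidence": 6, "ease": 3},
]

for c in candidates:
    c["ice"] = c["impact"] * c["confidence"] * c["ease"]

# The highest-scoring items are the best candidates for the next 30/60/90-day plan.
for c in sorted(candidates, key=lambda c: c["ice"], reverse=True):
    print(f"{c['ice']:>4}  {c['name']}")
```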
So now, when we're sitting with a client looking at performance, we can say, "Hey, let's take these things and put them in 30 days, or in 60 and 90 days." It makes it pretty seamless and easy for us.
Did you run into any kind of resistance when you were trying to implement these systems, whether with team members or maybe even clients? If so, how did you overcome it?
Hal Smith: Yeah. I love that question, because there's a lot of resistance, because it's pretty new. It's new both for the team members we've hired and for clients too.
There are a couple of key things I had to develop in setting client expectations, as well as team member expectations, for this. One is a principle around scaling: the theory of constraints. Generally, a business has a constraint somewhere, and what we're doing in setting these optimizations and goals is identifying a constraint in the business. We create a test that validates that constraint, as well as what could potentially solve it. Once we identify that, we implement the solution, and then, ideally, that constraint goes away.
So, one thing is explaining that concept to clients, as well as training team members on it, so they understand the theory and philosophy behind what we're doing. A lot of people in this space are just like, "Hey, we see this trend on social media, influencer marketing, let's go try that out and see what it does," right? That creates a lot of misses. What we want to do is analyze the business to figure out where the key constraints are, get data around them, validate that, and then solve for it. One example: say a client has fairly efficient campaigns on digital ad platforms like Meta and Google, and a fairly good brand presence.
However, let's say their sales have been dipping over the last couple of months. One thing we might identify is that their new customer offer just isn't competitive with the market. That's a constraint, because if they can develop a more competitive offer, frame it correctly, and build good creative around it, they'll have better click-through rates and better conversion rates.
That, in turn, accelerates their customer acquisition. So we'd want to run tests around that, and we want to do it in this structured way, so we have clear data proving it was successful.
So, basically, explaining that concept and then showing it to the client with a real-life example from their own business, I think, is super effective.
The key thing is just: how do you measure impact, right? So we have to collect data first, show them what that key number is, and tell them, "Hey, we're going to improve this number." And then the other key thing is over what time horizon, right? Because, to my earlier point, if their expectations around time horizons are too short, if they want to see improvement in a week, they're going to get incredibly frustrated.
But some things just take longer to solve. So setting the right time-horizon expectation has been extremely important for us. And then, especially on the performance media buying side, there's figuring out the attribution of things: Meta and Google Ads, cross-channel attribution, whatever the customer journey looks like, figuring that out in the context of what we're planning. Those were all important.
But in terms of how we executed this, the other key thing I found we had to do was actually create the whole strategy ourselves, without asking the client first. So we create the whole 90-day plan, and then we say, "Hey, we've developed this new approach, and we want to share it with you."
Then we would show them the 90-day plan, and they would immediately want to give us feedback: "Oh no, we should be doing this in 30 days or 60 days," or "take this off," or whatever. But they adapted to it, and they adapted by wanting to work on it with us, and we created this collaborative atmosphere where we both worked on it.
What I tried to do before was ask clients, "What are your priorities? What do you want to see done over the next quarter?" And they'd be like, "Oh, yeah, we don't know, maybe this thing or whatever." It was hard to solicit feedback or information from clients that way.
What I realized is that you have to present them with something, and then they go, "Wait a second, I want to do this and this." Then you can work on it collaboratively, and that got it done. But the key friction we actually experienced was internal, with team members: I had some team members who did not want to commit to some of these goals, and they effectively didn't want to be responsible for them.
And, as a professional services business, as the performance advertising agency we are, you need to drive true impact for your clients, and you need to be able to set these goals and commit to them. Because I truly believe that if you're not responsible for outcomes, you're not providing a professional service.
When we rolled this out, it caused a lot of conflict, and I had team members, who are no longer with us, who would say, "We're not responsible for outcomes. We can do stuff on media buying, but after a point, it's out of our hands." And I'm like, "That's not true." Clients are paying us to help them scale their brand, to put in systems that will allow them to acquire new customers at scale, and we're going to commit to these goals. A big part of that was me knowing we had all this historical data, so we had clear metrics, and we knew which optimizations would drive what levels of impact.
So I was confident in presenting some of these goals, but some of the team members weren't, and it was a hard conversation with some of them. It was a bit of a rough patch to go through as a business. But we now have a team that's inspired by this; they want to learn it and be effective within these plans, and it's worked out well.
Were there any challenging client situations where putting in this kind of system helped to turn things around for them?
Hal Smith: I have a good example with a client, a high-end luxury skincare product. They have an incredible brand, and as part of having a great brand, they're focused on their creative. They want to make sure everything, across the website and any content the brand publishes, has the same quality as the founder's vision for the brand.
The challenge is that when you run performance advertising campaigns, the majority of those campaigns are what are called prospecting campaigns, and a big piece of prospecting is bottom-of-funnel campaigns with direct response creative. Direct response creative is very transactional: "Hey, we solve this problem for you," or "We'll help you reach this aspiration. Click now to buy." It's very direct. A lot of the creative is, frankly, ugly, and a lot of good lower-funnel creative taps into the concept of pattern interrupt: it shows up in your feed looking very different from other ads, it's thumb-stopping, and it forces people to go, "Hey, what is this?" before they check it out and click on the ad.
Good lower-funnel direct response creative is not great from a brand perspective, so we had a lot of challenges with this client in getting creative approved. Some of our best-performing creative ultimately had to be paused because the founder didn't like the way it looked.
However, when we started implementing some of these structures around creative testing, especially in these 30-, 60-, and 90-day plans, we had set these goals and clearly defined the KPIs up front, the key metrics we wanted to hit with certain creatives. We were able to set expectations at the beginning about what we believed the impact of the creative would be. We would then launch it, test it, realize those goals, and bring it back to the client: "Hey, this is what we told you we were going to do. We've now done it, and we beat your expectations on performance."
And I think just having that clarity and consistency in how we set these goals at the beginning, how we achieve them, and how we structure the testing creates a lot of trust with the client. They become willing to try things more often, and we were able to try a lot more creative concepts and ideas that were highly effective but that, at the beginning, would have been an absolute non-starter. The client would never have approved them.
The other key thing we did there is on the reporting and attribution side, which is important for us as well; this client had a pretty messy reporting infrastructure.
So, over 30 days, we were able to figure out a good attribution time window to optimize toward. In 60 days, we could figure out good cross-channel attribution. And in 90 days, we figured out a good way to build proxy KPIs in the ad platforms that tie directly to their source of truth.
All of those things took those different time windows to complete, simply because you need enough data within each timeframe to validate the attribution infrastructure we were setting up.
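Hal doesn't detail the mechanics, but one common way to build a proxy KPI like this is a trailing correction factor: compare platform-reported conversions to the source of truth over a window, then translate the true CAC target into an in-platform target. A sketch with made-up figures, not necessarily what H Street built:

```python
# Hypothetical: tie Meta-reported conversions to the client's source of truth
# (e.g., backend orders) via a simple trailing correction factor.
platform_conversions_30d = 1_150   # conversions the ad platform reports
backend_new_customers_30d = 920    # what the client's source of truth shows

correction = backend_new_customers_30d / platform_conversions_30d   # = 0.80

# Proxy KPI: the in-platform CAC target that corresponds to the true target.
# Platform CAC = spend / platform conversions = true CAC * correction,
# so hitting the proxy target in-platform implies hitting the true target.
true_target_cac = 50.0
proxy_platform_cac = true_target_cac * correction                   # $40.00

print(f"correction factor {correction:.2f}; optimize to platform CAC <= "
      f"${proxy_platform_cac:.2f} to hit true CAC <= ${true_target_cac:.2f}")
```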
But the result was that, all of a sudden, we had a system that made it clear that what we were testing on the creative side was super effective at driving new customers. So it was another case where this integrated approach let us be effective on the reporting side while also backing up what we were doing from a creative testing perspective.
Thank you for reading the recap highlights from this episode of Social Pulse: Agency Edition, powered by Agorapulse. Try it for free today. And don’t miss other editions of the Social Pulse Podcast, like the Retail, Hospitality, and B2B editions.