Case Study: Using Predictive LTV to Improve Campaigns Using the Maximize Conversion Value Bidding Strategy

This is the third and last (at least for now) case study in the series of improvements we made to a Google Ads account of a client in the home services industry.

We started by restructuring the account, which doubled their revenue while lowering the ad spend by 34%. Then, we tested Maximize Conversion Value vs Maximize Conversions bid strategies, which allowed us to scale up the account further, spending about 30% more at a higher ROI. Today I want to share how revenue predictions helped us improve and scale our Maximize ROAS campaigns compared to waiting for the real revenue to arrive.

About The Client

The client operates in the home services industry, specifically providing appliance repair, handyman, plumbing, and similar services in most major US cities and their surrounding areas.

The Client’s Conversion Funnel

The conversion process is pretty simple here. A user sets up an appointment, and then the client matches the appointment with the most qualified technician available at the chosen time slot. Once a technician sees the job, they can either accept it and be assigned to it or reject it and pass it to the next technician in line.

Once the technician gets to the client’s house, they diagnose the problem and give the client a quote. If the client accepts the quote, the technician buys the parts and comes back with them to complete the repair. If the client rejects the quote, the client pays a diagnostic fee.


At this point, we were reporting the revenue as soon as it arrived. When we analyzed the data, we saw that 98% of the revenue was reported within six days of the date the appointment was set. This is not ideal because it means Google has to wait six days for us to signal whether it “got it right,” which doesn’t let it adjust positions and costs in time.

The Goal

By predicting the user’s value, we wanted to allow Google to take a less cautious approach that would, hopefully, let us scale up the account even further without sacrificing profitability.

Objectives of Success

We’d consider the test a success if any of these scenarios took place:

  1. More volume at the same ROAS
  2. The same volume at a higher ROAS
  3. More volume at a higher ROAS

When I say volume in this context, I mean ad spend.

How Do Value Predictions Work?

Based on the data from last year, we built a model with a company called Voyantis. This model predicted the lifetime value (LTV) of a user just a few hours after they set their first appointment.

The predictions keep updating, for a total of five predictions per user: the first comes 6 hours after sign-up, then at 24, 48, 72, and 90 hours.

Recency vs Accuracy

The later the prediction, the more accurate it is, because the model has had more time to collect signals about the user: accuracy rises from about 80% in the first prediction to a tad above 90% in the last.

This allowed us to report the value to Google almost in real time, instead of almost a week later.
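To give a sense of what that looks like in practice, here's a minimal sketch of reporting a predicted value instead of waiting for the real revenue. The column names follow Google's offline conversion import template, but treat the conversion name and the 6-hour offset as illustrative, not our exact setup:

```python
from datetime import datetime, timedelta

def build_conversion_row(gclid, predicted_ltv, signup_time, currency="USD"):
    """Format one predicted-value conversion for an offline conversion import.
    Field names mirror Google's import template; values are illustrative."""
    # Report the first (6-hour) prediction instead of waiting ~6 days
    # for the real revenue to arrive.
    conversion_time = signup_time + timedelta(hours=6)
    return {
        "Google Click ID": gclid,
        "Conversion Name": "predicted_ltv",  # hypothetical conversion action
        "Conversion Time": conversion_time.strftime("%Y-%m-%d %H:%M:%S"),
        "Conversion Value": round(predicted_ltv, 2),
        "Conversion Currency": currency,
    }

row = build_conversion_row("Cj0KCQ_example", 182.4, datetime(2024, 3, 1, 9, 30))
```

The same row can then be refreshed as the later (24-, 48-, 72-, and 90-hour) predictions arrive.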

Fine Tuning The Predictions

We aimed to hit 90% accuracy with the predictions while getting as few ‘bad mistakes’ as possible. This required a few rounds of revisions and fine-tunings to get it right.

Good Mistakes vs Bad Mistakes

I know what you’re thinking: what on earth are ‘good mistakes’? Let me tell you.

There are two ways to make a prediction mistake: you can attribute value to a user who shouldn’t have had it, or you can fail to attribute value to a user who was valuable. Both make the prediction less accurate, but in our context, attributing value to a user who shouldn’t have gotten it is much worse, making it a “bad mistake.”
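To make the asymmetry concrete, here's a toy sketch of how predictions could be scored so that over-attribution hurts more. The 3x weight is purely illustrative, not the weighting Voyantis used:

```python
def score_predictions(records, bad_mistake_weight=3.0):
    """Sum prediction errors, penalizing over-attribution ('bad mistakes')
    more heavily than missed value ('good mistakes'). Weight is illustrative."""
    penalty = 0.0
    for predicted, actual in records:
        error = predicted - actual
        if error > 0:
            # Bad mistake: value attributed to a user who didn't deliver it.
            penalty += bad_mistake_weight * error
        else:
            # Good mistake: value we failed to attribute.
            penalty += -error
    return penalty

# Over-predicting by 100 is scored three times worse than under-predicting by 100.
over = score_predictions([(200, 100)])
under = score_predictions([(100, 200)])
```

A model tuned against a metric like this learns to err on the side of under-reporting value, which is exactly the behavior we wanted.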

The Plan

Testing Method

We decided to take a similar approach to the previous test, in which we compared Maximize Conversions to Maximize Conversion Value.

We created a separate conversion action and let it run as a secondary goal to help Google understand its pace. A month later, we reviewed the values it reported to Google, made sure they were correct, and started the experiment.

We ran the experiment for 12 weeks and made the decision based on the last 5, to allow Google to adjust to the new conversion.

The Results

Unlike the previous test, Google got used to the new conversion extremely fast in this experiment, so we decided to shorten the length of the experiment by five weeks, from 12 to 7.

From the second week, it was very clear that the predictions worked better, but we kept the test alive to make sure the result was statistically significant.

According to the last five weeks of the test (the period by which we measured the experiment), the prediction group spent 40% more at roughly the same ROAS as the control group. This was above our expectations.

The Aftermath

In the months that followed, we decided to sacrifice some of the additional scale to improve the ROAS and lower the pressure on the technicians, at least until the client could recruit more.

We decided to return to the account’s original spending level, meaning a 16% reduction in ad spend. We made this reduction by increasing the target ROAS rather than limiting the budget, which gave us an additional 10% uplift in ROAS.


Ever since we started to manage the account, we've seen tremendous growth and a very big improvement in the profitability of the campaigns.

The first round, which included the restructuring of the accounts, doubled the client's revenue and lowered their ad spend by 34%, effectively turning them profitable for the first time ever at an ROI of 300%.

In the second round, we improved ROAS even further, from 300% to almost 350%, while increasing spending by 30%. However, by then we still weren't spending as much as we wanted.

The third move we made helped us surpass our spending target by 16% while keeping the same ROAS. Later, we reduced spending by 16% to get to the original state, allowing us to improve the ROAS by an additional 10%.

Overall, by the end of the third test, the account kept its original spending capabilities, while improving the ROAS from 100% to 380%.

If you need help scaling your Google Ads account while increasing its profitability, contact us, and we'll be happy to help.

Bidding Strategies Case Study: Maximize Conversions vs Maximize Conversion Value

In this case study, I'll share the results of a bidding strategy test we conducted. We wanted to see whether changing our bidding strategy from Maximize Conversions with a target cost per action to Maximize Conversion Value with a target return on ad spend would help us increase our scale while maintaining our ROI.

About The Client

This is the second part of a three-phase plan that turned the client from a company that lost money on every lead it got (in the name of growth) into one that can continue growing without needing to raise more capital, a rare case in its industry.

They are one of the biggest home services companies in the US, with around 4,000 technicians offering services such as handyman, appliance repair, plumbing, electricians, and more.

The Client’s Conversion Funnel

The first step a user goes through is to book an appointment. Then, a technician arrives at the set time to diagnose the problem and give the client a quote. If the client approves the quote, the technician orders the parts (if needed) and returns to fix the appliance once the parts arrive.

This whole process takes an average of 4 days, and by the 6th day, 98% of the revenue is collected.


We got to this test about a month after we successfully completed a transformation of their entire Google Ads account, doubling their revenue while lowering their ad spend by 34%. At that point, we had a technical problem with reporting revenue back to Google, and we had just figured out a way of doing it despite the technical difficulties.

Our biggest challenge wasn’t technical, though. It was the fact that, given the nature of the business, revenue comes in installments (service calls and diagnostics, followed by the actual repair that happens only after the parts arrive). 

After going through the data, we found that 98% of the revenue was received by the 6th day after we got the lead. It’s not perfect, but it’s something we could work with.

The Goal

The goal of this move was to scale up the account by bidding more competitively for users who are more likely to perform a high-value transaction and much lower for those who are not.

Objectives of Success

We knew this type of test could play out in many different ways (as most tests do), so it was important to define a successful outcome beforehand. We narrowed it down to one of these:

  1. An increase in spending while maintaining current ROI (preferred)
  2. Similar spend, while improving the ROI further

Early signs we expected to see were:

  1. Increase in CPC
  2. Stable conversion rate (click to lead)
  3. Improvement in conversion rate (lead to sale)
  4. Increase in ASP (average sale price)

The Plan

We started by creating the conversion action and setting it as a secondary goal. We did this so Google could understand how it works, how long it takes for the value to be reported, and how many value-based conversions each successful transaction has. It also allowed us to see the current ROAS of the campaigns from within the Google Ads interface, which was a nice change. 

Testing Method

The testing method we chose was to run it as an experiment with a 50-50 split of users based on cookies rather than searches, to ensure the credit was not shared between the test groups.

We ran the test for 12 weeks and decided on the results based on the last 5 weeks. We did this to compensate for the way Google treats conversions differently depending on whether they are set as secondary or primary.

We chose five markets to start with out of the company’s 45. They had to be big enough, and we wanted each to have a different level of profitability.

We set the target ROAS to the campaigns’ current return on ad spend, so it was set pretty high, against the best practice of starting low and increasing it gradually. The reason was that the whole point of the test was to see if we could get more volume at the same ROAS. So, if there’s one thing I want you to take from this case study, it’s that no best practice can substitute for your own campaign and test goals.

Secondary vs Primary Conversions

When a conversion action is set as a secondary goal, Google learns only certain aspects of it and doesn’t get the full picture. It learns about pace and frequency, but not about the actual value, which is the most important aspect of the test.

Google Likes What Google Knows

Another reason for the long test is that when testing different bidding strategies, Google tends to favor what it knows, which in our case was the Maximize Conversions campaigns. Google calculates the bid we go into the auctions with using our conversion data, and since a new strategy has significantly less data, it tends to take a more cautious approach.

The Results

The first two weeks of the test were pretty hectic. The ROAS campaigns worked very inconsistently, like a driver alternating between the gas and brake pedals. Some days, we spent almost double our daily budget, followed by days with almost no spending at all. Overall, the ROAS was in the same neighborhood as the target but not quite there, and the bipolar nature of the campaigns made it difficult for the technicians to plan their days.

In the next two weeks, the ROAS campaigns “calmed down” a bit. They worked slower than the control group (Maximize Conversions with a set tCPA) and at a slightly lower ROAS, which gradually improved.

In the following week or so, the scale grew significantly, signaling that Google had figured out which users we wanted to target.

The last five weeks of the test, which were the period of time by which we decided whether the test was successful or not, were amazing. On average, the ROAS was 15% higher than the control group while spending 26% more. 

The Aftermath

After the test was over, we started gradually rolling it out to the rest of the account. We decided to use experiments again to make the transition but to shorten the duration to 6 weeks instead of the original 12.

After about three months, the entire account was using Maximize Conversion Value with a tROAS. Spend grew by over 30%, while the ROAS improved by 12% overall.

What’s Next

The next item on our checklist was to shorten the time between getting the lead and reporting the total conversion value. For that, we used a predictive model that helped us predict a user’s LTV as soon as we got their lead for the first time. This, though, is a case study of its own.

Lead Qualification Case Study: 43% Lead Conversion Boost in Less Than 2 Weeks

In today’s case study, I’ll show you the process through which an established financial services company closed 43% more sales from the exact same leads, effectively reducing their cost of acquiring a new client from $730 to $500.

About the Client

The client is an established regulated financial services provider that operates in the UK, Canada, Australia, Spain, and Cyprus.

Each location has its own regulator with its own requirements. The UK’s FCA is different from Cyprus’ CySEC, and although Australia’s ASIC is pretty similar to Canada’s CIRO, there are subtleties that sometimes require different approaches.

Day One

Before we started the process, their advertising budget was about 1.2 million pounds. It brought roughly 1,000 leads per day that converted into paying clients at around 5.5%, making their CAC (cost of acquiring a new client) around $730.

What Affects CAC

The cost of acquiring a new client is affected by many factors. In fact, it’s affected by everything. If we look at the user journey from seeing an ad all the way to becoming a paid customer, we see three steps:

  1. Clicking on an ad
  2. Filling out the form on the landing page
  3. Becoming a paying client after getting a call from the call center

Every step has a similar effect on the CAC: lowering the CPC by 5% and improving the click-to-lead conversion rate by 5% will lower the CAC by roughly the same amount.
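The arithmetic behind this is simple: every funnel step multiplies into the number of clients, so an equal percentage improvement at any step moves the CAC by the same factor. A quick sketch with made-up numbers (not the client's):

```python
def cac(spend, clicks, click_to_lead, lead_to_sale):
    """Cost of acquiring a client: spend divided by the number of users
    who make it through every step of the funnel."""
    leads = clicks * click_to_lead
    clients = leads * lead_to_sale
    return spend / clients

base = cac(spend=100_000, clicks=50_000, click_to_lead=0.10, lead_to_sale=0.055)

# 5% more clicks for the same spend (i.e. a cheaper CPC) vs. a 5% better
# click-to-lead rate: both lower the CAC by exactly the same factor.
more_clicks = cac(100_000, 50_000 * 1.05, 0.10, 0.055)
better_cr = cac(100_000, 50_000, 0.10 * 1.05, 0.055)
```

Because the denominator is a product, there's no single "right" step to optimize; the cheapest improvement wins.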

The Plan

Even though we’re a marketing agency, we knew that lowering the cost per lead could only take us so far. 

Looking at the numbers, it was very clear that the biggest pain point was the sales team's ability to convert the leads. By this point, we had already audited the account and knew the leads were relevant.

We decided to pass a few parameters in the URL, indicating exactly what the users searched for before they converted, which ad they clicked, and which landing page they converted from.

All of these appeared in the CRM and were available to the rep when they called the lead. This gave the reps a lot of information that allowed them to adjust their pitch to match the users' expectations.
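In Google Ads, this kind of setup is typically done with ValueTrack parameters (such as `{keyword}`, `{creative}`, and `{campaignid}`) in a tracking template; the parameter names we append, like `kw` and `ad`, are just a naming convention for illustration. A sketch of the CRM side that reads them back out of the landing page URL:

```python
from urllib.parse import urlparse, parse_qs

# Example tracking template (ValueTrack placeholders filled in by Google
# at click time): {lpurl}?kw={keyword}&ad={creative}&cmp={campaignid}

def extract_crm_fields(landing_url):
    """Pull the ad-context parameters out of the landing page URL
    so they can be stored on the lead record in the CRM."""
    params = parse_qs(urlparse(landing_url).query)
    return {
        "search_keyword": params.get("kw", [""])[0],
        "ad_id": params.get("ad", [""])[0],
        "campaign_id": params.get("cmp", [""])[0],
    }

fields = extract_crm_fields("https://example.com/lp?kw=boiler+repair&ad=12345&cmp=678")
```

Whatever passes these fields to the CRM (a hidden form field, a webhook, etc.) depends on the stack; the point is that the rep sees the search context before dialing.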

Searching for Low-Hanging Fruit

We asked for transcripts of calls that converted and calls that didn’t. We found that we could identify pretty early in the conversation whether a client was going to convert.

As it turns out, relatively early in the pitch, the sales representative gives a very detailed example that only makes sense if the client searched for a certain theme of keywords. So just letting the reps know whether a user belonged to that group allowed them to create separate sales pitches designed for different types of common users.
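The flag itself can be as simple as a keyword-theme lookup. A toy sketch (the theme name and keyword list are invented; the real list came from the call transcripts):

```python
# Hypothetical keyword theme; the real one was derived from transcripts of
# calls that converted vs. calls that didn't.
THEME_KEYWORDS = {
    "detailed_example_fits": {"forex", "cfd", "day trading"},
}

def lead_theme_flag(search_term):
    """Return True if the user's search term belongs to the keyword theme
    for which the rep's detailed example makes sense."""
    term = search_term.lower()
    return any(kw in term for kw in THEME_KEYWORDS["detailed_example_fits"])
```

The rep then sees a single yes/no field on the lead and picks the matching pitch.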


Over the following days, the sales team increased their conversion rate day by day until, after roughly two weeks, they reached a consistent new conversion rate of 7.8%, an improvement of 43%.

The new conversion rate lowered the CAC from $730 to a little over $500 and gave us an opportunity to bid more aggressively to scale up the account while keeping the CAC lower than it was before.

The Aftermath

Of course, this process didn’t mark the end of our work with the company, but it was the most impactful one. After the results settled, we did the math and decided to sacrifice some of the 43% profitability and reinvest it in scaling the account further.

After some experimentation, we settled on aiming for a CAC of $600 and a monthly spend of a little under $1.7 million.

This gets the client a little more than 2,800 paying clients monthly instead of about 1,650, while increasing profitability by almost 20%.

Home Services Case Study: 2X Revenue and -34% Ad Costs With Google Ads

In this case study, I'll share with you the strategy we used to double the revenue of one of the biggest home services companies in the US while lowering their Google Ads spending by 34%.

This wasn’t a quick process. It took about six months of constant improvements; I’ll spare you the details and keep only the strategic changes that had a clear impact on the account.

About The Client

The client is one of the largest home services companies in the US, offering a wide range of services, including handyman, appliance repair, plumbing, electricians, and more.

Geographically, they service the vast majority of big cities in the US and have divided their service area into markets. Each market is managed in a separate campaign to control supply and demand better.

They employ more than 4,000 technicians, for whom we were tasked with getting jobs. Not an easy task.

The Client’s Conversion Funnel

This business works pretty simply, but there are cases where it gets a bit more complicated (such as a job that requires multiple technician visits or a repair that falls under the guarantee), which I’ll skip to keep things as simple as possible.

A user can set an appointment in one of two ways: they can call the company or fill out an online questionnaire and book a time slot.

After that, the system pings the technicians in an order set by an algorithm the client developed. As soon as a technician assigns themselves to the job, they can start communicating with the user directly to get additional information. At this point, the job is “locked” to that technician, and other techs can no longer see it.

Once the technician is on their way to the user, the diagnosis fee is guaranteed to the technician, so they get paid even if the user decides not to accept the quote, a common practice in this industry.

If the user accepts the quote, the company orders the parts required for the job; if no parts are required, the technician starts working right away.

When the job is done, the user pays through a credit card terminal the technician has.


When we got the account, it was managed with a strict CPL goal, calculated from the sale conversion rate, the average revenue per sale, and the average ratio between profit and revenue.
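That calculation boils down to a one-liner: the maximum worth paying for a lead is the expected revenue per lead times the profit share of that revenue. A sketch with invented numbers (the client's actual figures are confidential):

```python
def cpl_goal(lead_to_sale_rate, avg_revenue_per_sale, profit_margin):
    """Break-even cost per lead: expected revenue per lead
    times the share of revenue that is profit."""
    revenue_per_lead = lead_to_sale_rate * avg_revenue_per_sale
    return revenue_per_lead * profit_margin

# E.g. 30% of leads become jobs, a $400 average job, 25% of revenue is profit:
goal = cpl_goal(0.30, 400, 0.25)
```

Any CPL below the goal makes money on average; the problem, as the rest of this case study shows, is that a single account-wide number hides huge differences between services and markets.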

At that point, they barely broke even on their Google Ads spending and relied on other, less scalable sources to improve the company’s cash flow.

The Goal

Improve the client’s profitability as much as possible, to the point where Google Ads can cover the cost of the entire operation.

At the client’s request, I won’t go into specific numbers, but the campaigns needed to earn enough money to cover the technicians’ payments, the rent of a few offices, and the salaries of 80 people.

The Plan

Tracking Down the Funnel Events

The first thing we did was create a set of additional conversions to signal additional events to Google beyond the lead.

We tracked when the technician initially took the job, when they indicated they were on their way to the client (to eliminate cancellations), and when the client accepted the quote (eliminating those who only paid the diagnostic fee).

At this point, we couldn’t report monetary value back to Google for technical reasons.

Identifying Where the Value Is

Different types of services are worth different amounts, and the cost of acquiring leads for them varies too. We wanted to find services that are cheap to acquire leads for while still bringing a lot of profit.

Dividing The Markets

We found a bigger difference in cost per lead between areas than between types of services. That made us realize that while we could calculate profit per type of service, we had to calculate the cost per lead per market, per service.

Thankfully, we had enough data to do that in most markets. In very small markets, we used multipliers that were consistent across other markets: for example, a lead for service 1 costs 20% more than a lead for another service.

It’s not as accurate as we would have loved, but it was good enough as a starting point. Another problem we had in that area was users searching for general keywords, such as “handyman” or “appliance repair”, that we couldn’t assign to a specific service. These keywords are responsible for more than 60% of the search volume.
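The multiplier trick for thin markets can be sketched like this. The market names, services, and numbers are all invented for illustration:

```python
def fill_missing_cpls(cpl_by_market, multipliers, base_service):
    """Estimate missing per-service CPLs in small markets from the market's
    base-service CPL and multipliers observed in data-rich markets."""
    filled = {}
    for market, cpls in cpl_by_market.items():
        filled[market] = dict(cpls)
        for service, mult in multipliers.items():
            if service not in filled[market] and base_service in cpls:
                filled[market][service] = round(cpls[base_service] * mult, 2)
    return filled

# In big markets, plumbing leads consistently cost ~20% more than handyman leads,
# so a small market with only handyman data gets an estimated plumbing CPL.
estimates = fill_missing_cpls(
    {"small_market": {"handyman": 25.0}},
    {"plumbing": 1.2},
    base_service="handyman",
)
```

The estimates get replaced with real per-market numbers as soon as enough leads accumulate.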

Sub-market Division

In any market, there are areas that perform better and worse, so we knew we could maximize our profit even more by dividing each market into sub-markets.

We split each market into two groups of locations based on revenue per lead (which takes both the size of the job and the lead-to-sale conversion rate into consideration). In the bigger markets, we further divided each group based on the CPL in each area. This allowed us to bid more aggressively for users in areas where the CPL was lower while the jobs still brought a lot of revenue.
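The two-step split can be sketched as follows. The thresholds, area names, and numbers are invented; the real grouping was data-driven per market:

```python
def split_submarkets(areas, rpl_threshold, cpl_threshold=None):
    """Split a market's areas into sub-market groups: first by revenue per
    lead, then (in bigger markets) each group again by cost per lead."""
    groups = {}
    for name, (revenue_per_lead, cpl) in areas.items():
        key = "high_rpl" if revenue_per_lead >= rpl_threshold else "low_rpl"
        if cpl_threshold is not None:
            # Bigger markets get the second split on CPL.
            key += "_cheap" if cpl <= cpl_threshold else "_expensive"
        groups.setdefault(key, []).append(name)
    return groups

groups = split_submarkets(
    {"area_a": (90, 18), "area_b": (40, 30), "area_c": (95, 35)},
    rpl_threshold=60,
    cpl_threshold=25,
)
```

Each resulting group then gets its own bid level: most aggressive for high revenue per lead with cheap leads, least for the opposite corner.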

The Results

Everything you read so far allowed us to double the client’s revenue while lowering the campaigns’ cost by 34%, increasing the ROI from 100% to a little over 300%.

Even though we spent 34% less, we got a lot more leads than when we started, satisfying the need to scale up the account and provide more jobs to the technicians.

The Aftermath

The additional revenue allowed the company to grow a lot and very fast, but we had to do it gradually. The nature of the business requires a balance between the number of technicians they hire and the amount of demand we can generate. Maintaining this balance is vital: even if we get them the most profitable jobs, it doesn’t help if there are no technicians to do them.

What’s Next

We knew that even this much improvement wasn’t the best this account was capable of. The next step was to bid for conversion value, a thing the client was technically unable to do at that point. We eventually found a solution for that, but it’s a story for another case study.