Case Study: Using Predictive LTV to Improve Campaigns with the Maximize Conversion Value Bidding Strategy

This is the third and last (at least for now) case study in the series of improvements we made to a Google Ads account of a client in the home services industry.

We started by restructuring the account, which doubled their revenue while lowering ad spend by 34%. Then we tested the Maximize Conversion Value vs. Maximize Conversions bid strategies, which allowed us to scale the account further, spending about 30% more at a higher ROI. Today I want to share how revenue predictions helped us improve and scale our Maximize Conversion Value campaigns instead of waiting for the actual revenue to arrive.

About The Client

The client operates in the home services industry, providing appliance repair, handyman, plumbing, and similar services in most major US cities and their surrounding areas.

The Client’s Conversion Funnel

The conversion process is pretty simple. A user sets up an appointment, and the client matches the appointment with the most qualified technician available at the chosen time slot. When a technician sees the job, they can either accept it to be assigned or reject it to pass it to the next technician in line.

Once the technician gets to the customer’s house, they diagnose the problem and give the customer a quote. If the customer accepts the quote, the technician buys the parts and returns to complete the repair. If the customer rejects the quote, they pay a diagnostic fee instead.

Baseline

At that point, we were reporting revenue only once it actually arrived. When we analyzed the data, we saw that 98% of the revenue was reported within six days of the date the appointment was set. This is not ideal: it means Google has to wait up to six days for a signal on whether it “got it right,” which delays any adjustment of bids and costs.

The Goal

By predicting the user’s value, we wanted to allow Google to take a less cautious approach that would, hopefully, let us scale the account even further without sacrificing profitability.

Success Criteria

We’d consider the test a success if any of the following scenarios took place:

  1. More volume at the same ROAS
  2. The same volume at a higher ROAS
  3. More volume at a higher ROAS

When I say volume in this context, I mean ad spend.

How Do Value Predictions Work?

Based on the data from the past year, we built a model with a company called Voyantis that predicts the lifetime value (LTV) of a user just a few hours after they set their first appointment.

The predictions then update a few more times, for a total of five predictions per user: the first at 6 hours after sign-up, then at 24, 48, 72, and 90 hours.

Recency vs Accuracy

The later the prediction, the more accurate it is, because the model has had more time to collect signals about the user: accuracy rises from about 80% in the first prediction to a tad above 90% in the last.

This allowed us to report the value to Google almost in real time, instead of almost a week later.
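
To make this concrete, here’s a minimal sketch of what near-real-time value reporting can look like. It assumes a hypothetical get_latest_prediction() helper and borrows the general shape of an offline conversion import (Google Click ID, conversion name, time, value); treat the column names and helpers as illustrative, not the client’s actual pipeline.

```python
import csv
from datetime import datetime, timezone

def get_latest_prediction(user):
    """Hypothetical helper: return the freshest prediction for a user.
    In our setup, predictions refresh at 6, 24, 48, 72, and 90 hours
    after the first appointment is set."""
    return max(user["predictions"], key=lambda p: p["hours_since_signup"])

def build_conversion_rows(users, conversion_name="predicted_ltv"):
    """Format one offline-conversion row per user for upload to Google Ads."""
    now = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M:%S")
    return [
        {
            "Google Click ID": user["gclid"],
            "Conversion Name": conversion_name,
            "Conversion Time": now,
            "Conversion Value": round(get_latest_prediction(user)["value"], 2),
            "Conversion Currency": "USD",
        }
        for user in users
    ]

def write_upload_file(rows, path="predicted_conversions.csv"):
    """Write the rows to a CSV ready for a scheduled upload."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(rows[0]))
        writer.writeheader()
        writer.writerows(rows)
```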

Fine-Tuning The Predictions

We aimed to hit 90% accuracy with the predictions while making as few ‘bad mistakes’ as possible. This required a few rounds of revisions and fine-tuning to get right.

Good Mistakes vs Bad Mistakes

I know what you’re thinking. What on earth are ‘good mistakes’? Let me tell you.

There are two ways to make a prediction mistake: you can attribute value to a user who shouldn’t have received it, or you can fail to attribute value to a user who was actually valuable. Both make the predictions less accurate, but in our context the first is much worse: inflated values teach the bidding algorithm to chase similar low-value users, while missed value only costs some scale. That’s what makes it a “bad mistake.”
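
Here’s a small sketch of how one might bucket prediction errors when evaluating a batch of predictions. The 10% relative tolerance is an assumption for illustration, not the threshold we actually used.

```python
def classify_errors(records, tolerance=0.10):
    """Split prediction errors into 'good' and 'bad' mistakes.

    records: list of dicts with 'predicted' and 'actual' revenue per user.
    tolerance: relative error we accept before calling a prediction a mistake
               (an illustrative assumption).
    """
    good, bad = [], []
    for r in records:
        error = r["predicted"] - r["actual"]
        if abs(error) <= tolerance * max(r["actual"], 1):
            continue  # close enough: not a mistake
        if error > 0:
            # Bad mistake: we told Google the user was worth more than they
            # were, which encourages spend on similar low-value users.
            bad.append(r)
        else:
            # Good mistake: we under-reported a valuable user; it costs some
            # scale but does not inflate bids.
            good.append(r)
    return good, bad
```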

The Plan

Testing Method

We decided to take a similar approach to the previous test, in which we compared Maximize Conversions to Maximize Conversion Value.

We created a separate conversion action and let it run as a secondary (observation-only) goal so Google could learn its pace. A month later, we reviewed the values it reported to Google, made sure they were correct, and started the experiment.
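
That month-long review boils down to comparing the predicted values we reported against the revenue that eventually arrived. A minimal sketch of that sanity check, assuming a hypothetical 10% tolerance:

```python
def validate_reported_values(pairs, tolerance=0.10):
    """Compare predicted values reported to Google against actual revenue.

    pairs: list of (predicted_value, actual_revenue) tuples for users whose
    real revenue has already arrived (i.e., appointments older than ~6 days).
    """
    predicted = sum(p for p, _ in pairs)
    actual = sum(a for _, a in pairs)
    ratio = predicted / actual
    # A ratio near 1.0 means the new conversion action isn't systematically
    # over- or under-reporting value and is safe to promote for bidding.
    return ratio, abs(ratio - 1.0) <= tolerance
```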

We planned to run the experiment for 12 weeks and base the decision on the last 5, to give Google time to adjust to the new conversion action.

The Results

Unlike in the previous test, Google got used to the new conversion action extremely fast, so we shortened the experiment by five weeks, from 12 to 7.

From the second week on, it was very clear that the predictions worked better, but we kept the test running to make sure the result was statistically significant.

According to the last five weeks of the test (the period by which we measured the experiment), the prediction group spent 40% more at roughly the same ROAS as the control group. That was above our expectations.
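
For context, a significance check on a test like this can be as simple as bootstrapping weekly results per group; the sketch below illustrates the idea and is not the exact test we ran.

```python
import random

def roas(weeks):
    """ROAS for a set of weekly results: total revenue / total spend."""
    return sum(w["revenue"] for w in weeks) / sum(w["spend"] for w in weeks)

def bootstrap_test_wins(control_weeks, test_weeks, n_samples=10_000, seed=42):
    """Resample weeks with replacement and count how often the test group
    beats (or matches) control on ROAS. A share near 1.0 suggests the
    difference is unlikely to be noise."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(n_samples):
        c = [rng.choice(control_weeks) for _ in control_weeks]
        t = [rng.choice(test_weeks) for _ in test_weeks]
        if roas(t) >= roas(c):
            wins += 1
    return wins / n_samples
```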

The Aftermath

In the months that followed, we decided to give back some of the additional scale in order to improve the ROAS and ease the pressure on the technicians, at least until we could recruit more.

We decided to target the account’s original spend level, meaning a 16% reduction in ad spend. Because we made this reduction by raising the target ROAS rather than by capping the budget, we saw an additional 10% uplift in ROAS.

Summary

Ever since we started managing the account, we’ve seen tremendous growth and a substantial improvement in the profitability of the campaigns.

The first round, which included restructuring the account, doubled the client’s revenue and lowered their ad spend by 34%, making them profitable for the first time ever at an ROI of 300%.

In the second round, we improved ROAS even further, from 300% to almost 350%, while increasing spending by 30%. However, by then we still weren't spending as much as we wanted.

The third move helped us surpass our spending target by 16% while keeping the same ROAS. Later, we reduced spending by 16% to get back to roughly the original level, which let us improve ROAS by an additional 10%.

Overall, by the end of the third test, the account maintained its original spend level while its ROAS improved from 100% to 380%.

If you need help scaling your Google Ads account while increasing its profitability, contact us, and we'll be happy to help.
