Track Impact
Did your work have an impact?
The simplest way to understand whether your efforts to improve CX scores are working is to track and compare trends period-over-period.
We recommend starting initiatives on a regular cycle, at the beginning of a week or a month, and then comparing performance against the previous period. For instance, compare the trend from Apr 1 to Apr 30 against May 1 to May 31.
Note that depending on the nature of your business, certain impacts may be seen only over a longer term and may need comparison against quarterly results.
Period-over-Period Comparison
- Stabilize your current scores and response collection over a period so that you have a reasonable baseline for comparison.
- Note the current scores and response sample sizes for the previous period. For instance, this may be data for the month of May.
- Initiate planned improvement changes across the touchpoints being evaluated in a consistent manner. For instance, you may be initiating a change in how customers are greeted at the start of June.
- At the end of June, roll up the month's scores and response sample sizes.
| Pre Changes (e.g., May) | Post Changes (e.g., June) |
|---|---|
| Baseline score and response sample size | Post-change score and response sample size |
- Assuming the response sample sizes are similar, compare the scores between May and June to understand if there was an improvement in attributes of interest. If the scores have gone up, it’s time to celebrate your team’s achievement. You are now making steady CX improvements fully backed by customer data.
For a more thorough analysis, inspect the results further. Check whether any other attributes that influence experience changed significantly, in case they correlate with the score being examined. Also verify that, given the sample sizes of the two periods, the difference in scores is statistically significant.
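The significance check above can be sketched with a two-proportion z-test, treating the score as the share of positive responses in each period. This is a minimal illustration; the response counts below are hypothetical placeholders, not real data.

```python
from math import sqrt, erf

def two_proportion_z(pos_a, n_a, pos_b, n_b):
    """Return (z, two-sided p-value) comparing proportions pos_a/n_a vs pos_b/n_b."""
    p_a, p_b = pos_a / n_a, pos_b / n_b
    p_pool = (pos_a + pos_b) / (n_a + n_b)          # pooled proportion under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 1 - erf(abs(z) / sqrt(2))             # two-sided normal p-value
    return z, p_value

# Hypothetical counts: May = 420 positive of 600 responses,
# June = 475 positive of 610 responses.
z, p = two_proportion_z(420, 600, 475, 610)
print(f"May: {420/600:.1%}  June: {475/610:.1%}  z={z:.2f}  p={p:.4f}")
```

If the p-value is below your chosen threshold (commonly 0.05), the May-to-June improvement is unlikely to be explained by sampling noise alone.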
A/B Test Across Locations
Running an A/B test is a powerful way to drive customer-backed decisions by testing a new idea (the variant) against what is currently in place (the control).
- Identify a 'test' location where you wish to introduce improvements first, to see if they have a positive impact before rolling them out across the organization. The remaining locations form the 'control' group, where no other changes are introduced during the test.
- Stabilize your current scores and response collection across test and control locations over a reasonable period (for instance, a month or a quarter) so that you have a reasonable baseline for comparison.
- Note the current scores and response sample sizes for test and control across the previous period. For instance, this may be data for the month of May.
- Initiate planned improvement changes across the test touchpoints being evaluated in a consistent manner. For instance, you may be initiating a change in how customers are greeted at the start of June.
- At the end of June, roll up the month's scores and response sample sizes for both test and control.
- Assuming the response sample sizes are similar, compare the performance between test and control to understand if there was an improvement in attributes of interest. If the scores in the test locations have gone up, it’s time to celebrate your team’s achievement. You are now making steady CX improvements fully backed by customer data.
It is important that the test location is representative of the broader population of customers across other touchpoints.
For a more thorough analysis, inspect the results further. Check whether any other attributes that influence experience changed significantly, in case they correlate with the score being examined. Also verify that, given the sample sizes of the test and control groups, the difference in scores is statistically significant.
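One way to frame the test-versus-control comparison is a difference-in-differences check: measure how much the test location improved over the period relative to how much control locations moved on their own. The scores below are hypothetical placeholders, and this framing is one suggested approach rather than the only valid one.

```python
def lift(pre, post):
    """Change in score from the pre-change period to the post-change period."""
    return post - pre

# Hypothetical CSAT-style scores (0-100 scale):
test_may, test_june = 72.0, 78.5    # the single test location
ctrl_may, ctrl_june = 71.5, 72.0    # pooled across control locations

test_lift = lift(test_may, test_june)
ctrl_lift = lift(ctrl_may, ctrl_june)
net_effect = test_lift - ctrl_lift   # improvement beyond the background trend

print(f"Test lift: {test_lift:+.1f}")
print(f"Control lift: {ctrl_lift:+.1f}")
print(f"Net effect attributable to the change: {net_effect:+.1f}")
```

Subtracting the control lift removes seasonality and other organization-wide trends that affected every location, so the net effect is a cleaner estimate of what the change itself contributed.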