When running ongoing surveys over a long period, it's important to monitor how recipients behave. Many different factors can influence the number of responses your organization receives, ranging from changes in the systems behind the data, to altered behavioral patterns among respondents, to simple errors when something has changed in the shared infrastructure.
To keep track of this, delivery statistics are available. Several different views let you follow key metrics either aggregated or on a daily, weekly, or monthly basis. Here, we explain in general terms what you see and address common questions that may arise.
The tool lets you track the following (a short sketch of how to read these metrics as a funnel follows the list):
Added - Number of individuals received by Quicksearch
Valid - Number approved for dispatch and not blocked by various rules
Sent - Number of surveys sent out
Delivered - Number of emails delivered
Reminders - Number of reminders sent out
Opened - Number who opened the email
Started - Number who started the survey
Finished - Number who completed the survey
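To read these metrics together, it can help to view them as a funnel where each step is a share of the previous one. Below is a minimal sketch of that calculation; the counts are invented and the metric names simply mirror the list above, so treat it as an illustration rather than an export format.

```python
# Minimal sketch: step-to-step conversion rates for the delivery funnel.
# The metric names mirror the list above; the counts are illustrative.

funnel = {
    "Added":     1000,   # individuals received by Quicksearch
    "Valid":      930,   # approved for dispatch, not blocked by rules
    "Sent":       930,   # surveys sent out
    "Delivered":  900,   # emails reported as delivered
    "Opened":     540,   # opened the email (tracking-pixel based)
    "Started":    310,   # started the survey
    "Finished":   260,   # completed the survey
}

steps = list(funnel.items())
for (prev_name, prev_count), (name, count) in zip(steps, steps[1:]):
    rate = count / prev_count if prev_count else 0.0
    print(f"{prev_name} -> {name}: {rate:.0%}")
```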
How to use the tool
It's important to use these statistics for an overview and to monitor trends. Just as with Google Analytics or other tools that report email opens in newsletters, some of the numbers rest on assumptions. They should therefore be read relative to themselves, as a trend, rather than as absolute figures. The statistics will give a good indication if respondents open the email less frequently, but there will also be cases where an open is not recorded even though it happened, and currently there's no way around this.
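One practical way to follow a metric relative to itself is to compare each period against the average of the preceding periods and only react to clear drift. The sketch below assumes weekly delivered and opened counts read off the charts; the figures and the 10 % threshold are made up for illustration.

```python
# Sketch: flag weeks where the open rate drifts noticeably from the
# average of the preceding weeks. All numbers are illustrative.

weekly = [
    # (week, delivered, opened)
    ("2024-W01", 880, 520),
    ("2024-W02", 910, 540),
    ("2024-W03", 870, 500),
    ("2024-W04", 900, 410),  # a dip worth investigating
]

rates = [(week, opened / delivered) for week, delivered, opened in weekly]
for i, (week, rate) in enumerate(rates[1:], start=1):
    baseline = sum(r for _, r in rates[:i]) / i
    if abs(rate - baseline) / baseline > 0.10:  # more than 10 % drift
        print(f"{week}: open rate {rate:.0%} vs baseline {baseline:.0%}")
```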
The tool can be used to optimize delivery flows by answering questions such as:
- If we send out more frequently than before, do recipients stop responding because they get tired?
- When we rephrase the dispatch, do we get a higher or lower level of activity? (See the sketch after this list.)
- Do changes in the data or customer profile mean that more contacts are filtered out of the dispatch than before?
- Do too many drop off after starting the survey?
- Do certain question types or questions, or requiring an answer to a particular question, lead to recipients dropping off?
- What impact do we get from adding reminders?
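To answer the rephrasing question, for example, it is often enough to compare the same key metric for a window before and after the change date. A small sketch with invented counts:

```python
# Sketch: compare the finish rate before and after a change to the
# dispatch text. Dates and counts are made up for illustration.

before = {"Sent": 2400, "Finished": 610}   # four weeks before the change
after  = {"Sent": 2350, "Finished": 690}   # four weeks after the change

rate_before = before["Finished"] / before["Sent"]
rate_after  = after["Finished"] / after["Sent"]

print(f"Finish rate before: {rate_before:.1%}")
print(f"Finish rate after:  {rate_after:.1%}")
print(f"Relative change:    {(rate_after - rate_before) / rate_before:+.1%}")
```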
Time is a crucial factor
The charts show events that occur within a certain period, but there's always some delay throughout the survey process. Examples of such delays, which the sketch after the list illustrates, are:
- Rules that prevent a survey from being sent late in the evening, at night, or on weekends, or that deliberately wait some time after Quicksearch has received the contact before sending.
- The time from when an email has been sent until everyone who will read it has done so.
- The time from when someone starts a survey until they answer it, which is sometimes not until after a reminder.
- The share of respondents who answer the survey only after a reminder.
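Because of these delays, an event often lands in a later reporting period than the dispatch that triggered it. The sketch below, with invented timestamps, shows one way to measure how long after sending the answers actually arrive:

```python
# Sketch: how long after the send date do answers arrive? Timestamps
# are invented; in practice they would come from the survey data.
from datetime import datetime

sent_at = datetime(2024, 3, 1, 9, 0)
finished_at = [
    datetime(2024, 3, 1, 12, 30),
    datetime(2024, 3, 2, 8, 15),
    datetime(2024, 3, 6, 17, 40),   # answered only after a reminder
]

lags_in_days = sorted((t - sent_at).total_seconds() / 86400 for t in finished_at)
median = lags_in_days[len(lags_in_days) // 2]
print(f"Median response lag: {median:.1f} days, slowest: {lags_in_days[-1]:.1f} days")
```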
Number of individuals to be followed up and whether they've received a survey before
We often measure transactions such as purchases or customer service contacts, and their number varies from day to day: whether the store is open, whether a campaign is running, Christmas shopping, holidays, and public holidays all play a part. We also tend to use rules that prevent individuals from receiving surveys too often, which means more or fewer individuals than usual may be selected for dispatch. More or fewer contacts may be sent to Quicksearch, and more or fewer people may choose to respond to surveys during, for example, the Christmas holidays.
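As an illustration of such a frequency rule, the sketch below filters out contacts who were surveyed within a cooldown window. The 90-day limit and the contact records are assumptions made for the example, not the actual rule configuration.

```python
# Sketch: a "not surveyed again within 90 days" rule. The cooldown
# length and the contact records are assumptions for illustration only.
from datetime import date, timedelta

COOLDOWN = timedelta(days=90)
today = date(2024, 4, 1)

contacts = [
    {"email": "a@example.com", "last_survey": date(2024, 3, 20)},  # blocked
    {"email": "b@example.com", "last_survey": date(2023, 11, 5)},  # valid
    {"email": "c@example.com", "last_survey": None},               # valid
]

valid = [
    c for c in contacts
    if c["last_survey"] is None or today - c["last_survey"] >= COOLDOWN
]
print(f"{len(valid)} of {len(contacts)} contacts pass the frequency rule")
```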
Some key metrics are not and cannot be absolute
The key metrics for delivered and opened emails are just as uncertain for us as they are for any newsletter service that reports the same figures.
Delivered emails are counted based on whether the recipient's mail server has accepted the message and indicated that it should be delivered. Spam filters, both on the mail server and on the user's side, can block delivery without this being reported back to us, and that's entirely by design: a spam filter should not report what it catches, because spammers would quickly learn how to get around it and the filter would lose its value. So it's only when we're told directly that the address doesn't exist that we can report that the message hasn't arrived.
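In practice, "delivered" therefore amounts to "sent, minus the bounces we actually hear about". A sketch of that bookkeeping with invented addresses:

```python
# Sketch: "delivered" counted as sent messages minus known hard bounces.
# Silent spam filtering is invisible here, which is why the figure is an
# upper bound rather than an exact count. Data is invented.

sent = ["a@example.com", "b@example.com", "c@example.com", "d@example.com"]
hard_bounces = {"c@example.com"}   # address reported back as non-existent

delivered = [addr for addr in sent if addr not in hard_bounces]
print(f"Sent: {len(sent)}, reported delivered: {len(delivered)}")
```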
Opened emails are counted based on whether the recipient has loaded images embedded in the message. Not all email clients show images at all, some show them only after the recipient clicks to display them, and sometimes emails are opened while the recipient has no internet connection. These cases have different levels of impact, but they all affect how accurate the figure is when read as an absolute number.
It's entirely reasonable for a recipient to have started the survey even though the email has not been reported as opened.
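When reading the numbers, it can therefore make sense to treat a started survey as implicit proof that the email was opened. A small sketch of that reconciliation, with made-up flags:

```python
# Sketch: treat a started survey as an implicit open when estimating a
# lower bound for the true open count. Recipient data is made up.

recipients = [
    {"opened": True,  "started": True},
    {"opened": False, "started": True},   # images blocked, still answered
    {"opened": True,  "started": False},
    {"opened": False, "started": False},
]

tracked_opens = sum(r["opened"] for r in recipients)
effective_opens = sum(r["opened"] or r["started"] for r in recipients)
print(f"Tracked opens: {tracked_opens}, opens including started surveys: {effective_opens}")
```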
Is it normal to have variations?
So it's entirely natural that you sometimes see more people starting a survey than the number of delivered surveys, or more surveys sent out than individuals received by Quicksearch. It's a sign that you're looking at a shorter period, where there are variations and natural delays before responses come in.
One-off interventions can also affect the statistics, for example if you manually import individuals for a dispatch so that they bypass the normal integration and rules.
The trade-off is against looking at the entire time span, with all dispatches accumulated, which quickly stops being useful since the tool is meant to capture change. Which period is suitable depends entirely on your measurement: when dispatches occur and how many individuals receive the survey. It can therefore be appropriate to adjust the time period, or to look at a couple of different time periods, to take advantage of what the different charts offer.
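One practical approach is to look at the same events at a couple of different resolutions, for example re-bucketing the daily view into weeks. A sketch with illustrative dates and counts:

```python
# Sketch: re-bucket daily finished counts into ISO weeks to compare a
# noisy daily view with a smoother weekly view. Data is illustrative.
from collections import defaultdict
from datetime import date

daily_finished = {
    date(2024, 3, 4): 12,
    date(2024, 3, 5): 0,     # a quiet day
    date(2024, 3, 7): 19,
    date(2024, 3, 11): 15,
    date(2024, 3, 13): 22,
}

weekly = defaultdict(int)
for day, count in daily_finished.items():
    year, week, _ = day.isocalendar()
    weekly[(year, week)] += count

for (year, week), total in sorted(weekly.items()):
    print(f"{year}-W{week:02d}: {total} finished")
```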