The Recall Based vs Observed Behaviour Debate – Part 2

Content from BPR

In the first part of this article about recall vs observed behaviour research we looked at the critical importance of recall-based research in the measurement of the emotional transaction a listener has with a radio station.

In this second part we will look at some of the other issues to consider as well as application ideas for observed behaviour data.

The most fundamental principle of research is knowing who you are researching and basing your assumptions on. Recall-based research starts with a stratified sample approach, which ensures you get a representative sample of your intended target market, while observed behaviour from a digital stream generally provides a view of your station's existing users only.

With digital stream data you are effectively looking “inside the box”. You can certainly assume these listeners represent part of your cume, and that those who listen for extended periods or on a recurring basis are likely part of your P1s, but beyond that you have to be careful about what you assume.

In terms of accurately measuring listening behaviour, the argument has two dimensions. The argument against recall-based research is whether a respondent's recalled behaviour reflects how they actually behaved. If you operate in a market where the official survey is itself a recall-based methodology, then this argument is largely irrelevant. In markets where radio listening is predominantly digital, and the official survey uses a personal people meter, observed behaviour analytics from your audio stream may have better application.

Observed behaviour data sourced from your audio stream also has interpretive issues unless you can apply some demographic context to the information. If you are only dealing with observed behaviour data without demographic context, is user ID 013245 an 18-24 male or a 25-39 female?

In countries where digital consumption remains a small part of overall radio listening, basing strategic decisions on digital analytics is likely a waste of time at best and potentially dangerous at worst. Depending on your format, a station's digital listeners can be inconveniently different in demographic and behavioural profile from its analogue listeners. To draw any meaningful strategic conclusions from observed behaviour data, it is critical to know who your digital listeners are.

Leaving aside issues of sample integrity, observed behaviour data is terrific at profiling how your digital users consume your radio station. The only caution is to tread carefully on what you assume lies behind people leaving or joining your digital stream. In BPR's broader ‘all audio studies’, where we see the entire listening landscape, people often leave a digital device/source and move to an analogue device/source, such as when they leave home and jump in the car. Assuming that they “left” your radio station, as measured by your digital data, is wrong; all they did was continue listening to your station in the car on FM. Equally, just because someone joins your digital stream at 9am doesn't mean they started listening to your radio station at that time.

A good application for observed behaviour data is trending the performance of specific content, such as a benchmark or show. If you have a new show between, say, 3pm and 4pm and you start to see your digital audience retention building during that time, then it is a safe bet that your new show is working. Equally, if the reverse happens, then the new show is possibly failing.
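As a rough illustration of the kind of calculation involved, the sketch below computes a retention figure for a show slot from stream session logs. The data shape, field names, and the retention definition (average share of the slot that overlapping users stayed for) are all assumptions for illustration, not any particular vendor's metric; trending this figure day by day is what would reveal whether a new show is building an audience.

```python
from datetime import datetime

# Hypothetical stream-log sessions: (user_id, start, end).
sessions = [
    ("u1", datetime(2024, 5, 6, 14, 30), datetime(2024, 5, 6, 16, 10)),
    ("u2", datetime(2024, 5, 6, 15, 5),  datetime(2024, 5, 6, 15, 20)),
    ("u3", datetime(2024, 5, 6, 15, 10), datetime(2024, 5, 6, 16, 0)),
]

def minutes_in_slot(start, end, slot_start, slot_end):
    """Minutes of a session that overlap the show slot."""
    overlap_start = max(start, slot_start)
    overlap_end = min(end, slot_end)
    return max(0.0, (overlap_end - overlap_start).total_seconds() / 60)

def slot_retention(sessions, slot_start, slot_end):
    """Average share of the slot that each overlapping user stayed for."""
    slot_len = (slot_end - slot_start).total_seconds() / 60
    shares = [
        minutes_in_slot(s, e, slot_start, slot_end) / slot_len
        for _, s, e in sessions
        if minutes_in_slot(s, e, slot_start, slot_end) > 0
    ]
    return sum(shares) / len(shares) if shares else 0.0

# The hypothetical 3pm-4pm show slot.
show_start = datetime(2024, 5, 6, 15, 0)
show_end = datetime(2024, 5, 6, 16, 0)
print(round(slot_retention(sessions, show_start, show_end), 2))  # → 0.69
```

A single day's figure means little on its own; the caution from the previous section applies here too, since a listener switching to FM in the car looks identical to one who tuned out.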

Observed behaviour data is also a good bridge between your strategic sample-based studies, letting you monitor whether what you decided to do is working. It is not a perfect solution, but it is cheaper and at least provides broad behavioural feedback from your digital users.

Another advantage of observed behaviour data is that the sample size is usually relatively large, the measurement is continuous, and you can review the data pretty much in real time, whereas recall-based research involves a much longer project timetable and needs to be planned ahead of time.

In any event you will have some form of observed behaviour data at your fingertips, generally data on your music streams. From this you should be able to gain some sense of your station's overall listening momentum. This is great to have, but beware: it has limitations in terms of strategic insight. Observe it over time and look for the bigger-picture trends. Observed behaviour analytics can assist in telling you what is happening, whereas recall (sample-based) research tells you why it happened.

The major problem with recall-based research is those people and organisations still using and defending old methodologies and questionnaire tactics. 21st-century recall-based research using digital platforms and visual, intuitive response interfaces has changed the game. Sample-based research is now much more about actively engaged respondents reacting in real time rather than people being badgered on the telephone for 30 minutes while they try to cook dinner.

Whether you belong to the Observed Behaviour Tribe, the Recall-Based Tribe, or my tribe, the Whatever Works Best Tribe, it doesn't really matter; just keep your mind focussed on what you are trying to achieve with your research and choose tools and services that best suit what it is you are trying to understand, discover or manage.

For me, observed behaviour data is best utilised in assessing the overall performance of specific content elements against your digital users, presuming you can view the data as a trend over time. For anything strategic, such as identifying your station's strengths, weaknesses, opportunities and threats, designing new radio formats, and determining the “why” of listening behaviour and brand attributes, recall-based research wins hands down.

A little while after this article is published, I am going to receive a lot of statistics from BPR's Research and Social Media Coordinator about this article. These observed behaviour statistics will include such things as how many people opened it, how many page views it had, who shared it, length of time reading it, where readers live, how it performed against other articles and what operating system they used, but the only thing that really matters at the end of the day is who remembers what I wrote a few weeks or months from now.

Want to continue the conversation?

[email protected]

By Wayne Clouten, BPR

This story first appeared on