Bringing Science Back to Surveys

27 Mar

It is becoming increasingly obvious to the financial community that CSR criteria cannot be taken lightly: they now form an important part of how commercial companies and investment funds report their respective ‘successes’.

But simply surveying ‘investor attitudes’ is not an effective or, at any rate, empirically sound way of gauging their CSR priorities. Should a company focus on managing its carbon emissions, or on maintaining a flawless labour relations record?

Literature Review

Indeed, the academic consensus is that organisations which tackle stakeholders’ concerns, rather than merely courting public approbation, achieve greater returns than firms that fail to address those interests. Manrai and Manrai (2007) demonstrated that this priority reduces customer churn; Sweeny and Swait (2007) showed how it increases market share and profits. McDonald and Rundle-Thiele, however, contended that customers were satisfied less by corporate social responsibility (CSR) initiatives – which can be costly – than by dramatic-sounding but essentially superficial actions such as preventing child labour or other human rights abuses.

These considerations ranked highest for customer satisfaction, according to the authors’ examination of banking industry surveys. Furthermore, they found that community support, i.e. “offering customers in low socio-economic groups fee-free accounts and low-interest loans, banks’ support of their employees’ volunteer activities via paid leave and flexible working arrangements”, resulted in the lowest customer satisfaction. Other important sustainability measures, “reduction of water and energy consumption, carbon offset programmes, recycling and use of recyclable materials,” delivered the third-highest level of customer satisfaction.

Questions of Reliability

Survey responses, though, are not the most reliable means of gauging an individual’s true opinion: respondents usually know the purpose of the survey and may be inclined to slant their responses in a ‘helpful’ direction. Another potential – and major – source of bias arises from participants’ desire to present themselves and their firm in the best possible light. This can operate unconsciously even when responses are anonymous.

Examples abound of surveys linked to plush events with a hefty price tag, where participants are wined and dined and soothed into a compliant mood by keynote speakers who promote a sense of inclusion around common issues. An even more blatant example of ‘priming’, as psychologists term the process of activating particular emotions and associations in an individual’s short-term memory, is the award ceremony, where everyone unites in an orgy of back-patting and self-congratulation.

Making Surveys More Scientific

Isn’t it time someone initiated a rigorously scientific study into stakeholders’ real, unbiased opinions? Academic psychology studies are carefully crafted to minimise all possible sources of bias; and the results, when analysed, are not simply tested for correlation or with regression analysis, which imposes a series of ‘logical’ mathematical assumptions to estimate the constants of its own self-generated model – chief among them the assumption that the ‘error term’ averages out to zero, which is rarely if ever true of real-life data.
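To make that point concrete, here is the textbook simple linear regression model and the error assumption in question (a standard statement, not specific to any particular study):

    % Simple linear regression: response y_i, predictor x_i, error term ε_i
    y_i = \beta_0 + \beta_1 x_i + \varepsilon_i ,
    \qquad \mathbb{E}[\varepsilon_i \mid x_i] = 0 ,
    \qquad \operatorname{Var}(\varepsilon_i \mid x_i) = \sigma^2

The fitted constants are only as trustworthy as these error assumptions; with attitudinal survey data they frequently fail to hold.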

Let us say that we initiated a controlled experiment in which we divided participants into three groups, each exposed to one of the following conditions:

  1. Played video/ shown slides about two companies’ efforts at reducing and effectively reporting their carbon emissions.
  2. Played video/ shown slides about two companies’ efforts at providing favourable work conditions – flexible hours, home working, travel bursaries, sponsoring further vocational qualifications.
  3. Played video/ shown slides about two companies’ efforts at providing grassroots investment to local communities affected by their activities.

At the end, each participant would be given an assessment card on which to record their perception of each company’s level of achievement in each area, on a scale of 1 to 10. Finally – and this would be the experiment’s overarching objective – they would be questioned on whether they thought the company scored highly on CSR criteria, whether they thought it represented a good long-term investment, and how risky they perceived it to be.

These experimental conditions would need to be replicated across a number of samples, so that the results begin to approximate a normal distribution. Homogeneity of variances between samples is a key assumption of the statistical test to be performed, though corrections can be made if it does not entirely hold.

Levene’s Test tests the null hypothesis that the variances of the groups are the same. If Levene’s test is significant (i.e. the significance value is less than 0.05), then the variances are significantly different, meaning that one assumption of the analysis of variance has been violated.

The formula for Levene’s Test, reproduced below, can also be found here:

https://statistics.laerd.com/statistical-guides/independent-t-test-statistical-guide.php
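For reference, the test statistic in its mean-based form is (a median-based variant, the Brown–Forsythe test, is also widely used):

    % Levene's test statistic for k groups of sizes n_1, ..., n_k (N observations in total)
    W = \frac{N - k}{k - 1} \cdot
        \frac{\sum_{i=1}^{k} n_i (\bar{Z}_{i\cdot} - \bar{Z}_{\cdot\cdot})^2}
             {\sum_{i=1}^{k} \sum_{j=1}^{n_i} (Z_{ij} - \bar{Z}_{i\cdot})^2} ,
    \qquad Z_{ij} = \lvert Y_{ij} - \bar{Y}_{i\cdot} \rvert

W is then compared against the F-distribution with k − 1 and N − k degrees of freedom.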

You could then perform an analysis of variance (ANOVA), which compares the systematic variance (differences between groups) with the unsystematic variance (differences within groups). For a full walk-through of how the analysis of variance is performed, with or without the aid of stats software, stay tuned for my next blogpost …
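In the meantime, here is a minimal sketch of that pipeline in Python. The three groups mirror the experimental conditions above, but the scores are simulated stand-ins for the 1-to-10 ratings, not real results:

    # Minimal sketch: check homogeneity of variances with Levene's test,
    # then run a one-way ANOVA across the three experimental conditions.
    # Scores are simulated 1-10 ratings; replace with real assessment data.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)            # seeded for reproducibility
    emissions = rng.integers(1, 11, size=30)   # condition 1: carbon reporting
    workplace = rng.integers(1, 11, size=30)   # condition 2: work conditions
    community = rng.integers(1, 11, size=30)   # condition 3: community investment

    # Levene's test: null hypothesis = the group variances are equal.
    w_stat, lev_p = stats.levene(emissions, workplace, community)
    print(f"Levene's test: W = {w_stat:.3f}, p = {lev_p:.3f}")

    # One-way ANOVA: null hypothesis = the group means are equal. This is
    # only straightforwardly valid if Levene's test is non-significant;
    # otherwise apply a correction such as Welch's ANOVA.
    f_stat, p_val = stats.f_oneway(emissions, workplace, community)
    print(f"One-way ANOVA: F = {f_stat:.3f}, p = {p_val:.3f}")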


Remember also that psychologists have to control for multiple sources of bias, for example:

  1. Selection bias. Measures should be taken to ensure that demographic factors which might influence a subject’s opinion or response – such as age, income bracket, ethnicity, etc. – are controlled for.

If different participants are used for different experimental conditions, the method for allocating interventions to participants must be laid out in the report and based on some chance (random) process, i.e. sequence generation. Moreover, steps should be taken to prevent participants gaining foreknowledge of the forthcoming allocations (a minimal sketch of such random allocation appears after this list).


  2. Performance bias, defined as systematic differences between groups in the care that is provided, or in exposure to factors other than the interventions of interest. Many studies are designed such that the thing actually being measured is concealed behind an ulterior objective, which is presented as the subject of study.


The aim is to reduce the risk that knowledge of which intervention was received, rather than the intervention itself, affects the results. Often the assessors are also ‘blinded’ as to which participants have received which condition, to prevent them unconsciously biasing the outcomes.


  3. Detection bias, defined as systematic differences between groups in how outcomes are determined. When recording a subject’s reactions, if the evidence is qualitative rather than quantitative, an assessor can unconsciously nudge the record towards the desired result. Again, blinding (or masking) of outcome assessors may reduce the risk that knowledge of which intervention was received, rather than the intervention itself, affects outcome measurement.


  4. Reporting bias refers to systematic differences between reported and unreported findings. Within a published report, analyses that show statistically significant differences between intervention groups are more likely to be reported than non-significant ones.
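As promised under point 1, here is a minimal sketch of chance-based sequence generation in Python. The participant IDs and condition names are hypothetical, and the coded labels at the end also support the blinding of outcome assessors described under points 2 and 3:

    # Minimal sketch of sequence generation: randomly allocate participants
    # to the three experimental conditions, then hide each condition behind
    # an opaque code so that outcome assessors remain blinded.
    import random

    participants = [f"P{i:03d}" for i in range(1, 91)]  # hypothetical IDs
    conditions = ["emissions", "workplace", "community"]

    random.seed(20240327)           # record the seed in the report
    random.shuffle(participants)    # unpredictable allocation order

    # Block allocation: equal group sizes (30 per condition).
    allocation = {p: conditions[i % 3] for i, p in enumerate(participants)}

    # Blinding: assessors see only coded labels (A/B/C); the key mapping
    # codes to conditions is held outside the assessment team.
    code = {"emissions": "A", "workplace": "B", "community": "C"}
    blinded = {p: code[c] for p, c in allocation.items()}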

[Figure: the normal distribution]

