In Essence: E.ON is one of the UK’s largest power and gas companies. Given the importance of performance measurement systems to business success, this report evaluates the adequacy of the KPIs used by E.ON. In addition, a business model is presented highlighting best practices in ‘customer complaint resolution’ divisions within the utility sector.
Abstract:
E.ON is one of the UK’s largest power and gas companies. E.ON’s new initiative, known as the ‘Target Operating Model’, which is expected to enhance customer experience and deliver operational benefits, is currently underway. The initiative includes enhancements in areas such as training, processes, systems and website development. E.ON takes a top-down approach in adopting KPIs for these initiatives, starting from defining strategic objectives and cascading them down to the business function level.
Strategic, operational, financial and team/individual performance each represent major objectives for an organisation. To appreciate the extent to which corporate objectives are achieved, and to evaluate the efficiency of business strategies, it is vital to define an integrated system of performance indicators capable of assessing processes against the set targets and objectives at a given point in time. Moreover, managerial decisions should be based on a thorough knowledge of the current state of the business, which cannot be attained in the absence of a performance measurement system. Such a system should be capable of informing management about the results obtained across all of the company’s initiatives. Given the importance of performance measurement systems for the success of a business, this report evaluates the adequacy of the KPIs used by E.ON. In addition, a business model is presented highlighting best practices in ‘customer complaint resolution’ divisions within the utility sector.
Primary research was conducted in the form of survey questionnaires, and the Structural Equation Modelling (SEM) technique was adopted to detect patterns in the complainants’ responses. The survey measured consumers’ judgement of their respective energy suppliers’ customer complaint resolution process. We believe the findings of this research have important practical implications, as the constructs of the model were chosen to test how appropriately the KPIs address the principal determinants of ‘Loyalty’, the ultimate objective of the customer complaint resolution division. The research also tests the adequacy of the KPIs intended to meet the requirements identified by “Which?”, “Consumer Futures” and Ofgem, in order to bolster E.ON’s position in the top rankings.
Furthermore, a gap analysis is performed to identify KPIs that could be adopted by E.ON with respect to its strategic objectives. Using the AHP methodology, the list of new KPIs is ranked against the SMART (Specific, Measurable, Attainable, Realistic and Time-sensitive) criteria. Finally, the report provides recommendations to E.ON summarising the KPIs that could be replaced and/or adopted as part of the organisation’s performance measurement system.
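The AHP ranking step referenced above follows a standard computation: pairwise judgements on Saaty’s nine-point scale form a reciprocal matrix, whose columns are normalised and whose rows are then averaged to yield priority weights. A minimal sketch, using illustrative judgement values rather than the figures derived in this report:

```python
# Minimal sketch of AHP priority-weight computation.
# The pairwise judgements below are hypothetical examples on Saaty's
# nine-point scale, not the comparison values used in this report.

def ahp_weights(matrix):
    """Column-normalise a pairwise comparison matrix, then average each row."""
    n = len(matrix)
    col_sums = [sum(matrix[r][c] for r in range(n)) for c in range(n)]
    normalised = [[matrix[r][c] / col_sums[c] for c in range(n)] for r in range(n)]
    return [sum(row) / n for row in normalised]

# Hypothetical judgements for three KPIs: A vs B = 3, A vs C = 5, B vs C = 2
pairwise = [
    [1,     3,   5],
    [1 / 3, 1,   2],
    [1 / 5, 1 / 2, 1],
]
weights = ahp_weights(pairwise)  # priority vector; entries sum to 1
```

The same column-normalise-and-average procedure underlies the normalized pairwise comparison and global weight tables listed in the contents.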
Contents
1.1.1 UK Power Industry Timeline
1.1.3 Electricity as a commodity
2.1 Performance Measurement
2.1.1 Key Performance Indicators
2.2 Performance Measurement Systems
2.2.3 Medori and Steeple’s Framework
2.3 Critical Review of PMS
2.4 Customer Relationship Management
2.4.1 Satisfaction, Trust, Loyalty
5.2 AHP & SMART Methodology
11.2 Graphical Illustration of PLS Analysis
FIGURES
Figure 1 Market structure before deregulation
Figure 2 Market structure after deregulation
Figure 3 Performance Pyramid (Cross et al., 1992)
Figure 4 Balanced Scorecard (Kaplan & Norton, 1996)
Figure 5 Medori & Steeple’s Framework (2000)
Figure 6 Performance Prism (Neely et al., 2001)
Figure 7 Complaint Mgmt & Service Evaluation Model (Sangareddy et al., 2009)
Figure 8 Basic aspects of customer experience
Figure 9 Business Enterprise Management
Figure 10 Respondents’ distribution of energy supplier
Figure 11 Respondents’ demographics (gender)
Figure 12 Respondents’ demographics (age)
Figure 13 Respondents’ demographics (marital status)
Figure 14 Respondents’ demographics (income)
Figure 17 SMART criteria in prioritising KPIs
Figure 18 Hypothesized Model
Figure 19 PLS path-modelling result (graphical illustration)
Figure 20 AHP hierarchy based on SMART criteria
Figure 21 Big 6’s breakdown of cost structure and profit margin (2%-5%) energy
Figure 22 Graphical illustration of PLS Path-modelling Analysis
Figure 23 Timeline of Re-opened and Multiples
TABLES
Table 1 Objectives and KPIs
Table 3 PLS measurement reliability figures
Table 4 Correlation Matrix
Table 5 PLS path-modelling results
Table 6 Intuitive Gap Analysis
Table 7 Nine-point scale for AHP analysis (Saaty, 1994)
Table 8 Pairwise comparisons of KPIs – Utility industry
Table 9 Normalized pairwise comparisons of KPIs – Utility industry
Table 10 Pairwise comparison of SMART criteria – Utility industry
Table 11 Normalised pairwise comparison of SMART criteria – Utility industry
Table 12 Global weights of KPIs – Utility industry
Table 13 Correlation Matrix
Glossary
Abbreviation | Term
AHP | Analytical Hierarchy Process
AVE | Average Variance Extracted
CF | Consumer Futures
CSF | Critical Success Factors
BSC | Balanced Scorecard
CRM | Customer Relationship Management
KPI | Key Performance Indicator
KRI | Key Result Indicator
NPS | Net Promoter Score
PI | Performance Indicator
PLS | Partial Least Squares
PMS | Performance Measurement System
RI | Result Indicator
SEM | Structural Equation Modelling
SMART | Specific, Measurable, Attainable, Realistic, Time-sensitive
SME | Small and Medium Enterprise
TWR | Time Weighted Ratio
1. Introduction
1.1 UK Utility Industry
1.1.1 UK Power Industry Timeline
Public supplies of electricity in the United Kingdom were first provided in the late nineteenth century for use in street lighting. By the 1920s, there were approximately five hundred suppliers of electricity in England and Wales (RWE, 2001).
In 1926, the government enacted the Electricity Supply Act to create a centralized authority that would promote a national transmission system, which was completed in the mid-1930s. The Electricity Act of 1947 brought the distribution and supply activities of the more than five hundred suppliers under government control and integrated them into a dozen regional boards (RWE, 2001).
The Electricity Act of 1957 marked the formation of the Central Electricity Generating Board and the Electricity Council. These bodies were responsible for generating the vast majority of power in England and Wales. The regional boards purchased electricity from the Central Electricity Generating Board and sold it to customers (RWE, 2001).
In the late 1980s, the UK government published a white paper outlining plans to restructure and privatize the nationalized electricity supply industry (ESI); these plans were enacted as part of the Electricity Act of 1989 (RWE, 2001).
In 1991, HM Government partially privatized the ESI, and by 2001 the means of electricity distribution had been transformed radically under the New Electricity Trading Arrangements. These arrangements were based on agreements between participants and stakeholders, including generators, suppliers, traders and customers, thereby reducing monopolies in the industry and providing greater choice for all players (RWE, 2001).
Liberalization:
The liberalization process created a highly competitive market in the UK in which suppliers could sell their services nationwide and customers had the autonomy to choose the supplier that best suited their needs. Suppliers could thus openly access the grid and distribution networks, leading to a significant reduction in horizontal market concentration and in the market share of the previously dominant generators (RWE, 2001).
Prior to liberalization, the state held a monopoly on generation and transmission. As in many other state-run electricity boards, the government enforced strict price regulations. It was the need for investment in the ageing infrastructure, much of which required replacement, that spurred the move towards liberalization (Department of Energy & Climate Change, 2012). Investment was also required by the drive to become more energy efficient and to conduct research on renewable energy and low-carbon technology.
Figure 1 Market structure before deregulation
Figure 2 Market Structure after deregulation
1.1.2 Electricity Delivery
Electricity delivery involves four components (EnergyUK, 2013):
Generation – Generators produce electricity, which is sold on the energy market for consumption in homes across all of Britain.
Transmission – Generated electricity travels across the transmission network along high-voltage lines.
Distribution – The UK has fourteen licensed distribution network operators (DNOs). These operators run distribution networks that carry electricity from the transmission system to homes throughout the UK. Each licensed distributor is responsible for the distribution of electricity within its designated distribution services area.
Retail – Customers buy electricity from suppliers who buy it from the wholesale market, while paying distribution network operators for transporting the electricity across their networks.
1.1.3 Electricity as a commodity
The deregulation and privatization of the power industry has led to a shift in the strategic intent of power and utility companies. Since there are minimal or no differences in the actual product provided, power companies have been forced to find ways to differentiate or brand themselves in order to survive the increased competition (EnergyUK, 2013).
As a product or service becomes a commodity, its features become standardized and customer focus naturally shifts to price. Hence, once utility companies are as operationally efficient, and therefore as low cost, as they can possibly be, the question becomes how to obtain a decent price for their product.
The relationship between a supplier and a customer is usually limited to price, service quality and customer service. In the developed world, however, with an increasingly reliable infrastructure, service quality, or reliability, is generally the same across all suppliers, as disruptions in power supply are minimal. Since there are no variations in the type of electricity supplied, and there is no product range, price tends to trend downwards and it becomes difficult to compete on price alone. This leaves customer service as the one factor by which a power company can differentiate itself from others.
1.1.4 Competition
With the end of the monopolistic environment, energy suppliers have become more exposed to free market competition. The UK retail energy market is among the most competitive energy markets in the world (Energy UK, 2013). Approximately 100,000 consumers switch energy supplier each week, which equates to over five million consumers a year, one of the highest rates in the world. Churn, i.e. the rate of customer switching, is a mechanism by which competition is measured, and by this measure the UK energy market is highly competitive: only car insurance consumers achieved a higher churn rate than gas and electricity consumers (Energy UK, 2013).
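The switching figures above annualise directly, and churn is then simply the yearly switchers as a share of the customer base. A small illustrative calculation (the 26-million customer base is a hypothetical round number, not an Energy UK statistic):

```python
# Back-of-the-envelope churn calculation for the switching figures above.
# The customer_base value is a hypothetical illustration, not a quoted statistic.

def annual_churn_rate(weekly_switchers, customer_base):
    """Annual churn: yearly switchers as a share of the customer base."""
    return weekly_switchers * 52 / customer_base

annual_switchers = 100_000 * 52  # roughly 5.2 million a year, "over five million"
churn = annual_churn_rate(100_000, 26_000_000)  # hypothetical base of 26m accounts
```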
Energy companies have therefore been forced to focus on branding a commoditized product in order to increase customer satisfaction. These companies place particular focus on loyalty, since the cost of acquiring new clients in residential energy markets can be up to five or six times higher than the cost of retaining existing customers (Pesce, 2002).
A loyal base of residential customers can be considered one that not only repeatedly purchases the services of the company but also holds favourable attitudes towards it (Dick and Basu, 1994).
1.2 Project scope
Recognizing the increase in competition, E.ON has sought to differentiate itself from other energy companies by focusing on customer service. This is an opportune time for E.ON to make a name for itself in the realm of customer experience. A study conducted by the global consulting firm Ernst & Young in 2012, which brought together a focus group of utility consumers, revealed an urgent need to improve customer experience. The study revealed the following results:
· Trust Deficit
o Perceptions of energy providers are generally poor, even if they are perceived to offer reliable service.
o In the G8 market, none of the big energy providers scored more than 51% in an independent customer satisfaction survey.
· Lack of relationship with consumer
o Customers feel that their relationship with their energy provider is largely transactional, based on price.
o Consumers would not consider a more extensive relationship with their energy provider to be valuable.
o Consumers do not think of their utility providers for additional services.
· Customer expectations not met
o Customers have no confidence that their bill is accurate.
o There is no transparency in relation to the cost of energy.
o The level of customer service is not as high as customers expect.
o Tariffs/rates are hard to understand and to compare.
All of the above factors lead to customer dissatisfaction, limited brand loyalty and customer disengagement. The following quotes from Ernst & Young’s focus group capture the sentiment of a majority of energy consumers:
“There is no relationship. You pay and they give you electricity.”
“Over the last decade there has been a steady decline in the public’s trust in the energy market, with suppliers being perceived as big profit-making machines that will always prioritize profit and shareholder return over the quality of services that they provide, the experience of their customer and, above all, the value for money of their services.”
As per documents provided by E.ON and conversations with key personnel, the energy provider has put in place a strategic vision to be its customers’ “trusted energy partner.” The vision is consumer-centric and will ideally rid consumers of the discontent described above. As such, E.ON has put in place certain objectives and related KPIs that the organization believes will lead it towards its strategic vision. These objectives are outlined below:
Overall
Objective:
· We are recognized in the industry for being the best at resolving complaints and for learning from complaints to improve business performance and customer service
KPIs:
· Which? / Consumer Focus rankings and ratings [UKCSI]

Customer performance
Objectives:
· Our customers are satisfied with the way we have handled their complaints and with the resolution
· We are able to identify systematic issues and define and implement solutions
KPIs:
· NPS [customers staying vs. leaving after resolution] – Insight
· % of planned cases to Ombudsman
· % of upheld cases (with deadlock or final position)
· Reduction of complaints in prioritised categories

Operational performance
Objectives:
· We have improved the way we manage complaints and the end-to-end process
· We do not leave complaints to age
· We manage the cases that need to be independently evaluated by the Ombudsman
KPIs:
· Reopened complaints
· Multiple/repeat complaints
· Same-day resolution
· 49-day resolution
· % of planned Ombudsman cases
· OS cases – E.ON % in industry
· Age of complaints (step 1, step 2)
· % of complaints escalated from step 1 to step 2

People and culture
Objective:
· The perception is changed: excellence in handling and resolving complaints = CS excellence
KPIs:
· Internal NPS

Quality performance
Objective:
· Quality of processes and practices matches our aspiration to be number 1 in CS, retain our customers and be compliant
KPIs:
· Regulatory risk
· QA controls – % of surveys done vs. expected
· QA level (general)

Financial performance
Objectives:
· We are fair in compensating our customers when things go wrong
· Everything we do is fair and follows Ombudsman guidelines to resolve a dispute/complaint
KPIs:
· QNSB -> re: complaints
· Ombudsman fees (based on number of cases)
· COS hand-off from frontline to complaints teams
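Several of the KPIs listed, notably NPS, reduce to a simple formula. A minimal sketch of the standard Net Promoter Score calculation on the usual 0–10 ‘likelihood to recommend’ scale, where scores of 9–10 count as promoters and 0–6 as detractors (the responses shown are illustrative, not E.ON data):

```python
# Standard Net Promoter Score calculation; the sample responses are
# illustrative values, not survey data from this report.

def nps(scores):
    """NPS = percentage of promoters minus percentage of detractors."""
    promoters = sum(1 for s in scores if s >= 9)   # scores of 9 or 10
    detractors = sum(1 for s in scores if s <= 6)  # scores of 0 to 6
    return 100 * (promoters - detractors) / len(scores)

# Illustrative survey responses on the 0-10 scale
responses = [10, 9, 8, 7, 6, 9, 10, 3, 8, 9]
score = nps(responses)  # ranges from -100 to +100
```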
The above KPIs are recorded at present, with targets set for the immediate future. Several of the KPIs originate from two consumer rights organizations, ‘Which?’ and ‘Consumer Focus’, which are described briefly below.
Which?
This consumer rights organization captures consumer satisfaction across a variety of sectors, including energy. It asks customers to rate organizations in specific areas such as ‘customer service’, ‘value for money’, ‘bills (accuracy and clarity)’, ‘complaints’ and ‘helping you save energy’. Companies are rated out of five stars and given an overall percentage ‘customer score’. The customer score is based on overall satisfaction and the likelihood that customers would recommend their energy provider to a friend.
As per the latest data available on the Which? website (2013), E.ON was among the top 10 ranked providers. However, its ratings in the categories below were not in line with its internal targets. The following table shows E.ON’s performance.
Customer Service | 3 out of 5 stars
Value for money | 3 out of 5 stars
Bills (accuracy and clarity) | 3 out of 5 stars
Helping you save energy | 3 out of 5 stars
Complaints | 4 out of 5 stars
Customer score | 51%
Consumer Focus
Consumer Focus, now operating under the name Consumer Futures, is the operating name of the new National Consumer Council. The organization is part of the UK government’s reforms to aid markets in working better for consumers, by improving consumer protection and providing clarity about where consumers can turn for help (Consumer Futures, 2013). The latest documents provided by E.ON indicated that the company had a three-star Consumer Focus rating.
The scope of this project is to assess the adequacy of the abovementioned KPIs and, if appropriate, make recommendations on how to improve or enhance them. Furthermore, the team intends to conduct a gap analysis through a survey of the literature, primary research with energy consumers and benchmarking against other industry players, in order to suggest new KPIs should a gap exist.
The report begins with a literature review of performance measurement, key performance indicators, performance measurement systems and customer relationship management (CRM). The review was conducted to better understand how organizations measure performance, the role CRM plays in addressing performance and how it can lead to increased customer loyalty. A business model is presented highlighting best practices in ‘customer complaint resolution’ divisions within the utility sector. The report then proceeds to address the abovementioned scope of checking the adequacy of the KPIs used by E.ON. The methods used are explained in detail later in the report.
2. Literature review
2.1 Performance Measurement
Performance measurement is a basic principle of management. It helps identify gaps between actual and required performance and gives an indication of progress towards closing those gaps (Weber et al., 2005). Performance management involves monitoring and controlling an organisation’s activities against set goals (Brignall et al., 1996), and helps quantify the effectiveness and efficiency of an organisation’s activities (Neely et al., 1995). Conventional performance measurement models have focused on a limited set of financial accounting measures, such as earnings per share and return on investment, which may provide misleading signals for continuous improvement in today’s dynamic business environment of intense competition (Kaplan et al., 1992). Numerous researchers have cited disadvantages of the traditional approach to performance measurement that uses only financial measures (Ghalayini et al., 1997; Jagdev et al., 1997): (I) improvements such as lead-time reduction, customer service and quality are difficult to quantify monetarily; (II) financial reports are produced on a monthly basis and reflect decisions made one or two months earlier; (III) financial measures have predetermined, inflexible formats across all departments, which ignores the fact that departments may have varied priorities and characteristics.
According to Raynor et al. (2013), in ratios such as economic value added (EVA), cash flow on investment and return on assets, the denominator is a measure of assets and the numerator a measure of income. When customers are no longer willing to pay for the product or service and income starts declining, it is tempting to make these ratios go up by shrinking the denominator. Senior managers have long sensed that this is a mistake, but they do it anyway, misled by the compelling presumption that cutting costs has faster and more predictable consequences. By contrast, exceptional organisations typically accept higher costs as the price of excellence; such organisations commit significant resources over long periods of time to creating non-price value, thereby generating higher revenue. Ross et al. (1993) suggested that ‘traditional’ measures of profitability are flawed because many business strategies and opportunities involve sacrificing current profits for long-term gains. Beyond achieving financial measures, organisational success depends on how well the organisation adapts to the environment within which it exists (Otley et al., 1985). Butler et al. (1997) argue that management’s approach must take into account the significance of an organisation’s environment, especially its relationship with customers. Performance measurement criteria should therefore be oriented towards an organisation’s predefined objectives.
2.1.1 Key Performance Indicators
Many organisations work with the wrong measures, many of which are incorrectly labelled KPIs, because few business leaders, accountants, consultants or organisations have truly explored what a KPI is. Performance measures are of four types: (I) key result indicators (KRIs), which tell an organisation how it has done with respect to its critical success factors; (II) result indicators (RIs), which tell an organisation what it has done; (III) performance indicators (PIs), which tell an organisation what to do; and (IV) key performance indicators (KPIs), which tell an organisation what to do to increase performance significantly (Parmenter, 2010).
KRIs include measures such as net profit before tax, profitability of customers, customer satisfaction, employee satisfaction and return on capital employed. These measures are the result of many actions. They indicate whether the organisation is heading in the right direction, but not what the organisation needs to do to improve these results. Hence, KRIs are ideal for the board, i.e. members who are not involved in day-to-day management. KRIs are reviewed on a monthly or quarterly basis, not on the daily or weekly basis that KPIs are. Separating KRIs from other measures has an impact on reporting, splitting performance measures into those influencing governance and those influencing management (Parmenter, 2010).
PIs and RIs complement KPIs and are displayed alongside them on the scorecard for each organisation, division, department and team. PIs are generally non-financial measures, such as the percentage of sales from the top 10% of customers, late deliveries to key customers, or complaints from key customers. RIs are financial measures, such as daily and weekly sales analyses, which are in turn the result of efforts by many teams (Parmenter, 2010).
KPIs represent a list of measures focusing on the aspects of organisational performance that are most critical for the current and future success of the organisation. The seven common characteristics of KPIs are that they: (I) are non-financial measures; (II) are measured frequently; (III) are acted on by senior management; (IV) indicate what action is expected of employees; (V) tie ownership down to a team; (VI) affect the critical success factors; and (VII) encourage desired actions. Management can use KPIs as lag (outcome) and lead (performance driver) indicators (Parmenter, 2010). Hope and Fraser (2003) recommended using no more than 10 KPIs, whereas Kaplan and Norton (1996) suggested no more than 20. The ‘10/80/10 rule’ proposed by Parmenter (2010) suggests that it is ideal for organisations to have no more than 10 KRIs, 80 PIs and RIs, and 10 KPIs.
2.2 Performance Measurement Systems
Much progress has been made in establishing performance measurement systems (PMSs) that select measures to balance the more traditional focus on profitability (Tangen, 2004). One of the major objectives of a PMS is to enable proactive rather than reactive management (Bititci, 1994). According to Tangen (2004), a PMS should reflect short- and long-term results; various perspectives (e.g. the shareholder, customer, competitor, innovativeness and internal perspectives); different types of performance (e.g. cost, quality, flexibility, delivery and dependability); and different organisational levels (e.g. local and global performance). It is also important to note that the performance measures by which employees are evaluated subsequently affect their behaviour; an improper set of measurements can thus lead to dysfunctional employee behaviour (Fry, 1995). Skinner (1986) termed this the ‘productivity paradox’, where unanticipated behaviour results from poor performance measures. A PMS should therefore guard against sub-optimisation by establishing an unambiguous link from the top of the organisation all the way to the bottom, ensuring that employee behaviour is consistent with corporate goals (Tangen, 2004). Further, Bernolak (1997) highlighted that a limited number of performance measures is sufficient to generate appropriate actions: the greater the number of measures, the more difficult it becomes to know which should be prioritised, and a large number of measures increases the risk of information overload (Tangen, 2004). It is therefore important to limit data requirements to the necessary detail and frequency, and to consider whether the data serve a useful purpose and whether the cost of producing them exceeds the expected benefit (Bernolak, 1997). It is also recommended to remove ‘old’ performance measures that are no longer of use from the PMS (Tangen, 2004).
2.2.1 Performance Pyramid
One of the key requirements of a PMS is a clear link between performance measures and the different levels of an organisation, to ensure each department strives towards common goals. The performance pyramid, also known as the SMART system, provides this link between corporate strategy and operations by translating objectives (based on customer priorities) from the top down and measures from the bottom up (Cross et al., 1992). The performance pyramid includes four levels of objectives that address an organisation’s internal efficiency (right side of the pyramid) and its effectiveness (left side of the pyramid). The pyramid begins with the corporate vision at the first level, which is translated into business unit objectives. At the second level, business units are set short-term targets of profitability and cash flow and long-term goals of market position and growth. The gap between the top level and day-to-day operational measures is bridged via the business operating system. Finally, four key performance measures (delivery, cycle time, quality and waste) are used by departments on a day-to-day basis (Cross et al., 1992).
Figure 3 Performance Pyramid (Cross et al., 1992)
According to Ghalayini et al. (1997), the main strength of this conceptual framework is that it attempts to integrate organisational objectives with operational performance indicators. However, the approach does not explicitly integrate the concept of continuous improvement, nor does it provide any mechanism for identifying KPIs. Moreover, the framework has not been empirically tested (Salem et al., 2012).
2.2.2 Balanced Scorecard
The best-known PMS is the balanced scorecard, developed and promoted by Kaplan and Norton (1992). Data collected by the American research firm Gartner Group indicated that 40% of the largest businesses in the US had implemented the balanced scorecard by the end of 2000 (Neely, 2003). The balanced scorecard (BSC) proposes that an organisation use a balanced set of measures allowing top management to take a quick but comprehensive view of the business from four major perspectives. These perspectives answer four fundamental questions: “(I) The customer perspective: How do our customers perceive us? (II) The internal business perspective: What must we excel at? (III) The financial perspective: How do we look to our shareholders? (IV) The innovation and learning perspective: How do we continue to improve and create value?”
Figure 4 Balanced Scorecard (Kaplan & Norton, 1996)
There are a number of advantages to using the BSC. First, the scorecard is a single management report that brings together different aspects of a company’s competitive agenda, such as improving quality, becoming customer oriented, shortening response times and managing for the long term. Second, the scorecard allows managers to evaluate whether improvement in one aspect of the business has been achieved at the expense of another (Kaplan et al., 1992). The balanced scorecard is believed to be beneficial both for large organisations and for SMEs (Rompho, 2011). According to Neely et al. (2000), the BSC is a valuable framework but provides little guidance on how to identify, introduce and ultimately use appropriate measures to manage the business; they further note that it lacks a competitor perspective. Ghalayini et al. (1997) suggest that the main drawback of the approach is that it is designed chiefly to provide top management with an overall view of performance, and is hence not applicable at the factory operations level.
Despite many success stories of BSC implementation in large organisations, Kaplan and Norton (2001) identified, based on their experience, two sources of failure: design and process. A poor design includes: (I) using too few measures in each BSC perspective, resulting in a failure to balance financial and non-financial indicators, or leading and lagging indicators; (II) using many indicators without identifying the critical ones, causing the organisation to lose focus; and (III) failing to select appropriate KPIs for each perspective to depict the corporate strategy, which means the strategy is not translated into action and no value is obtained from the BSC. Process failures are the common causes of BSC failure and include: (I) lack of commitment from senior management; (II) ineffective communication within the organisation; (III) treating the BSC as a one-time project instead of a continual process; and (IV) an overly long development process. Some literature also suggests that the BSC is not appropriate for SMEs (McAdam, 2000). SMEs differ fundamentally from large organisations in three respects: evolution, uncertainty and innovation (Garengo et al., 2005). As a result, PMSs in SMEs are somewhat different from those in large organisations. Studies also indicate that SMEs rarely implement a PMS as a holistic approach; the measures used focus on financial and operational performance and lack measures in other areas (Rompho, 2011).
2.2.3 Medori and Steeple’s Framework
Medori et al. (2000) proposed an integrated conceptual framework for enhancing and auditing PMSs. This approach consists of six stages. The first stage involves defining an organisation’s strategy and critical success factors (CSFs). The second stage involves matching the organisation’s strategic requirements from the first stage against six defined competitive priorities: quality, cost, flexibility, time, delivery and future growth. In the next stage, the most suitable measures are selected using a checklist that contains 105 measures with full descriptions. The fourth stage involves auditing the existing PMS to identify which existing measures can be retained. The fifth stage involves the actual implementation of the measures, describing each measure with eight elements: title, objective, benchmark, equation, frequency, data source, responsibility and improvement. The sixth and last stage involves periodic review of the organisation’s PMS (Medori et al., 2000).
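The eight descriptive elements of the fifth stage lend themselves naturally to a simple record structure. The sketch below is illustrative only: the measure and all of its field values are hypothetical, not taken from Medori and Steeple or from E.ON.

```python
from dataclasses import dataclass

# The eight elements Medori and Steeple prescribe for documenting a measure.
@dataclass
class PerformanceMeasure:
    title: str
    objective: str
    benchmark: str
    equation: str
    frequency: str
    data_source: str
    responsibility: str
    improvement: str

# A hypothetical complaint-resolution measure documented with all eight elements.
first_call_resolution = PerformanceMeasure(
    title="First-call resolution rate",
    objective="Resolve complaints at first contact",
    benchmark="Industry top quartile",
    equation="resolved_at_first_contact / total_complaints",
    frequency="Monthly",
    data_source="Contact-centre CRM",
    responsibility="Complaints team manager",
    improvement="Raise by 2 percentage points per quarter",
)
```

Forcing every selected measure through this template is what distinguishes stage five from an informal KPI list: a measure without an owner, a data source or an equation cannot be implemented.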
Figure 5 Medori & Steeple’s Framework (2000)
According to Tangen (2004), the major strength of this framework is that it can be used both to enhance an existing PMS and to define a new one. It also provides a uniquely detailed description of how performance measures should be realised. However, the framework has a drawback in its second stage, where a performance measurement grid is created to give the PMS its basic design: little or no guidance is given on how to construct the grid. Moreover, the six competitive priorities could arguably be divided into many other categories.
2.2.4 Performance Prism
One of the more recently developed PMS frameworks is the performance prism. It proposes that an organisation adopt five distinct but linked perspectives of performance (Neely et al., 2001). These five perspectives provide answers to five fundamental questions: “(I) Stakeholder satisfaction: Who are the stakeholders and what do they want and need? (II) Strategies: What strategies are required to ensure the wants and needs of stakeholders? (III) Processes: What are the processes we have to put in place in order to allow our strategies to be delivered? (IV) Capabilities: The combination of people, practices, technology and infrastructure that together enable execution of the organisation’s business processes (current and in the future): What are the capabilities we require to operate our processes? (V) Stakeholder contributions: What do we want and need from stakeholders to maintain and develop those capabilities?”
Figure 6 Performance Prism (Neely et al., 2001)
According to Neely et al. (2001), the performance prism provides a comprehensive view of different stakeholders. They highlight that the idea of deriving performance measures strictly from corporate strategy is incorrect: the wants and needs of stakeholders must be considered first, followed by strategy formulation. The strength of this framework is that it questions the organisation’s existing strategy before the process of identifying performance measures begins, and it also considers stakeholders such as suppliers, intermediaries and alliance partners, who are generally neglected when performance measures are formed (Tangen, 2004). However, this conceptual framework gives little or no consideration to the process of designing the system (Salem et al., 2012). Moreover, there is a lack of evidence that the framework works in practice (Etienne et al., 2005).
2.3 Critical Review of PMS
Well-known adages such as “you get what you measure” and “what gets measured gets done” suggest that implementing an appropriate PMS will ensure that the actions of an organisation are aligned to its strategies and objectives (Lynch et al., 1991). The inadequacies of traditional performance measures have been widely documented (Neely et al., 2003). Authors suggest that conventional methods are historical in nature (Dixon et al., 1990); encourage short-term vision and hardly provide any indication of future performance (Kaplan, 1986); lack strategic focus (Skinner, 1974); are focused internally rather than externally, ignoring key stakeholders such as competitors and customers (Kaplan and Norton, 1992; Neely et al., 1995); and inhibit innovation (Gordon and Richardson, 1980). Mauboussin (2012) notes that organisations that link non-financial measures to value creation stand a better chance of improving results. The newer approaches to PMSs have solved some of the limitations of the conventional way of measuring performance; the SMART system and the BSC, for example, are strategically driven PMSs. These frameworks guard against sub-optimisation and limit the number of performance measures to avoid information overload (Tangen, 2004).
However, all conceptual frameworks have limitations. First, they provide little guidance for the actual selection and implementation of measures (Medori et al., 2000); the measurement practitioner still has to translate the framework into practical measures and decide how each performance measure should be specified, how often it should be measured and to what level of detail (Tangen, 2004). Second, they do not provide a list of performance metrics. Third, they do not combine all the critical components that affect the success of an organisation, e.g. profit, quality, employee morale, ergonomics, safety, efficiency and productivity. In addition, they do not provide an approach to quantify qualitative performance measures, nor do they include mathematical models to analyse multiple variables simultaneously (Ferreras et al., 2013).
2.4 Customer Relationship Management
The marketing literature suggests that maintaining successful long-term relationships with customers helps an organisation achieve a portfolio of satisfied and loyal customers, which in turn improves the organisation’s economic and competitive position in its markets as well as the effectiveness and efficiency of its strategic actions (Yang et al., 2004). Norreklit (2000) argues that customer satisfaction alone does not create superior financial results; rather, a series of actions produces high customer value, which subsequently leads to good financial outcomes, a link that is less a matter of causality than of logic, since it is incorporated in many concepts. One way in which organisations can deliver superior value to customers is by maintaining quality relationships with them. Managing customer relationships is therefore crucial to a company’s success (Zinkhan, 2002).
Many organisations have a mission that focuses on delivering value to their customers, and top management therefore prioritise company performance from the customer’s perspective. Customer concerns generally fall into four categories: quality, time, cost performance and service (Kaplan et al., 1992). In the utility industry, since the product offered (electricity and gas) is highly standardised, customer service is effectively the only source of differentiation from competitors, and profit margins are very low (refer to Appendix 12.1 for detailed information). Empirical studies demonstrate that in ‘stalemate industries’, differentiation strategies are profitable and effective alternatives. Knowledge of consumer behaviour provides opportunities for differentiation, wherein a ‘zero defect’ or total quality strategy (e.g. fast and accurate answers to requests, short delivery times) is the major area for differentiation (Boston Consulting Group, 1988). According to Levitt (1980), “there is no such thing as a commodity”. With consumer and industrial services, usually termed “intangibles”, what really counts is their claimed distinction of execution, e.g. the clarity and speed of confirmations, responsiveness to enquiries, efficiency of transactions on customers’ behalf and the like. The usual assumption about undifferentiated commodities is that they are exceedingly price sensitive. However, the author notes that such assumptions are seldom true: in the real world of markets, nothing is exempt from other considerations, even when a price war rages.
2.4.1 Satisfaction, Trust, Loyalty
The marketing literature contains extensive research on customer satisfaction, and studies indicate a significant positive association between customer satisfaction, repurchase intention and brand loyalty (Anderson et al., 1993). In the area of customer-firm relationships, Parasuraman et al. (1985) suggest that trust is a critical factor in successful relationships. According to the authors, customers should be able to trust their service providers, be sure that their information will be treated with confidentiality and feel confident in their dealings with them. These considerations are in turn important in gaining customer loyalty and help organisations build a stable customer portfolio, decreasing the probability of on-going relationships ending. Reichheld and Sasser (1990) suggest that for service providers, customer defection potentially has a greater impact on the bottom line than market share, unit cost, scale and other factors usually associated with competitive advantage. The customer-firm dissolution process takes place when customers perceive an issue in their exchange relationship with an economic agent; the main issues detected involve failures in the basic service provided, the treatment received and the provision of information (Stewart, 1998). However, the link between the existence of an issue and relationship exit is not direct, because most customers strive to get their issues resolved; terminating a relationship is therefore conceivably not a casual response (Stewart, 1998). A study by Andreassen (1999) suggests that customer satisfaction with complaint resolution is a strong driver of customer loyalty. McCollough (2009) refers to the situation in which effective service recovery leads to a customer rating an encounter more favourably than if no issue had occurred in the first place as the “paradox of service recovery”.
However, according to a number of empirical studies, merely satisfied customers do not necessarily develop customer loyalty and some unsatisfied customers would continue their relationship with their habitual service provider because they feel ‘trapped’ in it (Alvarez et al., 2011).
From the customer’s perspective, satisfaction relates to the extent to which their pre-purchase expectations are met or exceeded when they avail themselves of the service (Flint et al., 1997). Satisfaction transmits the idea that a firm’s actions are subject to the well-being of its customers and that no opportunistic behaviour occurs (Ambrose et al., 2007). According to Andreassen (1999), an initial service failure causes negative affect, which in turn has a carryover effect on the satisfaction judgement of the complaint resolution. Initial negative affect also has an adverse effect on customer loyalty, whereas satisfaction with complaint resolution has a positive effect on it. Corporate image also has a positive effect on customer loyalty, helping not only to attract new customers but also to retain existing dissatisfied ones. In today’s competitive environment of rapid technological development, organisations find it much more profitable to dedicate effort and resources to customer retention, in an attempt to stem customer loss as much as possible (Colgate et al., 2000). Two key aspects of relationship success, trust and satisfaction, play an important role in reducing the probability of customer defection (Alvarez et al., 2011). According to Ambrose et al. (2007), customer satisfaction leads to the development of trust, which is crucial to maintaining long-term relationships. Tax et al. (1998) note that effective complaint handling is positively related to customer loyalty and subsequent retention. Customers judge the fairness of complaint resolution procedures and, despite unhappiness about the service failure, may find their trust in the service provider enhanced by an excellent service recovery (Sangareddy et al., 2009). Management discovers a firm’s inability to satisfy its customers via two kinds of feedback mechanism: exit and voice.
Exit means the customer stops buying the services, while voice means the customer expresses dissatisfaction directly to the organisation or by word of mouth to third parties. The number of customers who defect because of dissatisfaction can be reduced by (Hirschman, 1970): (1) improved service quality, i.e. reducing the number of dissatisfied customers; (2) increased voice, i.e. reducing the number of dissatisfied non-complaining customers; and (3) improved complaint handling quality, i.e. reducing the number of lost complaining customers. Initiatives that encourage customers to provide feedback while addressing their concerns may also lead to innovative practices within the organisation (Sangareddy et al., 2009).
2.4.2 Complaint Management
‘Complaint management’ refers to the way organisations deal with the issues their customers communicate to them about aspects of the service that generate dissatisfaction (Alvarez et al., 2011). Customers assess their satisfaction with the service provider based on the complaint management process, and their perception of equity during the process potentially influences their repurchase intention. Effectively managing customer complaints is therefore one way to achieve customer satisfaction. Customers expect any service failures to be diagnosed and resolved quickly (Sangareddy et al., 2009). Complaint resolution is thus an important element of an organisation’s retention strategy (Andreassen, 1999).
Figure 7 Complaint Mgmt & Service Evaluation Model (Sangareddy et al., 2009)
As depicted in the above figure, the complaint management process consists of three interrelated yet distinct factors (Sangareddy et al., 2009): “(I) interactional justice: perceived quality of interaction between the service provider and the customer, (II) procedural justice: perceived fairness of the service recovery process and (III) distributive justice: perceived fairness of the outcome of the service recovery procedure”.
In the relationship marketing literature, elements such as the provision of a reason for failure, honesty, politeness and empathy have been shown to be associated with interactional justice. Impolite treatment, dishonesty and poor interaction usually create a low opinion of the service recovery process. Perceived interactional justice has a direct link with perceived procedural and distributive justice. Concerns such as “How long will it take to solve the problem? How long will I have to follow up with them? Are the customer service personnel being polite and honest?” can be mitigated if the service provider takes care to give explanations and shows concern for its customers (Sangareddy et al., 2009).
On the other hand, the following sample questions must be addressed by a fair procedure: “Who keeps a record of customer interactions? Who will bear the responsibility for failed recovery attempts? How will the customer be notified of progress? Where can the customer check the status of the problem? How long will the repair take?” Fair procedures are defined as unbiased, impartial, consistent, representative of the customer’s interests, and based on ethical standards and accurate information. Procedural justice is mediated by distributive justice and does not have a direct linkage with customer satisfaction with an organisation or with repurchase intention (Sangareddy et al., 2009). Leventhal (1980) notes that customers are likely to accept a service recovery outcome as fair, even in adverse cases, if they perceive that the service provider has followed fair procedures. The organisational justice literature suggests that the perception of fair service recovery procedures comforts a customer, which in turn influences his or her perception of the service recovery (Moorman, 1991).
Distributive justice refers to the tangible and fair outcome of a complaint resolution and has its roots in social exchange theory. The relationship marketing literature indicates a positive association between distributive justice and customer satisfaction with an organisation (Blodgett et al., 1997).
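In survey work, the three justice dimensions above are typically measured with several Likert items each and then combined into construct scores. A minimal sketch of that scoring step, with hypothetical item groupings and responses (not the actual questionnaire used in this study):

```python
# Hypothetical Likert (1-5) responses from one complainant, grouped by the
# three justice dimensions; the item groupings are illustrative only.
respondent = {
    "interactional": [5, 4, 4],   # e.g. politeness, honesty, empathy items
    "procedural":    [3, 3, 4],   # e.g. fairness and timeliness of the process
    "distributive":  [4, 4],      # e.g. fairness of the outcome
}

# Construct score = mean of that construct's items.
construct_scores = {name: sum(items) / len(items)
                    for name, items in respondent.items()}
```

Averaging items is the simplest scoring rule; the SEM approach used later in this report instead estimates weighted loadings for each item, but the averaged scores convey the same idea of collapsing several observed answers into one construct value.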
Research by Tax et al. (1998) suggests that an organisation’s favourable actions during episodes of conflict and complaint demonstrate its trustworthiness and reliability, and imply that investments in complaint handling can improve evaluations of service quality, build customer commitment and strengthen customer relationships. Given this finding, organisations should reassess the appropriateness and fairness of their existing processes (procedural justice), the results or outcomes (distributive justice) and employee-customer communications (interactional justice).
2.5 Contribution
The big question is whether it is feasible to measure an organisation’s success accurately. The answer is yes, but businesses need to combine the various components of success: a framework such as the balanced scorecard or SMART, a list of performance metrics and a statistical modelling approach to quantify qualitative measures (Ferreras et al., 2013). Research evidence demonstrates that organisations managed using an integrated PMS outperform, and have superior stock prices to, those that are not “measure managed” (Lingle et al., 1996; Gates, 1999).
3. Business Model
As an organization continues to operate and move forward, it needs to do so in a structured manner. In the course of the research conducted for this study, the authors analysed several organizations across the US and Europe, the regulatory and business environment, and the role of customer relationship management.
Five basic aspects of ‘customer experience’ essential to an organization, which should fall within its strategic initiatives, are highlighted below:
Figure 8 Basic aspects of customer experience
The study focused primarily on customer complaint resolution; although essential, this aspect is just one part of a much bigger whole. A business model is therefore presented below which highlights the key departments and/or activities around which a utility company can be structured.
Figure 9 Business Enterprise Management
The assessment of KPIs performed in this study with regard to customer complaint resolution ties in with the ‘Customer Experience’ part of the framework. The KPIs must be guided by the framework, and the framework by the KPIs. For example, an organization must ‘develop insight’ about what is important to its customers (e.g. resolution time) and then ‘serve the customer’ accordingly. Organizations can also ‘develop insight’ through conversations with customers via social media platforms. With the increased presence of social media in all aspects of daily life, customers continually look to find information, complain and troubleshoot on social media. Organizations need to interact where their customers are, and as such must be able to ‘serve customers’ on these platforms.
The following are examples of best practices in the utility industry that an organization should consider incorporating in relation to the above model (EPA, 2013; Reijers & Mansar, 2005; Gordon, 2002):
Develop Insight Best Practices
· It is understood what different groups of customers value and expect (in terms of offers and services), how they buy, and how best to communicate with them.
· Key customer segments are divided into targeted sub-sectors to recognize behavioural differences in how customers interact (e.g. early adopters).
· There is a consistent view of the customer across the organization that enables drill-down/aggregation through levels of the customer hierarchy.
· Data analysis and intelligence are proactively driven from an analytics engine.
· There are named individuals across the organization responsible for enforcing data quality. The responsibilities for data owners are clearly defined and data meetings are held on a regular basis. When the individuals change role or job, there is a process to ensure that new individuals are designated in replacement.
· The organization has a clear customer and partner validated articulation of future customer value.
· Customers are surveyed regularly and customer complaints are analysed to identify areas for improvement and actions are in place to tackle some of the key issues in a proactive way. There is an open dialogue with customers in order to continuously improve the quality of service and adapt the customer strategy.
· Customer service measurement is extremely sophisticated and shows a clear picture of the complete customer experience – data and intelligence is shared with the customer and performance trends collectively analysed and debated.
· These measures are taken at least quarterly.
· Action plans are put in place by managers at the actionable business unit level to increase employee-to-customer engagement.
Develop Customer Strategy Best Practices
· Customers and partners look to the organization to play a key role in shaping their strategy – the organization fulfils the role of business partner.
· All levels of the organization have complete clarity on how their roles deliver the customer strategy.
· People ‘live the strategy’; in their everyday actions and behaviours.
· Potential new products and services opportunities are identified based on assessments of the current market trends, the target customer needs and the benefit that the retail company can get from these offers (in terms of direct revenues, attraction of target customers, brand improvement, etc.).
· The offer strategy is adapted based on the assessed successes and failures of the products/services currently offered.
· Strong mutual understanding of capabilities (existing and emerging) allows partners to be an integral part of the value proposition development processes.
· Customers share their ideas with the organization to shape the company’s future direction.
Manage Sales and Marketing Best Practices
· Employees are encouraged and rewarded for contributing ideas with regards to marketing effectiveness (e.g., innovation, measurement, execution).
· The marketing organization is regarded as a fundamental contributor to organizational success.
· The marketing organization consists of employees with legacy experience, hires from competing companies, and leaders from outside the industry.
· A distinctive brand is built and maintained that provides valued differentiation in the eyes of the customer and highlights key messages for incorporation in propositions and customer engagements.
· An adaptive marketing strategy is in place, based on customer feedback and sales activities.
· An integrated customer experience across brand and value proposition is in place.
· The brand and the operating practice fit together to create a single image of the company and a consistent customer experience across all touch points.
· A strategy and tactical plans are in place to progressively achieve brand goals.
· Brand recognition/awareness is routinely tracked by individual target market segment on a proactive basis using sophisticated measures.
· Robust brand risk management control practices are in place.
· Marketing plays a crucial role in collating customer feedback on the portfolio of products and services, feeding future product and service strategy and development.
Manage Customer Accounts Best Practices
· Customer data and contact details are captured/entered following a simple and clear process to ensure accurate data is used for billing the customer (home-move process, name-change process, property change of use). There are several checkpoints for confirming data with customers.
· Customer data and contact details are captured by Contact Centre agents that know the process and are well trained on asking precise and clear questions and capturing the contact details in the relevant format and fields.
· The change of tenancy occurs seamlessly: the old tenant pays his/her last bill, and the new tenant contacts the utility to provide his/her contact details and a meter reading corresponding to the one captured from the tenant moving out.
· The customer moving out is also contacted before the move-out date, in order to retain him/her at the new property.
Serving Customer Best Practices
· The entire organization is aligned to customer value.
· Clear focus results in high customer satisfaction and best-in-class cost to sell and serve. Based on the different customer segments (their expectations and their value to the retailer), approaches are developed that allow the retailer to serve them cost-efficiently and up to their expectations in terms of both operational response and contact-centre service.
· Insight into individual customer value / profitability, behaviour, and preferences is leveraged to personalize customer interactions in real-time.
· Customer insight is leveraged to customize / tailor sales and service treatment strategies consistently across all channels (e.g. web, phone, store).
· Channels are differentiated based on lifetime value or profitability measures.
· Customer profitability analysis is conducted on a monthly basis and customers are serviced based on profitability. Customers are directed to channels with the greatest return.
· There is a fully integrated dashboard that combines customer and operational measures to give alignment of actual performance and the customer ‘perception’.
· The customer reviews continuous improvement plans and agrees the relative priority of initiatives.
4. Data
4.1 Data for PLS
The data used in this research were obtained using a Likert-type survey questionnaire (refer to Appendix 12.4), similar to the data used by Anderson and Swaminathan (2011) to investigate the factors driving customer satisfaction and loyalty in e-markets using a selected set of constructs thought to drive satisfaction and loyalty. The questions were designed to fulfil several requirements. The main elements (latent variables) and their manifest variables (the questions corresponding to each latent variable) were chosen based on (I) the underlying principles of interest reviewed by “Which?”, “Consumer Focus” and Ofgem, (II) the consumer satisfaction and consumer relationship literature and (III) the Key Performance Indicators under study. The criteria used for selecting the underlying latent variables are elaborated in the methodology section.
The survey questionnaire consisted of 34 questions: 29 corresponding to the constructs, 1 about the respondent’s utility supplier and 4 about the respondent’s demographics. A random sample selection process was followed, subject to one condition: all respondents must have raised a complaint with their utility supplier and received a resolution. 117 respondents were approached, of whom 58 had raised complaints and agreed to complete the survey questionnaire. Of the 58 responses, 5 were dropped from the sample due to incomplete data. Figures 10 to 14 depict the demographics of the respondents and the share of the big six utility suppliers in the raised complaints.
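The screening rule described above (retain only respondents who raised a complaint and gave complete answers) can be sketched as follows; the records and field names are hypothetical illustrations, not the actual survey data:

```python
# Hypothetical respondent records illustrating the two-part screening rule:
# the respondent must have complained, and every answer must be present.
responses = [
    {"id": 1, "complained": True,  "answers": [4, 5, 3]},
    {"id": 2, "complained": False, "answers": [2, 3, 4]},    # never complained
    {"id": 3, "complained": True,  "answers": [5, None, 4]}, # incomplete data
    {"id": 4, "complained": True,  "answers": [3, 3, 2]},
]

usable = [r for r in responses
          if r["complained"] and all(a is not None for a in r["answers"])]
```

In the study itself this filter reduced the sample from 117 approached to 58 complainants and finally to 53 complete responses.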
Figure 10 Respondents’ distribution of energy supplier
Figure 11 Respondents’ demographics (gender)
Figure 12 Respondents’ demographics (age)
Figure 13 Respondents’ demographics (marital status)
Figure 14 Respondents’ demographics (income)
4.2 Data for AHP & SMART
Apart from the customer survey data, research on benchmark KPIs was carried out. The KPIs used by E.ON were verified against benchmark KPIs used across different industries for measuring and improving customer service performance, especially in the complaint resolution department. The list of KPIs provided by E.ON concurs with what is used across industries. However, based on research into KPI libraries (www.kpilibrary.com, 2013; www.brighthub.com, 2011) and the gap analysis (refer to Section 6.1.1), the following KPIs could also be adopted by E.ON. These KPIs are further used in the AHP and SMART analyses for ranking purposes (refer to Section 6.2, Table 11).
· Time Weighted Ratio (TWR)
· Abandon rate of calls
· Customer Accessibility Rating (CAR)
· % of service requests with all data fields complete
· Average queue of incoming calls
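To illustrate how a KPI list like the one above can be ranked with AHP, the sketch below computes priority weights from a pairwise comparison matrix via the principal eigenvector and checks Saaty’s consistency ratio. The comparison judgements are invented for illustration and do not reflect E.ON’s actual analysis or the rankings reported in Section 6.2.

```python
import numpy as np

kpis = ["TWR", "Abandon rate", "CAR", "% complete fields", "Avg queue"]

# Hypothetical Saaty-scale judgements: A[i, j] = importance of KPI i over KPI j,
# with reciprocals below the diagonal.
A = np.array([
    [1,   2,   3,   4,   5],
    [1/2, 1,   2,   3,   4],
    [1/3, 1/2, 1,   2,   3],
    [1/4, 1/3, 1/2, 1,   2],
    [1/5, 1/4, 1/3, 1/2, 1],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)            # principal (Perron) eigenvalue
weights = eigvecs[:, k].real
weights = weights / weights.sum()      # normalise weights to sum to 1

# Saaty consistency check: CR below 0.1 means the judgements are acceptably
# consistent. RI = 1.12 is Saaty's random index for a 5x5 matrix.
n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)
cr = ci / 1.12
```

Here the first KPI receives the largest weight purely by construction of the judgement matrix; with real stakeholder comparisons, the matrix, and hence the ranking, would differ.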
5. Methodology
5.1 PLS Methodology
Researchers have historically used first-generation statistical techniques, such as regression-based approaches (e.g. analysis of variance, discriminant analysis, logistic regression and multiple regression analysis) and factor or cluster analysis, for the identification or confirmation of theoretical hypotheses from empirical data in a variety of disciplines, including economics, chemistry, biology and psychology. For instance, Spearman (1904) used factor analysis in his work on general intelligence in psychology, Altman (1968) used discriminant analysis for forecasting corporate bankruptcy, and Hofstede (1983) used factor and cluster analysis in research on cross-cultural differences in sociology.
Haenlein and Kaplan (2004) identified three main limitations common to first-generation statistical techniques: (I) the postulation of a simple model structure (at least in the case of regression-based approaches), (II) the assumption that all of the model’s variables can be considered observable and (III) the conjecture that all of the model’s variables are measured without error. These caveats limit the applicability of such techniques in many research situations.
To elaborate on the shortcomings of first-generation statistical techniques: although model building always implies omitting certain aspects of the real world, the assumption of a simple model structure may be too limiting for the analysis of a more realistic, and potentially more complex, situation (Shugan, 2002). This problem becomes more obvious if the potential effects of mediating and moderating[1] variables on the relationship between one or more dependent and independent variables are to be investigated (Baron & Kenny, 1986). With regard to the second limitation, a variable can only be called observable if it is possible to obtain its value by means of real-world sampling experiments (McDonald, 1996); thus, any variable that does not directly correspond to anything observable must not be considered observable (Dijkstra, 1983). With regard to the third limitation, Bagozzi et al. (1991) highlighted the characteristics of real-world observations, pointing out that they may be accompanied by two types of measurement error: (I) random error[2] and (II) systematic error[3]. According to Churchill (1979), since random and systematic errors are always constituent components of the observed score of a variable, the application of first-generation techniques is questionable.
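The distinction between the two error types can be made concrete with a small simulation: averaging many responses washes out random error but leaves systematic error (bias) intact, which is one reason techniques that ignore measurement error are questionable. The numbers are of course illustrative:

```python
import random

random.seed(42)
TRUE_SCORE = 7.0   # the unobservable 'true' value of the construct
BIAS = 0.5         # systematic error, e.g. a leading question inflating answers

# Each observation = truth + systematic error + random error (Gaussian noise).
observations = [TRUE_SCORE + BIAS + random.gauss(0, 1.0) for _ in range(10_000)]
mean_obs = sum(observations) / len(observations)

# The random component averages out (standard error ~0.01), but the bias
# remains: mean_obs sits near 7.5, not near the true score of 7.0.
```

No amount of extra sampling recovers the true score here; only a model that represents the systematic component, as SEM does through explicit error terms, can separate it from the construct of interest.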
Structural Equation Modelling (SEM) was developed as an alternative to first-generation techniques in order to overcome the abovementioned limitations. The key difference between the two approaches is that SEM models relationships among multiple dependent and independent constructs simultaneously, rather than analysing only one layer of linkages between dependent and independent variables at a time (Gefen et al., 2000). In other words, SEM allows for the estimation and evaluation of an entire conceptual model rather than the mere testing of individual hypotheses (Shackman, 2013). As Diamantopoulos (1994) suggests, in SEM one no longer differentiates between dependent and independent variables but distinguishes between endogenous and exogenous latent variables, the former being variables explained by the relationships contained in the model and the latter being variables not explained by the postulated model. Furthermore, SEM not only enables one to construct unobservable variables measured by indicators (also known as manifest variables) but also allows the measurement error of the observed variables to be modelled explicitly. By overcoming the limitations of first-generation techniques, SEM enables researchers to “statistically test a priori substantive/theoretical and measurement assumptions against empirical data” (Chin, 1998a). Since SEM is used to test a theoretical framework with empirical data in this research, it is worth developing a sound understanding of the structure of theories.
A theory may contain three different aspects, as suggested by Bagozzi and Phillips (1982): (I) theoretical concepts, which are normally unobservable properties or attributes of a social unit or entity; (II) derived concepts, which are similar to theoretical concepts in being unobservable but (unlike theoretical concepts) must be tied to empirical concepts; and (III) empirical concepts, which “refer to properties or relations whose presence or absence in a given case can be inter-subjectively ascertained by direct observations” (Bagozzi and Phillips, 1982). Furthermore, three possible types of relationships linking the abovementioned concepts have been identified by Bagozzi (1984): (I) non-observational hypotheses, which relate two distinct sets of theoretical concepts; (II) theoretical definitions, which link derived and theoretical concepts; and (III) correspondence rules, which serve “to provide empirical significance to theoretical concepts” by linking theoretical or derived concepts to empirical ones.
Using the framework described above, a Structural Equation Model can now be constructed by converting empirical concepts into manifest variables and theoretical and derived concepts into unobservable (latent) variables, which are then linked by a set of hypotheses whose linkages can be tested for statistical significance. Three sets of equations describe the relationships between the different parameters of a model. The first set relates the manifest variables (indicators) of the exogenous variables (x) to their latent exogenous variables (ξ) and their associated measurement errors (δ). These indicator equations can be shown compactly in matrix format, where x depicts the vector of indicators of the exogenous variables, Λx represents the matrix of loadings corresponding to the exogenous variables, ξ corresponds to the vector of latent exogenous variables and δ depicts the vector of measurement errors corresponding to the indicators of the exogenous variables:
x = Λx ξ + δ (1)
The second set of equations describes the relationship between the indicators of the endogenous (i.e. dependent) variables (y), their associated measurement errors (ε) and the latent endogenous variables (η). As with the equations corresponding to the latent exogenous variables, this set can be illustrated in matrix format, where y represents the vector of indicators of the endogenous variables, Λy depicts the matrix of loadings corresponding to the endogenous variables, η corresponds to the vector of latent endogenous variables and ε represents the vector of measurement errors corresponding to the indicators of the endogenous variables:
y = Λy η + ε (2)
Finally, the last set formalises the interrelationship between the latent exogenous (ξ) and endogenous (η) variables. In matrix format, Γ depicts the matrix of exogenous path coefficients, B represents the matrix of endogenous path coefficients and ζ corresponds to the vector of random disturbance terms. It is important to distinguish the random disturbance term from the residual terms in the previous sets of equations: the random disturbance term does not reflect measurement error but simply captures errors in the equations.
η = B η + Γ ξ + ζ (3)
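The three matrix equations can be made concrete with a small simulation. The dimensions below (one latent variable on each side, three exogenous and two endogenous indicators) and all numeric values are illustrative assumptions, not taken from the model estimated in this report.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000  # sample size (illustrative)

# Equation (3): eta = B*eta + Gamma*xi + zeta, here with B = 0 (no eta -> eta paths)
xi = rng.normal(size=(n, 1))               # one latent exogenous variable
gamma = np.array([[0.7]])                  # exogenous path coefficient
zeta = rng.normal(scale=0.5, size=(n, 1))  # disturbance term (error in equation)
eta = xi @ gamma.T + zeta                  # latent endogenous variable

# Equation (1): x = Lambda_x * xi + delta
lambda_x = np.array([[0.9], [0.8], [0.7]])  # loadings of 3 exogenous indicators
delta = rng.normal(scale=0.3, size=(n, 3))  # measurement errors
x = xi @ lambda_x.T + delta

# Equation (2): y = Lambda_y * eta + epsilon
lambda_y = np.array([[0.9], [0.85]])          # loadings of 2 endogenous indicators
epsilon = rng.normal(scale=0.3, size=(n, 2))
y = eta @ lambda_y.T + epsilon

print(x.shape, y.shape)  # (1000, 3) (1000, 2)
```

Because the indicators are driven by the same latent variable plus noise, each x column correlates strongly with ξ, which is exactly the reflective-indicator property discussed below.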
Figure 15 graphically presents the model described by the above set of equations using a path diagram (also known as arrow scheme).
Taking a closer look at the above sets of equations combined (equations 1 and 2 as measurement equations and equation 3 as a theoretical equation representing non-observational hypotheses), it is now possible to express a structural model based on the correspondence rules defined by Bagozzi and Phillips (1982). One very important feature of SEM is the distinction made between reflective indicators (indicators that can be expressed as a function of their associated latent variables) and formative indicators (indicators that influence their associated latent variables), following Bollen and Lennox (1991). The major difference between reflective and formative indicators in the analysis of a SEM is that reflective indicators should theoretically be highly correlated, as they all depend on the same unobservable variable, whereas the formative indicators of the same construct could have zero correlation with one another (Hulland, 1999).
The parameters of a Structural Equation Model are generally estimated using either a covariance-based or a variance-based approach. In the last few decades the covariance-based estimation approach has received such prominence that, to many social science researchers, it has become synonymous with the term SEM (Chin, 1998b). Covariance-based estimation minimises the difference between the sample covariances and those predicted by the structural model, whereas the variance-based approach attempts to maximise the variance of the dependent variables explained by the independent ones. Explaining the mechanics of the variance- and covariance-based approaches is beyond the scope of this research; instead, we set out the reasons for not choosing the covariance-based approach and for using a variance-based approach originally developed by Herman Wold (1975) under the name Nonlinear Iterative Partial Least Squares (NIPALS) and extended by Lohmöller (1989) into what is known as Partial Least Squares (PLS) regression and path modelling.
One driving factor in the decision to use a variance-based estimation approach (PLS) rather than a covariance-based approach was the sample size requirement: Kline (2011) suggests that covariance-based estimation approaches for SEM (such as LISREL) typically require samples of at least 200 to 250, with even larger samples for more complex estimation techniques. More importantly, the distributional assumptions of covariance-based estimation were critical to the choice of method. A major underlying assumption of covariance-based estimation of SEM parameters is the multivariate normal distribution of the corresponding indicators (Fornell and Bookstein, 1982; Diamantopoulos, 1994), but it is commonly reported in the marketing literature that satisfaction measures are skewed (Peterson and Wilson, 1992) and therefore fail to meet this multivariate normality assumption. In contrast, Cassel et al. (1999) illustrated, using Monte Carlo analysis, the robustness of PLS with regard to several inadequacies such as skewness or multi-collinearity of the indicators and misspecification of the structural model, implying that the latent variable scores conform closely to the true values.
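The skewness argument above is easy to check in practice. The sketch below computes sample skewness for synthetic 5-point satisfaction scores piled up at the top of the scale; the data and probabilities are purely illustrative, not survey results from this research.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic 1-5 satisfaction scores, concentrated at the top end (illustrative data)
scores = rng.choice([1, 2, 3, 4, 5], size=2000,
                    p=[0.05, 0.05, 0.15, 0.35, 0.40])

# Sample skewness: E[(X - mu)^3] / sigma^3
mu, sigma = scores.mean(), scores.std()
skewness = np.mean((scores - mu) ** 3) / sigma ** 3
print(round(skewness, 2))  # negative: long left tail, typical of satisfaction measures
```

A clearly non-zero skewness like this is the kind of departure from normality that motivates choosing PLS over covariance-based estimation.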
As mentioned in the data section, the constructs of the model are chosen to test the appropriateness of the KPIs in addressing the underlying principal determinants of Loyalty, which ought to be the ultimate objective of customer service and, within that, of the customer complaint resolution division, while meeting the requirements identified by “Which?”, “CF” and Ofgem in order to bolster E.ON’s stance in their rankings. The following sections describe the latent constructs (i.e. determinants of loyalty in line with the authorities’ requirements) and the hypotheses to be tested.
Accessibility: Accessibility of the complaints handling process itself is one of the categories identified by “Which?”, with a weight of 25%. Based on the debrief document published by “Which?”, the complaints handling document and website, as well as the convenience of obtaining correct contact details, are the areas in which the utility suppliers’ performance is assessed. In addition, the ability of complainants to track the status of their complaints conveniently and the proportion of customers reporting receiving key information when registering a complaint are further assessment variables under the complaints process category. Furthermore, analysis of the supplier’s customer complaints report and website is another assessment area identified by “Which?”, with a weight of 10%. Despite the importance of information availability and transparency from both the “Which?” and literature perspectives, there appears to be no KPI directly related to the accessibility of relevant information. Cho et al. (2011) refer to “lack of access to information” in the process of complaint resolution as a primary cause of dissatisfaction amongst complainants.
Time: Complainants’ expectations of how long a company should take to resolve a complaint, and the potential discrepancy between the expected and the actual resolution time, are referred to as primary determinants of delight and disappointment within the customer service literature. Tax et al. (1998), Estelami (2000), Bitner et al. (1990), and Mohr and Bitner (1995) found that the time taken by a firm to resolve a complaint positively impacts satisfaction, and TARP (1986) shows a 20% gain in customer loyalty for prompt versus slow resolution of complaints. Among the operational performance KPIs, three are directly related to promptness of complaint resolution (same day resolution, 49 day resolution and the age of complaints) and two are indirectly related (re-opened complaints and multiple complaints). The importance of promptness is reflected in the report published by “Which?” as well, which recommends providing expected timeframes for resolution at each and every stage of the complaints handling process.
Responsiveness: Responsiveness to complaints refers to the quality of interaction between the complainants and the firm. Sangareddy et al. (2009) refer to elements such as politeness, honesty and explanation as important aspects of the interactional justice perceived by complainants. In another study, Goodwin and Ross (1992) cite making a meaningful effort to resolve a complaint as among the primary factors influencing complainants’ perception of interactional justice. The implications of effort in this context are rather generic, but examples such as “taking the responsibility for a failure” and “a genuine attempt to resolve a failure” are widely mentioned in the literature (Folkes, 1984). Given the importance of interactional justice and the significant impact of “making a meaningful effort” on perceived interactional justice, it is crucial for businesses to understand that this effort needs to be communicated before it can be perceived. In other words, businesses should ensure that their customers can observe the effort they make in resolving complaints; taking responsibility for a failure and maintaining an active level of communication can help an organisation manifest its effort in the resolution of complaints. Furthermore, “Which?” considers the average number of contacts required to resolve a complaint as an indicator of the effectiveness of the complaint resolution process.
Fairness: Consumers’ perception of fairness is considered a primary determinant of satisfaction in the marketing literature (Oliver and Swan, 1989). In the complaint resolution context, it is possible to distinguish two implications of complainants’ perception of fairness, namely distributive and procedural justice. Distributive justice refers to a fair and tangible outcome of a dispute; based on social exchange theory, the common sense argument is that the complainant’s perception of the fairness of the outcome of a complaint is determined by the degree to which the complainant’s expectation of a fair (i.e. unbiased and impartial) outcome coincides with the actual outcome (Greenberg, 1990). The second implication, however, is not determined by the potential gap between the complainant’s perception of a fair outcome and the actual outcome, but is directly influenced by the elements of fairness in the procedures, policies and guidelines followed in the complaint resolution process (Sangareddy, Jha, Ye and Desouza, 2009). Consistency, unbiasedness and impartial representation of complainants’ interests are considered elements of a fair procedure, assuring complainants that the service provider follows standard, ethical and non-discriminatory procedures to arrive at an outcome. Based on the report published by “Which?”, the “discrepancy between proportion of complaints considered resolved by supplier and proportion of complaints considered resolved by customer” is an element of interest in assessing the complaints process, corresponding to the definition of fairness from the distributive justice point of view.
Also, with regard to the KPIs under study, the percentage of upheld cases, QNSB, QA level and overall satisfaction can be understood as directly linked to perceived fairness, and the percentage of customers staying (or leaving) after resolution as indirectly related to it.
Satisfaction with the customer complaint resolution: The role of any customer service (and customer complaint resolution) unit is to improve overall customer satisfaction. Overall customer satisfaction is itself a function of various elements, such as the product (or service) features, supplementary service components and the commitment of the business to fulfilling the promises made in satisfying its customers’ needs and desires. Viswanathan, Rosa and Ruth (2010) refer to after sales customer service and complaint resolution management as principal factors influencing overall customer satisfaction. In addition, based on the comments of the publishers of the “Which?” report regarding their intentions, the project is aimed at helping energy suppliers improve the level of service they provide to their customers, thereby positively influencing customer satisfaction. With regard to the KPIs, NPS is directly, and the percentage of customers staying (or leaving) after resolution indirectly, related to the satisfaction level.
Loyalty: The classic definitions of loyalty, such as Brown (1952) and Kuehn (1962), are mainly focused on the repurchase behaviour of customers. Engel et al. (1982) modified the definition of loyalty, taking into account the attitudinal component that was missing from the classic definition. For the purpose of this research, customer loyalty is defined as a customer’s favourable attitude toward a utility supplier resulting in repeat buying (staying) behaviour. Loyalty is thought to be greatly influenced by satisfaction, as Szymanski and Hise (2000) imply, although the degree to which this relationship holds could vary depending on the competitive structure of the industry. The main purpose of this research is to assess the appropriateness of the KPIs in addressing the ultimate objective of the customer service unit (and the complaint resolution unit in particular): increasing customer loyalty, which translates into financial gain for the business as an overall measure. With respect to the KPIs, the percentage of customers staying (or leaving) after resolution is directly related to customer loyalty.
Inertia: Inertia is defined by Campbell (1997), as a condition where “repeat purchases occur on the basis of the situational cues rather than on strong partner commitment” (p. 2). Beatty and Smith (1987) suggest that, the likelihood of indulging in the same purchasing behaviour over time is higher for customers with higher levels of inertia, so they will be less likely to search for alternatives. Thus, the possibility of inertia mitigating the impact of satisfaction on loyalty remains valid. Therefore, Inertia as a latent variable is built into our model as a mediating (moderating) variable. Besides, Inertia could be indirectly related to the KPI corresponding to the percentage of customers staying (or leaving) after the resolution.
Trust: Morgan and Hunt (1994), define trust as confidence in the reliability and integrity of exchange partners. Doney and Cannon (1997) suggest that, trust denotes perceived benevolence and credibility of a business. Trust may be developed as a result of past experiences or marketing activities such as brand building or relationship management. Similar to inertia, trust could have mediating (moderating) effect on the relationship between satisfaction and loyalty.
Hypothesis No. 1) Accessibility
Complainants’ perception of Accessibility causes their opinion about Responsiveness, Fairness, and Promptness to change
Complainants’ perception of Accessibility impacts their trust in the company and their Inertia to change their supplier
And the greater complainants’ perception of the Accessibility, the higher their satisfaction with the complaint resolution Process
Hypothesis No. 2) Promptness
Complainants’ perception of Promptness causes their opinion about Responsiveness and Fairness to change
Complainants’ perception of Promptness in complaint resolution impacts their Trust in the company and their Inertia to change their supplier
And the greater the complainants’ perception of Promptness, the higher their satisfaction with the complaint resolution process
Hypothesis No. 3) Responsiveness
Complainants’ perception of Responsiveness causes their opinion about Fairness to change
Complainants’ Perception of Responsiveness in complaint resolution process impacts their Trust in the company and their Inertia to change their supplier
And the greater the complainants’ perception of Responsiveness, the higher their satisfaction with the complaint resolution process
Hypothesis No. 4) Fairness
Complainants’ perception of Fairness in the complaint resolution process impacts their Trust in the company and their Inertia to change their supplier
And the greater the complainants’ perception of fairness, the higher their satisfaction with the complaint resolution process
Hypothesis No. 5) Satisfaction
The greater the Complainants’ satisfaction with complaint resolution process, the higher their loyalty to the firm
Hypothesis No. 6) Trust & Inertia
Trust and Inertia moderate the impact of Satisfaction on Loyalty
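The six hypotheses above jointly define the paths of the structural (inner) model. As a compact summary, they can be written as a simple specification of hypothesised source-to-target paths plus the moderated link; this is only a restatement of the hypothesis list, not part of the estimation itself.

```python
# Hypothesised inner-model paths (source -> targets), as listed in Hypotheses 1-6
paths = {
    "Accessibility": ["Responsiveness", "Fairness", "Promptness",
                      "Trust", "Inertia", "Satisfaction"],
    "Promptness": ["Responsiveness", "Fairness", "Trust", "Inertia", "Satisfaction"],
    "Responsiveness": ["Fairness", "Trust", "Inertia", "Satisfaction"],
    "Fairness": ["Trust", "Inertia", "Satisfaction"],
    "Satisfaction": ["Loyalty"],
}

# Hypothesis 6: Trust and Inertia moderate the Satisfaction -> Loyalty path
moderators = {("Satisfaction", "Loyalty"): ["Trust", "Inertia"]}

n_direct = sum(len(targets) for targets in paths.values())
print(n_direct)  # 19 direct paths to be tested
```

The count of nineteen direct paths matches the number of non-moderation rows later reported in the PLS results table.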
5.2 AHP & SMART Methodology
In an organisation, goals support the allocation of resources, guide the organisation’s efforts and focus the organisation on success. KPIs are derived from an organisation’s goals and are used to measure progress towards their achievement. However, each indicator should be based on criteria that are suitable for further analysis. From the literature review, it is evident that this set of criteria is often referred to as SMART (Shahin et al., 2007): (I) Specific: goals should be specific, as it is easier to hold someone accountable for their achievement. (II) Measurable: goals should be measurable, with either qualitative or quantitative measures assessed against a standard of expectation and a standard of performance. (III) Attainable: goals should be reasonable and attainable. (IV) Realistic: goals should be realistic as well as attainable, because realistic goals help in examining the availability of resources and selecting KPIs. (V) Time-sensitive: goals should have a time frame for completion, because this allows the analyst to monitor progress and measure success along the path to reaching the goal.
In the thirty years since the publication of the first papers, books and software, AHP has been used by decision makers all over the world to model problems in more than thirty diverse areas, including public policy, strategic planning and resource allocation. It is used to rank, select and benchmark a wide variety of decision alternatives. AHP is based on three principles: decomposition, comparative judgement and synthesis of priorities. It involves three steps (Shahin et al., 2007): “(I) Structuring top down. Specify an overall goal, followed by the criteria and alternatives that have an impact on the goal. (II) Comparison analysis. After structuring the hierarchy, establish ratio priorities for each node in the hierarchy. This is done via pairwise comparison of the child nodes below a parent node, based on the importance or contribution of each child node to the parent node. After sufficient comparisons for a node have been performed, the principal eigenvector of the comparison matrix is standardised so that it sums to one and becomes the ratio measure of the relative importance of each child node; these are called the local weights. (III) Aggregating local weights into a composite priority. This final step of AHP is done via the principle of hierarchic composition, which multiplies local weights by the product of all higher-level priorities. Within the hierarchy, the process transforms local weights into global weights that measure the importance of each child node in the total hierarchy.”
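Step (II) above can be sketched numerically: the principal eigenvector of a reciprocal pairwise comparison matrix, normalised to sum to one, gives the local weights. The 3×3 matrix below uses illustrative Saaty-scale judgements, not comparisons from this report, and also computes Saaty's consistency index as a sanity check.

```python
import numpy as np

# Illustrative 3x3 pairwise comparison matrix (Saaty scale, reciprocal):
# criterion 1 judged 3x as important as criterion 2 and 5x as important as criterion 3
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
principal = eigvecs[:, np.argmax(eigvals.real)].real
local_weights = principal / principal.sum()   # standardised so the weights sum to one

# Saaty consistency index: CI = (lambda_max - n) / (n - 1); small CI means consistent judgements
n = A.shape[0]
lambda_max = eigvals.real.max()
CI = (lambda_max - n) / (n - 1)
print(np.round(local_weights, 3), round(CI, 3))
```

Normalising by the eigenvector's own sum makes the weights positive regardless of the arbitrary sign returned by the eigen-decomposition.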
An integrative approach combining AHP and SMART is designed to determine which KPIs are consistent with the SMART criteria. The following figure shows the use of the SMART criteria in prioritising KPIs. The approach involves the following steps (Shahin et al., 2007): (I) define and list all KPIs; (II) build the AHP hierarchy, in which the goal is to prioritise the alternatives with respect to the SMART criteria; (III) perform pairwise comparisons between the alternative KPIs; (IV) calculate local and global weights; and (V) select the KPIs that top the ranking based on the SMART criteria.
Figure 17 SMART criteria in prioritising KPIs
6. Analysis
6.1 PLS Results
Figure 18 portrays the model that is analysed using SmartPLS (Ringle, Wende, and Will, 2005), which measures the psychometric properties of the model and takes the moderating latent constructs into consideration in estimating the structural model parameters. The parameters of the model converged in fewer than 25 iterations, indicating a stable solution for the structural model against which the hypotheses can be assessed. To confirm the reliability of the measures used for the various latent variables, the composite reliability, the Average Variance Extracted (AVE) by each latent variable (based on Fornell and Larcker’s (1981) approach) and the Cronbach’s alpha of each latent variable (based on Nunnally and Bernstein’s (1994) approach) are calculated and presented in Table 3. Path coefficients (standardised regression weights) and the T-statistics corresponding to the hypothesised effects under study are presented in Table 5. The T-statistics are calculated using bootstrapping for the latent variables and Sobel’s test for the significance of mediation for the moderating constructs.
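The Sobel test mentioned above assesses an indirect effect a·b by dividing it by its approximate standard error. A minimal sketch, with illustrative path estimates and standard errors (not figures from this report):

```python
import math

def sobel_z(a, s_a, b, s_b):
    """Sobel z-statistic for an indirect effect a*b,
    where s_a and s_b are the standard errors of paths a and b."""
    return (a * b) / math.sqrt(b**2 * s_a**2 + a**2 * s_b**2)

# Illustrative path estimates (hypothetical numbers for demonstration only)
z = sobel_z(a=0.40, s_a=0.10, b=0.50, s_b=0.12)
print(round(z, 3))  # |z| > 1.96 would suggest a significant indirect effect at the 5% level
```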
Table 3 PLS measurement reliability figures
Constructs | Composite Reliability | AVE | Cronbach’s Alpha
Accessibility | 0.9007 | 0.6942 | 0.8539
Promptness | 0.7177 | 0.6973 | 0.7121
Fairness | 0.9185 | 0.7907 | 0.8649
Responsiveness | 0.9424 | 0.7671 | 0.9221
Satisfaction | 0.9137 | 0.7030 | 0.8662
Trust | 0.8726 | 0.6960 | 0.7587
Inertia | 0.7667 | 0.6268 | 0.6927
Loyalty | 0.9630 | 0.9286 | 0.9233
As illustrated in Table 3, the composite reliability, average variance extracted and Cronbach’s alpha figures are all satisfactory, as the computed values for all constructs are higher than 0.7, 0.5 and 0.7 respectively. Moreover, to confirm that there is adequate discriminant validity amongst the constructs of the model, the correlations amongst the constructs are presented in Table 4. Note that all of the values reported in Table 4 are correlations between constructs, except for the diagonal elements, where the correlations are replaced by the square root of the average variance extracted by each latent variable, in order to investigate whether the share of each latent variable’s variance extracted from its own indicators is higher than the variance it shares with the other constructs.
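The reliability figures in Table 3 follow directly from the standardised indicator loadings. A sketch of the two computations, assuming standardised loadings (so each error variance is 1 − λ²); the loadings used here are illustrative, not E.ON survey estimates:

```python
import numpy as np

def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances),
    assuming standardised loadings so each error variance is 1 - loading^2."""
    lam = np.asarray(loadings)
    errors = 1 - lam**2
    return lam.sum()**2 / (lam.sum()**2 + errors.sum())

def ave(loadings):
    """Average Variance Extracted: mean of the squared standardised loadings."""
    lam = np.asarray(loadings)
    return np.mean(lam**2)

# Illustrative loadings for a four-indicator construct
lam = [0.85, 0.82, 0.80, 0.86]
print(round(composite_reliability(lam), 3), round(ave(lam), 3))
```

Both values clear the usual thresholds (CR > 0.7, AVE > 0.5) applied to Table 3.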
Table 4 Inter-construct correlations (diagonal elements: square root of AVE)

 | Access | Prompt | Fair | Respons | Satisfac | Trust | Inertia | Loyalty
Access | 0.833187 | | | | | | |
Prompt | 0.4717 | 0.835045 | | | | | |
Fair | 0.5328 | 0.6724 | 0.889213 | | | | |
Respons | 0.5554 | 0.801 | 0.6898 | 0.875842 | | | |
Satisfac | 0.5744 | 0.7128 | 0.854 | 0.7975 | 0.838451 | | |
Trust | 0.5921 | 0.7138 | 0.843 | 0.8027 | 0.8383 | 0.834266 | |
Inertia | 0.4167 | 0.4596 | 0.5883 | 0.5423 | 0.4884 | 0.506 | 0.791707 |
Loyalty | 0.5405 | 0.6461 | 0.7932 | 0.7657 | 0.8538 | 0.8246 | 0.5882 | 0.9636
As illustrated in Table 4, the square root of the AVE of each construct is greater than its correlations with the other latent variables, except for the correlation between satisfaction and loyalty, where the correlation is slightly higher than the square root of the AVE of the satisfaction variable (0.8538 ≥ 0.838451). This can be neglected, first because of the small magnitude of the difference and also because of the nature of the relationship, as satisfaction is the only latent construct explaining loyalty with a direct effect. Thus the overall results of Tables 3 and 4 confirm the fitness of the measurement model.
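The Fornell–Larcker comparison just applied can be reproduced directly from the reported figures. The sketch below uses the Satisfaction AVE from Table 3 and the Satisfaction–Loyalty correlation from Table 4 to flag the single (marginal) violation noted above:

```python
import math

# Figures from Tables 3 and 4 of this report
sqrt_ave_satisfaction = math.sqrt(0.7030)   # square root of Satisfaction's AVE (diagonal of Table 4)
corr_satisfaction_loyalty = 0.8538          # Satisfaction-Loyalty correlation (Table 4)

# Fornell-Larcker criterion: sqrt(AVE) should exceed every correlation with other constructs
violation = corr_satisfaction_loyalty > sqrt_ave_satisfaction
print(violation, round(corr_satisfaction_loyalty - sqrt_ave_satisfaction, 4))
```

The difference is on the order of 0.015, which supports the argument that the violation is negligible in magnitude.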
Table 5 PLS path-modeling results

Hypothesis | Impact | Path Coefficient | T-Statistics | Decision
A | Accessibility → Responsiveness | 0.226 | 3.223*** | Do Not Reject
A | Accessibility → Fairness | 0.202 | 2.59*** | Do Not Reject
A | Accessibility → Promptness | 0.476 | 5.308*** | Do Not Reject
A | Accessibility → Trust | 0.199 | 1.534 | Reject
A | Accessibility → Inertia | 0.168 | 1.05 | Reject
A | Accessibility → Satisfaction | 0.067 | 1.213 | Reject
P | Promptness → Responsiveness | 0.693 | 11.066*** | Do Not Reject
P | Promptness → Fairness | 0.32 | 3.233*** | Do Not Reject
P | Promptness → Trust | 0.017 | 0.145 | Reject
P | Promptness → Inertia | 0.034 | 0.535 | Reject
P | Promptness → Satisfaction | 0.011 | 0.157 | Reject
R | Responsiveness → Fairness | 0.321 | 2.899*** | Do Not Reject
R | Responsiveness → Trust | 0.336 | 3.206*** | Do Not Reject
R | Responsiveness → Inertia | 0.467 | 1.635* | Do Not Reject
R | Responsiveness → Satisfaction | 0.336 | 3.361*** | Do Not Reject
F | Fairness → Trust | 0.521 | 4.660*** | Do Not Reject
F | Fairness → Inertia | 0.402 | 3.173*** | Do Not Reject
F | Fairness → Satisfaction | 0.557 | 5.317*** | Do Not Reject
S | Satisfaction → Loyalty | 0.919 | 55.138*** | Do Not Reject
T&I | Trust → Moderation | – | 1.713* | Do Not Reject
T&I | Inertia → Moderation | – | 3.6358*** | Do Not Reject
T&I | Trust & Inertia → Moderation | – | 1.948* | Do Not Reject

* significant at the 90% confidence level; *** significant at the 99% confidence level
The results for Hypothesis (A) indicate that a higher level of perceived accessibility to information (pre and post complaint) positively impacts complainants’ perception of the responsiveness, fairness and promptness of the utility supplier with respect to complaint resolution; however, it does not directly change complainants’ satisfaction. They also suggest that complainants’ trust in the utility provider, and their resistance to changing their supplier, are not affected by past experience of accessibility to the selected set of information.
The results for Hypothesis (P) show that a better judgement of the perceived promptness of the utility supplier in acting upon a raised complaint positively impacts complainants’ view of the responsiveness and fairness of the supplier’s handling of complaints; however, similar to accessibility, a direct causal relation between promptness and satisfaction could not be detected. It is also evident that complainants’ trust in a utility supplier and their inertia are not directly affected by past experience of their supplier’s promptness in handling complaints.
The results for Hypothesis (R) support a positive and statistically significant (direct) causal effect of complainants’ opinion of the firm’s responsiveness (level and quality) on their satisfaction with the complaint resolution process. They also support the idea that customers’ trust in a utility supplier and their resistance to changing their supplier are positively affected by past experience of their utility company’s responsiveness (level and quality) to an issue raised.
The results for Hypothesis (F) show that a higher level of perceived fairness (both procedural and distributive) drives up complainants’ satisfaction with the complaint resolution, and that past experience of a utility supplier’s fairness in resolving complaints positively impacts complainants’ trust in their current provider and their inertia.
The results for Hypothesis (S) clearly support the claim that a higher level of satisfaction with the complaint resolution process directly improves complainants’ loyalty to their utility provider, in spite of their having faced a problem for which they had to raise a complaint.
The last hypothesis (T&I) theorises that trust and inertia moderate the impact of satisfaction on loyalty. To understand this hypothesis better, consider the inner model’s linkage between complainants’ satisfaction and loyalty, with trust and inertia moderating this relationship, which can be presented in a standardised format as:

Loyalty = β₁·Satisfaction + β₂·Trust + β₃·Inertia + β₄·(Satisfaction × Trust) + β₅·(Satisfaction × Inertia) + ζ

Here β₁ reflects the path coefficient (causal effect) of satisfaction on loyalty (Hypothesis S), and β₂ and β₃ show the path coefficients of trust and inertia on loyalty respectively. The items of interest, however, are β₄ and β₅, which posit that trust (inertia) moderates the impact of complainant satisfaction on loyalty in such a way that the effect is lower at higher levels of trust (inertia), and trust and inertia together, as these two moderating effects could complement each other. Note that, unlike trust, inertia’s effect is only analysed as a one-tailed effect, as inertia can only moderate the impact of satisfaction on loyalty, whereas trust could both moderate and modify the impact (trust and distrust). The moderating effect of trust and inertia can also be presented formally by partially differentiating the equation above with respect to satisfaction:

∂Loyalty/∂Satisfaction = β₁ + β₄·Trust + β₅·Inertia

In the above equation, ceteris paribus, if β₄ and β₅ are negative, they imply that trust and inertia moderate the impact of satisfaction on loyalty, such that the impact is lower at higher levels of trust and inertia. Estimated coefficients of −0.15 and −0.087 for β₅ (the inertia interaction) and β₄ (the trust interaction) respectively imply that our a priori hypothesis that trust and inertia moderate the impact of satisfaction on loyalty cannot be rejected.
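The partial derivative above implies that the marginal effect of satisfaction on loyalty shrinks as trust and inertia rise. A small sketch evaluating that marginal effect, using the direct Satisfaction → Loyalty path of 0.919 reported in Table 5 together with the two interaction coefficients; the assignment of −0.15 to the inertia interaction and −0.087 to the trust interaction follows the ordering in the text and should be read as an assumption, and the evaluation points are arbitrary standardised scores:

```python
def marginal_effect(trust, inertia,
                    b1=0.919,    # Satisfaction -> Loyalty path coefficient (Table 5)
                    b4=-0.087,   # Satisfaction x Trust interaction (as read from the text)
                    b5=-0.15):   # Satisfaction x Inertia interaction (as read from the text)
    """d(Loyalty)/d(Satisfaction) = b1 + b4*Trust + b5*Inertia, standardised scores."""
    return b1 + b4 * trust + b5 * inertia

# One standard deviation above vs below the mean on both moderators
high = marginal_effect(trust=1.0, inertia=1.0)    # 0.919 - 0.087 - 0.15 = 0.682
low = marginal_effect(trust=-1.0, inertia=-1.0)   # 0.919 + 0.087 + 0.15 = 1.156
print(round(high, 3), round(low, 3))
```

The smaller marginal effect at high trust and inertia is exactly the moderation pattern the hypothesis describes: satisfied-and-trusting (or inert) complainants gain less additional loyalty per unit of satisfaction.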
Figure 19 illustrates the final model based on the analysis presented in Table 5 (refer to Appendix 12.2, Figure 23, for a detailed graphical illustration).
Figure 19 PLS path-modeling result (graphical illustration)
Now that the relationships between the inner latent constructs and the variable of interest have been modelled, we assess intuitively whether the outer constructs and the variables of interest are addressed by the KPIs. Moreover, co-integration and correlation analyses are performed on selected existing KPIs for which we expected to detect relationships (refer to Appendix 12.3 for detailed information).
‘% of upheld cases’, ‘QA level’, ‘QA controls’ and ‘QNSB’ could address fairness; ‘same day resolution’ and ‘49 day resolution’ as well as the ‘age of complaints’ could highlight promptness; ‘reopened complaints’ and ‘multiple complaints’ could relate to responsiveness; and ‘NPS’ together with ‘reduction of complaints in prioritised categories’ could be seen to address satisfaction, with ‘customers staying or leaving’ addressing loyalty. However, none of the KPIs could be related to accessibility, and this indicates a gap in performance measurement within E.ON.
6.1.1 Implications
Customer complaint resolution strategies designed to improve customer satisfaction may not translate directly into customer loyalty in the utility industry. Since customer satisfaction is central to the marketing concept in both penetration and retention strategies, it is necessary to acquire a richer understanding of the key drivers of satisfaction in the utility industry with respect to customer complaint resolution, to redefine processes, and to redesign KPIs so that complainants’ concerns are appropriately addressed and indicators are in place to closely watch for, and alert on, deviations (deficiencies) from the ideal/targeted levels beyond the tolerance levels. To this end, four factors that significantly drive complainant satisfaction are identified (based on the marketing literature and the Harris market research report, 2012) and their relationship with the KPIs adopted by E.ON is assessed.
Table 6 Intuitive Gap Analysis
Variables | KPIs
Accessibility | *abandon rate of calls; *customer accessibility rating
Time/Promptness | *time weighted ratio; same day resolution; 49 day resolution; age of complaints
Responsiveness | *average queue of incoming calls; reopened complaints; multiple/repeat complaints; % planned Ombudsman cases; % complaints escalated from step 1 to step 2
Satisfaction | reduction of complaints in prioritised categories; NPS
Fairness | % of upheld cases; QNSB; QA level; QA controls
Internal Measures | *% of service requests with all data fields complete; OS cases – E.ON % in industry; internal NPS
Not Applicable | COS of escalation to director’s office; COS hand-off from frontline to complaints team

* KPIs suggested
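The gap analysis in Table 6 amounts to a simple coverage check: map each driver of satisfaction to the existing KPIs that address it and flag any driver left unmapped. A minimal sketch (the KPI names and the mapping are taken from the intuitive analysis above):

```python
# Existing KPIs mapped to the satisfaction drivers they address (per Table 6).
coverage = {
    "Accessibility": [],  # no existing KPI addresses this driver
    "Time/Promptness": ["same day resolution", "49 day resolution", "age of complaints"],
    "Responsiveness": ["reopened complaints", "multiple/repeat complaints",
                       "% planned Ombudsman cases",
                       "% complaints escalated from step 1 to step 2"],
    "Satisfaction": ["reduction of complaints in prioritised categories", "NPS"],
    "Fairness": ["% of upheld cases", "QNSB", "QA level", "QA controls"],
}

# Any driver with no KPI attached is a measurement gap.
gaps = [driver for driver, kpis in coverage.items() if not kpis]
print(gaps)  # ['Accessibility'] -- the gap identified above
```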
6.2 AHP & SMART Results
The integrated AHP and SMART approach is applied following the guidelines given in Figure 17 (Section 5.2).
Step 1: Define and list all of the KPIs
The list of KPIs to be used for the analysis is:
Time Weighted Ratio (TWR)
Abandon rate of calls
Customer Accessibility Rating (CAR)
% of service requests with all data fields complete
Average queue of incoming calls
Step 2: Build AHP hierarchy based on SMART criteria
The AHP hierarchy is built on the SMART criteria and includes the five KPIs listed above. Similar to Figure 18, the hierarchy structure for our case is illustrated below:
Figure 20 AHP hierarchy based on SMART criteria
Step 3: Pairwise comparison
A nine-point scale, as depicted in Table 7, is used for pairwise comparison, and the relative importance of the nodes at each level is determined with respect to the nodes in the preceding level. Subjective judgement is used for defining the weights in this step. The calculations are presented in Tables 8 to 11.
Table 7 Nine point scale for AHP analysis (Saaty, 1994)
Ranking | Definition
1 | Equal importance
2 | Weak
3 | Moderate importance
4 | Moderate plus
5 | Strong importance
6 | Strong plus
7 | Very strong importance
8 | Very, very strong
9 | Extreme importance
Table 8 Pairwise comparisons of KPIs – Utility industry
(Abbreviations: TWR = time weighted ratio; CAR = customer accessibility rating; ARC = abandon rate of calls; SDF = % of service requests with all data fields complete; AQT = average queue time of incoming calls. In each block, columns follow the same KPI order as the rows.)

Specific | TWR | CAR | ARC | SDF | AQT
TWR | 1.00 | 2.00 | 2.00 | 2.00 | 2.00
CAR | 0.50 | 1.00 | 2.00 | 2.00 | 1.00
ARC | 0.50 | 0.50 | 1.00 | 0.20 | 0.33
SDF | 0.50 | 0.50 | 5.00 | 1.00 | 0.20
AQT | 0.50 | 1.00 | 3.03 | 5.00 | 1.00
Total | 3.00 | 5.00 | 13.03 | 10.20 | 4.53

Measureable | TWR | ARC | CAR | SDF | AQT
TWR | 1.00 | 1.00 | 2.00 | 2.00 | 1.00
ARC | 1.00 | 1.00 | 2.00 | 3.00 | 1.00
CAR | 0.50 | 0.50 | 1.00 | 0.50 | 0.33
SDF | 0.50 | 0.33 | 2.00 | 1.00 | 0.20
AQT | 1.00 | 1.00 | 5.00 | 1.00 | 1.00
Total | 4.00 | 3.83 | 12.00 | 7.50 | 3.53

Attainable | TWR | CAR | ARC | SDF | AQT
TWR | 1.00 | 2.00 | 2.00 | 3.00 | 2.00
CAR | 0.50 | 1.00 | 2.00 | 0.50 | 1.00
ARC | 0.50 | 0.50 | 1.00 | 2.00 | 2.00
SDF | 0.33 | 2.00 | 0.50 | 1.00 | 2.00
AQT | 0.50 | 1.00 | 0.50 | 0.50 | 1.00
Total | 2.83 | 6.50 | 6.00 | 7.00 | 8.00

Realistic | TWR | CAR | ARC | SDF | AQT
TWR | 1.00 | 2.00 | 4.00 | 2.00 | 2.00
CAR | 0.50 | 1.00 | 0.50 | 0.33 | 1.00
ARC | 0.25 | 2.00 | 1.00 | 2.00 | 0.50
SDF | 0.50 | 3.03 | 0.50 | 1.00 | 0.33
AQT | 0.50 | 1.00 | 2.00 | 3.03 | 1.00
Total | 2.75 | 9.03 | 8.00 | 8.36 | 4.83

Time-sensitive | TWR | CAR | ARC | SDF | AQT
TWR | 1.00 | 3.00 | 2.00 | 5.00 | 3.00
CAR | 0.33 | 1.00 | 1.00 | 2.00 | 1.00
ARC | 0.50 | 1.00 | 1.00 | 0.50 | 0.33
SDF | 0.20 | 2.00 | 2.00 | 1.00 | 0.33
AQT | 0.33 | 3.03 | 3.03 | 3.03 | 1.00
Total | 2.37 | 10.03 | 9.03 | 11.53 | 5.66
Table 9 Normalized pairwise comparisons of KPIs – Utility industry
(Abbreviations as in Table 8; columns follow the same KPI order as the rows. The final column is the row total, i.e. the local weight.)

Specific | TWR | CAR | ARC | SDF | AQT | Total
TWR | 0.33 | 0.40 | 0.15 | 0.20 | 0.44 | 1.52
CAR | 0.17 | 0.20 | 0.15 | 0.20 | 0.22 | 0.94
ARC | 0.17 | 0.10 | 0.08 | 0.02 | 0.07 | 0.44
SDF | 0.17 | 0.10 | 0.38 | 0.10 | 0.04 | 0.79
AQT | 0.17 | 0.20 | 0.23 | 0.49 | 0.22 | 1.31

Measureable | TWR | CAR | ARC | SDF | AQT | Total
TWR | 0.25 | 0.26 | 0.17 | 0.27 | 0.28 | 1.23
CAR | 0.25 | 0.26 | 0.17 | 0.40 | 0.28 | 1.36
ARC | 0.13 | 0.13 | 0.08 | 0.07 | 0.09 | 0.50
SDF | 0.13 | 0.09 | 0.17 | 0.13 | 0.06 | 0.57
AQT | 0.25 | 0.26 | 0.42 | 0.13 | 0.28 | 1.34

Attainable | TWR | CAR | ARC | SDF | AQT | Total
TWR | 0.35 | 0.31 | 0.33 | 0.43 | 0.25 | 1.67
CAR | 0.18 | 0.15 | 0.33 | 0.07 | 0.13 | 0.86
ARC | 0.18 | 0.08 | 0.17 | 0.29 | 0.25 | 0.96
SDF | 0.12 | 0.31 | 0.08 | 0.14 | 0.25 | 0.90
AQT | 0.18 | 0.15 | 0.08 | 0.07 | 0.13 | 0.61

Realistic | TWR | CAR | ARC | SDF | AQT | Total
TWR | 0.36 | 0.22 | 0.50 | 0.24 | 0.41 | 1.74
CAR | 0.18 | 0.11 | 0.06 | 0.04 | 0.21 | 0.60
ARC | 0.09 | 0.22 | 0.13 | 0.24 | 0.10 | 0.78
SDF | 0.18 | 0.34 | 0.06 | 0.12 | 0.07 | 0.77
AQT | 0.18 | 0.11 | 0.25 | 0.36 | 0.21 | 1.11

Time-sensitive | TWR | CAR | ARC | SDF | AQT | Total
TWR | 0.42 | 0.30 | 0.22 | 0.43 | 0.53 | 1.91
CAR | 0.14 | 0.10 | 0.11 | 0.17 | 0.18 | 0.70
ARC | 0.21 | 0.10 | 0.11 | 0.04 | 0.06 | 0.52
SDF | 0.08 | 0.20 | 0.22 | 0.09 | 0.06 | 0.65
AQT | 0.14 | 0.30 | 0.34 | 0.26 | 0.18 | 1.22
Step 4: Calculation of composite priority
The local weights are calculated as follows: values in Table 9 are obtained by dividing each cell of Table 8 by the total of the corresponding column, and values in Table 11 are obtained in the same way from Table 10. The composite priority of each alternative KPI is then calculated based on the principle of hierarchic composition. Working from the top of the hierarchy downwards, the global weight of each node is calculated by multiplying its local weight by the global weight of the node on the higher level to which it is connected. For example, the global weight of ‘time weighted ratio’ is calculated as follows:

(1.52 × 1.30) + (1.23 × 0.53) + (1.67 × 1.26) + (1.74 × 1.11) + (1.91 × 0.80) = 6.2166

Similar calculations are carried out for the rest of the alternative KPIs. The final results are presented in Table 12. For this case, Excel was used for calculating the weights.
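The normalisation step can be sketched as follows. This is a minimal illustration: the matrix below is the ‘Specific’ pairwise block from Table 8 (rows and columns in the order time weighted ratio, customer accessibility rating, abandon rate of calls, % of service requests with all data fields complete, average queue time of incoming calls), and the row totals of the normalised matrix reproduce the local weights shown in Table 9.

```python
def normalise(matrix):
    """Divide each entry by its column total (the AHP normalisation step)."""
    col_totals = [sum(row[j] for row in matrix) for j in range(len(matrix[0]))]
    return [[row[j] / col_totals[j] for j in range(len(row))] for row in matrix]

def local_weights(matrix):
    """Row totals of the normalised matrix, as used in Tables 9 and 11."""
    return [sum(row) for row in normalise(matrix)]

# 'Specific' pairwise comparison block from Table 8.
specific = [
    [1.00, 2.00, 2.00, 2.00, 2.00],
    [0.50, 1.00, 2.00, 2.00, 1.00],
    [0.50, 0.50, 1.00, 0.20, 0.33],
    [0.50, 0.50, 5.00, 1.00, 0.20],
    [0.50, 1.00, 3.03, 5.00, 1.00],
]

print([round(w, 2) for w in local_weights(specific)])
# -> [1.52, 0.94, 0.44, 0.79, 1.31], matching the 'Specific' row totals in Table 9
```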
Table 10 Pairwise comparison of SMART criteria – Utility industry
Criterion | Specific | Measureable | Attainable | Realistic | Time-sensitive
Specific | 1.00 | 3.00 | 2.00 | 1.00 | 1.00
Measureable | 0.33 | 1.00 | 1.00 | 0.33 | 0.50
Attainable | 0.50 | 1.00 | 1.00 | 3.00 | 2.00
Realistic | 1.00 | 3.03 | 0.33 | 1.00 | 2.00
Time-sensitive | 1.00 | 2.00 | 0.50 | 0.50 | 1.00
Total | 3.83 | 10.03 | 4.83 | 5.83 | 6.50
Table 11 Normalised pairwise comparison of SMART criteria – Utility industry
Criterion | Specific | Measureable | Attainable | Realistic | Time-sensitive | Total
Specific | 0.26 | 0.30 | 0.41 | 0.17 | 0.15 | 1.30
Measureable | 0.09 | 0.10 | 0.21 | 0.06 | 0.08 | 0.53
Attainable | 0.13 | 0.10 | 0.21 | 0.51 | 0.31 | 1.26
Realistic | 0.26 | 0.30 | 0.07 | 0.17 | 0.31 | 1.11
Time-sensitive | 0.26 | 0.20 | 0.10 | 0.09 | 0.15 | 0.80
Step 5: Selection of KPIs
The following table depicts the ranking of the alternative KPIs. The top three ranked KPIs are suitable for E.ON to use alongside its existing list of KPIs for improving and measuring performance in the complaint resolution department.
Table 12 Global weights of KPIs – Utility industry
Ranking | Alternative | Global weight
1 | Time weighted ratio | 6.2166
2 | Average queue time of incoming calls | 5.3930
3 | Customer accessibility rating | 3.0323
4 | Abandon rate of calls | 2.4658
5 | % of service requests with all data fields complete | 2.1415
7. Limitations
The findings of this research must be interpreted with its limitations in mind. As mentioned in the Methodology section, the variance-based approach to estimating SEM parameters has a number of advantages which led to the selection of PLS path-modelling; its downside, in comparison with covariance-based estimation approaches, is that a global model fit cannot be tested using a formal testing procedure. Furthermore, the model specification (based on the literature and the venture’s objectives) captures four business-level factors (constructs) that drive customer satisfaction, plus two individual-level factors that moderate the impact of satisfaction on loyalty; these are by no means exhaustive. Future research will need to expand the scope of this research to include more customer- and business-level factors that may potentially explain satisfaction and loyalty in the context of complaint resolution.
Another major limitation worth noting is the quality and quantity of the data used for the SEM analysis. The results and implications of this research would have been more robust had it been possible to target E.ON’s complainants for the survey questionnaire, thereby mitigating the heterogeneity of the sample data. One potential source of error in our analysis is the diversity of customers’ expectations about the quality of service received from their respective service providers, which could be due to various factors including corporate image or price differentials.
The integrated AHP & SMART approach is well suited to multi-criteria decision making. Compared with other techniques, e.g. goal programming, AHP & SMART uses quantitative values, making it widely applicable and more effective. The proposed approach helped prioritise KPIs that are more ‘SMART’ than the alternatives. However, the approach has limitations. First, the rating scales used in the AHP analysis are conceptual. Second, results might be influenced by various internal or external factors; for example, variation in the views of the people assigning the weights to the KPIs could make the result uncertain, and since the approach is used for prioritisation, the selection of KPIs before they are rated is an important factor in the analysis. Third, the approach does not provide any guidance on actions to be taken to address deficiencies. In addition, it considers only the SERVQUAL service dimensions; further studies could explore other dimensions in the framework (Shahin et al., 2007).
8. Conclusion
In this research we attempted to verify the KPIs used by E.ON as part of the set of initiatives rolled out recently, which is aimed at bolstering E.ON’s customer service (customer complaint resolution in particular) and thus its stance in the external rankings governed by “Which?” and Consumer Futures, based on the guidelines provided by Ofgem. The literature on key performance indicators falls under the greater notion of performance measurement, which is used by enterprises to evaluate the performance of each unit against set targets. The business model that has been presented can be used loosely by E.ON to assess its own organisational structure and to see how and where its customer function falls within the organisation as a whole. The KPIs should have some relevance to the model presented and have a looped feedback mechanism with a bidirectional flow of information. The best practices presented should further help guide the key actions that the customer function should incorporate if it aims to be in line with some of the leading utility organisations in the world.
A review of the customer service and customer relationship marketing literature suggests that the essence of any customer service division (including customer complaint resolution) is to assure the maximum possible customer retention rate given the competitiveness of an industry. Although this is logical in its nature, there is an extensive body of research supporting the claim that increased loyalty leads to a higher customer retention rate, which puts (I) managing customers’ expectations and (II) ensuring alignment of business practices with customer expectations at the top of the agenda for customer service units in any given organisation. Additionally, we believe that achieving the targeted rankings (five stars in “Which?” and the top of the Consumer Futures ranking table) does not on its own add value to the business and its customers, unless improved rankings can be translated into the indispensable goal of any customer service division. To this end, we reviewed the customer relationship marketing literature in reasonable depth to identify the potential determinants of loyalty, verified them by aligning the most cited determinants with the areas suggested by Ofgem (Harris, 2012), and conducted primary research to understand complainants’ behaviour with respect to their judgements of how well their expectations have been met.
Structural Equation Modelling (PLS path-modelling) was used to analyse the responses, looking for psychometric patterns in the respondents’ answers, which gave us valuable insight into complainants’ behaviour with respect to the aspects of the complaint resolution process under study (with a reasonable degree of specificity). Through the analysis, the responsiveness of the customer service and the fairness of the resolution process, as well as the fairness of the outcome, appeared to directly drive complainant satisfaction with the complaint resolution process, while accessibility of required information and promptness of resolution were found to drive satisfaction indirectly, via responsiveness and fairness. Furthermore, customer satisfaction was found to drive customer loyalty, with trust (in the company) and inertia (to change) moderating this causal relationship.
Given the abovementioned results, the relationships between the existing KPIs and the underlying principal determinants of loyalty were examined intuitively, in order to assess whether the determinants are addressed by the existing KPIs and to identify any gap where a link was found to be broken (for example, no KPI appeared to address E.ON’s performance with respect to providing necessary information, or the accessibility of necessary information from the complainants’ end). Once the gaps were found, the AHP & SMART framework was used to rank potential candidates that could fill the voids discovered, and recommendations were made accordingly.
9. Recommendations
Based on our analysis, we propose the following recommendations to E.ON:
· Our study of the business model and best practices in customer complaint resolution divisions within the utility sector affirms that the existing KPIs used by E.ON are suitable for achieving the company’s strategic objectives, such as increased customer loyalty and top rankings in ‘Which?’ and Consumer Futures. An improvement in these KPIs will subsequently lead to E.ON’s high performance within the industry.
· However, we observed that the ‘49 day resolution’ KPI has the least relevance, since the 49-day period has no statistical significance with respect to any of the set objectives. Therefore, we propose a ‘time weighted ratio’ (TWR) KPI: a weighted ratio of average resolution time in which a single numerical indicator takes into account not only the complaints that have been resolved on the first day but also the complaints that are taking longer to resolve. A weighted ratio of this nature would reflect E.ON’s promptness in dealing with complaints by giving more weight to complaints that have taken longer than anticipated to resolve. The weights should be assigned based on the importance of timely resolution from the complainants’ perspective; the precise weight values could be assigned by E.ON.
A hypothetical example is presented below to clarify how to calculate the ‘time weighted ratio’ KPI.
Since the survey results suggest that time is of the essence for complainants’ satisfaction with the complaint resolution process, this example puts a progressively higher weight on complaints that take longer to be resolved. The period within which a complaint could be resolved is divided into five distinct time ranges: (A) resolved within the first week, (B) resolved during the second week, (C) resolved during weeks 3-5, (D) resolved during weeks 6-8, and (E) resolved beyond 8 weeks (including open cases).
A, B, C, D and E indicate the number of complaints resolved during the specified time periods. The weights are allocated as follows:
Table 13 Allocated weights
Weight indicators | w_A | w_B | w_C | w_D | w_E
Weights | 0 | 10 | 15 | 30 | 45
This ratio follows the strict assumption that the weights increase with resolution time, i.e. w_A < w_B < w_C < w_D < w_E. The time weighted ratio could then be calculated as:

TWR = (w_A·A + w_B·B + w_C·C + w_D·D + w_E·E) / (A + B + C + D + E)
The TWR should be assessed against a threshold set in accordance with the weights assigned to each time period. Note that the weights increase with time, so the threshold (and potential tolerance level) should be set in line with the weights assigned to the targeted resolution time. In this example the threshold could be set at 15, with a 2% tolerance level.
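The TWR calculation can be sketched with the example weights above; the complaint counts below are hypothetical illustrations, not E.ON figures.

```python
# Weights per resolution-time band (Table 13): A = first week ... E = beyond 8 weeks.
WEIGHTS = {"A": 0, "B": 10, "C": 15, "D": 30, "E": 45}

def time_weighted_ratio(counts):
    """Weighted average resolution-time score over all complaints."""
    total = sum(counts.values())
    weighted = sum(WEIGHTS[band] * n for band, n in counts.items())
    return weighted / total

# Hypothetical month in which most complaints are resolved within the first week.
counts = {"A": 500, "B": 200, "C": 150, "D": 100, "E": 50}
print(time_weighted_ratio(counts))  # 9.5 -- below the example threshold of 15
```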
· Accessibility of information[4] was found to be one of the criteria for rankings in ‘Which?’.
o However, none of E.ON’s existing KPIs measures this criterion. Therefore, we propose a ‘customer accessibility rating’ KPI, which could be included as part of a customer survey to find out how well E.ON’s customers rate the accessibility of information related to the customer complaint process. Again, the threshold for the target ratings could be assigned precisely by E.ON.
o Additional KPIs such as ‘abandon rate of calls’ and ‘average queue of incoming calls’ could be adopted by E.ON to measure the accessibility as well as the responsiveness criteria. These KPIs would indicate how easy it is for E.ON’s customers to reach the complaints department to lodge their complaint.
· In addition, a KPI such as ‘% of service requests with all data fields complete’ could be used for internal quality measurement purposes. This KPI would ensure that all relevant data are recorded against calls before closure; the data could prove to be valuable MI for future analysis.
· Last but not least, the regression test indicated an unusual positive correlation between the ‘reopened complaints’ and ‘multiple/repeat complaints’ KPIs. This disproves our initial assumption that a reduction in multiple cases would increase reopened cases to a similar degree. It implies either of the following:
o E.ON should give equal importance to reducing the KPI figures for both reopened cases and multiple cases.
o E.ON should revise the process through which these KPIs are measured, because the discrepancy could be due to measurement error.
10. References
Altman, E. I. (1986), “Financial Ratios, discriminant analysis and the prediction of corporate bankruptcy”, Journal of Finance, 23, pp. 589-609.
Alvarez L.S.; Casielles, R.V. and Martin, Ana M.D. (2011), “Analysis of the role of complaint management in the context of relationship marketing”, Journal of Marketing Management, Vol. 27, No. 2, pp. 143-164.
Anderson, E. W., and Sullivan, M. W. (1993), “The antecedents and consequences of customer satisfaction for firms”, Marketing Science, Vol. 12, No. 2, pp. 125-143.
Andreassen, Tor Wallin. (1999), “What Drives Customer Loyalty with Complaint Resolution”, Journal of Service Research, Vol. 1, No. 4, pp. 1-41.
Ambrose, M., Hess, R. L. and Ganesan, S. (2007), “The relationship between justice and attitudes: An examination of justice effects on event and system-related attitudes”, Organizational Behavior and Human Decision Processes, Vol. 107, Issue 1, pp. 21-36.
Bagozzi, R. P. and Philipps, L. W. (1982), “Representing and testing organizational theories: A holistic construal”, Administrative Science Quarterly, 27, pp. 459-489.
Bagozzi, R. P. (1984), “A prospectus for theory construction in marketing”, Journal of Marketing, 48, pp. 11-29.
Baron, R. M. & Kenny, D. A. (1986), “The moderator-mediator variable distinction in social psychological research: Conceptual, strategic, and statistical considerations”, Journal of Personality and Social Psychology, 51, pp. 1173-1182.
Beatty, S. E. & Smith, S. M. (1987), “External Search Effort: An Investigation Across Several Product Categories”, Journal of Consumer Research, Vol. 14., No. 1. pp. 83-95.
Bernolak, I. (1997), “Effective measurement and successful elements of company productivity: the basis of competitiveness and world prosperity”, International Journal of Production Economics, Vol. 52, pp. 203-13.
Bitichi, U.S. (1994), “Measuring your way to profit”, Management Decision, Vol. 32 No. 6, pp. 16-24.
Bitner, M. J., Booms, B. H. &Tetreault, M. S. (1990), “The Service Encounter: Diagnosing Favourable and Unfavourable Incidents”, Journal of Marketing, Vol. 54, pp. 71-84.
Blodgett, J. G., Hill, D. J. and Tax, S. S. (1997), “The effects of distributive, procedural, and interactional justice on post-complaint behaviour”, Journal of Retailing, Vol. 73, No. 2, pp. 185-210.
Bollen, K. & Lennox, R. (1991), “Conventional wisdom on measurement: A structural equation perspective”, Psychology Bulletin, 110, pp. 305-314.
Boston Consulting Group (1988), quoted in Calori and Ardisson, “Differentiation strategies in Stalemate Industries”, Strategic Management Journal, May/June 1988, Vol. 9, Issue 3, pp. 255-269.
Brignall, S. and Ballantine, J. (1996), “Performance measurement in service business revisited”, International Journal of Service Industry Management, Vol. 7, No. 1, pp. 6-31.
Brown, G. H. (1952), “Brand Loyalty-Fact or Fiction?”,Advertising Age, Vol. 23. pp. 53-55.
Butler, A., Letza, S. R. and Neale, B. (1997), “Linking the Balanced Scorecard to Strategy”, Long Range Planning, Vol. 30, No. 2, pp. 242-253.
Campbell, A. J. (1997), “What Affects Expectations of Mutuality in Business Relationships?”,Journal of Marketing Theory and Practice, Vol. 5., No. 4. pp. 1-11.
Cassel, C. M., Hackl, P. and Westlund, A. H. (1999), “Robustness of partial least-squares method for estimating latent variable quality structure”, Journal of Applied Statistics, 26, pp. 435-446.
Chin, W. W. (1998a), “Issues and opinion on structural equation modelling”, MIS Quarterly, 22(1), pp. vii-xvi.
Chin, W. W. (1998b), “The partial least squares approach to structural equation modelling”, Modern Methods for Business Research, pp. 295-336.
Cho, C. K. and Johar, G. V. (2011), “Attaining Satisfaction”, Journal of Consumer Research, Vol. 38, pp. 622-631.
Churchill, G. A. J. (1979), “A paradigm for developing better measures of marketing constructs”, Journal of Marketing Research, 16, pp. 64-73.
Colgate, M.R. and Danaher, P.J. (2000), “Implementing a customer relationship strategy: The asymmetric impact of poor versus excellent execution”, Journal of the Academy Marketing Science, Vol. 28, Issue. 3, pp. 375-387.
Cross, K.F. and Lynch, R.L. (1992), “For good measure”, CMA Magazine, April, pp. 20-23.
Dixon, J.R., Nanni, A.J. and Vollmann, T.E. (1990), “The New Performance Challenge – Measuring Operations for World-Class Competition”, Dow Jones-Irwin, Homewood, IL.
Diamantopoulos, A. (1994), “Modelling with LISREL: A Guide for the Uninitiated”, Journal of Marketing Management, 10, 105-136.
Dick, A and Basu, K (1994), “Customer loyalty; toward an integrated conceptual framework”, Journal of the Academy of Marketing Science, 22 (2), pp. 99-113.
Dijkstra, T. (1983), “Some comments on maximum likelihood and partial least squares method”, Journal of Econometrics, 22, pp. 67-90.
Doney, P. M. and Cannon, J. P. (1997), “An Examination of the Nature of Trust in Buyer-Seller Relationships”, Journal of Marketing, Vol. 61., No. 2. pp. 35-51.
Engel, J. F., Kollat, D. T. & Blackwell, R. D. (1982), Consumer Behaviour, Edition. 4., Chicago, Dryden Press, US.
Estelami, H. (2000), “Competitive and Procedural Determinants of Delight and Disappointment in Consumer Complaint Outcomes”, Journal of Service Research, Vol. 2, No. 3, pp. 285-300.
Etienne, J., Erik, M. and Arjan, J. (2005), “Performance management models and purchasing: relevance still lost”, The 14th IPSERA Conference, Archamps, France, March 20-23, pp. 687-697.
Ferreras, Ana M. and Crumpton-Young, Lesia L. (2013), “The measure of success”, Industrial Management, May-June 2013, pp. 26-30.
Flint, D.J., Woodruff, R.B., and Gardial, S.F. (1997), “Customer value change in industrial marketing relationships: A call for new strategies and research”, International Marketing Management, Vol. 26, pp. 163–175.
Folkes, V. S. (1984), “Consumer Reactions to Product Failures: An Attributional Approach”, Journal of Consumer Research, Vol. 10. pp. 398-409.
Fornell, C. & Lacker, D. F. (1981), “Evaluating Structural Equation Models with Unobservable Variables and Measurement Error”, Journal of Marketing Research, Vol. 18, No. 1, pp. 39-50.
Fornell, C. & Bookstein, F. L. (1982), “A comparative analysis of two structural equation models: LISREL and PLS applied to market data”, In: Fornell, C. (ed), A Second Generation of Multivariate Analysis, New York, Praeger Publishers, Vol. 1, pp. 289-324.
Fry, T.D. (1995), “Japanese manufacturing performance criteria”, International Journal of Production Research, Vol. 33 No. 4.
Garengo, P., Biazzo, S. and Bititci, U. (2005), “Performance measurement systems in SMEs: A review for a research agenda”, International Journal of Management Reviews, Vol. 7, No. 1, pp. 25-47.
Gates, S. (1999), “Aligning Strategic Performance Measures and Results”, The Conference Board, New York, NY.
Gefen, D., Straub, D. W. & Boudreau, M. C. (2000), “Structural equation modelling and regression: Guidelines for research practice”, Communications of the Association for Information Systems, 4, pp. 1-79.
Ghalayini, A.M., Noble, J.S. and Crowe, T.J. (1997), “An integrated dynamic performance measurement system for improving manufacturing competitiveness”, International Journal of Production Economics, Vol. 48, pp. 207-225.
Goodwin, C. & Ross, I. (1992), “Consumer Responses to Service Failures: Influence of Procedural and Interactional Fairness Perceptions”, Journal of Business Research, Vol. 25, No. 2, pp. 149-163.
Gordon, I (2002), “Best Practices: Customer Relationship Management”, Ivey Business Journal, pp. 1 – 6.
Greenberg, J. (1990), “Looking Fair Versus Being Fair: Managing Impressions of Organizational Justice”, Research in Organizational Behaviour, Vol. 12, pp. 11-157.
Haenlein, M. & Kaplan, A. M. (2004), “A Beginner’s Guide to Partial Least Squares Analysis”, Understanding Statistics, 3(4), pp. 283-297.
Harris Interactive (2012), “Customer complaint handling report”, Harris Interactive Report – prepared for Ofgem, March 2012 [Online]. [Accessed: 20 Jul 2013] Available at:
https://www.ofgem.gov.uk/ofgem-publications/57616/customer-complaints-research-2012.pdf
Hirschman, A. O. (1970), “Exit, Voice, and Loyalty: Responses to Decline in Firms, Organizations, and States”, Cambridge: Harvard University Press, pp. 1-162.
Hofstede, G. (1983), “The cultural relativity of organizational practices and theories”, Journal of International Business Studies, 14, pp. 75-89.
Hope, Jeremy and Fraser, Robin (2003), “Beyond Budgeting: How Managers Can Break Free from the Annual Performance Trap”, Harvard Business School Press Books, Jan 2003, pp. 1-256.
Hulland, J. (1999), “Use of partial least squares (PLS) in strategic management research: A review of four recent studies”, Strategic Management Journal, 20, pp. 195-204.
Jagdev, H., Bradley, P. and Molloy, O. (1997), “A QFD based performance measurement tool”, Computers in Industry, Vol. 33, pp. 357-366.
Kaplan, R. S. (1986), “Accounting lag – the obsolescence of cost accounting systems”, California Management Review, Vol. 28 No. 2, pp. 174-199.
Kaplan, Robert S. and Norton, David P. (1992), “The Balanced Scorecard – Measures that Drive Performance”, Harvard Business Review, Jan/Feb 1992, Vol. 70, Issue 1, pp. 71-79.
Kuhen, A. (1962), “Consumer Brand Choice as a Learning Process”, Journal of Advertising Research, Vol. 2. pp. 10-17.
Leventhal, G. S. (1980), “What should be done with equity theory? New approaches to the study of fairness in social relationships”, in K. J. Gergen, M. S. Greenberg and R. S. Willis (Eds.), Social Exchange: Advances in Theory and Research, New York: Plenum Press, pp. 27-55.
Levitt, Theodore (1980), “Marketing success through differentiation – of anything”, Harvard Business Review, Jan/Feb1980, Vol. 58, Issue. 1, pp. 83-91.
Lingle, J.H. and Schiemann, W.A. (1996), “From balanced scorecard to strategy gauge: is measurement worth it?”, Management Review, March, pp. 56-62.
Lohmöller, J. B. (1989), “Latent variable path modelling with partial least squares”, Heidelberg: Physica.
Lynch, R.L. and Cross, K.F. (1991), “Measure Up – The Essential Guide to Measuring Business Performance”, Mandarin, London.
Mauboussin, Michael J. (2012), “The true measures of success”, Harvard Business Review, Oct2012, Vol. 90, Issue. 10, pp. 46-56.
McAdam, R. (2000), “Quality models in an SME context: A critical perspective using a grounded approach”, The International Journal of Quality & Reliability Management, Vol. 17, No. 3, p. 305.
McCollough, Michael A. (2009), “The recovery paradox: The effect of recovery performance and service failure severity on post-recovery customer satisfaction”, Academy of Marketing Studies Journal, Jan2009, Vol. 13, Issue. 1, pp. 89-104.
McDonald, R. P. (1996), “Path analysis with composite variables” Multivariate Behavioural Research, 31, pp. 239-270.
Medori, D. and Steeple, D. (2000), “A framework for auditing and enhancing performance measurement systems”, International Journal of Operations & Production Management, Vol. 20, No. 5, pp. 520-533.
Mohr, L. A. & Bitner, M. J. (1995), “The Role of Employee Effort in Satisfaction with Service Transactions”, Journal of Business Research, Vol. 32, No. 3, pp. 239-253.
Moorman, R. H. (1991), “Relationship between organizational justice and organizational citizenship behaviors: Do fairness perceptions influence employee citizenship?”, Journal of Applied Psychology, Vol. 76, No. 6, pp. 845-855.
Morgan, R. M. & Hunt, S. D. (1994), “The Commitment-Trust Theory of Relationship Marketing”, Journal of Marketing, Vol. 58, No. 3, pp. 20-38.
Neely, A. (1995), “Performance measurement system design: theory and practice”, International Journal of Operations & Production Management, Vol. 15, pp. 4.
Neely, A., Adams, C. and Crowe, P. (2001), “The performance prism in practice”, Measuring Business Excellence, Vol. 5, No. 2, pp. 6-12.
Neely, A., Mills, J., Platts, K., Richards, H. and Bourne, M. (2000), “Performance measurement system design: developing and testing a process-based approach”, International Journal of Operations & Production Management, Vol. 20 No. 10, pp. 1119-1145.
Neely, Andy. and Kennerley, Mike. (2003), “Measuring performance in a changing business environment”, International Journal of Operations & Production Management, Vol. 23, No. 2, pp. 213-229.
Neely, A.D., Gregory, M. and Platts, K. (1995), “Performance measurement system design – a literature review and research agenda”, International Journal of Operations & Production Management, Vol. 15 No. 4, pp. 80-116.
Norreklit, H. (2000), “The balance on the balanced scorecard – a critical analysis of some of its assumptions”, Management Accounting Research, Vol. 11, No. 1, pp. 65-88.
Norton, David P. and Kaplan, Robert S. (1996), “The Balanced Scorecard: Translating Strategy into Action”, Harvard Business School Press Books, Jul 1996, pp. 1-336.
Nunnally, J. C. & Bernstein, I. H. (1994), Psychometric Theory, 3rd Edition, New York, McGraw-Hill.
Oliver, R. L. & Swan, J. E. (1989), “Consumer Perceptions of Interpersonal Equity and Satisfaction in Transactions: A Field Survey Approach”, Journal of Marketing, Vol. 53, pp. 21-35.
Otley, D. and Emmanuel, Clive R. (1985), Accounting for Management Control, Chapman & Hall, London, 2nd Edition.
Parasuraman, A., Zeithaml, V.A., and Berry, L.L. (1985), “A conceptual model of service quality and its implications for future research”, Journal of Marketing, Vol. 49, pp. 41–51.
Parmenter, David. (2010), “Key Performance Indicators (KPI): Developing, Implementing, and Using Winning KPIs”, John Wiley & Sons, 2nd Edition, pp. 1-242.
Pesce, B. (2002), “What’s in a brand?”, Public Utilities Fortnightly, 1 (2), pp. 24-26.
Peterson, R. A. & William, R. W. (1992), “Measuring Customer Satisfaction: Fact and Artifact”, Journal of the Academy of Marketing Science, 20(1), pp. 61-72.
Raynor, Michael E. and Ahmed, Mumtaz (2013), “Three rules of making a company really great”, Harvard Business Review, Apr 2013, Vol. 91, Issue 4, pp. 108-117.
Reijers, H.A. and Mansar, S.L. (2005), “Best practices in business process redesign: an overview and qualitative evaluation of successful redesign heuristics”, The International Journal of Management Science, Vol. 33, pp. 283-306.
Reichheld, Frederick F. and Sasser Jr., W. Earl. (1990), “Zero Defections: Quality Comes to Services”, Harvard Business Review, Sep/Oct 1990, Vol. 68, Issue 5, pp. 105-111.
Richardson, P.R. and Gordon, J.R.M. (1980), “Measuring total manufacturing performance”, Sloan Management Review, Winter, pp. 47-58.
Ringle, C., Wende, S. & Will, A. (2005), “SmartPLS 2.0”, SmartPLS, Hamburg [Online]. [Accessed: 01 August 2013] Available at: www.smartpls.de
Rompho, Nopadol. (2011), “Why the Balanced Scorecard Fails in SMEs: A Case Study”, International Journal of Business and Management, Vol. 6, No. 11, pp. 39-46.
Ross, S.A., Westerfield, R.W. and Jaffe, J.F. (1993), “Corporate Finance”, Irwin, Burr Ridge, IL., 3rd Edition.
Saaty, T. (1994), “Highlights and critical points in the theory and application of the analytic hierarchy process”, European Journal of Operational Research, Vol. 74, Issue 3, pp. 426-447.
Salem, Milad Abdelnabi., Hasnan, Norlena., and Osman, Nor Hasni. (2012), “Balanced Scorecard: weakness, strength, and its ability as performance management system versus other performance management system”, Journal of Environment and Earth Science, Vol. 2, No. 9, pp. 1-9.
Sangareddy, Sridhar R. Papagari; Jha, Sanjeev; Chen, Ye and Desouza, Kevin C. (2009), “Attaining superior customer resolution”, Communications of the ACM, Oct 2009, Vol. 52, Issue 10, pp. 122-126.
Shackman, J. D. (2013), “The Use of Partial Least Squares Path Modeling and Generalized Structured Component Analysis in International Business Research: A Literature Review”, International Journal of Management, Vol. 30, No. 3, pp. 78-85.
Shahin, Arash and Mahbod, M. Ali (2007), “Prioritisation of key performance indicators – An integration of analytical hierarchy process and goal setting”, International Journal of Productivity and Performance Management, Vol. 56, No. 3, pp. 226-240.
Shugan, S. M. (2002), “Marketing science, models, monopoly models, and why we need them”, Marketing Science, 21, pp. 223-228.
Skinner, W. (1974), “The decline, fall and renewal of manufacturing”, Industrial Engineering, pp. 32-8.
Skinner, Wickham (1986), “The productivity paradox”, Harvard Business Review, July-August, Vol. 64, Issue 4, pp. 55-9.
Spearman, C. (1904), “General intelligence, objectively determined and measured”, American Journal of Psychology, 15, pp. 201-293.
Stewart, K. (1998), “The customer exit process: A review and research agenda”, Journal of Marketing Management, Vol. 14, Issue 4, pp. 235-250.
Szymanski, D. M. & Hise, R. T. (2000), “E-Satisfaction: An Initial Examination”, Journal of Retailing, Vol. 76, No. 3, pp. 309-322.
Tangen, Stefan. (2004), “Performance measurement: from philosophy to practice”, International Journal of Productivity and Performance Management, Vol. 53, No. 8, pp. 726-737.
Tax, S. S.; Brown, S.W. and Chandrashekaran, M. (1998), “Customer evaluation of service complaint experiences: implications for relationship marketing”, Journal of Marketing, Vol. 62, Issue 2, pp. 60-76.
Technical Assistance Research Programs Institute (1986), Consumer Complaint Handling in America: An Update Study, Part II, Washington DC: US office of Consumer Affairs.
Viswanathan, M., Rosa, J. & Ruth, J. A. (2010), “Exchanges in Marketing Systems: The Case of Subsistence Consumer Merchants in Chennai, India”, Journal of Marketing, Vol. 74, No. 3, pp. 1-17.
Weber, Al. and Thomas, Ron. (2005), “Key Performance Indicators – Measuring and Managing the Maintenance Function”, White Paper – Ivara Corporation, November 2005, pp. 1-16.
Wold, H. (1993), “Path models with latent variables: The NIPALS approach”, In H. M. Blalock, A. Aganbegian, F. M. Borodkin, R. Boudon, & V. Capecci (Eds), Quantitative Sociology: International perspectives on mathematical and statistical modelling (pp. 307-357), New York.
www.brighthub.com (2011), “Measure and Improve Customer Service Performance with KPIs”, BrightHub, [Online]. [Accessed: 01 August 2013] Available at:
http://www.brighthub.com/office/entrepreneurs/articles/118495.aspx
www.consumerfutures.org.uk (2013), “Consumer Futures”, [Online]. [Accessed: 01 August 2013] Available at: http://www.consumerfutures.org.uk/
www.energy-uk.org.uk (2013), “EnergyUK”, [Online]. [Accessed: 01 August 2013] Available at: http://www.energy-uk.org.uk/publication.html
www.epa.gov (2013), “Utility Best Practices Guidance Providing Business Customers with Energy Use and Cost Data”, [Online]. [Accessed: 15 August 2013] Available at: http://www.epa.gov/cleanenergy/documents/suca/utility_data_guidance.pdf
www.gov.uk (2012), “UK Energy in Brief 2012”, [Online]. [Accessed: 01 August 2013] Available at: https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/65898/5942-uk-energy-in-brief-2012.pdf
www.kpilibrary.com (2013), “KPIs in ITIL service desk”, KPI Library, [Online]. [Accessed: 01 August 2013] Available at: http://kpilibrary.com/categories/helpdesk
www.parliament.uk (2013), “Energy Prices – raising energy prices”, [Online]. [Accessed: 16 August 2013] Available at:
http://www.publications.parliament.uk/pa/cm201314/cmselect/cmenergy/108/10806.htm#note43
www.rwe.com (2013), “History of the UK electricity industry”, [Online]. [Accessed: 01 August 2013] Available at:
http://www.rwe.com/web/cms/en/286400/rwe-npower/about-us/our-history/history-of-electricity-industry
www.which.co.uk (2013), “Eon”, [Online]. [Accessed: 01 August 2013] Available at:
http://www.which.co.uk/switch/energy-suppliers/eon
Yang, Z. and Peterson, R.T. (2004), “Customer perceived value, satisfaction and loyalty: the role of switching cost”, Psychology of Marketing, Vol. 21, Issue. 10, pp. 799-822.
Zinkhan, G.M. (2002), “Relationship marketing: theory and implementation”, Journal of Market-Focused Management, Vol. 5, pp. 83-89.
11. Appendix
11.1 Big 6 Profit Margin
Figure 21 The Big 6’s breakdown of cost structure and energy profit margin (2%-5%)
Source: www.parliament.uk, 2013
11.2 Graphical Illustration of PLS Analysis
Figure 22 Graphical illustration of PLS Path-modelling Analysis
11.3 Regression Test
Although the interrelationships between the KPIs have already been addressed through the interrelationships of their underlying determinants, OLS regressions, variance analysis and correlation analysis were also used to highlight potential correlations amongst certain KPIs (the number of re-opens, the number of multiples, the average age of complaints and the number of complaints over 56 days) and the potential impacts they could have on one another. One motivation was to assess how well the KPIs could measure variations in their principal factors. We expected to detect a significant negative relationship between changes in re-opened complaints and multiple complaints, and a significant relationship between the average age of complaints and the number of complaints over 56 days on the one hand and the number of re-opened and multiple complaints on the other. The correlation matrix for these four KPIs over the period from the 27th week of 2012 to the 34th week of 2013 is presented in the table below:
|                         | Re-opens | Multiples | Average age | Complaints over 56 days |
| Re-opens                | 1        |           |             |                         |
| Multiples               | 0.3929   | 1         |             |                         |
| Average age             | 0.6017   | -0.4097   | 1           |                         |
| Complaints over 56 days | 0.7221   | -0.0031   | 0.7237      | 1                       |
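For reference, a correlation matrix of this kind can be reproduced in a few lines of Python. The series below are illustrative placeholders standing in for the four weekly KPI series, not the actual complaint data, so the resulting coefficients will differ from those reported above.

```python
import numpy as np

# Illustrative weekly KPI series (placeholders, not the actual complaint data)
rng = np.random.default_rng(0)
reopens = rng.poisson(50, size=60).astype(float)
multiples = rng.poisson(30, size=60).astype(float)
avg_age = rng.normal(20, 3, size=60)
over_56 = rng.poisson(10, size=60).astype(float)

# Stack so that rows are variables and columns are weekly observations
kpis = np.vstack([reopens, multiples, avg_age, over_56])
corr = np.corrcoef(kpis)  # 4x4 Pearson correlation matrix

labels = ["Re-opens", "Multiples", "Average age", "Over 56 days"]
for label, row in zip(labels, corr):
    print(f"{label:<13}", np.round(row, 4))
```

The matrix is symmetric with unit diagonal, which is why the report only tabulates the lower triangle.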
The number of re-opened complaints is positively correlated with the number of multiple complaints, the average age of complaints and the number of complaints over 56 days. The number of multiple complaints is negatively correlated with the average age and the number of complaints over 56 days, and the average age of complaints is positively correlated with the number of complaints over 56 days. All of the correlation results are in line with our expectations except for the positive correlation between re-opens and multiples, for which we expected a negative correlation. Figure 23 below plots the number of re-opened and multiple complaints against each other over the last 60 weeks.
Figure 23 Time line of Re-opened and Multiples
[Three panels: Re-opens and Multiples; natural log of Re-opens and Multiples; first differences of Re-opens and Multiples]
It can be seen that the numbers of re-opened and multiple complaints moved together over the past 60 weeks. To assess whether changes in the number of re-opened complaints explain changes in the number of multiple complaints, we used OLS regression analysis, which suggests that, based on the available data, a unit change in the number of re-opened complaints results in a 2.68-unit change in the number of multiple complaints in the same direction; this coefficient is statistically significant at the 95% confidence level (p<0.05). It is also evident that a unit increase in the number of re-opened complaints leads to a 0.5719-unit increase in the average age of complaints and a 0.2735-unit increase in the number of complaints over 56 days (p<0.05). Furthermore, an increase in the number of multiple complaints by one unit leads to a 41% drop in the average age of complaints (p<0.05).
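Slope estimates of this kind come from a simple two-variable OLS fit. A minimal sketch using NumPy's least-squares routine is shown below; the data are synthetic (a noiseless line with slope 2.68, chosen to echo the coefficient reported above), so this illustrates the method rather than reproducing the actual regression.

```python
import numpy as np

def ols_fit(x, y):
    """Fit y = a + b*x by ordinary least squares; return (intercept, slope)."""
    X = np.column_stack([np.ones_like(x), x])  # design matrix with intercept
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef[0], coef[1]

# Synthetic stand-in for the weekly re-opens and multiples series
x = np.arange(60, dtype=float)
y = 3.0 + 2.68 * x  # exact line, so the fit recovers the slope exactly

intercept, slope = ols_fit(x, y)
print(round(slope, 2))  # 2.68
```

In practice one would also compute standard errors and p-values (e.g. with statsmodels) to judge significance, as reported in the text.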
11.4 Survey Questionnaire
1. Which energy supplier do you use?
· British Gas
· EDF
· E.ON
· Npower
· Scottish Power
· SSE
· Others
2. Have you ever lodged a complaint with your energy supplier?
· Yes
· No
3. Accessibility:
a. I found it easy to obtain enough information and correct contact details for registering a complaint:
· Strongly disagree
· Disagree
· Somewhat disagree
· Neither agree nor disagree
· Somewhat agree
· Agree
· Strongly agree
* Enough information – e.g. complaint handling information and contact details (phone numbers, e-mail/URL addresses or a freepost address)
b. I was provided with further information and contact details from my utility company to discuss the complaint if I was not happy with the response given:
· Strongly disagree
· Disagree
· Somewhat disagree
· Neither agree nor disagree
· Somewhat agree
· Agree
· Strongly agree
* Further information – information on the options available to complainants should they feel the response given was not adequate (e.g. escalating the complaint to a more senior manager, or taking it to the Energy Ombudsman, which is independent, free of charge, can offer financial compensation, and whose decision is binding on the supplier but not the complainant)
c. I could easily track the status of my complaint:
· Strongly disagree
· Disagree
· Somewhat disagree
· Neither agree nor disagree
· Somewhat agree
· Agree
· Strongly agree
d. I found it very convenient to raise a complaint to my utility supplier company via the website or telephone:
· Strongly disagree
· Disagree
· Somewhat disagree
· Neither agree nor disagree
· Somewhat agree
· Agree
· Strongly agree
4. Time:
a. How long did it take for your case to be resolved?*
· Less than 1 week
· 1 – 2 weeks
· 2 – 3 weeks
· 3 – 4 weeks
· 4 – 6 weeks
· 6 – 8 weeks
· More than 8 weeks
b. I do not feel that my utility supplier company dealt with my complaint in a timely manner:*
· Strongly disagree
· Disagree
· Somewhat disagree
· Neither agree nor disagree
· Somewhat agree
· Agree
· Strongly agree
c. How long do you think a complaint should take to be resolved?*
· Less than 1 week
· 1 – 2 weeks
· 2 – 3 weeks
· 3 – 4 weeks
· 4 – 6 weeks
· 6 – 8 weeks
· More than 8 weeks
d. How important is the timeliness of complaint resolution?
· Not at all important
· Low importance
· Slightly important
· Neutral
· Moderately important
· Very important
· Extremely important
5. Responsiveness:
a. I did not have to contact my utility provider again regarding a complaint that I had already raised:
· Strongly disagree
· Disagree
· Somewhat disagree
· Neither agree nor disagree
· Somewhat agree
· Agree
· Strongly agree
b. I found the staff handling my complaint were polite, helpful and caring:
· Strongly disagree
· Disagree
· Somewhat disagree
· Neither agree nor disagree
· Somewhat agree
· Agree
· Strongly agree
c. I received notifications regarding the progress of my complaint:
· Strongly disagree
· Disagree
· Somewhat disagree
· Neither agree nor disagree
· Somewhat agree
· Agree
· Strongly agree
d. I did not have to re-open a complaint or raise a new complaint on an issue that I complained about earlier:
· Strongly disagree
· Disagree
· Somewhat disagree
· Neither agree nor disagree
· Somewhat agree
· Agree
· Strongly agree
e. I received a timely call/response back from the staff, as promised, during the complaint resolution process:
· Strongly disagree
· Disagree
· Somewhat disagree
· Neither agree nor disagree
· Somewhat agree
· Agree
· Strongly agree
6. Fairness:
a. I found the outcome of the complaint resolution to be fair:
· Strongly disagree
· Disagree
· Somewhat disagree
· Neither agree nor disagree
· Somewhat agree
· Agree
· Strongly agree
b. Irrespective of the outcome of the resolution, I think my utility provider followed fair procedures in resolving my complaint:
· Strongly disagree
· Disagree
· Somewhat disagree
· Neither agree nor disagree
· Somewhat agree
· Agree
· Strongly agree
c. I do not think that my utility provider was an unbiased and impartial representative of my interest:*
· Strongly disagree
· Disagree
· Somewhat disagree
· Neither agree nor disagree
· Somewhat agree
· Agree
· Strongly agree
7. Satisfaction with Customer Complaint Resolution
a. I am satisfied with my decision to raise a complaint:
· Strongly disagree
· Disagree
· Somewhat disagree
· Neither agree nor disagree
· Somewhat agree
· Agree
· Strongly agree
b. If I had a problem again I would feel differently about raising a complaint:*
· Strongly disagree
· Disagree
· Somewhat disagree
· Neither agree nor disagree
· Somewhat agree
· Agree
· Strongly agree
c. I think I was treated as a valued customer:
· Strongly disagree
· Disagree
· Somewhat disagree
· Neither agree nor disagree
· Somewhat agree
· Agree
· Strongly agree
d. After raising a complaint with my utility provider, I regret my decision to choose them as my utility supplier:*
· Strongly disagree
· Disagree
· Somewhat disagree
· Neither agree nor disagree
· Somewhat agree
· Agree
· Strongly agree
e. I would not expect to encounter an identical problem more than once:
· Strongly disagree
· Disagree
· Somewhat disagree
· Neither agree nor disagree
· Somewhat agree
· Agree
· Strongly agree
8. Trust
a. How has your experience of raising a complaint influenced the confidence you have in your utility supplier’s service?*
· Greatly increased
· Reasonably increased
· Slightly Increased
· Remained the same
· Slightly Decreased
· Reasonably Decreased
· Greatly Decreased
b. I trust my utility supplier company to treat me honestly and fairly:
· Strongly disagree
· Disagree
· Somewhat disagree
· Neither agree nor disagree
· Somewhat agree
· Agree
· Strongly agree
c. I can trust my utility supplier company to handle my complaint appropriately:
· Strongly disagree
· Disagree
· Somewhat disagree
· Neither agree nor disagree
· Somewhat agree
· Agree
· Strongly agree
d. Encountering a high-severity problem significantly damages my trust in my utility supplier company:
· Strongly disagree
· Disagree
· Somewhat disagree
· Neither agree nor disagree
· Somewhat agree
· Agree
· Strongly agree
9. Inertia
a. Unless I was very dissatisfied with my utility supplier company’s services, I would not change my utility supplier:
· Strongly disagree
· Disagree
· Somewhat disagree
· Neither agree nor disagree
· Somewhat agree
· Agree
· Strongly agree
b. For me the cost in time and effort to change my utility supplier is high:
· Strongly disagree
· Disagree
· Somewhat disagree
· Neither agree nor disagree
· Somewhat agree
· Agree
· Strongly agree
10. Loyalty:
a. I seldom consider switching to another utility company:
· Strongly disagree
· Disagree
· Somewhat disagree
· Neither agree nor disagree
· Somewhat agree
· Agree
· Strongly agree
b. As long as the present customer service level continues, I would not switch to a new utility company:
· Strongly disagree
· Disagree
· Somewhat disagree
· Neither agree nor disagree
· Somewhat agree
· Agree
· Strongly agree
11. What is your gender?
a. Male
b. Female
12. What is your age?
a. 18 – 29 years old
b. 30 – 49 years old
c. 50 – 64 years old
d. 65 years old and above
13. What is your current marital status?
a. Single
b. Married
c. Separated
d. Divorced
e. Widowed
14. What is your total household income?
a. £10,000 – £19,999
b. £20,000 – £29,999
c. £30,000 – £39,999
d. £40,000 – £49,999
e. £50,000 – £59,999
f. £60,000 – £69,999
g. £70,000 – £79,999
h. £80,000 – £89,999
i. £90,000 – £99,999
j. £100,000 and above
*Scale items are reverse coded
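Before analysis, the reverse-coded items (marked * above) are typically rescored so that higher values consistently indicate a more favourable response. A minimal sketch, assuming the 7-point responses are coded 1 ("Strongly disagree") to 7 ("Strongly agree"); the function name is illustrative:

```python
def reverse_code(score, scale_min=1, scale_max=7):
    """Flip a Likert score so that 1 <-> 7, 2 <-> 6, etc."""
    return scale_max + scale_min - score

# e.g. "Strongly agree" (7) on a negatively worded item becomes 1
responses = [7, 6, 2, 4]
print([reverse_code(s) for s in responses])  # [1, 2, 6, 4]
```

Note that the scale midpoint (4) maps to itself, so neutral responses are unaffected.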
[1] Mediation refers to a situation where the influence of an independent variable on a dependent variable is transmitted through a third variable (a mediator), while moderation refers to a third variable that alters the strength or direction of that influence.
[2] Errors that could be caused by the order of items in a questionnaire or by respondent fatigue.
[3] Variance attributable to the measurement method.
[4] Complaint handling information and contact details (phone numbers, e-mail/URL addresses or a freepost address), together with website information on the options available to complainants should they feel the response given was not adequate (e.g. escalating the complaint to a more senior manager, or taking it to the Energy Ombudsman, which is independent, free of charge, can offer financial compensation, and whose decision is binding on the supplier but not the complainant).