
Online reviews provide businesses with strategic insight

Introduction

Online reviews provide businesses with strategic insight that is critical for price setting, demand forecasting, product quality evaluation, and customer relationship management. However, fully exploiting the strategic potential of online reviews depends on the underlying IT infrastructure. According to Bharadwaj (2000), IT systems "such as groupware and expert systems, when supplied with firm-specific information and insights, are turned into specialized assets that are practically hard to copy by rivals" (p. 175). Following Bharadwaj, we argue that review systems, when filled with online reviews, represent specialized assets embodying the experiences and expertise of the (reviewing) customers. According to the resource-based theory of the firm (Barney, 1991), review systems thus meet the necessary condition of representing a valuable, rare, inimitable, and non-substitutable resource with which a business may gain a competitive advantage.1 Consequently, review systems have grown into strategic information systems, employed by e-commerce platforms such as Amazon or Yelp for exactly this reason (Gable, 2010). Amazon's feature that lets users answer the question "Was this review helpful?" exemplifies the strategic importance of turning review systems into strategic information systems through the "right" design choices: according to Spool (2009), this single feature generates more than US $2.7 billion in additional revenue for Amazon each year.



Previous literature reviews on online reviews have begun to synthesize the current state of knowledge, presenting research findings on two aspects: (1) the impact of online reviews on economic outcomes – which we refer to as the direct outcome effect in the following – such as prices and sales (Cheung and Thadani, 2012, Floyd et al., 2014, King et al., 2014, You et al., 2015, Babic Rosario et al., 2016), and (2) the factors that drive online reviews (Matos and Rossi, 2008, King et al., 2014, Hong et al., 2017). To the best of our knowledge, a literature review integrating the expanding body of work on the design of review systems has yet to be published. We therefore conduct a review guided by the following three research questions:

1. What is the current state of the art in review system design?


2. What are the current research gaps in review system design?


3. What are some potential approaches to bridging research gaps?



To address these research questions, we conduct a scoping review (Paré et al., 2015). In total, we identify 312 articles on online reviews from fields such as information systems, marketing, and strategy. Of these, 58 studies explicitly examine the moderating influence of review system design (moderating driver effect and moderating outcome effect). From this literature we extract three research gaps, which we organize around three main themes: (1) Design features: A considerable number of studies examine the economic outcomes or drivers of online reviews and also propose implications for the design of review systems, such as providing reviewers with a predefined review template (e.g., Yin et al., 2014). Although many review system design features have been proposed over the years, only a handful have been studied. (2) Environments: In recent years, new online business models and environments have emerged, including two-sided platform businesses (e.g., Airbnb, Uber). These allow two-sided reviews and require adapted design features to, for example, mitigate reciprocity in two-sided review systems. (3) Devices: The majority of review system design features have been studied for stationary devices such as personal computers. However, online reviews are increasingly written and read on mobile devices, which calls for dedicated design features. We propose three research directions in which we show how these gaps may be addressed.


Background and research design

The initial wave of e-commerce platforms, such as Amazon and eBay, allowed geographically distant parties to transact online. eBay and Amazon introduced review systems to build trust in sellers and facilitate transactions. Since then, a broad variety of e-commerce platforms, including Yelp, Airbnb, and TripAdvisor, have incorporated review systems, which have become a fundamental element of online shopping.


The emergence of review systems piqued the interest of researchers, who have since created a vast body of research on online reviews. Fig. 1 depicts the conceptual model we use to categorize research on online reviews. Online reviews typically comprise at least two components: a numerical rating (e.g., a star rating) and a written review. The numerical rating expresses the reviewer's evaluation of a product or service, while the written review supplements the numerical rating with additional information. Review systems typically provide a variety of metrics for evaluating or aggregating online reviews. Such metrics include individual-level metrics, such as the perceived helpfulness of an online review, and aggregate-level metrics, such as volume (i.e., the number of online reviews), valence (i.e., the average numerical rating), and variance (i.e., the distribution of numerical ratings).
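As a minimal illustration, the three aggregate-level metrics named above can be computed from a product's individual ratings as follows (the star ratings are invented, not data from any study):

```python
# Illustrative sketch with invented star ratings for a single product.
from statistics import mean, pvariance

ratings = [5, 4, 5, 2, 4, 5, 1]    # numerical ratings from individual reviews

volume = len(ratings)              # volume: number of online reviews
valence = mean(ratings)            # valence: average numerical rating
variance = pvariance(ratings)      # variance: spread of the rating distribution

print(volume, round(valence, 2), round(variance, 2))
```

Two products can share the same valence yet differ sharply in variance, which is why review systems often expose the full rating distribution alongside the average.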




Fig. 1. Conceptual model of current research.


Economic outcomes of online reviews typically denote the effects arising directly from online review metrics (i.e., direct outcome effect (a) in Fig. 1) and may be measured at the consumer, firm, or market level. For example, an increasing valence of online reviews may boost a product's sales and thus constitutes a firm-level direct outcome effect. Drivers2 refer to any factors that influence individual online reviews or any online review metric (i.e., direct driver effect (b)); these may be review-related or reviewer-related. For example, the social influence bias is a review-related driver implying that reviewers adjust their own reviewing behavior when exposed to existing reviews (Muchnik et al., 2013).


A more recent stream of research examines the design of review systems as a moderator of the direct outcome effect (i.e., moderating outcome effect (c)) or the direct driver effect (i.e., moderating driver effect (d)). For example, the cardinality of the rating scale (e.g., binary vs. 1–5 stars) moderates the association between valence and sales, with a high rating scale cardinality promoting the sales of mainstream products (Jiang and Guo, 2015). Regarding the moderating driver effect, some system designers allow reviewers to disclose identity-descriptive information (e.g., their real name), which increases the perceived helpfulness of their reviews (Forman et al., 2008). One primary goal of our study is to identify all articles that investigate the moderating influence of such design features.


Literature search methodology

In step 1, all authors collaborated to identify the set of relevant outlets for our search. We opted to include only high-quality journals in order to synthesize established, peer-reviewed scientific knowledge. The inclusion criteria were as follows: (a) starting with previous review studies, we included all the journals these reviews had searched (Cheung and Thadani, 2012, You et al., 2015, Floyd et al., 2014); (b) because journals in the information systems discipline are predisposed to publish studies on the design of review systems, we included all journals in the AIS Senior Scholars' Basket of Journals, supplemented by the design science-oriented outlet Business & Information Systems Engineering and by Information &. Table A2 in the appendix shows the final list of 38 journals.3


In step 2, each journal was allocated to one of the co-authors. To ensure that we captured all relevant research articles from our target journals, we conducted a manual issue-by-issue search covering 1991 to 2017 (including "online first" articles), identifying relevant articles based on title and abstract. The co-author decided whether an article dealt with one or more aspects of online reviews.


In step 3, a different co-author conducted a thematic analysis inspired by Roberts et al. (2012), classifying each relevant article using our conceptual model (see Fig. 1). Articles that could not be classified as addressing any of the relationships in our model (arrows (a) to (d) in Fig. 1) were excluded. For the 312 articles that could be classified, a brief coding using a standardized template was undertaken for the studies reporting direct effects (254 articles), while a comprehensive coding was performed for the articles assessing moderating effects (58 articles).4 The brief coding includes a categorization based on the direct effects (direct outcome effect or direct driver effect) and the key results. The comprehensive coding additionally included, among other things, a classification based on the moderating effects (moderating outcome effect or moderating driver effect), the analyzed design features, information about the independent and dependent variables, the research method, and a characterization of the data.5


To ensure consistent quality of our method, we employed four measures:

• In step 2, we conducted an additional keyword-based search6 for a subsample of three non-IS journals to ensure that no relevant articles were inadvertently overlooked. This robustness check yielded no additional articles, which is reassuring.


• In step 2, interrater coding between two co-authors and between graduate student assistants and co-authors was performed on a subsample of nine journals (Cohen's Kappa and Krippendorff's Alpha between 0.73 and 1).


• Prior to executing step 3, the co-authors used twelve sample articles identified as relevant in step 2 to develop a shared understanding of the inclusion and classification criteria in step 3, as well as the subsequent coding procedure.


• After the coding of all articles, those categorized as design-related were discussed in depth by the group of co-authors to ensure accurate categorization and coding for the 58 articles at the heart of our study. On the rare occasions when the co-authors disagreed, they discussed their positions to reach a consensus on inclusion or exclusion, as well as coding (Paré et al., 2015).
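For illustration, Cohen's Kappa, the chance-corrected interrater agreement statistic used in the quality measures above, can be sketched as follows. The include/exclude labels are invented for the example and do not reflect our actual coding data:

```python
# Hypothetical illustration: Cohen's Kappa for two coders' include/exclude
# decisions on the same set of articles (labels invented for the example).
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement under independence: product of the raters' marginals.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / n ** 2
    return (observed - expected) / (1 - expected)

a = ["include", "include", "exclude", "include", "exclude", "include"]
b = ["include", "exclude", "exclude", "include", "exclude", "include"]
print(round(cohens_kappa(a, b), 2))
```

Values near 1 indicate agreement well beyond chance; the 0.73–1 range reported above is conventionally read as substantial to perfect agreement.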



Synthesis of research results and research gaps

To familiarize the reader with the terminology used in the following discussion, we list the design feature categories synthesized from the relevant literature (see Section 'Literature search methodology') and provide a description of each in Table 1.


Table 1. Design feature categories.

Review templates: Provide reviewers with templates and guidelines that help them improve their reviews.

Review presentation: Influence the order in which reviews are displayed; offer ranking and filtering options for review readers; adapt the appearance of individual reviews (e.g., adding information about the reviewer).

Adapted metrics: Adapt existing metrics such as valence, volume, or variance.

Rating dimensions: Adapt the representation of a review's numerical rating (e.g., multi-dimensional vs. single-dimensional, binary vs. 1–5 rating scale).

Management responses: Allow sellers to publicly respond to online reviews.

Review elicitation: Request reviews from previous customers and offer non-monetary or monetary incentives for submitting reviews.

Reviewer reputation: Introduce concepts that describe a reviewer's standing within the reviewer community (e.g., rankings of reviewers, friendships between reviewers).

Mutual reviewing: Allow both buyers and sellers to review each other and adapt the mutual reviewing process (e.g., double-blind reviewing).

Recommender systems: Introduce systems that provide customers with product or service recommendations, and adapt recommender systems based on their interaction with online reviews.

Untrustworthy behavior: Detect and mitigate fraudulent reviewer and seller behavior (e.g., fake-review filters, verified-review mechanisms, measures to penalize dishonest sellers).

This section is organized by distinguishing between articles addressing outcome effects and those addressing driver effects. In each subsection, we begin by briefly summarizing the direct effects before focusing on the moderating effects of design features, with references (in square brackets) to the design feature categories listed in Table 1. The section concludes with a discussion of the identified research gaps.


Outcome effects

Consumer level

On the consumer level, studies have investigated the direct outcome effects of the informational value of online reviews on consumer outcomes such as learning (Dellarocas, 2003, Hu et al., 2017a, Wu et al., 2015, Koh et al., 2010), consumer satisfaction (Benlian et al., 2012), and product attitude (Ein-Gar et al., 2012). For example, if platforms offer online reviews or recommendations, consumer satisfaction (and associated variables such as perceived ease of use) is higher because the platform supports the consumer's search process (Benlian et al., 2012).


Moderating outcome effects

First, the review system design may support a consumer's learning process by displaying reviews with the type of information that is most relevant to each stage of the purchase process (Li et al., 2017, Huang et al., 2014) [Review Presentation]. Allowing videos in review texts may improve consumer learning beyond simple photos or text-only reviews, because consumers perceive such reviews as more trustworthy and persuasive (Xu et al., 2015).


Second, the system design may help increase perceived ease of use and satisfaction with the e-commerce platform. This involves tailoring the rating dimensions to the characteristics of the traded goods (Fang et al., 2014) [Rating Dimensions] or employing automated recommendations (Benlian et al., 2012, Hostler et al., 2011) [Recommender Systems].


Third, presenting consumers with a list of both positive and negative reviews may improve their attitude toward a product (Ein-Gar et al., 2012) [Review Presentation]. Similarly, presenting consumers with highly disaggregated online ratings (i.e., without aggregate metrics such as the average rating or variance) benefits sellers of products with unfavorable outlier reviews (Camilleri, 2017) [Adapted Metrics].


Firm level

A number of studies have explored the direct effect of online reviews on sales. Evidence from observational data and field studies supports the notion that metrics such as the helpfulness, valence, or volume of online reviews can increase sales (e.g., Chevalier and Mayzlin, 2006, Duan et al., 2008, Forman et al., 2008). According to several studies, adopting product/service recommendations boosts revenue by easing consumer search (Cheung et al., 2003). Scholars have also investigated the direct outcome effect on price (e.g., Ba and Pavlou, 2002; Pavlou and Dimoka, 2006), finding a positive association between the valence of a seller's online reviews and her pricing power.


Moderating outcome effects

In terms of the valence of online reviews, system designers may change the representation of the rating by adjusting the scale and dimensionality of ratings [Rating Dimensions]. According to one study, adopting a low rating scale cardinality for niche products and a high rating scale cardinality for mainstream products increases sales (Jiang and Guo, 2015).


The effect of review valence may also be moderated by the positioning of reviews and ratings [Review Presentation]. Displaying average ratings in the product list may keep buyers from finding niche products that genuinely meet their needs while increasing sales of mainstream products (Li, 2017). When the review system permits sellers to embed their online reviews in their product description, they may use this feature to increase their sales (Wang et al., 2016).


The design of a review system may also moderate the effect of recommendations on sales [Recommender Systems]. Empirical research suggests that when recommendations for competing products are displayed, the rating valence of the latter reduces sales of the focal product (Jabr and Zheng, 2014). Balancing the individual relevance of and profit obtained from recommendations has a favorable effect on a firm's profit without adversely affecting consumer trust (Panniello et al., 2016). Review texts (Ghose et al., 2012), review sequences (Piramuthu et al., 2012), and missing ratings (Ying et al., 2006) may all be utilized to improve the effectiveness of recommender systems.


Market level

A key condition of market efficiency is that all market participants transact honestly, so that participants may judge the quality of offerings based on the valence and volume of reviews. However, owing to the anonymity of buyers and sellers in electronic markets, the risk of moral hazard is considerable, particularly for sellers (Dellarocas, 2003).


Moderating outcome effects

According to one study, review mechanisms increase the performance of consumer-to-consumer (C2C) auction markets, and penalizing dishonest market participants is more effective than rewarding honest ones (Yang et al., 2007) [Untrustworthy Behavior]. Design features such as the granularity of feedback [Rating Dimensions], the format of the reputation profile, or the policy on missing feedback [Review Presentation] may effectively reduce moral hazard and, as a result, increase market efficiency (Dellarocas, 2005).


In terms of the effect of review valence on market efficiency, research has shown that updating a seller's online review profile only after every k transactions, rather than after every transaction, may increase market efficiency (Dellarocas, 2006). Flexible time windows, depending on the seller type, may increase market efficiency even further (Aperjis and Johari, 2010) [Adapted Metrics]. Furthermore, in a market where sellers offer multiple products, the valence is closer to the true quality if, instead of a single rating score for the seller, there is one score for each product offered (Samak, 2013) [Adapted Metrics]. Reciprocity between buyers and sellers, who review each other after a transaction, may contribute to an upward bias in online ratings [Mutual Reviewing]. If all ratings are high regardless of the underlying transaction, online ratings are "inflated" and thus fail to distinguish between "good" and "bad" trading partners (Bolton et al., 2013). Thus, to reduce inflation and improve market efficiency, the design of the review system should account for reciprocity in C2C markets, for example by only allowing blind, one-sided, or anonymous reviews (Bolton et al., 2013).
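The idea of refreshing a seller's displayed profile only after every k transactions can be sketched as simple batching. This is an illustrative simplification with invented data, not the exact mechanism analyzed by Dellarocas (2006):

```python
# Illustrative sketch: a seller's displayed valence is refreshed only after
# every k new ratings arrive, so individual transactions cannot be linked
# to individual ratings. (Simplified batching; invented example data.)
from statistics import mean

def displayed_valence(ratings, k):
    """Average over the most recent *complete* block of k ratings."""
    complete = (len(ratings) // k) * k   # number of ratings published so far
    if complete == 0:
        return None                      # profile not yet published
    return mean(ratings[complete - k:complete])

history = [5, 5, 4, 1, 5, 3, 4]          # ratings in order of arrival
print(displayed_valence(history, k=3))   # only the block [1, 5, 3] is shown
```

Because a buyer cannot tell which transaction produced which rating, batching of this kind weakens a seller's incentive to behave well only right before a review is posted.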


Studies have also provided empirical evidence that sellers (Li and Xiao, 2014) and platform owners (Avery et al., 1999) may increase the number of reviews by offering rebates to customers who contribute an online rating [Review Elicitation]. Even if the feedback gathered in this manner is mostly positive, it has the potential to increase market efficiency, since customers are more inclined to provide an online review (Li and Xiao, 2014).


Driver effects

Naturally, design decisions are a key strategic instrument for obtaining desirable economic outcomes. Such outcomes typically stem from positive reviews (e.g., Chevalier and Mayzlin, 2006), a large number of reviews (e.g., Duan et al., 2008), or a large number of helpful reviews (e.g., Forman et al., 2008). It is therefore critical to examine research on design features that moderate the direct driver effects.


Helpfulness

First, trust in the reviewer has a direct effect on perceived helpfulness. Reviews are not perceived as helpful if their author is not perceived as credible (Schlosser, 2011, Chen and Lurie, 2013). Second, there is a link between trust in the review and perceived helpfulness. Trust in the review typically depends on the information provided in the review text and the rating. For example, if the rating deviates significantly from the average rating, the review is judged less credible and hence less helpful (Yin et al., 2016). Third, the reviewer's motivation for writing the review influences how helpful the review will be. If reviewers are sufficiently motivated to devote time and effort to writing a review, the information they convey may be more helpful (Korfiatis et al., 2012).


Moderating driver effects

System designers may help reviewers improve their credibility, and hence boost the perceived helpfulness of their reviews, by allowing them to include identity-descriptive information (e.g., name, geographic location, or profile image) in their reviews (Forman et al., 2008, Karimi and Wang, 2017) [Review Presentation]. Another way of boosting the perceived helpfulness of reviews is to issue badges or certifications to credible reviewers (Kuan et al., 2015, Chang and Wu, 2014, Chang et al., 2013) [Reviewer Reputation].


Systems may also assist reviewers in creating trustworthy reviews [Review Presentation]. When a system enables reviewers to include photos or videos in their reviews, perceived helpfulness rises depending on the product type (Xu et al., 2015). To boost trust in reviews, systems may offer user-controllable filters to select reviews based on various criteria (e.g., filtering reviews on TripAdvisor by travel season) (Hu et al., 2017b) [Review Presentation].


The effect of reviewer motivation on helpfulness may be negatively moderated by a design feature that asks customers to become reviewers [Review Elicitation]. It has been found that asking previous customers by email to submit a review results in reviews that are perceived as less helpful than reviews posted spontaneously by consumers (Askalidis et al., 2017).


Volume

Several studies indicate that reviewers are motivated to submit a review for a variety of reasons (e.g., Hennig-Thurau et al., 2004). For example, customers post reviews to help the firm or to retaliate after a bad experience. Writing a review, however, costs time and effort. When a consumer's motivation does not outweigh the cost of submitting a review, she abstains from posting one, resulting in an underreporting bias (Hu et al., 2017a).


Moderating driver effects

Design features may moderate the direct effect of motivation on review volume by introducing external non-monetary or monetary incentives [Review Elicitation]. Review system designers may use social comparisons [Review Elicitation] (Chen et al., 2010) or management responses [Management Responses] (Proserpio and Zervas, 2015) as features to positively moderate a reviewer's motivation and boost review volume. In terms of monetary incentives, allowing sellers to offer a rebate to customers may encourage them to submit a review (Li and Xiao, 2014, Chen et al., 2017).


The desire to achieve reputation moderates the link between a reviewer's motivation and the number of reviews [Reviewer Reputation]. If the review system is designed to allow for follower or friendship ties, reviewers with a large number of relationships will post more reviews than those with few (Sun et al., 2017, Goes et al., 2014). As a result, publicly accessible reputation information motivates users to provide more reviews. However, such reviewer reputation mechanisms may have drawbacks, since highly reputed reviewers tend to avoid reviewing popular products (Shen et al., 2015).


Finally, double-blind mutual reviewing, which is standard practice on two-sided platforms like Airbnb, may reduce the propensity to post a review (Bolton et al., 2013) [Mutual Reviewing].


Valence

Naturally, reviewers give higher ratings if they have a better taste match with the product or service (Sun, 2012). Furthermore, since customers with a greater preference for a product are more likely to purchase and review it, online reviews tend to be more favorable; this is referred to as the preference bias (Li and Hitt, 2008, Hu et al., 2017a). Owing to the social influence bias, reviewers adjust their own judgment of a product or service after viewing existing reviews (Muchnik et al., 2013). When previous reviews are positive, reviewers tend to adjust their planned rating upward. If the reviews are negative but the reviewer's personal experience is positive, she may seek to counterbalance the existing poor ratings by providing an even better rating. Because single-dimensional rating systems lack a "value for money" dimension, the price bias causes ratings to fall as the price of the product rises (Li and Hitt, 2010).


Another factor influencing review valence is attention. When reviewers anticipate attention from other customers or the seller, they strive to be more neutral in their texts and ratings (Shen et al., 2015, Proserpio and Zervas, 2015). Reviewers will also adjust their rating behavior if their rating has the potential to affect their personal reputation. When submitting a review on a mutual reviewing platform (e.g., Airbnb), both parties consider possible negative consequences for their own reputation (Dellarocas and Wood, 2008). Some reviewers may attempt to lower a business's review score by posting fake negative reviews or to raise it by posting fake positive reviews (e.g., Mayzlin et al., 2014). In such cases, the valence of online reviews is plainly driven by dishonest motives.


Moderating driver effects

Compared to single-dimensional rating systems, multi-dimensional rating systems make it easier for customers to identify products or services that match their tastes. This design feature, in turn, positively moderates the taste match effect (Chen et al., 2018) [Rating Dimensions]. To lessen the effect of previous ratings and address the social influence bias, designers may construct their system such that it contacts customers asking them to post reviews on a page where they are not exposed to existing reviews (Askalidis et al., 2017) [Review Elicitation]. Allowing friendships among reviewers in the system may also lessen the intensity of the social influence bias [Reviewer Reputation]. Allowing sellers to offer rebates to buyers interacts with the price bias, making reviews more favorable (Li and Xiao, 2014) [Review Elicitation]. Using a multi-dimensional rating system mitigates this bias (Li and Hitt, 2010) [Rating Dimensions].


Introducing a reviewer reputation system fosters more differentiated reviews, since reviewers become aware of the attention they receive (e.g., Shen et al., 2015) [Reviewer Reputation]. Furthermore, reviews may attract the attention of sellers. Enabling management responses results in more positive reviews, since sellers who respond to unfavorable reviews earn more positive ratings in the future (Proserpio and Zervas, 2015; van Noort and Willemsen, 2012) [Management Responses]. If, on the other hand, sellers respond to only a few negative reviews, buyers who do not receive a response become less satisfied in the future and give lower ratings (Gu and Ye, 2014).


Systems that enable mutual reviewing include design features that moderate the direct driver effect of a reviewer's personal reputation [Mutual Reviewing]. Buyers and sellers both abstain from leaving unfavorable reviews for fear of retaliation from the other side (Bolton et al., 2013). Similarly, restricting the seller's ability to retaliate against unfavorable buyer ratings increases the number of negative reviews (Ye et al., 2014). This shift to a one-sided review system prevents low-quality sellers from masquerading as high-quality sellers and motivates them to improve the quality of their products (Ye et al., 2014). Furthermore, a design feature that allows a negative review to be retracted after a dispute settlement encourages both sides to initially post more unfavorable reviews in order to improve their negotiating position (Bolton et al., 2018). Finally, sellers often exit and re-enter the system in order to shed their unfavorable reputation; assigning a minimal amount of reputation to new sellers is one potential remedy (Zacharia et al., 2000) [Adapted Metrics].
