1 Introduction
Language plays a central role in persuasion. Small differences at phonological, lexical, syntactic, and discourse levels have been shown to impact the outcome of persuasive communication (Hosman, 2012; Niebuhr & Barbosa, this volume). To date, a large body of research has documented the effects of linguistic choices on how persuasive messages are understood, processed, and ultimately complied with (cf., among others, Xiao et al., this volume; Langedijk & Fischer, this volume). However, within this line of work, language has been reduced to a set of inputs for individual cognitive processes. This conception of language is at odds with insights from studies of naturally occurring language-in-use. Conversation analysis (CA) and discursive psychology (DP) have shown that, in interaction, individuals treat language as the medium through which social actions are accomplished (te Molder & Potter, 2005). Moreover, the sequential properties of language-in-use, such as rendering particular responsive actions conditionally relevant or preferred/dispreferred, could be employed as means towards persuasive outcomes (Humă, 2023).
This study examines persuasion within a corpus of naturally occurring business-to-business ‘cold’ sales calls. An initial scrutiny of the data led to the identification of two frequently employed formats for designing sales requests: when-formulated requests such as ‘wha- when would be a good time to cal- come down to see yourselves about this.’ and if-formulated requests such as ‘#U::h I >w’s jus’ really seein’< if we could come down and have a chat to you about Yeltel.’ These formats appeared to differ in their effectiveness in eliciting productive responses. Using CA as the method best equipped for the examination of naturally occurring data, the current study aims to identify the mechanisms that underpin the differential outcomes of when-formulated and if-formulated requests. In answering this question, the current chapter makes both an empirical contribution to the body of knowledge on preference organisation and turn design in persuasion and a theoretical contribution to our understanding of how language-in-use features in persuasive communication.
The rest of the chapter is organised as follows: first, I review the conversation analytic literature on preference organisation within adjacency pairs in order to highlight that and how preference organisational resources could be implicated in persuasion. Then, after presenting the study’s data and method, I present the empirical evidence that illuminates the preference organisational features of when-formulated and if-formulated sales requests that render them more or less effective in eliciting productive responses. The chapter concludes with a discussion of the study’s main findings and implications thereof, as well as a review of possible limitations and directions for future research.
2 Preference Organisation as an Infrastructure for Persuasion
Preference organisation within adjacency pairs represents a set of normative principles that explain the differential relationship between first-pair parts (FPPs) and alternative types or formats of second-pair parts (SPPs) (Pomerantz & Heritage, 2013). To illustrate this, a first-pair part, such as an offer, ‘prefers’ – that is, invites or promotes – the production of a second-pair part, such as an acceptance, that aligns with the initiating action by carrying out its project, as opposed to a refusal, which frustrates the accomplishment of the course of action initiated by the first-pair part (Schegloff, 2007). For extensive overviews of preference organisation within adjacency pairs, see Schegloff (2007) and Clayman (2002).
Turns that implement preferred and dispreferred SPPs come in different packages. Specifically, dispreferred SPPs feature one or more of the following design features (Clayman, 2002; Schegloff, 2007): (1) they are elaborated, usually through accounts, excuses, or disclaimers; (2) they are mitigated, through phrasing that minimises or even omits antagonistic actions; and (3) they are delayed, via inter-turn gaps (but see Kendrick & Torreira, 2015, for further specifications) or via preparatory actions such as inbreaths or delay tokens. These practices for marking dispreferred SPPs mitigate their disaffiliative character (Clayman, 2002; Heritage, 1984b) and can promote the production of aligned responses, for example by providing the space for the FPP speaker to redeploy the initiating action with reversed preference (Schegloff, 2007).
Not all dispreferred responsive actions are designed with the features of a dispreferred turn format described above. Conversely, preferred responsive actions can sometimes be packaged in a dispreferred format, for example by being delayed or mitigated. The former have been termed ‘flat rejections’ and the latter ‘qualified acceptances’ (Kendrick & Torreira, 2015, p. 273). A delayed and mitigated acceptance can convey possible reluctance (Bilmes, 2014) or limited commitment to the ongoing course of action (Kendrick & Torreira, 2015). By contrast, delayed but unqualified preferred responses produced in the aftermath of resistance enable respondents to claim having independently changed their position (Stokoe et al., 2020). Thus, it is not only the type of responsive action, but also its design that should be taken into consideration when studying persuasion-in-interaction. I will come back to this claim in the analysis of the last extract presented in this chapter and in the discussion.
Returning to FPPs, there is compelling evidence that their design is implicated in preference organisation in a way that bears direct relevance for persuasion. Specifically, initiating actions can be designed to intensify or weaken the preference for an aligned response. Conversation analysts have documented a range of practices and resources for modifying the strength with which an initiating action invites a preferred responsive action. One such resource is epistemic stance. Questions designed to embody a knowing epistemic stance may be better equipped to elicit agreement than questions designed with an agnostic epistemic stance. For example, compared to positively formatted interrogatives, negative interrogatives are usually treated as strongly preferring agreement on the basis of their conduciveness; that is, their presumed embodiment of the speaker’s predisposition for an agreeing response (Clayman & Heritage, 2002; Heritage, 2002; Robinson, 2020a). Similarly, prefacing or following a question with a statement in favour of a particular response can exert pressure on the respondent to produce the invited response (Clayman & Heritage, 2002).
So far, preference organisation has been discussed with respect to the responsive action invited by the initiating action and the features of the FPP that weigh in on this. Parallel to this action preference dimension, there is also a design preference dimension. Specifically, particular FPPs prefer SPPs delivered in particular formats. The most prominent example is that of yes/no interrogatives that prefer type-conforming responses (i.e., featuring ‘yes’ or ‘no’ tokens) over nonconforming responses (Raymond, 2003). A similar conjecture has been made about wh-questions (Hayashi & Kushida, 2013; Schegloff, 2007), but empirical studies conducted so far have not yet provided sufficient evidence in support of it (Thompson et al., 2015).
Response design preferences have already been shown to have consequences for individuals’ conduct in commercial encounters. Llewellyn (2015, p. 497) found that asking patrons of a museum to buy a slightly more expensive gift aid ticket using an alternative interrogative such as ‘Would’ya like to gift a:id those today or just pay the standa:rd.’ was less effective than using a yes/no interrogative such as ‘D’you want to add gift aid toda:y?’ While the former provided patrons with the option of choosing to buy the standard ticket, the latter did not offer such a choice and required patrons to first refuse the more expensive ticket. Customers can also take advantage of the constraints on response design mobilised by various initiating actions. Examining requests for service at a copy shop, Vinkhuyzen and Szymanski (2005, p. 96) noticed that customers’ use of self-oriented declarative requests such as ‘I wanna get uhm some quality copies of this’ allowed shop assistants to swiftly direct customers to the do-it-yourself machines. By contrast, when customers used other-oriented interrogatives such as ‘can you (.) can you ta:: (.) print up (0.2) on this paper five copies of each of these for me’, shop assistants were constrained by the response options built into the yes/no interrogatives, which left them with little room for manoeuvre and required them to produce elaborated dispreferred responses through which to introduce the do-it-yourself option.
Lastly, there is evidence to suggest that some initiating actions can mobilise combined action and turn design preferences. Specifically, remote requests have been documented to prefer acceptances embodied by a response token accompanied by a display of commitment (Lindström, 2017). More recently, a study into the preference organisation of unpolarised positively formatted polar interrogatives used to implement simple questioning found that they prefer unconditionally formatted affirmative and disaffirmative responses over conditional responses (Robinson, 2020b). These studies suggest that both action and design dimensions of a responsive turn are implicated in the production of preferred and dispreferred SPPs.
Having reviewed extant conversation analytic research on preference organisation within adjacency pairs, this section has highlighted (1) how FPPs can be designed to intensify the pressure for a preferred SPP and (2) how, through the design of the FPP and its preference for a particular response format, respondents can be provided with more or less space to extricate themselves from the constraints of producing an aligning response. It has thus hopefully become apparent that, and how, preference organisation can lend itself as one of the interactional structures that underpin persuasion. As a conversational activity, persuasion is successful if the persuaders’ FPPs manage to elicit preferred SPPs from the persuadees. As such, persuaders need to design their requests, offers, or invitations in ways that more readily encourage their acceptance and/or restrict their rejection (cf. Sorlin, this volume). These insights will be applied to the analysis of when-formulated and if-formulated sales requests, after the study’s data and method have been introduced in the next section.
3 Data and Method
The corpus examined in this study comprises 159 business-to-business ‘cold’ (unsolicited) calls recorded between 2014 and 2016 in four British companies. Ethical approval for collecting and analysing these recordings was given by Loughborough University’s Ethics Approval (Human Participants) Sub-Committee.
All calls were initiated by salespeople who were contacting prospective customers (‘prospects’ in sales parlance) on behalf of companies selling multifunctional printers, telecommunication systems, or advertising services. As ‘cold’ calls, these conversations were mainly geared towards gathering information about the prospects’ companies and setting up appointments for subsequent sales visits (Humă, 2018; Humă & Stokoe, 2020). Even though the calls were treated by all parties as sales-oriented, the eventual sale would typically occur after several rounds of discussions and negotiations.
To assemble the collection (Hoey & Kendrick, 2018), I first scrutinised the corpus (by listening to the audio recordings while reading the transcripts thereof) for all instances of sales-related if-formulated and when-formulated requests. By that, I mean requests implementing courses of action that were implicated in the progress of the conversation towards an eventual sale. The two request configurations were chosen because of their comparability and because of the relatively high frequency with which they appeared in the corpus, compared to other configurations.
Fifty-four candidate cases were identified: seven when-formulated and 47 if-formulated requests. An eligibility check led to the removal of 12 if-formulated requests that initiated presequences (e.g., ‘↑I was w↑onderin’ if you could help me.’) because an equivalent when-format was not available for this action in this slot. The final collection, comprising 42 cases, consists of the following five actions, each exemplified below by either an if-formulated or a when-formulated request:
Information requests: ‘Do you happen to know when these are due to expire and when you’re .h uhm >next reviewing<¿’
Meeting requests (whereby the salesperson asks the prospect for a meeting): ‘#U::h I >w’s jus’ really seein’< if we could come down and have a chat to you about Yeltel.’
Scheduling requests (whereby the salesperson asks for or proposes a particular timeframe for the meeting): ‘U:h just (a) wonderin’ if u::h you’re available sometime (.) °December or January time°?’
Future contact requests (whereby the salesperson asks for or proposes a non-specific future contact opportunity such as a call, an email exchange, or a meeting): ‘So when do you want me to recontact you Tom.’
Switchboard requests: ‘Wonder if I could .hhh ↑just have a brief m- u:hm moment of the: (.) the business manager’s time there?’
The data were transcribed using the Jefferson (2004) system developed for conversation analysis in order to capture the minutiae of speech production such as silences, overlaps, intonation, or laughter as features that participants treat as meaningful resources for building social actions. The data were analysed using conversation analysis, with a focus on preference organisation, turn design, and action formation (Schegloff, 2007). The analytic process consisted of multiple rounds in which each case was scrutinised individually and then emergent observations were compared across the entire collection. The cases selected to be included in this chapter represent the clearest and most concise examples of the identified patterns.
4 Analysis
This section will show that, compared to if-formulated sales requests, when-formulated sales requests appear to be more effective in eliciting a productive response (Stokoe et al., 2020). Productive responses not only align with the initiating action, but also move the conversation forward towards a successful completion of the overarching activity (Robinson, 2013) that the interlocutors are engaged in, for example setting up a sales meeting or extracting useful information about the prospect’s company (see Humă & Stokoe, 2020, for an overview of the different activities comprised in cold calls). I start by examining when-formulated requests.
4.1 When-formulated Sales Requests
Let us start by zooming in on the interactional relevancies mobilised by a when-formulated request. Through its sequential position as a first-pair part and its grammatical design, a when-formulated request projects a response that should contain a time reference, as in the extract below. As such, it restricts not only the range of options for designing a response, but also the actions available for a responsive slot. Consider Extract 3.1, wherein the salesperson’s (S in the transcript) when-formulated request, highlighted in grey, receives a delayed, albeit productive, response from the prospect (P in the transcript).
Extract 3.1 Eplus 44
1  S:  .hh So, (0.2) >↑yeah.=When can I come and ↓see you.<
2      (0.6)
3  P:  U::::HM, (1.0) O:h(hh). (0.2) One second. (3.3) #U::::h
4      M:onday?
In line 1, the salesperson asks the prospect ‘When can I come and ↓see you.’, thereby initiating an appointment scheduling. The responsive turn harbours a prefaced phrasal response (Thompson et al., 2015) consisting of several components: (1) a turn-initial elongated delay token and a 1.0-second gap that could index some trouble in answering, (2) a change of state token (Heritage, 1984a) that potentially marks the question as unexpected, (3) a further indication of delay, and (4) a time reference. The first three components are not produced as the answer that the question projects, but as conveying some local trouble in responding (Bilmes, 2014; Thompson et al., 2015). Instead, it is the time reference ‘M:onday?’ that serves as the recognisable answer to the question. Via this sequence, the salesperson and prospect agree to have a sales meeting the following Monday. For the former, this constitutes a successful call.
When-formulated requests also embed a presupposition that the request can and should be fulfilled. In that sense, they appear to be presumptive sales requests. They are designed to index the speaker’s high entitlement to make the request and, on their own, do not index any contingencies associated with its fulfilment (Curl & Drew, 2008). Often the embedded presupposition can be traced back to a moment in the interaction that provides at least minimal support for it. For example, in the following extract, in lines 1–7, the prospect announces that the company has tentative plans to look for a new telecommunication services provider. In lines 10–11, the salesperson produces a when-formulated request to contact the prospect about this matter.
Extract 3.2 Eplus 9
1  P:  S[o we w]ill be: again be lookin’ round about=
2  S:  [Oka:y.]
3  P:  =I imagine .hhhh (0.3) u::::h June whatever to to: uh well
4      an’ we won’t be looking openly again at various different
5      solutions whether it be [ th]e Ay Pee telephony =
6  S:  [Okay.]
7  P:  = or whether it’s=
8  S:  =Yeah yeah.=Absol[u:tely. Well we’re still g]onna=
9  P:  [( )]
10 S:  =we’ll still be here anyway. So when do you want me to
11     recontact you Tom.
12     (0.9)
13 P:  We’ll probably start lookin’ I imagine around June
14     again,=But I would sugge:st you: (0.4) contact me befo:re
15     then.
Lines 1–7 are occupied with the prospect laying out the company’s plans to search for a new provider in terms that construct the search as not yet fully planned out and as narrow in scope. The salesperson treats this description as forecasting an improbable future collaboration. His turn in line 8 cuts off the prospect’s unfinished turn. He receipts the turn with a double ‘yeah’ and a high-grade agreement token ‘Absolu:tely.’, thus treating his interlocutor’s turn as complete (Antaki et al., 2000). Even though the prospect extends his turn, which is unintelligible due to overlap (line 9), the salesperson does not concede the floor. Instead, he reassures the prospect of his company’s continued availability (lines 8 and 10), thereby both aligning with his interlocutor, by acknowledging the tentativeness of the plans to find a new provider, and reaffirming his company’s readiness to do business in the future. In this environment, the salesperson produces the when-formulated request for future contact ‘So when do you want me to recontact you Tom’ (lines 10–11). The ‘so’ preface frames the request as having been pending (Bolden, 2009) and, thus, as drawing its support from prior talk, while the verb format ‘do you want’ ascribes agency for taking a decision about future contact to the prospect (Weatherall, 2020). The latter’s response matches this stance. The prospect’s answer provides a ‘my-side’ perspective, whereby the future contact is timed to fit with the company’s agenda. The time reference, even though less specific than in Extract 3.1, nonetheless enables the salesperson to recontact the prospect at an opportune time.
Not all calls featuring when-formulated requests end up with a positive outcome for the salesperson. In the final extract shown in this section, we will see a prospect resisting the request. In doing that, he has to break with the sequential constraints set up by the question and produce a response that refutes the embedded presupposition that the meeting should take place.
Extract 3.3 Eplus 1
1  S:  But in[ iti ]ally: wha- when would be a good time to=
2  P:  [Yeah.]
3  S:  =cal- come down to see yourselves about this.
4      (.)
5  P:  .h U#:hm I mean I- I’d have to speak to:: u::hm (0.8) it—
6      (.) #uh- there wouldn’t really be much point in just coming
7      down havin’ a meeting with me because I’m sure it would
8      sound impressive but I don’t really have no idea as to
9      whether or not it would be any advantage to us. .Hh uh
10     Either financially or logistically.
In lines 1 and 3 the salesperson asks the prospect to indicate a time that would be convenient for him to receive a sales visit. Through its grammatical design the question projects an answer featuring a time reference. Such an answer would align not only with the format of the question, but also with its presupposition that the meeting should take place. The response produced by the prospect resists these implications. He starts by introducing a potential contingency which could preclude him from giving an answer on the spot. This turn-constructional unit (TCU) is repaired several times and then finally abandoned in favour of a stronger form of resistance: a rejection-implicative account that constructs the meeting as not worthwhile, thus pushing back against the question’s presupposition. The account consists of three elements, bearing a striking similarity to the pattern described by Antaki and Wetherell (1999) in their study of concessions used in argumentative talk to dismiss an opposing point of view: (1) a conclusion, which is delivered first ‘there wouldn’t really be much point in just coming down havin’ a meeting with me’, (2) an interpolated concession ‘because I’m sure it would sound impressive but’, and (3) an explanation ‘because […] I don’t really have no idea as to whether or not it would be any advantage to us. .Hh uh Either financially or logistically’. Note that the concession downplays the argument in favour of having the meeting, while the conclusion and explanation are bolstered through the use of the intensifier ‘really’ (Gray, 2012) and the extreme case formulation ‘have no idea’ (Pomerantz, 1986).
This extract shows that resisting the agenda of when-formulated requests requires extensive interactional manoeuvring on the prospects’ side. In effect, the onus is on them to find a suitable way to decline the request and this entails excavating and refuting the presuppositions embedded in the initiating action. In other words, simply saying ‘no’ is not an option in the sequential environment constructed by a when-formulated request. By contrast, as we will see in the next section, saying ‘no’ to an if-formulated request is entirely possible.
4.2 If-formulated Sales Requests
The first extract examined in this section features an if-formulated request, highlighted in grey, that is declined via a type-conforming (Raymond, 2003) no-prefaced response.
Extract 3.4 Eplus 13
1  S:  Jolly good.=Uhm we spoke u- uhm in Ju:ly: abou:t you:r
2      Yeltel system.
3      (.)
4  S:  About a hundred and eighty handsets. .h A:#:nd you asked me
5      to give you a call in January: >t’=see if we could come down
6      and have a chat to you¿<,=.h ‘Cause you’re already- you’re
7      already alternative networks aren’t you.
8      (0.2)
9  P:  That’s right.=Yeah(p).
10     (0.3)
11 S:  Yeah,=I’m just seein’ if I can get Steve Brown to come in
12     and have a chat to you#: uh- the back end of February:: or
13     early Ma:rch?
14     (.)
15 P:  .hh #U::HM mcht No::. That won’t be required at the Moment
16     now. U:hm (0.3) We’re probably gonna stay with thee:# uhm
17     >stay with the alternative< for the moment,
In the preamble that precedes the request (lines 1–7), the salesperson provides grounds for calling the prospect and for the upcoming meeting request. Specifically, he frames both actions as responsive to the prospect’s own request, which grants them legitimacy and also constructs the prospect as possibly interested in his company’s services. Note that the latter does not confirm this framing, which has sequential implications, as misalignment between salesperson and prospect in the presequence has been shown to facilitate the declination of an ensuing meeting request (Humă et al., 2019). The prospect’s ‘That’s right’ is a confirmation only that their current telephony is set up with ‘alternative networks’ (Gardner, 2007).
The ensuing request for a meeting comprises multiple components that will be scrutinised in turn. It starts with a preface ‘I’m just seein’’ that minimises the request. The if-component, which introduces the contingency that conditions the fulfilment of the request, consists of a self-oriented construction ‘if I can’ through which the salesperson seeks permission to arrange his colleague’s sales visit (Maynard & Hollander, 2014). The TCU arrives at a possible transition relevance place after an elongated ‘you#:’ and it is then extended via a free constituent ‘uh- the back end of February:: or early Ma:rch?’ (Couper-Kuhlen & Ono, 2007). Both the elongation and the increment pursue the interlocutor’s missing response (Davidson, 1984).
After some delay, the prospect declines the request with a type-conforming response ‘No::.’ followed by two accounts. The first justifies the refusal of the sales visit by framing it as unnecessary, while the second explains that the company will continue to make use of their current service arrangements. Both accounts are softened via the TCU-final ‘at/for the moment’ (Llewellyn, 2015) that frame the state of affairs reported in the accounts as temporary.
By contrast, prospects who can be seen to have indicated tentative commitment, or readiness to meet, decline if-formulated requests with nonconforming responses. A case in point is featured in Extract 3.5.
Extract 3.5 Tech 19
1  S:  U:hm: so it’s probably not a good time=but I- I just wanted
2      to bear in mind that we- we- we did want to set up an
3      appointment to come and do a cost analysis for you.
4  P:  Oh yes. Ye[ah.
5  S:  [Okay,=So it just means half an hour of your time?
6      .h A:ndu:h >because I think< there’s some cost savings
7      there.=I know you’ve got the two Bolts pri[nters ]= But you =
8  P:  [Yea:h.]
9  S:  =.h spend about fifty pounds a month on (those/them) and
10     you’re not sure on paper, yo- >fifty pound a month< on
11     toners,=So: .h it’s just really to come i:n, .h See what
12     you’ve got,=Wha- which devices, .h (0.4) How much per- it
13     works out cost per copy and then take all those details away
14     and put them in a cost analysis back to you:. .hh So: I mean
15     we can set=up an appointment for Ma:rch if- if y- if you- if
16     you: if it’s u:hm .h that’s bett[er for you.]
17 P:  [↑U : : h m ] at the mo:ment
18     (0.5) because I’m: (0.4) ↑u:hm not currently doing my role,
19     (0.3) my normal role. .h ↑U:hm (0.2) pkt (1.3) it’s probably
20     best just to give me a call back?
Through the preamble (lines 1–14) that precedes the if-formulated request in lines 14–16, the salesperson accounts for the upcoming meeting request. Her turn in lines 1–3 consists of a disclaimer that acknowledges that the sales visit may be untimely, while also dismissing this issue, followed by a report of back-dated plans to have the appointment. The prospect displays her recollection of these plans (Shaw & Kitzinger, 2007), thus aligning with the salesperson’s course of action. Next, the salesperson describes the meeting by highlighting its minimal imposition on the prospect’s schedule and its potential benefits in terms of saving money (lines 5–7). She then goes on to provide an overview of the process behind the cost analysis that will be provided to the prospect after the meeting (lines 10–14), after which she initiates the request for a meeting.
Through the turn-initial ‘so’, the if-formulated request is marked as resulting from the preceding account (Schiffrin, 1987). It takes the form of a contingent request (Taleghani-Nikazm, 2005) and consists of two components: the consequence – setting up an appointment in March – and its condition of fulfilment – the opportuneness of the appointment for the prospect. The design of the latter component is worth further scrutiny. The if-clause in a contingent request acknowledges potential hindrances that may preclude its fulfilment and that have been already introduced in prior talk (Taleghani-Nikazm, 2005). However, through its positively valenced design ‘if it’s u:hm .h that’s better for you’, the proposal to meet in March is framed as having already addressed the issue that the meeting may not have been opportune for the prospect at a different time.
The prospect’s reply consists of an account and a counter-request that together accomplish the refusal of the meeting. The account invokes the prospect’s current circumstances to suggest that the meeting is not timely – a contingency that the salesperson had already oriented to in line 1. The counter-request proposes an alternative trajectory that essentially slows down the progress of a potential sale, by replacing the proposed face-to-face meeting with a telephone call. At this point, the meeting is not simply delayed, but it is also not guaranteed that it will eventually take place.
So far it has been shown that, through their grammatical design, if-formulated requests can receive both type-conforming and nonconforming dispreferred responses, thus providing prospects with more opportunities to refuse them. What we will see next is that, even when a prospect accepts an if-formulated request, they can format their response in a way that forestalls the progress of the sale.
Extract 3.6 Eplus 44
1  S:  .hhhhhh And it can save you on postage and all sorts of
2      thi:ngs.=You can scan documents over, .hhHHh #U::HM (0.9)
3      >especially< to solicitors And your own clients. .hhhhh=
4  P:  =Uyeah.
5      (0.6)
6  S:  >I wonder if I can come and see you.<
7      (0.7)
8  P:  U::::::hm, (0.6) Uy:eah=you could do:
9      (0.3)
10 P:  U:::::hm uh- at the moment I think (0.6) uh o- u- I do spend
11     a bit on my toners and that (.) but I think #I: refill ‘em:
Prior to line 1, the salesperson had been extolling the benefits of a multifunctional printer for the prospect’s business, which we also see in lines 1–3. The prospect receipts that presentation with a noncommittal ‘Uyeah.’, after which the salesperson launches the if-formulated meeting request ‘>I wonder if I can come and see you.<’. The present tense used in the ‘I wonder if’ preface frames the low-entitlement request (Curl & Drew, Reference Curl and Drew2008) as touched off by prior talk rather than as the reason for calling (Fox & Heinemann, Reference Fox and Heinemann2017), while the ‘If I can’ construction orients to contingencies associated with having the prospect’s permission to visit him (Maynard & Hollander, Reference Maynard and Hollander2014).
The prospect’s response is packaged in a dispreferred turn format, indicative of what Kendrick and Torreira (Reference Kendrick and Torreira2015, p. 273) call a ‘qualified acceptance’. It starts, after a 0.7-second gap, with an elongated delay token followed by another 0.6-second pause and a hesitant ‘Uy:eah’ that receipts the request. The second TCU ‘you could do:’ conveys that a sales visit would be possible, but does not actually grant permission for it. Thus, the prospect takes a stance of low interest and commitment towards the meeting and, implicitly, towards the possible sale it entails.
After such a designedly indifferent/disinterested response, it is hard to imagine the salesperson being able to progress with scheduling the meeting. Indeed, after a short gap, it is the prospect who grabs the floor. He does not initiate the scheduling himself and instead returns to the matter of printing costs that had been discussed before line 1. While the salesperson eventually manages to secure a meeting with the prospect, this happens 34 lines later and in response to a when-formulated request (see Extract 3.1).
5 Discussion
Drawing on a large corpus of naturally occurring business-to-business ‘cold’ calls, this study investigated the comparative effectiveness of when-formulated and if-formulated sales requests in eliciting responses that advance the commercial activity. The reviewed empirical evidence suggests that the former are more effective than the latter in mobilising a productive response that not only aligns with the ongoing course of action, but that also enables the overarching activity to progress, for example by moving from an agreement to meet to a scheduling of the meeting. The basis for the differential outcomes of these request configurations seems to lie in how they leverage preference organisational principles to restrict the opportunities for producing dispreferred responsive actions.
Across the collection, when-formulated sales requests appear to set up several constraints upon the responsive action. As FPPs of request sequences, they promote acceptance as the preferred responsive action. The preference for prospects to accept what the salesperson is asking for is further intensified through the design of the request. Specifically, most when-formulated sales requests do not index any contingencies that could preclude their fulfilment. Instead, they embed the presumption that the requestable can and should be provided. Furthermore, when-formulated sales requests also restrict the possibilities for formatting the SPP by projecting aligned responses that feature a time reference. Thus, producing a rejection of the request requires prospects to break with both the action preference of the FPP and the projected design of their responsive action. Taken together, these characteristics provide prospects with little room to manoeuvre out of the constraints set up by the request. Simply refusing to comply is not an option for them.
By contrast, if-formulated sales requests put both acceptance and refusal on the table. While, as requests, the action preference of these FPPs promotes acceptance, their design does not reinforce it. If-formulated sales requests index the salesperson’s low entitlement to have the request granted as well as possible contingencies that may hinder the prospect in complying with it. Importantly, the yes/no interrogative format of these requests invites type-conforming ‘yes’/‘no’ responses (Raymond, Reference Raymond2003). As a result, they provide prospects with more flexibility in refusing the request. Thus, saying ‘no’ is an entirely possible, albeit still accountable, response (see Extract 3.4). Interestingly, not all aligned responses to if-formulated requests are also productive ones. It appears that responses which lack a display of commitment or investment on the part of the prospect hamper the progressivity of the sale and can jeopardise the success of the call (Lindström, Reference Lindström, Raymond, Lerner and Heritage2017).
The current study directly builds on prior research by Llewellyn (Reference Llewellyn2015) and Vinkhuyzen and Szymanski (Reference Vinkhuyzen, Szymanski, Richards and Seedhouse2005) on the interactional consequences of different request formats in commercial encounters. It extends this line of work by comparatively examining two new request configurations and their use as vehicles for five different actions: information requests, meeting requests, scheduling requests, future contact requests, and switchboard requests. Thus, the comparative advantage of when-formulated requests over if-formulated requests is evidenced across a range of actions, which increases the robustness of the pattern.
The study contributes to a growing body of work that examines persuasion in naturally occurring service and sales encounters using conversation analysis and discursive psychology (Humă et al., Reference Humă, Stokoe and Sikveland2019, Reference Humă and Stokoe2020; Llewellyn, Reference Llewellyn2015; Mazeland, Reference Mazeland2004; Niemi et al., Reference Niemi, Pullins, Kaski, Lindström, Laury, Peräkylä and Sorjonen2021; Pinch & Clark, Reference Pinch and Clark1986; Sikveland & Stokoe, Reference Sikveland and Stokoe2016). It shows how the infrastructure of preference organisation underpins persuasive conduct in interaction. Specifically, it illuminates how persuasion can be accomplished by designing sales requests that restrict interlocutors’ range of responsive opportunities and thereby more strongly promote aligning responses. While similar observations have already been put forward – for example, Bone (Reference Bone2006) notes that salespeople in the home improvement sector avoid yes/no questions like ‘are you interested in a free quotation’ in favour of questions like ‘are windows/kitchen a priority when in the future you are going to refurbish your house’ – this chapter is the first to provide an empirically grounded explanation for why the latter formulation may be more persuasive.
The strength of the approach taken here lies in investigating language-in-use holistically, as it is employed spontaneously in real-life persuasive interactions (see also Humă, Reference Humă2023). Thus, it minimises both the researcher’s intervention into and preconceptions about the linguistic realisation of persuasion. Linguistic features are examined in the sequential and interactional environments in which they are employed and are analysed on the basis of participants’ orientations to them. These methodological principles are crucial for providing an accurate and empirically grounded view of persuasion-in-interaction. To give just one example, a decontextualised treatment of ostensibly aligned responses to if-formulated requests such as ‘Uy:eah=you could do:’ (Extract 3.6) might have missed that this is in fact an unproductive response. As such, detailed, context-sensitive analyses of persuasion-in-interaction can produce insights that standardising and quantifying approaches (i.e., coding and counting cases) might obscure.
This study is not without limitations. First, all cases originate from one specialised institutional setting – business-to-business ‘cold’ calls – in which caller and call taker align as salesperson and prospective customer in the accomplishment of various sales-oriented activities. While a wider range of actions is considered here than in similar past studies, the applicability of the findings to other institutional settings should be further investigated (Fuchs, this volume). Second, it is worth noting that the collection on which this study is based encompasses a larger proportion of if-formulated than when-formulated requests. This uneven distribution and the natural configuration of the cases prevent direct comparisons across request formats, actions, and their sequential environment. Given that there is compelling CA research documenting the importance of the sequential and interactional environment in the selection of request formats (Curl & Drew, Reference Curl and Drew2008; Fox & Heinemann, Reference Fox and Heinemann2017), subsequent research could further attempt to illuminate the consequences for persuasion of the co-constitutive nature of request formulations and the environments in which they are deployed. Finally, it must also be acknowledged that the analysis presented in this chapter focused solely on the proximal interactional consequences of if-formulated and when-formulated requests. An exploration of the ultimate outcome of the calls was beyond the scope of the study and was considered uninformative for the research question under investigation.
To conclude, this study is one of the few that examines language and persuasion in a ‘natural laboratory’ environment and thus has been able to uncover the crucial role that sequential properties of language-in-use have in persuasion. As such, it calls into question the hitherto exclusive conceptualisation of persuasion as the result of social-cognitive information processing mechanisms. Without doubt, more research is needed that uses naturally occurring data in order to expand our understanding of the interactional mechanisms that underpin persuasion.
1 Introduction
Election campaigns have been subject to considerable scientific scrutiny over time, with scholars exploring the persuasive effects of campaign discourse on different voter communities (cf. Lanning, Reference Lanning2005; Mayhew, Reference Mayhew2008). Similarly, linguists have examined how Aristotle’s three ‘aspects involved in rhetorical persuasion’ (Claridge, Reference Claridge2010, p. 217) can be exploited to sway electorates (cf. Partington & Taylor, Reference Partington and Taylor2017; Reyes, Reference Reyes2011; Sorlin, this volume).
Investigations into the use of stance expressions in campaign discourse are a case in point and have been gaining momentum in recent years (Breeze, Reference Breeze2018, Reference Breeze2020; Cabrejas-Peñuelas & Díez-Prados, 2014). This study seeks to enhance this line of research in three ways: first, it will show how a method previously applied in corpus linguistics (i.e., key semantic domain analysis; Rayson, Reference Rayson2008) can be fruitfully combined with Martin and White’s (Reference Martin and White2005) Appraisal Theory to classify features of evaluation in larger text corpora.Footnote 1 Second, the study will provide systematic insights into the ways negative evaluations are strategically deployed in online election campaigns. Finally, the study reveals how specific (types of) negative evaluations can change during the main phase of a US presidential election.Footnote 2
More precisely, the study tracks negative evaluations in the Twitter profiles of Donald J. Trump and Joe Biden from the start of their respective online election campaigns to Election Day. To reveal how two different political parties in the US (Democrats and Republicans) shape the direction and the persuasive appeal of their respective Twitter election campaigns, a corpus of 5,888 tweets (and retweets), totalling 105,773 words (tokens) and containing 1,667 negative evaluations, was compiled. The corpus was analysed systematically, employing a mixed-method approach including the semi-automatic detection of attitudinal lexis as well as a subsequent manual classification (and interpretation) of types, targets and objects of evaluation in their contexts of use.
2 Attitudinal Evaluation in Political Discourse
This study examines the negative evaluations of political candidates in the US presidential elections in 2020 and the persuasive effects such evaluations are likely to have on US voters. A primary upshot of a systematic analysis of (types of) negative evaluations is that it can expose the cultural (and ideological) beliefs and values to which politicians subscribe and that they wish to evoke in their audience. Kreis (Reference Kreis2017, p. 613), for instance, shows that politicians deploy ‘negative other-presentations’ of political opponents that often coincide with ‘positive self-presentations’. This campaign tactic, which Kreis (Reference Kreis2017, p. 614) calls ‘evaluative scapegoating’, helps politicians construct and negotiate their own public image in political campaigns, especially on social media. To illustrate, evaluations in the tweets of former US president Donald J. Trump have been shown to be based on ideological concepts firmly linked to the ‘alt-right’ movement (Lorenzo-Dus & Nouri, Reference Lorenzo-Dus and Nouri2021, p. 413). Such views include revisionist and polarised descriptions of a nefarious liberal elite that aims to undermine the rights of ‘ordinary’ US citizens – the latter being a fictional group of its own kind (Hoffmann, Reference Hoffmann2018, p. 76). Studying evaluations can be a useful practice to uncover how such beliefs are discursively constructed, reproduced, or indeed, how they make their way into mainstream political communication to begin with.
The study takes Martin and White’s appraisal theory as one of its conceptual cornerstones since the framework helps to capture and classify types of negative evaluations in political tweets.Footnote 3 It is based on the identification of three categorical domains of evaluation (also called systems), i.e., attitude, graduation and engagement. While the latter two systems are devoted to ‘engage with, source and amplify […] various evaluations and ‘positions’ within a text’ (Ross & Caldwell, Reference Ross and Caldwell2020, p. 16), the system of attitude captures the linguistic resources used to ‘express emotions, evaluations, and valuations’. Attitude can arguably be considered as a description of the semantic core of any given act of evaluation, while the other two systems boost, mitigate or enrich its semantic (and pragmatic) effects. Within the system of attitude, three subsystems can be distinguished, namely affect, appreciation and judgment (see Table 4.1).
Table 4.1 Subcategories of attitudinal evaluation (based on Martin & White, Reference Martin and White2005 and Cabrejas-Peñuelas & Díez-Prados, 2014).
| Evaluation (Type) | Evaluation (Subcategory) | Descriptors / Description | Examples |
|---|---|---|---|
| Affect | Un-/Happiness | Misery/Cheer | down, sad, miserable, laugh, rejoice |
| Affect | Un-/Happiness | Antipathy/Affection | dislike, hate, fond, loving |
| Affect | Dis-/Satisfaction | Ennui/Interest | flat, tuned-out, involved, absorbed |
| Affect | Dis-/Satisfaction | Dis-/Pleasure | cross, furious, satisfied, reward |
| Affect | In-/Security | Dis-/Quiet | uneasy, scared, confident, assured |
| Affect | In-/Security | Dis-/Trust | astonished, suspicious, commit, entrust |
| Affect | Dis-/Inclination | | keen, wish, wary, unwanted |
| Judgment | Normality | How special or unusual? | natural, familiar, peculiar, strange, odd |
| Judgment | Capacity | How competent or able? | |
| Judgment | Tenacity | Psychological disposition or determination | brave, reliable, reckless, hasty, tired |
| Judgment | Propriety | Ethical or moral standing | good, fair, generous, immoral, corrupt, arrogant |
| Judgment | Veracity | How truthful, how honest? | credible, sincere, liar, devious |
| Appreciation | Reaction: Impact | How does it strike me? | |
| Appreciation | Reaction: Quality | Do I like it? | |
| Appreciation | Composition: Balance | Does it hang together? | |
| Appreciation | Composition: Complexity | Was it hard or easy to follow? | |
| Appreciation | Social Valuation | Is it useful or worthy? | |
Affect-based resources are often employed to induce emotional responses in recipients, but they may also reflect speakers’ affective stance toward a given state of affairs (see Example 1). In contrast, judgment evaluations refer to moral qualifications of human beings (see Example 2), which are typically realised in a range of subsystems related to different degrees of personal capacity, normality, tenacity, propriety or veracity.Footnote 4 Such value judgments are clearly based on speakers’ individual or social beliefs, norms or expectations.
The third subsystem of appreciation (see Example 3) ‘gives expression to the speaker’s aesthetic views [on non-sentient objects or entities] rather than their emotions or feelings’ (Hoffmann, Reference Hoffmann2018, p. 61).
(1) I love Colorado!
(2) Wheeler is incompetent, much like Sleepy Joe Biden […]
(3) @BillHagertyTN […] is running an excellent campaign […]
The study classifies patterns of attitudinal evaluation since ad hominem evaluations are particularly suited to provide insights into the underlying socio-political ideologies of politicians. Attitudinal evaluations can take the form of inscribed or invoked evaluations (Martin & White, Reference Martin and White2005, p. 67). Inscribed evaluations are ‘explicitly presented by means of a lexical item carrying the [negative or positive] value’ (White, Reference White1998, p. 53). In contrast, invoked evaluations cannot be tied to the evaluative load of a single expression. Instead, they need to be lifted implicitly from the linguistic contexts in which they are embedded. Then, a context-sensitive meaning may suggest itself and ultimately override the (negative or positive) polarity of the single expression under study. The key domain method used in this study primarily targets inscribed evaluations. Invoked evaluations are not explored in any systematic way, which is likely to impact the findings of this study.
Attitudinal evaluations can also take different formal shapes. Formal realisation patterns can range from single expressions, collocations and strings of words to single or multiple adjacent utterances (Hoffmann, Reference Hoffmann2018, p. 62). Since the study focuses on inscribed evaluations exclusively, the investigation delimits the scope of its quantitative analysis to the word classes of nouns, verbs, adjectives and adverbs. This implies that larger syntagms, such as complex phrases or clauses, are not considered as basic units of analysis. The quantitative findings reported in Section 6 are thus based on this delimitation. For the (qualitative) discussion of negative evaluation, some more implicit (or invoked) types of negative evaluation will also be presented.
The scope of the study is constrained since it takes a closer (quantitative and qualitative) look at negative evaluations but excludes positive evaluations. This decision seems justified on three grounds: first, recent election campaigns have been shaped by what Ross and Caldwell (Reference Ross and Caldwell2020, p. 14) have called a trend of ‘going negative’ in political campaigning, stressing the relevance of negative evaluations for the overall campaign strategy. Second, a quantitative investigation of negative evaluations in this study allows for a systematic comparison to earlier studies focusing on negative evaluations in political discourse (Cabrejas-Peñuelas & Díez-Prados, 2014; Hoffmann, Reference Hoffmann2018). Third, the analysis of negative evaluations already provides a rich collection of findings that must be contextualised and discussed within the scope of this chapter. This notwithstanding, future research on politicians’ systematic use of positive evaluations is encouraged since it promises to yield insights that could potentially refute or reinforce the arguments made in this study.
3 Previous Research
Despite a growing body of work on political rhetoric and political discourse analysis, studies that adopt the framework of appraisal theory (Martin & White, Reference Martin and White2005) to explore the evaluative load of political discourse are rather scarce and piecemeal (cf. Angouri & Wodak, Reference Angouri and Wodak2014). Some studies have explored how attitudinal lexis is used in political debates (Cabrejas-Peñuelas & Díez-Prados, 2014), in online campaign tweets (Hoffmann, Reference Hoffmann2018) or the newspaper coverage of mid-term elections (Mayo & Taboada, Reference Mayo and Taboada2017). While the latter uncover the ideological positions adopted by newspaper editors, authors and readers, Cabrejas-Peñuelas and Diez-Prados (Reference Cabrejas-Peñuelas and Diez-Prados2014) track how politicians’ evaluations change strategically as the televised electoral debate moves from one topical section to the next. Hoffmann (Reference Hoffmann2018) shows how the strategic use of negative evaluations over time can spread far-right ideology, based on a recurring set of objects and targets of evaluation. Similarly, Ross and Caldwell show how Trump strategically ‘goes negative’, employing a set of negative evaluations to ‘attack and undermine the character of his political opponent’ (2020, p. 13).
Note that most of these studies focus on attitudinal (judgment) evaluations. Other scholars have explored the use of emotional lexis in political discourse (cf. Wahl-Jorgensen, Reference Wahl-Jorgensen2019). For instance, Breeze (Reference Breeze2018) adopts a corpus-assisted approach to discourse analysis to study how expressions of emotion are used differently in a corpus of press releases by the British far-right UKIP compared to the centre-left Labour party. Breeze shows that the two parties ‘offer different repertoires of affective-discursive practices’ (2018, p. 39). In addition, the study revealed that the UKIP party tends to use more affective expressions in its press releases than Labour, and these affective devices commonly instigate feelings of fear and anger in readers. In contrast, the affective lexis in Labour press releases seemed to privilege emotions of ‘worry’ or ‘concern’ (Breeze, Reference Breeze2018, p. 35f.). In a more recent paper, Breeze (Reference Breeze2020) investigates the use of affective expressions in tweets of the British UKIP politician Nigel Farage. She finds that Farage’s recurring use of emotional expressions is not accidental but designed to ‘bring anger into the public sphere’ (Breeze, Reference Breeze2020, p. 18). These studies provide evidence for the fact that political communication rests on socio-political ideologies that are repeatedly evoked to reach a target votership and persuade readers (or viewers). In addition, they show that, based on the systematic and recurring use of persuasive devices, evaluations and emotions are likely to be employed in a strategic and deliberate manner rather than being incidental.
There is a large body of work on the persuasive use of rhetorical strategies in political discourse. It ranges from studies that explore the persuasive dimension of speech acts (House & Kádár, Reference House and Kádár2021; Stopfner, Reference Stopfner2021), patterns of interaction (Ekström, Reference Ekström2001; Lauerbach, Reference Lauerbach, Cap and Okulska2013; Loeb, Reference Loeb2017; Montgomery, Reference Montgomery, Ekström and Patrona2011), pragmatic inferences (Fetzer & Bull, Reference Fetzer and Bull2012), discourse markers (Fetzer, Reference Fetzer2014) and transitivity patterns (Hart, Reference Hart2013; Zhang, Reference Zhang2017), to the representation of social actors (Lundell, Reference Lundell2010; Page, Reference Page2003), the strategic use of personal pronouns (Bello, Reference Bello2013; Coesemans & de Cock, Reference Coesemans and De Cock2017) and conceptual metaphors (Charteris-Black, Reference Charteris-Black2018, Reference Charteris-Black2019; Lakoff, Reference Lakoff2010; Lakoff & Wehling, Reference Lakoff and Wehling2016). While these features can all trigger decisive persuasive effects in political discourse, their analysis is not part and parcel of this chapter; I will thus forgo an extensive discussion of their individual research goals and findings in this section. Instead, I shall now take a closer look at some linguistic work that has explored the linguistic (and persuasive) styles of US American presidents, including features of evaluation and emotion (cf. Schoor, Reference Schoor2017, p. 660). Proctor and Su, for instance, examine how US presidential candidates strategically employ personal pronouns to ‘evoke nationalistic views when delivering their speeches’ (Reference Proctor and Su2011, p. 3251). Capone (Reference Capone2010) analyses how former US president Barack Obama makes strategic use of personifications, short personal stories and aspects of polyphony to convince his audience. Oddo (Reference Oddo2011, p. 287) uncovers how US presidents Roosevelt and (George W.) Bush draw on ‘polarizing lexical resources that constitute “Us” and “Them” as superordinate thematic categories that covertly legitimate war’. Sowińska (Reference Sowińska2013, p. 792) conducts a critical analysis of ‘metaphors, assertion-based patterns and thesis-antithesis’, revealing the ideological basis of US president George W. Bush’s concept of freedom. In a similar vein, Bhatia’s (Reference Bhatia2006) thematic analysis of political press conferences contrasts how the Chinese President Jiang Zemin and the former US president George W. Bush address controversial topics quite differently, stressing (or de-emphasising) aspects of positivity, mutual interest or power relations. This work notably shifts the focus of analysis from an intracultural to the important intercultural domain of political discourse analysis. The study thus reflects the subtle discursive modifications that are required by (and in) intercultural acts of diplomacy.
More recently, scholars have focused on the rhetorical style of former US president Donald J. Trump (cf. Schneider & Eitelmann, Reference Schneider and Eitelmann2020), initially finding his style of oratory to exhibit high degrees of spontaneity and simplicity, ‘never leav[ing] the realm of the casual’ (McWhorter, Reference McWhorter2017).Footnote 5 However, Clarke and Grieve (Reference Clarke and Grieve2019; see also Egbert & Biber, Reference Egbert, Biber, Schneider and Eitelmann2020) have since shown that Trump’s expressivity is in fact much more versatile. Trump makes use of four different writing styles on Twitter, which occur with different degrees of frequency and dominance before, during and after the 2016 US presidential election phase (cf. Egbert & Biber, Reference Egbert, Biber, Schneider and Eitelmann2020). For instance, features of what the authors call the campaigning style were arguably used more frequently in the run-up to the 2016 US presidential elections. This style includes frequent mentions of first-person pronouns in both subject and object position, modals of prediction, time adverbials, possessive determiners and a heavy use of hashtags and capitalisations (Clarke & Grieve, Reference Clarke and Grieve2019, p. 15). In contrast, other expressive styles, such as the engaged style, lost prominence during the election phase but turned out to be more dominant in pre- and post-election times.Footnote 6
Such expressive diversity notwithstanding, there appears to be a consensus among discourse scholars that Trump’s styles are typically characterised by a frequent use of evaluative resources, particularly attributive and predicative adjectives (Egbert & Biber, Reference Egbert, Biber, Schneider and Eitelmann2020, p. 30f.) and a heavy use of intensifiers (cf. Stange, 2020). Ronan (Reference Ronan2021, p. 81), for instance, finds that in Trump’s typical Twitter style ‘features of evaluative language are strongly represented’. However, in most of these studies, such remarks on evaluation are made in passing, and the different types and dispersions of evaluative features are not measured and analysed in more detail. This makes investigations targeting systematic patterns of evaluation in political discourse all the more relevant. Accordingly, this study hopes to shed some light on this important facet of persuasive communication in political online discourse.
4 The Key (Semantic) Domain Method
The key (semantic) domain method (KDM) is used in this study to capture evaluative lexis in larger collections of text. It has already been widely adopted in corpus stylistics (Culpeper, Reference Culpeper2009; McIntyre & Walker, Reference McIntyre, Walker, O’Keeffe and McCarthy2010) and in the corpus-assisted discourse analysis of (conceptual) metaphors (Potts & Semino, Reference Potts and Semino2019; Semino et al., Reference Semino, Hardie, Koller and Rayson2005). More recently, evaluation research has discovered the method and employed it for its own aims and purposes, though not yet in political text types (Hidalgo-Downing & Hanawi, Reference Hidalgo-Downing, Hanawi, Aijmer and Lewis2017).
The tool used to engage in KDM for the present purposes is Wmatrix. Texts can be uploaded to the tool and are automatically POS tagged (CLAWS tagger; Rayson & Garside, Reference Rayson and Garside1998). In a second stage, Wmatrix links each expression in a text (automatically) to ‘general fields of meaning using a taxonomy derived from the labels in the Longman Lexicon (McArthur, Reference McArthur1981)’ (Semino et al., Reference Semino, Hardie, Koller and Rayson2005, n.p.). The result is an overview of the frequency of the semantic domains present in the corpus, reflecting their relative dominance and thus mirroring the topical landscape of the corpus in question. The linguistic expressions (and their linguistic co-texts) can be accessed via the semantic domains under which they are subsumed. To determine the keyness of semantic domains, each target corpus needs to be compared to a reference corpus comparable in size and design. In this study, the two subcorpora (Biden subcorpus vs Trump subcorpus) serve as reference corpora for each other, yielding a set of positively keyed semantic domains (relative overuse) and negatively keyed semantic domains (relative underuse) for each corpus. It turns out that some key semantic domains are more likely to reveal attitudinal lexis than others (see Table 4.2). This study therefore selected a set of attitudinal domains exclusively and analysed the lexical expressions (and their linguistic co-texts) subsumed under these categories. More specifically, nouns, verbs, adjectives and adverbs which proved to have a negative polarity were selected and manually classified into different types of attitudinal evaluation, using Martin and White’s attitudinal framework described in Section 2. In the end, the results of this analysis open a window onto the use of attitudinal evaluation in each subcorpus (see Table 4.2).Footnote 7
Table 4.2 Select stance categories from the USAS Semantic Tagger (http://ucrel.lancs.ac.uk/usas/USASSemanticTagset.pdf).
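The keyness comparison described above rests on the log-likelihood statistic commonly used with Wmatrix (Rayson, Reference Rayson2008). As a rough sketch of the arithmetic only – not Wmatrix’s actual implementation – the following Python function compares an item’s (or domain’s) frequency in a target and a reference corpus. The domain counts below are invented for illustration; only the subcorpus token totals (63,041 and 42,732; see Section 5) come from this study.

```python
import math

def log_likelihood(freq_target, size_target, freq_ref, size_ref):
    # Expected frequencies under the null hypothesis that the item is
    # equally frequent (per token) in both corpora.
    total = size_target + size_ref
    expected_t = size_target * (freq_target + freq_ref) / total
    expected_r = size_ref * (freq_target + freq_ref) / total
    ll = 0.0
    if freq_target > 0:
        ll += freq_target * math.log(freq_target / expected_t)
    if freq_ref > 0:
        ll += freq_ref * math.log(freq_ref / expected_r)
    ll *= 2
    # Sign convention: positive = relative overuse in the target corpus
    # (positively keyed), negative = relative underuse (negatively keyed).
    return ll if freq_target / size_target >= freq_ref / size_ref else -ll

# Hypothetical counts for one semantic domain in the Trump subcorpus
# (target) versus the Biden subcorpus (reference).
print(round(log_likelihood(420, 63041, 180, 42732), 2))
```

By the thresholds conventionally used with this statistic, a log-likelihood value of 6.63 or above corresponds to p < 0.01, so the domain in this invented example would count as positively keyed in the target subcorpus.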
5 Corpus Design
The Election Twitter Corpus (ETC) compiled for this study is partly sourced from the Trump Twitter Archive (http://www.thetrumparchive.com/) and partly built by means of manual tweet extraction from Joe Biden’s Twitter account (@JoeBiden) from 1 July to 23 October 2020. The corpus comprises 5,888 tweets (and retweets), including 105,773 words (tokens). Note that the corpus size is larger than the corpus of an earlier study that also focused on the use of negative evaluations in the 2016 US presidential election Twitter campaign between Donald Trump and Hillary Clinton (Hoffmann, Reference Hoffmann2018). This difference can be explained by the fact that the corpus in this study includes author retweets in its analysis while my earlier study did not. Retweets (by the two presidential candidates themselves) were deliberately retained in this study for two reasons:
(a) Author retweets are used quite pervasively by Donald Trump (see Figure 4.1), while Biden uses them much more rarely. At least for Trump’s Twitter activity, they contribute considerably to the overall evaluative prosody linked to his Twitter handle and should thus be considered.
(b) Author retweets frequently contain negative evaluations (either as part of the quoted message or as part of a meta-comment that the candidates attach to individual retweets). Note that the inclusion of author retweets affects total word counts and relative frequencies of evaluative lexis in the subcorpora. In addition, formally identical retweets are discarded.
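The chapter does not detail how formally identical retweets were identified. One plausible de-duplication step, sketched here with hypothetical tweet texts, is to normalise case and whitespace and keep only the first occurrence:

```python
def deduplicate(tweets):
    """Drop formally identical (re)tweets, keeping the first occurrence.
    Texts are normalised on case and whitespace before comparison."""
    seen, kept = set(), []
    for text in tweets:
        key = " ".join(text.lower().split())
        if key not in seen:
            seen.add(key)
            kept.append(text)
    return kept

print(deduplicate(["RT @x: Hello", "rt @x:  hello", "Bye"]))
```

Stricter or looser matching (e.g., stripping URLs before comparison) would change what counts as ‘formally identical’.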

Figure 4.1 Number of tweets (dark grey) and retweets (light grey) for @realDonaldTrump in October 2020; source: www.tweetstats.com.
The two subcorpora (one for each presidential candidate) comprise all tweets and retweets published on the official Twitter accounts of both candidates in the months leading up to the presidential election in November 2020. The time period for the extraction of tweets is limited to 1 July to 23 October 2020, capturing the key phase of the presidential race from the official nomination of the candidates to shortly before Election Day. Tweets before or after this period are not considered.
During this time period, @JoeBiden tweeted 42,732 words in 1,383 tweets, while the corpus of @realDonaldTrump in the same time frame comprises 63,041 (word) tokens in 4,505 tweets (see also Figures 4.2 and 4.3).

Figure 4.2 Number of tweets/month for @JoeBiden and @realDonaldTrump during the US presidential election campaign 2020; source: www.tweetstats.com.

Figure 4.3 Average tweets/day for @JoeBiden and @realDonaldTrump during the US presidential election campaign 2020; source: www.tweetstats.com.
All tweets are analysed using the Wmatrix software (https://ucrel.lancs.ac.uk/wmatrix/, version 4). It provides a suitable online interface for the key semantic domain method (Rayson et al., 2004), as explained in Section 4. In line with previous studies, ‘the different types of evaluative devices [detected are] weighed against the total amount of evaluation in the text’ (Hoffmann, 2018, p. 66; cf. Cabrejas-Peñuelas & Díez-Prados, 2014, p. 168). Quantitative results are presented as percentages; relative frequencies are listed when statistically relevant to the findings.
While epithets and nicknames are excluded from the quantitative analysis due to their high frequency in the dataset, they become relevant at a later stage of the chapter, in the ensuing qualitative analysis (see Section 8). That section explores how the two presidential candidates describe each other, using specific patterns of negative evaluation. Duplicates and formatting errors are deleted or corrected. Statistical significance (log-likelihood, p-values) is determined only in cases in which the quantitative findings for specific attitudinal categories proved to be relatively close between the Trump and the Biden subcorpora.
6 Attitudinal Evaluations in the ETC
The quantitative analysis of negative evaluations in the ETC yields a total of 1,667 negative evaluations. This is an increase over the 1,383 negative evaluations used by the two candidates of the 2016 US presidential election in their campaigns on Twitter (cf. Hoffmann, 2018, p. 66). Of these 1,667 negative evaluations, roughly twice as many were published from Donald Trump’s account (n=1,141) as from Joe Biden’s (n=526). The high number of negative evaluations detected in the Trump subcorpus corroborates what previous studies have found, namely a ‘strong negative emotional loading’ of Trump’s language use (Ross & Rivers, 2020, p. 9).
Figure 4.4 shows the dispersion of attitudinal evaluations across the ETC. What stands out is the high frequency of judgment evaluations: almost three-quarters of all attitudinal evaluations can be assigned to this category (cf. Cabrejas-Peñuelas, 2020; Cabrejas-Peñuelas & Díez-Prados, 2014; Hoffmann, 2018). The second most frequent attitudinal subsystem is appreciation, followed by affect. More substantial insights are revealed in Figure 4.5, which shows the different types of attitudinal evaluations.

Figure 4.4 Percentage of attitudinal subcategories in the ETC.

Figure 4.5 Percentage and RF/10,000 of attitudinal subcategories for negative evaluations in @JoeBiden and @realDonaldTrump in the ETC.
As can be seen, judgment evaluations rank highest in both evaluation profiles (62.3% of all evaluations for Biden and 58.3% of all evaluations for Trump). While these percentages seem to suggest that Biden uses judgment evaluations more frequently than Trump, a closer look at the respective relative frequencies (rf per 10,000 words) shows that Trump effectively deploys judgment evaluations more often than Biden (Biden: rf/judgment: 76.8 vs Trump: rf/judgment: 108.8). As will be illustrated, one of the main reasons for the elevated number is Trump’s more flexible choice of different subtypes of judgment evaluations in his tweets (see Section 7).
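The apparent contradiction between percentage shares and relative frequencies follows directly from the different subcorpus sizes. A worked check using the figures reported above (the results come close to, but do not exactly match, the figures in Figure 4.5, because the published percentages are rounded):

```python
# Token and evaluation counts reported in Section 5 and this section.
biden_tokens, trump_tokens = 42_732, 63_041
biden_evals, trump_evals = 526, 1_141
biden_judgment_share, trump_judgment_share = 0.623, 0.583

def rf_per_10k(count, tokens):
    """Relative frequency per 10,000 words."""
    return count / tokens * 10_000

# Biden's larger *share* of judgment evaluations still yields a lower
# *relative frequency*, because his subcorpus contains far fewer
# negative evaluations overall.
print(round(rf_per_10k(biden_judgment_share * biden_evals, biden_tokens), 1))
print(round(rf_per_10k(trump_judgment_share * trump_evals, trump_tokens), 1))
```

This reproduces Biden’s reported rf of roughly 76.8 almost exactly; the Trump value lands a few points below the reported 108.8, which the rounding of the 58.3% share plausibly explains.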
Likewise, Trump deploys more appreciation evaluations than Biden. In fact, the relative frequencies of appreciation evaluations (Trump: rf/appreciation: 57.6 vs Biden: rf/appreciation: 29.1) reveal that Trump uses them almost twice as often as Biden. With regard to topical choices, Biden’s use of appreciation largely revolves around the impact of the coronavirus pandemic on the health of US citizens. He describes the virus as ‘grim’, ‘deadly’, ‘bad’ or ‘devastating’ (Example 4). He likewise describes the social and natural effects of global climate change as ‘unrelenting’ or ‘horrifying’ (see Example 5). There is also criticism of racial discrimination, challenging Trump’s efforts to criminalise protestors who rallied against systemic police violence (‘unwarranted police violence’, ‘divided nation’, ‘racial justice’).
There’s a devastating human toll to this pandemic – and we can’t forget that.
The unrelenting impact of climate change affects every single one of us.
On several occasions, Donald Trump also tweets about the global pandemic and its negative impact on US society, but he does so much less frequently and in a different fashion: while Biden primarily focuses on the personal hardships of infected patients, Trump frequently uses COVID to launch verbal attacks on the Chinese state or its government, holding it responsible for the outbreak of the pandemic (‘the Chinese virus’).
Trump’s appreciation evaluations predominantly target objects or events linked to what Trump calls the ‘liberal democratic left’ (see also Section 7). For example, he criticises the ‘feckless Democratic leadership’ or ‘the radical left movement’. More generically, he warns of ‘dangerous cities’, ‘unsolicited ballots’ or ‘socialist healthcare schemes’. Such entities are often used metonymically to stand for larger social groups or collectives, such as buildings for parties or movements for protestors, so one could equally class them as implicit judgment evaluations. Note that this type of demeaning tweet tends to surface more often in Trump’s retweets than in his tweets. This trend suggests that retweets allow the candidate to deflect authorial responsibility for the original tweet to a third party while supporting the authors of these tweets and authorising their ideological position through the act of quoting (see Example 6).
‘Oh shut up w the patronizing garbage. YOU’RE HIDING IN YOUR BASEMENT while the rest of us are dealing w riots enabled by Democratic politicians.’
Trump’s negative affect evaluations primarily centre on negative surprisal markers (Example 7). Examples below show tweet-initial interjections (‘Wow!’, ‘Oh’), followed by a statement with negative prosody. Such constructions frequently describe the political opponent (their actions or behaviour) in the dataset. As can be seen, the resulting negative prosody of the linguistic context typically overrides a potential positive reading of the initial interjection:
Wow, nobody realized how far Mini Mike Bloomberg went in bribing ex-prisoners to go out and vote for Sleepy Joe.
In addition, the same negative affect evaluations are frequently located at the end of tweets, where they can function as global stance markers, extending the negative qualification to the entire utterance. In this position, the negative affect again expresses Trump’s discontent with the actions of his political opposition (see Example 8). By contrast, tweet-initial affect markers typically reflect emotions of anger or sadness, expressing Trump’s emotional response to victims of natural disasters or crimes (Example 9). In the latter case, the lexical stance markers form integral parts of the elliptical clauses in which they are embedded.
[…] Saw no Biden supporters, and yet some in the Fake News said it was an equal number. Sad!
Sad to see the damage from the derecho in Midwest. 112 mile per hour winds in Midway, Iowa! The Federal government is in close coordination with State officials. We are with you all the way - Stay safe and strong!
Let us now shift our attention from appreciation to judgment evaluations. More precisely, the different subsystems of the judgment category will be examined more closely, explaining how each of the categories is used in the Democratic and Republican Twitter profiles under study.
7 Judgment Evaluations in the ETC
Figure 4.6 shows the dispersion of judgment evaluations in the Biden subcorpus of the ETC. As can be seen, capacity and propriety evaluations appear in much larger numbers than all of the other judgment subsystems.

Figure 4.6 Temporal dispersion of negative judgment evaluations on @JoeBiden during the US presidential election campaign 2020.
Hoffmann (2018, p. 71) revealed similar results in his study of negative evaluations in the 2016 US presidential election campaign. He showed that Hillary Clinton, the Democratic candidate in 2016, likewise made dominant use of capacity and propriety evaluations in her Twitter campaign (see Figure 4.7). Note that the remaining judgment categories, namely veracity, tenacity and normality, are used much more rarely by the Democratic candidates.

Figure 4.7 Temporal dispersion of negative judgment evaluations on @HillaryClinton during the US presidential election campaign 2016.
In addition, we can observe that at the beginning of Biden’s election campaign capacity evaluations dominate, while propriety evaluations increase towards the end of the election phase. The number of capacity evaluations drops in September 2020, while the frequency of propriety evaluations simultaneously increases. The steady rise of propriety evaluations can possibly be explained by the two televised election debates that were broadcast on national television in late September and early October 2020. Both debates arguably represent ‘focal points in the electoral process’ (Jennings et al., 2020, p. 2). The rise of propriety-based negative evaluations in the run-up to both debates might thus indicate that both contenders increasingly tried to debase the (moral) character of the political opponent (Jennings et al., 2020, p. 4).
A closer look at capacity evaluations in July and August 2020 in their respective contexts of use suggests that Biden frequently targets Trump’s leadership abilities. More precisely, frequent negative capacity evaluations challenge official government policies for which Trump, as US President, is held accountable. This includes Trump’s allegedly ‘poor’ track record of creating jobs, his dismantling of the US health care system and his ‘inefficient’ fight against the COVID pandemic (see Examples 10 and 11).
[Donald Trump is] on track to become the first president in modern history to leave office with fewer jobs than when he entered. American workers can’t take four more years of his failed leadership.
I urge President Trump to join me in saying that while peaceful protest is an important right – violence is wrong. Period. If he can’t say that, then he is unfit to be President.
The frequency of capacity evaluations in Biden’s 2020 Twitter election campaign is notably higher than the number of capacity evaluations on Hillary Clinton’s Twitter account in the 2016 US presidential election campaign. At the time, Hillary Clinton entered the election campaign as the candidate of the governing party, the Democrats, and thus found herself in a position to defend rather than challenge government policies. The high frequency of propriety evaluations, relative to a much lower number of capacity evaluations, reflects this fact. Note that Clinton’s election team appeared to focus her Twitter campaign on her political expertise (Hoffmann, 2018, p. 66). At the same time, the team questioned Trump’s moral integrity, yielding a consistently large number of negative propriety evaluations throughout the entire online campaign (Hoffmann, 2018, p. 72). In contrast, in the 2020 Democratic Twitter campaign, Biden’s election team was able to directly attack the governing party, the Republicans. It is thus not surprising to find that positive self-praise of Biden’s ambition to change current politics is often dovetailed with negative capacity evaluations criticising Trump’s presidential decisions (see Example 12). The upshot of this campaign strategy is that Joe Biden is characterised as an active, motivated contender who is about to bring change to a dysfunctional administration (see also Section 8).
When the virus came, President Trump wasn’t thinking about you, he was thinking about his own reelection [sic!]. It’s how Trump looks at the world: It’s always about him, never about us. You deserve a president who will put you first [positive self-praise].
Compare these findings to the use of negative judgment evaluations on Donald Trump’s Twitter account during the main election phase from July to October 2020 (see Figure 4.8). Trump makes more flexible use of the different judgment subsystems. Most dominant are capacity, propriety and veracity evaluations, with tenacity evaluations in fourth place and normality evaluations appearing in much smaller numbers. It is notable that capacity and propriety evaluations first decline from July to August 2020 and then gradually increase towards the end of the election campaign. Tenacity evaluations follow a similar path but, in contrast to propriety and veracity evaluations, peak in early September 2020.

Figure 4.8 Temporal dispersion of negative judgment evaluations on @realDonaldTrump during the US presidential election campaign 2020.
In Figure 4.9, we can see the use of negative judgment evaluations by Trump’s Twitter team in the online election campaign of the 2016 US presidential elections. Note that Trump’s Twitter team employs more veracity evaluations in 2020 than in 2016 (cf. Hoffmann, 2018). One reason for this could be that Trump, as governing president, was forced to account for his decisions and fight off criticism more frequently than in 2016, when he was merely a presidential candidate. Figure 4.10 illustrates the large number of targets of negative judgment evaluations that Trump chose to attack in 2020. It includes various media outlets, Democratic politicians, actors, academics and intellectuals, and government officials.

Figure 4.9 Temporal dispersion of negative judgment evaluations on @realDonaldTrump during the US presidential election campaign 2016.

Figure 4.10 Targets of negative evaluation on @realDonaldTrump during the US presidential election campaign 2020.
When we compare this to the 2016 Twitter campaign, the range of targets of his negative judgment evaluations has widened considerably, and now comprises fellow Republican senators, foreign government officials and even representatives of sports clubs and movie companies, as well as celebrities (see Figure 4.10).
The substantial increase in negative other-evaluations is particularly marked by veracity-related claims, including Trump’s notorious use of the term ‘fake news’ (see Figure 4.11), which is repeatedly used to disqualify and ridicule large centre-left media outlets, such as the New York Times.

Figure 4.11 Concordance lines of the node expression ‘fake’ in the @realDonaldTrump ETC subcorpus.
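Concordance views like the one in Figure 4.11 can be approximated with a short keyword-in-context (KWIC) routine; a minimal sketch (the example tweet text is a placeholder, not corpus data):

```python
import re

def kwic(texts, node, width=30):
    """Return simple concordance lines for the node expression, with
    `width` characters of co-text on either side."""
    pattern = re.compile(r"\b" + re.escape(node) + r"\b", re.IGNORECASE)
    lines = []
    for text in texts:
        for match in pattern.finditer(text):
            left = text[max(0, match.start() - width):match.start()]
            right = text[match.end():match.end() + width]
            lines.append(f"{left:>{width}} {match.group()} {right}")
    return lines

for line in kwic(["The Fake News media strikes again"], "fake", width=12):
    print(line)
```

Sorting the lines alphabetically on the right-hand co-text, as concordancers typically do, makes recurrent collocates such as ‘news’ immediately visible.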
In contrast, the targets of evaluation in Biden’s 2020 Twitter election campaign are much more limited, comprising primarily Donald Trump and his administration, with very few exceptions. The scope of evaluative targets on Biden’s account (cf. Table 4.3) is comparable to that of Clinton’s in the 2016 presidential elections (cf. Hoffmann, 2018, p. 76).
Table 4.3 Targets of negative evaluation on @JoeBiden during the US presidential election campaign 2020.
| Targets of Evaluation | No. of Tokens | Semantic Domain |
|---|---|---|
| Donald Trump (+ Administration) | 311 | politics |
| Republicans | 3 | politics |
| China | 3 | foreign countries |
| Pence | 2 | politics |
| Maduro Regime | 2 | foreign countries (Representatives) |
| Iran | 1 | foreign countries |
While epithets were excluded from the quantitative analysis of this study, the next section takes a closer look at the way these constructions (and related attributive and predicative adjectives) are used to characterise the political counterpart. To that end, the objects of evaluation will be considered in more detail for both politicians, since they are likely to broaden and contextualise our understanding of the quantitative results discussed so far.
8 Reflexive Image Building on Twitter
In this section, we will take a closer look at the objects of evaluation used by both presidential candidates to describe each other in their 2020 online election campaigns on Twitter. Such negative attributions reveal what each political camp considers to be unfavourable personality traits, so that the opposite characteristics are, by implication, regarded as virtuous and implicitly ascribed to the candidates themselves. Objects of evaluation thus provide a revealing map of the ideological framework on which both (Democratic and Republican) election campaigns rest. The evaluative descriptions commonly take the form of attributive or predicative adjectives (see Figures 4.12 and 4.13).

Figure 4.12 Adjectives with negative polarity used to describe Joe Biden, capacity/tenacity (grey); propriety (black); @realDonaldTrump, ETC subcorpus.

Figure 4.13 Adjectives with negative polarity used to describe Donald Trump, capacity/tenacity (grey); propriety (black); @JoeBiden, ETC subcorpus.
In Figure 4.12, we can see that Trump frequently employs both tenacity/capacity and propriety evaluations when attacking Biden’s personality. He predominantly targets Biden’s physical and psychological characteristics, suggesting that both might be interrelated. By contrast, Biden’s choice of objects of evaluation centres almost exclusively on the moral dimension of Trump’s character (propriety-related attributions), with a few capacity-related attacks. Compared to the presidential election campaign in 2016, Biden’s profile of adjectival attacks on Trump seems very similar (cf. Hoffmann, 2018, p. 77). In contrast to Clinton’s 2016 online Twitter campaign, however, Biden’s campaign team uses negative propriety evaluations much less frequently in the 2020 election campaign. The resulting evaluative ‘gap’ is, perhaps surprisingly, filled with descriptions of a different kind: Biden’s Twitter campaign team frequently characterises Trump using capacity-based attributions, such as ‘wealthy’ and ‘well-connected’, stressing the opponent’s privileged social background. Without context, such attributions would be considered positive evaluations. However, we must take the context into consideration: Wodak (2020, p. 94) explains that in far-right discourse ‘an internal distinction and opposition between the rural and pure heartland and decadent, corrupt cities or the capital are constructed’. This division separates a fictional community of poor, hard-working people from the rural Midwest from a rich and decadent elite in the large US metropoles (cf. Richardson & Wodak, 2009). Within this ideological framework, being called ‘rich’ and ‘privileged’ is not something Trump wishes to be associated with, and the polarity of the evaluative expressions is reversed.
By using these descriptions, Biden feeds into Trump’s own alt-right narrative of the poor against the rich, attempting to re-frame it against Trump. At the same time, Biden is staged as the actual representative of ‘ordinary US working-class citizens’ (Fetzer & Weizman, 2018; see Example 14), while Trump is sketched as a powerful, self-centred, eccentric New Yorker. In the same vein, Biden exploits his allegedly ‘rural’ upbringing in small-town Pennsylvania, stressing his commitment to creating new jobs for regular ‘folks’ (see Example 15).
Now, as then, our heroes are ordinary people doing extraordinary things: nurses and doctors; delivery drivers and grocery clerks; public transit workers and educators; regular Americans thrust into courageous acts of sacrifice and service.
A job is about a lot more than a paycheck. It’s about dignity. It’s about respect. It’s about being able to look your kid in the eye and say everything will be okay.
Too many people today can’t do that – and it’s got to change.
This strategy, which couples negative other-evaluations with positive self-evaluations, clearly deviates from Clinton’s 2016 Twitter campaign, which hoped to capitalise on decades of political experience in Washington. What we see here is a strategic shift from Clinton’s propriety-related attacks to Biden’s strategy of exploiting positive self-representations to re-frame the alt-right narrative on which Trump’s electoral appeal ultimately rests. This shift may have contributed (among other, perhaps more significant, factors) to tipping the scale in favour of the Democratic presidential candidate on Election Day 2020.
9 Conclusion
This study set out to capture evaluations in the two official Twitter accounts of the Democratic and Republican candidates for the 2020 US presidential elections. It showed that a blend of semantic annotation tools (the USAS semantic tagger) and manual, context-based coding can supplement existing methods to capture the evaluative profile of mid- and large-sized text corpora. More importantly perhaps, it showed that the diachronic use and dispersion of negative evaluations in Joe Biden’s and Donald Trump’s election campaign tweets are largely consistent with the use of negative evaluations in previous Twitter election campaigns (Hoffmann, 2018).
Some notable differences could, however, be found: for Trump, the targets and objects of evaluation seem to have broadened, reflecting his attempts to repeatedly counter attacks and criticism from a wide array of social and political individuals and groups, even within his own political party. This suggests that Trump was forced to defend himself and his political decisions in 2020, while Joe Biden found himself in a position to challenge official government policies; in the 2016 US presidential elections, the situation was reversed. Joe Biden did not win the presidential election in 2020 solely because he used a specific collection of negative evaluations, nor did Trump lose it because of the way he discredited his contender. It seems more likely that Biden managed to balance (and connect) the use of negative and positive evaluations more successfully in 2020 than Hillary Clinton did in 2016.
While four years earlier, ‘Trump and his team appear to have [repeatedly] struck a balance between using Twitter for criticism and promotion’ (Clarke & Grieve, 2019, p. 20), in 2020 the Democratic election team increased their efforts to level the playing field. The study provides evidence for this view and further demonstrates the need for, and relevance of, researching evaluation to uncover the persuasive effects of political and other types of persuasive discourse (also demonstrated, e.g., in Decock & Plevoets, this volume).
1 Introduction
Social engagement and interaction are increasingly taking place through new and advanced forms of online communication, and more and more of these interactions involve complex processes of persuasion and influence (Dey et al., 2017; Nguyen, Dabbish, & Kiesler, 2015; cf. also Hoffmann, this volume; Decock & Plevoets, this volume; Heritage, this volume; Lawson, this volume; Dance, this volume). Persuasion is commonly defined as ‘human communication that is designed to influence others by modifying their beliefs, values, or attitudes’ (Simons, 1976, p. 21). Past research has shown that various linguistic features are indicative of an online comment’s persuasion power. For instance, surface-level linguistic cues, such as the use of negation, numbers, and certain punctuation marks, can signal persuasion power (Xiao & Khazaei, 2019). The use of discourse relations in online comments can reveal the writer’s reasoning strategy (Peldszus & Stede, 2013; Xiao & Conroy, 2017) and indicates the persuasiveness of the text (Li & Xiao, 2021). The use of function words in persuasive comments also matters (Humă, this volume; Tan et al., 2016). From a psychological and social perspective, function words reflect how people communicate, whereas content words convey what they are saying. Function words are therefore much more closely linked to measures of writers’ and readers’ social and psychological worlds (Tausczik & Pennebaker, 2010).
Most of these studies were conducted in the context of online discussions, such as Reddit (Lawson, this volume; Tan et al., 2016) and Wikipedia discussions (Xiao, 2018). Examples of factors investigated include the norms and rules of the discussions, the user’s status within the discussion forum, and the participants’ prior interactions with other users. Together, these factors influence whether and how a comment in these online discussions is perceived to be persuasive, aside from the language used in the comment. However, online persuasion can occur in non-discussion contexts as well. For instance, people can be persuaded by someone’s personal experiences shared in online reviews, or when reading online blog posts or browsing online news content. The role and importance of persuasive context thus raise some interesting questions. How does context affect the persuasion power of an online text? Are the linguistic indicators of a text’s persuasion power independent of the context? Answers to such questions can help us better understand the influence of the communication context and better interpret the language used in persuasive comments.
As a step towards answering these questions, we examined the linguistic indicators of an online comment’s persuasion power in two different communication contexts that are non-discussion based. The first presents standalone online argument pairs from the dataset developed by Habernal and Gurevych (2016). The second focuses on the online review context, drawing on Yelp’s Challenge dataset. We then compared the indicators from these two contexts with those that emerge in contexts that are discussion based: the r/ChangeMyView forum on the Reddit platform and Wikipedia’s Article for Deletion (AfD) discussions. In the rest of this chapter, we report on relevant work and our data analysis approaches. Next, we compare the linguistic indicators of an online argument’s persuasion power identified in these datasets. Finally, we discuss how the linguistic features indicative of an online comment’s persuasion power can be leveraged to enrich online communication experiences.
2 Linguistic Indicators of an Online Comment’s Persuasion Power: Literature Review
The linguistic features that mark an online argument’s persuasiveness have largely been explored in the context of the subreddit r/ChangeMyView, an active subreddit intended for Reddit users who hold an opinion on a subject but are open to changing their current view. A user, referred to as the OP (original poster), starts the discussion thread by posting a view. Other Reddit users are then invited to counter that view with their arguments. When a comment successfully changes an OP’s view, r/ChangeMyView asks the OP to award a delta point (∆) and provide an explanation of why and how the comment changed their view. This subreddit is heavily moderated, and any posts or comments that do not follow the rules are removed.
The delta mechanism, along with the forum’s moderating rules on how to post and respond in a discussion, provides a unique and valuable place for studying online persuasion. In persuasion studies situated in the context of r/ChangeMyView, a comment is labelled as persuasive if it received a delta point and as non-persuasive otherwise. Language analysis of these comments shows that, compared to non-persuasive ones, persuasive comments tend to be longer (Tan et al., 2016), use more words per sentence, and contain more complex words (Xiao & Khazaei, 2019). Additionally, function words and punctuation marks can indicate the persuasiveness of an online comment. For instance, Xiao and Khazaei (2019) found that in r/ChangeMyView discussions, persuasive comments use more past tense, prepositions, numbers, and punctuation marks. Non-persuasive comments, on the other hand, tend to use more present tense as well as more auxiliary verbs and personal pronouns.
Xiao (2018) examined the linguistic features indicative of a comment’s persuasiveness in Wikipedia’s Article for Deletion (AfD) discussions. Members of the Wikipedia community collaborate openly to offer an online encyclopaedia. To ensure the quality of its articles, Wikipedia has established community practices to examine and delete inappropriate articles. One such practice deals with articles whose status is unclear. In this case, an online discussion called ‘Article for Deletion’ (AfD) is held for a period of time (often 7–10 days) to encourage people to offer their opinions (e.g., to keep the article). Wikipedia’s AfD policy requires participants to provide rationales to justify their opinion when participating in the discussion, and the final decision is made on the basis of the rationales, as opposed to the number of votes for each opinion.
In studying the persuasiveness of Wikipedia AfD comments, Xiao (2018) considered a comment persuasive if its opinion matched the final decision and non-persuasive if it did not. For instance, a comment to delete an article is considered persuasive if the final decision is to delete the article; otherwise, the comment is non-persuasive. Interestingly, while the length of a comment is a strong indicator of its persuasion power in r/ChangeMyView discussions (Tan et al., 2016; Xiao & Khazaei, 2019), this is not the case in Wikipedia AfD discussions. This implies that the online communication context influences the perceived persuasiveness of a comment and that this effect can manifest in the form of linguistic indicators. However, to the best of our knowledge, there is no systematic investigation of the connection between contextual influence and the linguistic features indicative of an online comment’s persuasion power.
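The two labelling schemes described above can be stated compactly; a sketch with hypothetical field names (e.g., `delta_awarded`), not the actual dataset schema:

```python
def label_cmv(comment):
    """r/ChangeMyView: a comment is persuasive iff it earned a delta."""
    return "persuasive" if comment.get("delta_awarded") else "non-persuasive"

def label_afd(comment_opinion, final_decision):
    """Wikipedia AfD: a comment is persuasive iff its opinion
    (e.g., 'keep' or 'delete') matches the discussion's final decision."""
    return "persuasive" if comment_opinion == final_decision else "non-persuasive"

print(label_cmv({"delta_awarded": True}))
print(label_afd("keep", "delete"))
```

The contrast is worth noting: the CMV label reflects one reader’s explicit judgment, while the AfD label is derived post hoc from a collective outcome.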
In persuasion studies of the language used in comments, researchers in communication and information studies have mainly used the Linguistic Inquiry and Word Count (LIWC) tool. LIWC contains a list of linguistic, social, and psychological categories and the common words associated with each category (Pennebaker et al., Reference Pennebaker, Booth, Boyd and Francis2015). The tool calculates the percentage of a category in the input text by counting the number of words in the text that belong to the category and dividing it by the total number of words in the text. LIWC has been widely used to gain insights into these aspects of text (Bulkeley & Graves, Reference Bulkeley and Graves2018; Olagunju, Oyebode, & Orji, Reference Olagunju, Oyebode and Orji2020; Salas-Zárate et al., Reference Salas-Zárate, López-López, Valencia-García, Aussenac-Gilles, Almela and Alor-Hernández2014). Consistent with these previous studies, we also use the LIWC tool in our analysis. Furthermore, we focus on exploring the surface-level linguistic features that influence the persuasiveness of the comments in each dataset.
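LIWC’s core computation is a simple dictionary lookup. The Python sketch below illustrates the scoring logic described above with a toy dictionary; the actual LIWC lexicon is proprietary and far larger, and the category names and word lists here are invented purely for illustration.

```python
def category_percentages(text, categories):
    """Return, for each category, the percentage of words in `text`
    that belong to that category (the LIWC-style score)."""
    words = text.lower().split()
    total = len(words)
    scores = {}
    for name, vocab in categories.items():
        hits = sum(1 for w in words if w in vocab)
        scores[name] = 100.0 * hits / total if total else 0.0
    return scores

# Toy stand-in for the (proprietary) LIWC dictionary.
toy_categories = {
    "pronoun": {"i", "you", "he", "she", "it", "we", "they"},
    "article": {"a", "an", "the"},
}

scores = category_percentages("the cat saw a dog and it ran", toy_categories)
```

For the example sentence (eight words), two words are articles and one is a pronoun, giving scores of 25.0 and 12.5 per cent respectively.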
3 Non-discussion Context I: The Argument Pair Dataset
We first explore the linguistic indicators in an online argument dataset developed by Habernal and Gurevych (Reference Habernal and Gurevych2016). The dataset covers different topics such as banning water bottle use and debating the educational value of a TV show or a book. Each topic has two groups of arguments: persuasive and non-persuasive. The dataset we used has 32 topics containing 9,111 pairs of arguments in total. These arguments were obtained from two debate portals, namely createdebate.com and procon.org. Workers recruited through Amazon’s Mechanical Turk annotated the persuasiveness of the arguments. In the annotation task, a worker was shown both arguments of a pair and asked to select the one that was more persuasive. Workers were required to state reasons to justify their choices. The authors applied multiple measures to control annotation quality and to consolidate multiple workers’ annotations for an argument pair. They also examined the workers’ own opinions regarding these topics to ensure systematic biases were not in play, since it is known that people’s beliefs affect their judgment of arguments (O’Keefe, Reference O’Keefe1990; Sherif & Hovland, Reference Sherif and Hovland1961; Wolfe & Kurby, Reference Wolfe and Kurby2017). More details on the development of this dataset, including the annotation process, can be found in the original paper (Habernal & Gurevych, Reference Habernal and Gurevych2016).
3.1 Data Analysis and Results
For each topic, we applied the Mann Whitney U test to compare the use of LIWC features between its persuasive and non-persuasive arguments. Because there are 93 LIWC features, we applied a Bonferroni correction, with the revised p value being 0.05/93 = 0.000538. We then identified a list of features that showed statistically significant differences in use between the two groups (i.e., p < 0.000538) and that had an effect size of at least 0.1. As there are 32 topics, there are 32 such lists. We then ranked the features according to the number of times they appeared in these 32 lists and selected the top 20. Next, we sorted these top 20 LIWC features according to the average of their effect sizes across the U tests (recall that each topic involved a series of Mann Whitney U tests). The effect size of these features ranges from 0.22 to 0.46. We conducted these statistical tests with the Real Statistics Resource Pack for Excel.
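The per-feature comparison can be sketched in a few lines of Python. The chapter does not state which effect-size measure was used; the rank-biserial correlation shown here is one common choice for the Mann Whitney U test, so treat it as an illustrative assumption rather than the authors’ exact computation.

```python
from itertools import product

# Bonferroni-corrected significance threshold for 93 LIWC features.
ALPHA = 0.05 / 93  # ≈ 0.000538

def mann_whitney_u(xs, ys):
    """U statistic: the number of (x, y) pairs with x > y,
    counting tied pairs as 0.5 each."""
    return sum(1.0 if x > y else 0.5 if x == y else 0.0
               for x, y in product(xs, ys))

def rank_biserial(xs, ys):
    """Rank-biserial correlation |2U/(n1*n2) - 1|, a common effect-size
    measure for the U test (assumed here; the chapter does not specify)."""
    u = mann_whitney_u(xs, ys)
    return abs(2.0 * u / (len(xs) * len(ys)) - 1.0)

# Example: feature values for persuasive vs. non-persuasive arguments.
effect = rank_biserial([3.1, 4.2, 5.0], [1.0, 2.4])  # complete separation -> 1.0
```

In practice a statistical package would supply the exact p value to compare against `ALPHA`; only features passing both the p < 0.000538 and effect-size ≥ 0.1 filters would enter a topic’s list.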
From these analyses, we learned that a persuasive argument in this data context tends to:
be longer than a non-persuasive one, have more words per sentence, and contain more words that have six or more letters (i.e., complex words),
have more quantifiers,
have more of the following function words: articles, prepositions, conjunctions,
have more commas.
On the other hand, a non-persuasive argument tends to contain more pronouns, personal pronouns, and common verbs. It is also more likely to have a present focus.
4 Non-discussion Context II: Yelp Review Data
Our second exploration is in the context of Yelp reviews. We obtained the review data from Yelp’s Challenge dataset (www.yelp.com/dataset). In this dataset, reviews are not labelled as persuasive or non-persuasive. Instead, each review has the total number of votes it has received for its helpfulness. An online review’s helpfulness ‘facilitates the consumer’s purchase decision process’ (Mudambi & Schuff, Reference Mudambi and Schuff2010, p. 186). That a comment is helpful means that it positively influenced the consumer’s decision-making process, which implies successful persuasion in this context. In fact, many of the findings of persuasion studies are mirrored in helpfulness research, illustrating the strong connection between the two concepts. For instance, the credibility of the author is found to be important both in the perceived persuasive power of the text (Murphy, Reference Murphy2001) and in the perceived helpfulness of the online review (Jin & Liu, Reference Jin and Liu2010; Liu & Park, Reference Liu and Park2015; Weathers, Swain, & Grover, Reference Weathers, Swain and Grover2015). In addition, just as a low-comprehensibility text is perceived to be less persuasive (Murphy, Reference Murphy2001), an online review is also perceived to be less helpful when it bears characteristics of low comprehensibility, such as spelling errors (Ghose & Ipeirotis, Reference Ghose and Ipeirotis2011), low readability level (Cao, Duan, & Gan, Reference Cao, Duan and Gan2011; Gan, Cao, & Jones, Reference Gan, Cao and Jones2012; Huang et al., Reference Huang, Chen, Yen and Tran2015), and/or it is very short (Weathers, Swain, & Grover, Reference Weathers, Swain and Grover2015).
On the other hand, we acknowledge that the two concepts are not synonymous and might differ in certain specialized settings. For instance, Bigne and colleagues (Reference Bigne, Ruiz, Currás-Pérez and Martí-Parreño2020) find that, when focusing on a random sample of Spanish-language reviews of a restaurant, there is no strong positive correlation between the definitions of helpfulness and persuasiveness that they adopt in their work. This is not surprising, as randomly sampled reviews are expected to be average in terms of both helpfulness and persuasiveness. We use these findings to inform our experimental design. Rather than randomly sampling reviews from across the helpfulness distribution, we focus on reviews that users themselves have marked as the most helpful. This allows us to focus our analysis where the signal is likely to be strongest.
We processed the Yelp data by first sorting the reviews in descending order based on the number of times they were rated helpful. Then, for each business, we identified the most persuasive review – the review voted helpful by the highest number of users among all the reviews of that business. To eliminate trivial cases, we excluded businesses for which the top-rated review had fewer than 10 votes. Next, we selected a non-persuasive review that (i) has not received any votes and (ii) appears before, and as close as possible to, the top-rated review on the list of reviews hosted on Yelp for this business. The no-votes stipulation is the most important one, as it allows us to pair reviews from opposite ends of the helpfulness spectrum. The second stipulation is motivated by the fact that users tend to browse the reviews of a business or product in a top-down fashion; it therefore increases the chances that the two reviews were read by the same users, who chose to vote for one but not the other. While this is not a prerequisite for our analysis, it is an easy and reasonable step to take. Through this identification process, we obtained 14,140 pairs of reviews for further analysis, with each pair including a persuasive review and a non-persuasive review for a business.
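The pairing procedure above can be sketched as follows. This is a simplified illustration: the `(votes, text)` tuple representation and on-page ordering are assumptions, since the Yelp dataset uses its own JSON schema rather than this structure.

```python
def pair_reviews(reviews, min_votes=10):
    """Pair a business's most helpful review with a zero-vote counterpart.

    `reviews` is a list of (votes, text) tuples in the order the reviews
    appear on the business's Yelp page (an assumed simplification).
    Returns (persuasive, non_persuasive), or None if the business is skipped.
    """
    if not reviews:
        return None
    # The review voted helpful by the highest number of users.
    top_idx = max(range(len(reviews)), key=lambda i: reviews[i][0])
    if reviews[top_idx][0] < min_votes:
        return None  # exclude businesses whose top review has fewer than 10 votes
    # Closest zero-vote review that appears before the top-rated one.
    for i in range(top_idx - 1, -1, -1):
        if reviews[i][0] == 0:
            return reviews[top_idx], reviews[i]
    return None  # no suitable non-persuasive counterpart found
```

Scanning backwards from the top-rated review implements the ‘before and as close as possible’ stipulation directly: the first zero-vote review encountered is the nearest preceding one.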
4.1 Data Analysis and Results
We conducted a Mann Whitney U test to compare the LIWC features in the two groups of reviews: the persuasive reviews and the non-persuasive reviews. The same Bonferroni correction was applied as mentioned in the analysis of the Argument Pair (AP) dataset. Our analysis shows that non-persuasive reviews tend to have more quotation marks (medianpersuasive = 1.27, mediannon-persuasive = 3.33, effect size = 0.56). As shown in Table 5.1, there are ten LIWC features that tend to appear more in the persuasive reviews with the effect size ranging from 0.21 to 0.65.
Table 5.1 LIWC features that tend to appear more in the persuasive reviews.
5 Persuasion Power of Online Comments: Cross-Context Examination
The two previous sections reveal prevalent linguistic indicators of persuasiveness in non-discussion-based contexts. In this section, we compare these findings with those that emerge from two discussion-based settings: Reddit’s r/ChangeMyView forum (CMV) and Wikipedia’s AfD feature (AfD). We compare the similarities and differences of the four persuasion contexts at the process, individual, and platform levels. At the process level, we consider how a comment’s persuasiveness is evaluated. At the individual level, we explore the effects of persuasiveness on individuals across the four contexts. Finally, at the platform level, we consider how the platform shapes persuasiveness in the four persuasion contexts.
First, we consider persuasiveness at the process level, where we find that the four contexts differ in how a comment’s persuasiveness is evaluated. In the AP dataset, two arguments were explicitly evaluated and compared to each other, with one labelled persuasive and the other non-persuasive. CMV comments were evaluated by OPs in a similar fashion – they read a list of comments and chose one that changed their minds. In the AfD discussion, however, there was no explicit comparison or evaluation of two comments’ persuasiveness. Instead, as explained in the relevant work section, the persuasiveness of a comment was measured based on how its opinion compared with the discussion outcome (Xiao, Reference Xiao2018). In the case of Yelp reviews, the persuasion power of a review was evaluated based on its perceived helpfulness. A helpful text is one that has influenced the reader to make an informed decision regarding the product or service. Since the helpful text ‘influences’, it is considered persuasive. From the persuasion process perspective, all four contexts have one thing in common. Specifically, according to the elaboration likelihood model (ELM) of persuasion, developed by Petty and Cacioppo (Reference Petty and Cacioppo1986), one’s attitudes can be changed through either a central or a peripheral route. In the central route, one carefully considers the presented information on the matter, whereas in the peripheral route the individual focuses on contextual information. When an individual is presented with a persuasive message and is sufficiently motivated and able to process it, the central route is more likely to be used. An individual is ‘sufficiently motivated’ when the message is considered important to the individual and ‘sufficiently able’ when they are mentally capable and physically available to process and evaluate the message.
Thus, at the process level, all four contexts seem to evaluate comments mainly through the central route because of the focus on the content of the comment.
Second, the four contexts vary in the expected behavioural outcomes of successful persuasion at the individual level. Miller (Reference Miller, Dillard and Shen2013) suggested three different behavioural outcomes after a person is successfully persuaded: view establishment, view reinforcement, and view change. In a view establishment situation, one does not have a pre-determined view or attitude and only establishes one after the persuasion process. In a view reinforcement situation, one’s pre-existing opinion or attitude is reinforced after the persuasion process. In the third situation, view change, one is persuaded to change their pre-existing view or attitude to something different or opposite. As mentioned in the relevant work section, in AfD discussions the person who makes the final decision is expected to review all the participants’ comments and make the decision based on their reasoning rather than on a majority vote. From this perspective, the persuasive power of an AfD comment contributes to the view establishment of the individual. In CMV discussions, the original posters (OPs) identify the comments that have persuaded them to change their original views. The persuasive comments therefore contribute to the view change of OPs. In the Yelp review situation, the consumer often has a pre-existing view or attitude prior to reading the reviews (Li & Zhan, Reference Li and Zhan2011). Therefore, the persuasion power mainly reinforces one’s purchase decision or changes one’s mind. In the case of the Argument Pair (AP) dataset, the situation is probably more complicated, as there is no prior assumption about the reader’s pre-existing view, so the persuasion power may contribute to all three possible outcomes.
Third, we consider the four communication contexts at the platform level. Unlike the other three contexts, AfD participants have a collective identity – they are Wikipedians. There is a strong sense of community among them such that they are expected to follow Wikipedia policies in terms of how to participate and offer their reasoning, and share a collective goal: to maintain and improve the quality of Wikipedia articles. This influences the persuasion processes. For instance, users’ usernames and status are more easily recognized, and thus, they are expected to play a more significant role in the persuasion process (Xiao & Xiao, Reference Xiao and Xiao2020). This is different from Yelp, where the sense of community is based on review contributions but not based on knowledge of one another (Parikh et al., Reference Parikh, Behnke, Nelson, Vorvoreanu and Almanza2015). A similar argument can be made for the Argument Pairs and Reddit CMV discussions. The comparison of the four contexts is summarized in Table 5.2.
Table 5.2 Four contexts of online persuasion.
In Table 5.3, we present the linguistic indicators that are identified from at least two of the four contexts. The indicators for CMV and AfD discussions are obtained from the literature (Xiao, Reference Xiao2018; Xiao & Khazaei, Reference Xiao and Khazaei2019).
Table 5.3 The linguistic indicators identified from at least two contexts.
As shown in Table 5.2, the contexts of CMV discussions, Yelp reviews, and AP arguments seem more similar to one another than to Wikipedia AfD discussions. Interestingly, the three contexts have more overlapping indicators, such as the length of a text and the use of punctuation marks (see Table 5.3). These are mainly related to the sophistication and comprehensibility of the text. Literature suggests that the length of a text (Crossley & McNamara, Reference Crossley and McNamara2012) and lexical difficulty and diversity (Graesser et al., Reference Graesser, McNamara, Louwerse and Cai2004) can be used to measure language sophistication. According to Mensah, Xiao, and Soundarajan (Reference Mensah, Xiao and Soundarajan2019), variations in language (e.g., sophistication of the text) can affect readers’ judgments of the author’s credibility. Furthermore, lexical difficulty is directly related to the readers’ judgments of the author’s competence and socioeconomic status based on the preference for complexity principle (Hosman, Reference Hosman, Dillard and Pfau2002). It is therefore possible that comments with these characteristics are perceived as more credible because the authors have higher perceived competence or education. At the very least, the characteristics may show that the participants have put more intellectual effort into the discussions and are therefore perceived, at a superficial level, to be more credible. Therefore, we speculate that the significance of these linguistic indicators reflects the influence of the contexts on the comment’s perceived credibility.
This interpretation also helps explain why AfD discussions show different linguistic patterns from the other three contexts. Because of the strong sense of community, Wikipedia participants have various ways to establish their credibility in the community (e.g., different discussion forums in the same community, editing, administering). Long-term members, for instance, have usernames that substantially influence their perceived credibility. Textual sophistication and comprehensibility are therefore only one of many ways to help establish a writer’s credibility. In addition, Wikipedia participants are expected to seriously engage in the discussions and provide their thoughts, since they share a collective goal in the process. The perceived intellectual effort is therefore similar regardless of a comment’s view, that is, whether it is the same as or opposite to the final decision. These linguistic features are therefore not perceived to be that useful in differentiating persuasive comments from non-persuasive ones. Interestingly, quantifiers can be used to indicate a comment’s persuasion power in all four contexts. Prior literature shows that using quantifiers instead of descriptive modifiers provides integrity to the communication (Yalch & Elmore-Yalch, Reference Yalch and Elmore-Yalch1984). Even though some earlier studies showed negative effects of quantification on persuasion when the source is of low credibility (mainly advertisements) (Hosman, Reference Hosman, Dillard and Pfau2002), the fact that quantification is positively associated with persuasion in all four online contexts implies the importance of using numbers in persuading an online audience. This observation also speaks to the challenge and importance of detecting misinformation and disinformation in online contexts, as they can be very persuasive and influential to the receivers.
Of the three contexts, CMV discussions and the AP dataset seem more similar to each other than to Yelp reviews (Table 5.2). This is also supported by the extent to which their linguistic indicators overlap (Table 5.3). A closer look at these linguistic indicators suggests that, while the use of complex words and sophisticated sentences matters to perceived persuasion power in CMV discussions and in online argument pairs, this does not seem to be the case in the Yelp review context. Ismagilova, Dwivedi, and Slade’s (Reference Ismagilova, Dwivedi and Slade2020) experiments suggest that perceived reviewer rationality has a positive effect on the perceived influence of both online product and service reviews. Yet, Li et al.’s (Reference Li, Zhang, Zhou and Zhang2016) linear regression analysis of Yelp restaurant reviews suggests that a review perceived as helpful may not provide more detailed and valuable information on the restaurant. Additionally, research suggests that the main intention behind reading reviews is to reinforce purchase decisions (Li & Zhan, Reference Li and Zhan2011). Based on the persuasion literature, such pre-existing views regarding the business can affect judgments of the reviews such that the analytical aspect of the review is not as important as in the other two contexts. Specifically, one’s assessment of a persuasive comment is influenced by their position on the topic (O’Keefe, Reference O’Keefe1990; Sherif & Hovland, Reference Sherif and Hovland1961). When the proposed comment is about something that the person finds acceptable, the person subconsciously minimizes the differences between the commenter’s position and their own (assimilation effect), and vice versa (contrast effect).
Xiao’s (Reference Xiao2018) analysis shows that AfD persuasive comments tend to have more words and phrases related to seeing. However, the effect size of that language indicator is very small (0.04), and the use of personal pronouns is not a significant indicator of a comment’s persuasion power. We notice that the way the word see is used in AfD discussions is not necessarily about one’s personal experiences (e.g., seeing things). For instance, we note that participants tend to use the word see when referring to Wikipedia policies in the discussion, using phrases such as ‘see WP:FORBESCON’, ‘See WP:POLOUTCOMES’. There is a tangible artifact, that is, the article that the discussion is focused on. Participants also tend to use the word see when referring to specific aspects of the article, for example, ‘The only reliable source dealing with him I’ve seen so far is the one that says he didn’t exist’, ‘If this stays, I would like to see some independent sourcing’, ‘I’d love to see some significant coverage in RS.’ Additionally, the discussion is about the decision to be made regarding the article. Participants use the word see when offering their views related to this process and outcome, as in ‘I see no indication that this passes GNG’, ‘from what I see it should be speedy [sic] deleted’. Lastly, even if the participants were sharing their personal experiences in this discussion forum (e.g., past experiences in AfD processes), these are probably not perceived to be as important as those in the Yelp review situation when the comments are being evaluated. As illustrated in Xiao and Askin’s (Reference Xiao and Askin2014) analysis of the AfD arguments, precedence was a relatively little cited justification compared to others. The authors explained that this reflects the influence of Wikipedia’s policies and expectations regarding AfD discussions.
One of the most often cited sections of Wikipedia’s ‘Arguments to avoid’ page is ‘Other stuff exists.’ Essentially, this page argues that a comparison to another article or previous debate is not a sufficient rationale to support one’s argument regarding how to handle the article proposed to be deleted.
6 Discussion
Persuasion literature shows that the perceived credibility of a message affects how persuasive the message is (Murphy, Reference Murphy2001). Our findings suggest that in online communications where a writer’s credibility can be established through various platform features (e.g., status in the online community), language use in the comment plays a less significant role. The linguistic factors of a persuasive online comment are therefore context dependent. In other words, a persuasive language style in one context may not be that persuasive in another. There are at least two implications of the context-dependent nature of persuasiveness on social media. For one, it is more challenging for malicious users to customize and successfully manipulate the perceived credibility of their messages across different social platforms. At the same time, it may be challenging for the general public to discern the various language styles that are persuasive in different social media channels. We therefore envision a social media feature that raises people’s awareness of the linguistic features that carry persuasion power on that specific platform.
According to the persuasion knowledge model by Friestad and Wright (Reference Friestad and Wright1994) and the modified model for social media communication by Xiao (Reference Xiao, Luttrell, Xiao and Glass2021), people’s persuasion knowledge includes their understandings of the persuasive power of some psychological activities among other various aspects. People also develop persuasion knowledge over time as they consciously participate in persuasion episodes. In these conscious experiences, people need to recognize persuasion tactics, the reason they are used, and assess how effective they are. From the perspective of a persuasion knowledge model, we expect that the provision of such a persuasion awareness feature on social media would help people (i) accumulate their persuasion knowledge through extended use and (ii) become more critical over time when evaluating online messages.
Nurturing one’s persuasiveness awareness can be a double-edged sword. It can help users compose more persuasive comments and better interpret and evaluate comments posted by others. However, the introduced persuasiveness awareness capability could have adverse effects in three main ways. First, knowing that a comment has low persuasive power, the user may decide to manipulate its rhetoric to make it more persuasive. This may increase the chance that the comment successfully persuades other users, but not because of facts, evidence, or logical connections between the premise and the conclusion. This can be detrimental if the comment itself is not actually a valid or factual argument. Second, novice users with limited language ability may be demotivated to participate in online discussions when they become aware that their comments have much less persuasive power than those of other participants. A negative outcome of this situation is that the other participants could then monopolize the conversation. In addition, as seen in a prior study on the effects of knowing others’ rationales in a group activity, this can add the risk of groupthink (Xiao, Reference Xiao2013). Third, would this awareness bring additional risks to the participants if used over time? For example, consider a long-term member of an online discussion forum who is always active in the discussions and now often finds out that their comments are considered not persuasive. Would the member become depressed or lose confidence or comfort in participating in the discussions? As much as online persuasion processes and mechanisms and the detection of persuasion power are being explored, we call for research that investigates the effects of one’s persuasion awareness in online communications.
7 Conclusion
Participation in online communications can be anonymous, asynchronous, and open to any internet user or registered community member. While this makes it relatively easy for people to join and offer their opinions, it can be quite challenging to interpret and evaluate others’ comments. At the same time, complex interactions such as persuasion and argumentation are increasingly becoming common in online communications (Heritage, this volume; Lawson, this volume; Tan et al., Reference Tan, Niculae, Danescu-Niculescu-Mizil and Lee2016). In response, researchers are investigating online persuasion phenomena to identify the linguistic indicators of an online comment’s persuasion power. Most of these studies are situated in the context of online discussion forums. However, it is unclear whether and how these communication contexts affect the perceived persuasion power of the comments. In this study, we analysed the linguistic indicators of an online comment’s persuasion power in two communication contexts: standalone argument pairs and Yelp reviews. Together with two online discussion contexts (Wikipedia Article for Deletion discussions and subreddit r/ChangeMyView), we conducted a cross-context examination that yielded similarities and differences across the four contexts. We found that the linguistic factors of a persuasive online comment are context dependent. Different online communication contexts may affect the linguistic features of a comment’s persuasion power as well as shape perceived persuasion and persuasion awareness in online communication contexts. These insights can be leveraged to enrich online communication experiences.
1 Introduction and Rationale of the Study
In face-to-face (F2F) communication, positive and negative emotions and attitudes can be expressed and emphasized through para-verbal (e.g., tone, volume) and non-verbal (e.g., facial expressions) means. Communicators try to simulate this multimodality of F2F communication in written Computer-Mediated Discourse (CMD) by using alternative paralinguistic devices such as capitalization (i.e., use of uppercase letters), flooding (i.e., deliberately repeating) of exclamation/question marks, and emoticons or emoji (Androutsopoulos, Reference Androutsopoulos, Kristiansen and Coupland2011). Studies have found that the use of such alternative paralinguistic cues is helpful in correctly interpreting the senders’ emotions and attitudes and that they indeed influence the perceived strength of these emotions and attitudes (Harris & Paradice, Reference Harris and Paradice2007; Sampietro, Reference Sampietro2019; Xu et al., Reference Xu, Chen and Liu2017). However, the use and impact of these alternative paralinguistic CMD-features (henceforth referred to as APF) cannot simply be equated with the use and impact of their para- and non-verbal counterparts in F2F communication. Indeed, as Glikson et al. put it with regard to the smiley emoji: ‘a smiley is not a smile’ (Reference Glikson, Cheshin and van Kleef2018, p. 614) in the sense that a smiley is used differently and has different effects on the recipient compared to a smile. Against this background, this chapter sets out to explore APF in online hotel reviews on Booking.com and how these features influence reply strategies in hotel management responses to these reviews.
We aim to examine the use and impact of APF in online hotel reviews through the theoretical lenses of normativity (i.e., ways of evaluating, judging, and policing semiotic practices, Leppänen et al., Reference Leppänen, Møller, Nørreby, Stæhr and Kytölä2015) and rapport management (i.e., ‘the use of language to promote, maintain, or threaten harmonious relationships’, Spencer-Oatey, Reference Spencer-Oatey2008, p. 3), two theoretical concepts that are highly relevant in this context for the following reasons. First, addressing normativity is crucial in view of the pluralization of written language norms (Androutsopoulos, Reference Androutsopoulos, Kristiansen and Coupland2011). The present-day era of ‘Late Modernity’ (Giddens, Reference Giddens1991) is characterized by a strong informalization of written (digital) discourse, which is also reflected in the spread and functional elaboration of APF. In spite of this tendency towards more informal language in written (digital) communication, however, there are considerable differences in the ways these features are used and tolerated. Indeed, different styles of writing are ‘deemed appropriate in different environments and genres and to different user groups’ (Androutsopoulos, Reference Androutsopoulos, Kristiansen and Coupland2011, p. 11). As a necessary step to further explore this claim of situated normativity with regard to APF, we aim to describe the use of APF in online hotel reviews and business responses in terms of different types of APF and their frequency patterns.
Second, given their capacity to intensify positive or negative emotions and attitudes (Liebrecht et al., Reference Liebrecht, Hustinx and van Mulken2019), APF can be employed both to promote and to threaten rapport (i.e., a harmonious relationship) and social cohesion between interactants. Their use may thus affect the recipient’s feelings towards the person who used the features as well as the recipient’s response. In the fields of digital communication and marketing, the relational effects of the use of positive emoji have already been studied, showing that positive emoji lead recipients to experience higher positive affect through emotional contagion (Das et al., Reference Das, Wiener and Kareklas2019; Smith & Rose, Reference Smith, Rose, Geuens, Pandelaere, Tuan Phan and Vermeir2018). The relational effects of other APF have – to our knowledge – not yet been examined, however, nor have their effects been explored when they are used to strengthen negative instead of positive emotions. Moreover, these effects have only been examined with regard to recipients’ attitudes, not with regard to the ways in which they respond. It is thus a valuable endeavour to examine the rapport-related effects of these features in written CMD, and the extent to which these effects depend on frequency of use and emotional valence (positive vs. negative). In doing so, this study picks up on recent research in pragmatics which makes the case for improving the empirical foundations of theories on politeness and rapport management by also taking into account the effects of language use on interactants (i.e., a focus on perlocution) (Decock & Depraetere, Reference Decock and Depraetere2018).
As already mentioned, we will explore the use and impact of APF in the genre of online User-Generated (UG) hotel reviews on Booking.com as a form of Electronic Word-of-Mouth (eWOM). Beyond reviews’ growing societal importance as publicly available persuasive texts that influence consumers’ travel-related purchase decisions (Fotis et al., 2012), which makes this a high-stakes genre, the genre was chosen for the following reasons.
First, the specific characteristics of online review (management) make it possible to conduct a highly systematic cross-valence analysis of the use and impact of APF. As a first characteristic, online hotel reviews contain positive and/or negative feedback on a guest’s hotel experience. This means that the genre serves to express both positive and negative emotions and attitudes. As a second characteristic, popular hotel review platforms such as Booking.com allow for only limited interaction to take place: hotels are allowed to write no more than one response to each review, and no subsequent interaction is possible, meaning that every guest-hotel interaction consists of one guest review and one hotel response.
Second, we can offer a novel approach to research on online review (management). APF have mainly been discussed in contexts of private messaging (e.g., Hilte, 2019; but see Seghers et al., 2021). It is timely to also examine them in online review (management) as a form of public CMD, as it is to be expected that the shift in audience (from one-to-one to one-to-many) as well as the professional context go hand in hand with a specific set of norms governing their use, impact, and evaluation (Xiao et al., this volume). Moreover, previous research on online hotel reviews is characterized by a rather one-sided focus on (responses to) negative reviews (Cenni, 2022). Positive reviews and mixed reviews have received less scholarly attention (Hoffmann, this volume). This calls for the adoption of a cross-valence perspective to acquire insights into the situated normativity of APF in online reviews and review responses (Androutsopoulos, 2011). Such an approach is helpful because message valence is known to influence language use. For instance, it has been found that emoji are used more often in positive than in negative messages (Bai et al., 2019). In addition, existing linguistic research on online hotel reviews and review responses has described and analysed discourse features, often including the use of APF, in reviews and review responses separately (Ho, 2017; Cenni, 2022; Ruytenbeek et al., 2021), neglecting their interactional rapport-related dynamics. This is surprising, as hotel reviews and their responses are two core elements connected with each other in the genre chain of the review-related service encounter.
While it is in the interest of hotels to establish or maintain rapport with previous guests in order to avoid or minimize reputational harm, the ease with which this is done is suspected to also depend on the language used in reviews (Zhang & Vásquez, 2014). As such, the presence of APF, which are used to strengthen the illocutionary force of either negative or positive customer evaluations and which may thus have an impact on the addressee’s face and feelings, is likely to influence the way rapport is managed in responses. In addition, this influence might be weaker or stronger depending on feedback valence (positive or negative).
Having discussed the aims and rationale of this study, we present in the following sections our research questions and hypotheses, the methodology we adopted, and our findings. We conclude this chapter with a discussion of our findings, this study’s limitations, and avenues for future research.
2 Research Questions and Hypotheses
Before being able to explore the impact of APF in online reviews on review responses, we first need to know more about their use, hence the first research question (RQ1): Which types and frequencies of APF are found in online reviews, and to what extent do the presence and frequency of these features differ according to feedback valence (positive vs. negative)? Based on the Pollyanna principle, which states there is a universal human tendency to use and expect positive evaluative words (Boucher & Osgood, 1969), we hypothesize that APF which strengthen positive feedback will be used more often than APF which strengthen negative feedback (H1).
After having described the types and frequencies of APF in reviews, we will address the following research question (RQ2): What is the impact of APF in online reviews on hotel responses in terms of rapport management, and to what extent does the effect of these features on rapport depend on the frequency of use of APF in online hotel reviews, and feedback valence (positive vs. negative)? We know that APF used to intensify a positive or negative evaluation are indeed perceived as such (Meinders, 2017), and that speech acts whose illocutionary force is strengthened are likely to affect rapport and how it is managed in the ensuing interaction (Spencer-Oatey, 2008). Against this background, we expect that a higher number of APF (vs. absence or lower number) used to intensify a positive evaluation will be associated with
an increased use of rapport-enhancing strategies in hotel responses (H2a)
and a decreased use of defensive strategies in hotel responses in the case of mixed reviews, namely reviews including both positive and negative feedback (H2b).
Moreover, we expect a higher number of APF (vs. absence or lower number) used to intensify a negative evaluation to be associated with
a decreased use of rapport-enhancing strategies in hotel responses (H3a)
and/or an increased use of defensive strategies in hotel responses (H3b).
In the next section, we will present the methodology used to address these research questions and test these hypotheses.
3 Methodology
3.1 Corpus Properties and Selection
This study relies on a corpus provided to us by Hotelspeaker in the context of a collaboration between this company and Ghent University, formalized through a Confidential Disclosure Agreement in October 2018 and a Data Transfer Agreement in October 2021. Hotelspeaker is part of the French company Idixit SAS and specializes in responding to online hotel reviews on behalf of the hotels that have enlisted its services. Its customer base consists of hotels and hotel chains from all over the world. Hotels making use of these services stay in control of the review response process in that they are required to validate each response suggested by the company. The entire corpus contains reviews and responses to these reviews as they are published on a variety of public platforms and in over 40 different languages. The company is committed to answering every hotel review. Aside from reviews and review responses, the corpus provided by the company contains an ID for every interaction (review and response), and information on:
platform on and language in which the interaction is published;
hotel (name, location);
review writer (location, gender if available);
review (date of publication, normalized rating calculated on a scale from 0 to 100);
type of review, i.e., whether the review is entirely positive, entirely negative, or mixed;
response (date of publication, ID of company editor who wrote the response).
This pre-annotation is useful, because variables such as normalized rating, hotel ID, and company editor ID can be taken into account when performing statistical analyses on the data. The company works with communicative guidelines on how to respond to hotel reviews, thus setting a professional standard for what a good hotel response should look like, one that promotes politeness, personalization, and creativity. We are thus analysing the influence of APF in a professional setting.
For this study, Dutch-language review interactions from Booking.com were selected using advanced and random sampling, with one interaction consisting of a review and a response. Aside from working exclusively with Dutch-language data from Booking.com, we only selected review interactions linked to hotels located in Belgium or the Netherlands and to editors writing responses in Dutch (not in other languages), and which included one or more of the 10 most frequently occurring review topics and a user location for the reviewer (to verify that the review writer lives in a Dutch-speaking country). Moreover, we strived for a balanced distribution of interactions per editor, per country, and per review type (whether the review is entirely positive, entirely negative, or mixed). This selection procedure initially resulted in 1,480 interactions (534 entirely positive, 534 mixed, 412 entirely negative reviews). While coding, we found that some interactions occurred twice in the corpus, and that quite a few reviews pre-annotated as entirely negative (or positive) actually did contain some positive (or negative) feedback as well. We deleted identical interactions and recategorized some of the interactions, ending up with 1,465 review interactions (510 entirely positive, 769 mixed, 186 entirely negative reviews).
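The cleaning step described above – removing duplicate interactions and recategorizing reviews according to the feedback they actually contain – can be sketched as follows. This is a minimal illustration in Python (the study's analyses were run in R and Excel); the record structure and the toy data are invented for the example.

```python
# Each interaction: (interaction_id, pre-annotated type,
#                    contains positive feedback?, contains negative feedback?)
interactions = [
    ("a1", "positive", True, False),
    ("a1", "positive", True, False),   # duplicate of a1
    ("a2", "negative", True, True),    # pre-annotated as negative, but also contains praise
    ("a3", "mixed",    True, True),
]

# Step 1: remove duplicate interactions (keep the first occurrence of each ID).
seen, deduplicated = set(), []
for row in interactions:
    if row[0] not in seen:
        seen.add(row[0])
        deduplicated.append(row)

# Step 2: recategorize based on the feedback actually present in the review.
def review_type(has_pos, has_neg):
    if has_pos and has_neg:
        return "mixed"
    return "positive" if has_pos else "negative"

cleaned = [(rid, review_type(pos, neg)) for rid, _, pos, neg in deduplicated]
print(cleaned)  # a2 is recategorized from "negative" to "mixed"
```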
The company has granted us written permission to use this dataset, which is securely stored on server space provided by our university (Ghent University) and which does not include sensitive information. To protect review writers’ privacy, the dataset does not contain user (nick)names, making their identification impossible. In this publication, no personal information is presented. We only show (excerpts of) linguistic realizations of reviews and review responses.
3.2 Corpus Annotation
The selected reviews as well as the responses were annotated for specific language and discourse features. Manual tagging of APF in reviews and of rapport-enhancing and defensive response strategies in hotel responses was carried out in Excel. APF were coded by the first author of this chapter, and rapport-enhancing and defensive response strategies were coded by a master’s student. As the coding of response strategies is less straightforward than the coding of APF, the first author checked the master’s student’s annotation and made changes where necessary.
The following APF (including their frequencies per review) were annotated:
(flooding of) exclamation marks, e.g., ‘the room was so great!!!’
emoticons/emoji (Footnote 1), e.g., ‘the room was so great :)’ – ‘the room was so great [emoji]’
capitalization, e.g., ‘the room was SO great’
flooding of vowels, e.g., ‘the room was soooo great’
flooding of question marks, e.g., ‘is this supposed to be good service???’
As our corpus also contains mixed reviews, APF strengthening positive feedback and APF strengthening negative feedback were coded separately. Frequency rates of APF strengthening positive and APF strengthening negative feedback (henceforth called positive and negative APF respectively) per review were calculated based on absolute numbers as well as density. Density was calculated by dividing the absolute number of positive/negative APF per review by the review word count.
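The density measure can be sketched as follows. This is a minimal Python illustration (the study's statistical analyses were run in R); the example review and its APF counts are invented.

```python
def apf_density(apf_count: int, review_text: str) -> float:
    """Density = absolute number of (positive or negative) APF in a review
    divided by the review's word count."""
    word_count = len(review_text.split())
    if word_count == 0:
        return 0.0
    return apf_count / word_count

# Hypothetical mixed review with one positive and one negative APF
review = "Great location and lovely staff!! But the room was dirty, is this normal???"
positive_apf = 1  # "!!"  (flooded exclamation marks strengthening positive feedback)
negative_apf = 1  # "???" (flooded question marks strengthening negative feedback)

print(apf_density(positive_apf, review))  # positive APF per word
print(apf_density(negative_apf, review))  # negative APF per word
```

Because density normalizes by review length, a single flooded exclamation mark weighs more heavily in a short review than in a long one.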
What we mean by ‘rapport-enhancing’ and ‘defensive’ review response strategies has been operationalized in a codebook by two master’s students and the first author of this chapter, based on the move analyses of Ho (2017) and Cenni (2022). A move analysis is a frequently adopted method within genre studies (cf. Van Herck et al., 2022). It serves to describe the discourse structure of a specific genre – in our case responses to online hotel reviews – by identifying the moves making up this genre’s macro-structure, with a move being defined as ‘a discoursal or rhetorical unit that performs a coherent communicative function in a written or spoken discourse’ (Swales, 2004, p. 228). We identified several moves in review responses and categorized these moves depending on whether they mainly fulfil the function of establishing rapport with the reviewer or of defending the hotel’s reputation. Rapport-enhancing discourse moves encompass positive moves which can only be used in response to positive feedback, positive moves which can be used in response to both positive and negative feedback, and accommodative moves which are only used in response to negative feedback. Defensive discourse moves encompass moves which either diminish or deny a guest’s negative feedback. In addition to the identification of specific moves, a move analysis also pays attention to linguistic features of discourse moves such as grammatical features (tense, voice, and aspect) and word choice (Upton & Cohen, 2009). In line with this procedure, our codebook also includes intensifiers (Footnote 2) (e.g., adverb, adjective, caps, exclamation mark, emoticon/emoji) used to strengthen a rapport-enhancing or a defensive move, as well as the intensified echoing of a guest’s feedback.
When using the terms rapport-enhancing and defensive review response strategies, we thus refer both to discourse moves and to language intensifiers which have the goal of either enhancing rapport with the customer or defending a hotel’s image. Examples and frequencies per review type (entirely positive, mixed, entirely negative) of each type of rapport-enhancing and defensive response strategy are given in Tables 6.1, 6.2, 6.3, and 6.4. When reporting frequencies of moves and most intensifiers in these tables, the first number in a cell refers to the absolute number of occurrences of a specific move or intensifier. The second number is based on a different way of counting: it indicates in how many responses the move or intensifier occurs at least once. The move ‘express positive feeling’, for instance, occurs 632 times in a sample of 510 responses to positive reviews, indicating that this move often occurs more than once within such a response. This same move occurs at least once in 474 out of 510 responses to positive reviews, or in 93% of such responses. In the case of highlighting a guest’s positive feedback, we distinguished several scales and provide frequencies for each scale.
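The two counting conventions used in the tables can be sketched as follows (a Python illustration with invented data; `per_response` lists how often a given move occurs in each response):

```python
def move_frequencies(per_response):
    """Given, for one move, its number of occurrences in each response, return:
    (total occurrences,                       # first number in a table cell
     number of responses containing it >= 1,  # second number in a table cell
     share of responses containing it >= 1)."""
    total = sum(per_response)
    at_least_once = sum(1 for n in per_response if n > 0)
    share = at_least_once / len(per_response)
    return total, at_least_once, share

# Invented occurrence counts for one move across five responses
counts = [2, 0, 1, 3, 1]
total, responses_with_move, share = move_frequencies(counts)
print(total, responses_with_move, f"{share:.0%}")  # 7 4 80%
```

The first figure can exceed the sample size (a move may occur several times within one response); the second never can.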
Table 6.1 Examples and frequencies per review type (entirely positive, mixed, entirely negative) of rapport-enhancing response moves.
| | Translated example from the corpus | No. in response to positive reviews (n=510) | No. in response to mixed reviews (n=769) | No. in response to negative reviews (n=186) |
|---|---|---|---|---|
| Rapport-enhancing response moves | ||||
| A. Positive moves in response to positive feedback | ||||
| A.1. express positive feeling | We are delighted that everything was to your liking. | 632 474 (93%) | 693 580 (75%) | 4* 4 (2%) |
| A.2. compliment as motivator | Such reviews are a great boost for us. | 7 7 (1%) | 2 2 (0.3%) | |
| A.3. refer compliment | We pass on the compliments to our staff. | 24 24 (5%) | 25 25 (3%) | |
| A.4. promote facility/service/values | Customer friendliness is very important to us. | 96 94 (18%) | 82 79 (10%) | |
| B. Positive/accommodative moves in response to positive/negative feedback | ||||
| B.1. thank/appreciation | Thank you very much for your review. | 534 444 (83%) | 788 655 (85%) | 157 148 (80%) |
| B.2. solicit future visit | We look forward to seeing you again! | 515 491 (96%) | 610 595 (77%) | 106 103 (55%) |
| B.3. positive closing (wish) | We wish you a pleasant day! | 25 25 (5%) | 42 42 (5%) | 11 10 (5%) |
| B.4. express hope | We hope you slept well and enjoyed the city! | 26 26 (5%) | 132 132 (17%) | 52 51 (27%) |
| B.5. invite for future contact | We would like to ask you to contact us personally to discuss what exactly was wrong. | 1 1 (0.2%) | 78 73 (9%) | 44 42 (23%) |
| C. Accommodative moves in response to negative feedback | ||||
| C.1. take corrective action: consider the information | 1* 1 (0.2%) | 276 270 (35%) | 67 64 (34%) | |
| C.1.1. refer complaint | and we will pass on your comments to the cleaning team. | 97 95 (12%) | 37 35 (19%) | |
| C.1.2. investigate the matter | and will check what went wrong that day. | 54 53 (7%) | 10 9 (5%) | |
| C.1.3. pure consideration | We take note of your comments. | 1* 1 (0.2%) | 125 122 (16%) | 20 20 (11%) |
| C.2. rectify/improve | We are working hard to find a solution so that the lift can transport our guests without any problems. | 2* 2 (0.4%) | 242 222 (29%) | 69 63 (34%) |
| C.3. apologize | and we apologize for this inconvenience. | 273 230 (30%) | 122 101 (54%) | |
| C.4. indicate awareness/acknowledge problem | It is true that there are no kettles in the rooms. | 2* 2 (0.4%) | 159 147 (19%) | 25 23 (12%) |
| C.5. express negative feeling | We feel sorry that you had a disappointing experience with us. | 2* 2 (0.4%) | 392 353 (46%) | 141 115 (62%) |
Table 6.2 Examples and frequencies per review type (entirely positive, mixed, entirely negative) of rapport-enhancing language intensifiers.
| | Translated example from the corpus | No. in response to positive reviews (n=510) | No. in response to mixed reviews (n=769) | No. in response to negative reviews (n=186) |
|---|---|---|---|---|
| Rapport-enhancing language intensifiers | ||||
| A. Intensifiers used to strengthen a rapport-enhancing move | ||||
| A.1. adjective/adverb | We are very pleased that you were satisfied with the convenient location and our service. | 982 463 (91%) | 1307 606 (79%) | 219 130 (70%) |
| A.2. single exclamation mark | Thank you for your nice review! | 669 373 (73%) | 593 352 (46%) | 7 7 (4%) |
| A.3. emoticon | Thank you for this nice review :) | 41 36 (7%) | 13 13 (2%) | |
| A.4. interjection | Wow, thanks for this great review! | 16 15 (3%) | 3 3 (0.4%) | |
| A.5. caps/accent | WOW! What a top score! | 2 2 (0.4%) | ||
| B. Highlight a guest’s positive feedback (Footnote 1) | ||||
| B.1. intensified ways of making explicit that the guest’s review was positive | 1: positive review 2: very positive review 3: extremely positive review | 1: 240 (47%) 2: 108 (21%) 3: 7 (1%) | 1: 185 (24%) 2: 44 (6%) 3: 8 (1%) | |
| B.2. intensified ways of echoing a guest’s positive experience | 1: that you have so enjoyed 2: that you so tremendously enjoyed | 1: 143 (28%) 2: 2 (0.4%) | 1: 71 (9%) 2: 4 (0.5%) | 1: 1 (0.5%)* |
| B.3. intensified ways of making explicit the hotel management’s positive feelings in reaction to a guest’s positive feedback | 1: we find it fantastic that you 2: it pleases us enormously that you | 1: 39 (8%) 2: 39 (8%) | 1: 14 (2%) 2: 17 (2%) | |
Footnote 1: In the case of negative feedback, hotels would sometimes also echo and thus fully acknowledge the guest’s negative experience, by writing ‘that you were so dissatisfied’. This, however, happened only rarely (in 0.8% of mixed reviews and 3% of entirely negative reviews). Because of data paucity, we did not take these cases into account in our analysis.
Table 6.3 Examples and frequencies per review type (entirely positive, mixed, entirely negative) of defensive response moves.
| | Translated example from the corpus | No. in response to positive reviews (n=510) | No. in response to mixed reviews (n=769) | No. in response to negative reviews (n=186) |
|---|---|---|---|---|
| Defensive response moves | ||||
| A. Diminish moves | ||||
| A.1. explanations | Especially during holidays or events in the city, the price varies from day to day, and this applies not only to us, but to all hotels in the area. | 131 128 (17%) | 47 42 (23%) | |
| A.2. frame as isolated incident | 63 57 (7%) | 31 28 (15%) | ||
| A.2.1. unusual occurrence | This is certainly not our standard. | 36 33 (4%) | 19 19 (10%) | |
| A.2.2. refer to positive comments in the past | However, our guests are usually very satisfied with it. | 27 25 (3%) | 12 10 (5%) | |
| A.3. shift responsibility | 340 299 (39%) | 104 84 (45%) | ||
| A.3.1. suggest or recommend | In the future, do not hesitate to report this immediately to the reception, they will help you as best they can. | 1* 1 (0.2%) | 283 263 (34%) | 73 67 (36%) |
| A.3.2. shift responsibility to the guest | We have offered you an upgrade for the inconveniences you experienced and we have done our best to satisfy you but none of the options proposed proved sufficient. | 11 7 (1%) | 15 11 (6%) | |
| A.3.3. shift responsibility to a third party | It was an unexpected external problem that unfortunately was not related to our hotel itself. | 1* 1 (0.2%) | 11 11 (1%) | 7 6 (3%) |
| A.3.4. matter of taste | Our guests are usually very satisfied with our standard box spring beds, but we understand that not everyone has the same preference. | 35 35 (4%) | 9 9 (5%) | |
| A.4. highlight facility/service/values (Footnote 1) | We are sorry you were not completely satisfied with the service provided by our staff. Our guests are always our top priority and we will certainly discuss this with the team. | 2* 2 (0.4%) | 379 310 (40%) | 101 83 (45%) |
| B. Deny moves | ||||
| B.1. rebut | We take note of your comment about the television and would like to point out that we do broadcast Flemish channels. | 66 62 (8%) | 27 20 (11%) | |
| B.2. challenge reviewer’s decision | It is clearly stated on the booking.com website, in the policies, that an extra baby bed or extra bed is not possible. | 2 2 (0.3%) | 12 11 (6%) | |
| B.3. question responsibility | We are very surprised to hear you say that our staff are unfriendly and unhelpful. Hospitality is one of our highest priorities | 6 5 (0.7%) | 8 7 (4%) | |
Footnote 1: The difference between the moves ‘promote facility/service/values’ (see Table 6.1) and ‘highlight facility/service/values’ is that in the latter, the hotel representative responds to negative feedback which they try to diminish by highlighting the hotel’s facilities, services, or values. In the former, however, the hotel representative confirms and/or emphasizes positive feedback, either by adding some new positive information about the hotel’s facilities, services, or values that was not present in the review itself or by elaborating on the guest’s positive feedback in greater detail.
Table 6.4 Examples and frequencies per review type (entirely positive, mixed, entirely negative) of defensive language intensifiers.
| | Translated example from the corpus | No. in response to positive reviews (n=510) | No. in response to mixed reviews (n=769) | No. in response to negative reviews (n=186) |
|---|---|---|---|---|
| Defensive language intensifiers: intensifiers used to strengthen a defensive move | ||||
| 1. adjective/adverb | This is definitely not our standard. | 559 346 (45%) | 192 109 (59%) | |
| 2. single exclamation mark | You can request a bath mat at the reception! | 33 33 (4%) | 6 6 (3%) | |
| 3. emoticon | Well, toilet paper is very personal :) | 6 6 (0.8%) | ||
| 4. interjection | / | |||
| 5. caps/accent | We would like to point out that it is stated on our site that parking with us can NOT be reserved in advance, and therefore you can only park here if there are still spaces available. | 2 2 (1%) | ||
Tables 6.1 and 6.2 present rapport-enhancing response strategies, with Table 6.1 focusing on rapport-enhancing response moves.
It should be noted that the realization of positive moves in response to positive or negative feedback can be different depending on review valence. For instance, in response to negative feedback, the move ‘solicit future visit’ will often be phrased along the lines of ‘We hope to see you again for a better experience’, a phrasing which would not make sense in response to positive feedback. Moreover, the moves ‘express hope’ and ‘invite for future contact’ are mainly used in response to negative feedback and less so in response to positive feedback.
Table 6.2 gives an overview of rapport-enhancing language intensifiers used in our corpus.
Tables 6.3 and 6.4 present defensive response strategies, with Table 6.3 focusing on defensive response moves.
Note that we found, on very rare occasions, a positive strategy in response to an entirely negative review, and an accommodative or diminish move in response to an entirely positive review (marked with * in Tables 6.1 and 6.3), which may seem surprising at first sight. These anomalies have two sources. In exceptional cases, a reviewer gave some positive feedback not in the review text itself, but when performing the task of choosing predefined topical categories and marking them as either positive or negative. In other cases, a reviewer mentioned a negative aspect in a positive review, with this negative comment either referring to something for which the hotel itself is not responsible (e.g., road works) or merely serving to stress the hotel’s excellent service to its guests.
Finally, Table 6.4 presents defensive language intensifiers.
Tables 6.1, 6.2, 6.3, and 6.4 show that, unsurprisingly, response strategies differ depending on review valence. The goal of this chapter is to find out whether the use of positive or negative APF in reviews also affects response strategies.
3.3 Analysis
RQ1, which relates to the types and frequencies of APF in online reviews and the extent to which these frequencies differ according to feedback valence (positive vs. negative), can be answered by counting features and applying Chi-square tests. This analysis also allows us to test H1 (positive APF will be used more often than negative APF). RQ2 on the impact of APF in online reviews on hotel responses in terms of rapport management is answered, and H2a–b and H3a–b tested, using the software R (R Core Team, 2022). A Generalized Additive Mixed Model is fitted with (the frequency of) each response strategy (i.e., positive, accommodative, diminish, and deny moves; intensifiers of these four types of moves; the three strategies of highlighting a guest’s positive feedback) as the response variable and the density of either positive or negative APF in the hotel reviews (i.e., the number of APF divided by the word count) as the predictor variable. The overall rating of the hotel was included as a control variable in these models. The IDs of the hotels as well as those of the editors served as (two) random effects. The choice of Generalized Additive Mixed Models was motivated by the fact that some response variables proved to be non-linearly related to APF density, in which case linear effects in a Generalized Linear Mixed Model could misestimate the effect. All models were fitted with the R package gam (Hastie, 2020). Because the non-linear effects cannot be expressed in a simple formula involving estimated coefficients, results will not be reported in tables. As is customary for Generalized Additive Mixed Models, the statistically significant effects will instead be visualized. This will be done using the R package effects (Fox, 2003).
Part of the visualization for Generalized Additive Mixed Models is that the response variable is centred to a mean of 0, but the general shape of the non-linear curve closely represents the relationship between predictor and response variable. All models were fitted on all the data, but descriptive analyses showed that there were five outlying observations for the density of positive APF and five other outliers for the density of negative APF. The models were refitted on the data without these outliers, and the significant results of these refitted models are what is shown below (non-significant results will not be visualized).
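The Chi-square comparison used for RQ1 can be sketched as a 2×2 test on review counts: reviews with vs. without at least one APF, split by valence. The sketch below is in pure Python (the study itself used R) and uses hypothetical counts, so the resulting statistic is only illustrative and will not match the value reported in the Results section.

```python
def chi_square_2x2(a, b, c, d):
    """Pearson's Chi-square statistic (df = 1, no continuity correction)
    for the 2x2 contingency table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical counts: reviews containing at least one APF vs. none,
# in entirely positive vs. entirely negative reviews
with_apf_pos, without_apf_pos = 173, 337   # entirely positive reviews
with_apf_neg, without_apf_neg = 32, 154    # entirely negative reviews

chi2 = chi_square_2x2(with_apf_pos, without_apf_pos,
                      with_apf_neg, without_apf_neg)
print(round(chi2, 2))  # well above the df = 1 critical value of 10.83 (p < .001)
```

With these counts, the share of reviews containing an APF is clearly higher in the positive subset, and the statistic far exceeds the critical value, mirroring the direction of the reported result.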
4 Results
4.1 APF in Online Hotel Reviews (RQ1)
Table 6.5 serves to answer RQ1. It shows the different types of APF strengthening either positive or negative feedback with their absolute numbers. The percentages given show how often a specific type is used compared to the other types. We can see, for instance, that of all the positive APF, 80% are exclamation marks, 12% flooded exclamation marks, 3% emoticons, and 5% caps. Table 6.5 also shows that a single exclamation mark is most often used, both to strengthen positive and to strengthen negative feedback. Flooding (mainly of exclamation marks) comes in second, and this type is used more often to strengthen negative feedback as compared to positive feedback. Emoticons and capitalization are not frequently used.
Table 6.5 Frequencies for different types of positive and negative APF.
| Review | ! (Pos) | ! (Neg) | Flooding !!/??/vowels (Pos) (Footnote 1) | Flooding (Neg) | Emoticons (Pos) (Footnote 2) | Emoticons (Neg) | Caps (Pos) | Caps (Neg) | TOTAL Pos APF | TOTAL Neg APF |
|---|---|---|---|---|---|---|---|---|---|---|
| Only positive feedback (n=510) | 235 (80%) | 0 | 34 (12%) | 0 | 8 (3%) | 0 | 16 (5%) | 0 | 293 34% min. 1 | 0 |
| Mixed feedback (n=769) | 129 (84%) | 90 (58%) | 10 (7%) | 54 (35%) | 11 (7%) | 3 (2%) | 3 (2%) | 7 (5%) | 153 15% min. 1 | 154 13% min. 1 |
| Only negative feedback (n=186) | 0 | 36 (55%) | 0 | 23 (35%) | 0 | 0 | 0 | 6 (9%) | 0 | 65 17% min. 1 |
Footnote 1: In this category, positive APF only consist of !!, while negative APF mainly consist of !!, with nine cases of ?? and one case of flooded vowels in mixed reviews, and four cases of ?? and one case of flooded vowels in negative reviews.
Footnote 2: It turned out that our corpus did not contain emoji.
For mixed reviews, no differences can be detected when considering all types of APF together: they seem to be used in equal numbers to strengthen positive and negative feedback. Compared with reviews containing only negative feedback, however, APF are used significantly more often in reviews containing only positive feedback (X² = 251.78, df = 1, p < .0001). Thirty-four per cent of such reviews contain a minimum of one APF, as opposed to 17% of reviews containing only negative feedback. It thus seems that H1 (positive APF will be used more often than negative APF) can be confirmed. When APF are used to strengthen negative feedback, however, they are more likely to be used in a more extreme way, through flooding.
4.2 Influence of APF in Online Hotel Reviews on Hotel Reply Strategies (RQ2, i.e., H2a–b and H3a–b)
H2a, which proposes that a higher number of positive APF is associated with an increased use of rapport-enhancing response strategies, was tested on the subcorpora of responses to entirely positive and mixed reviews. The results reveal that almost all rapport-enhancing strategies remain unaffected by a higher number of positive APF: when more positive APF are used, this does not lead to an increased use of rapport-enhancing response moves or of intensifiers strengthening such moves, nor does it lead the hotel manager to highlight more strongly how positive a guest’s review or experience was. A higher number of positive APF does have an impact, however, on the way the hotel manager describes their own positive feelings in reaction to positive feedback, in that these feelings are then described in a more intensified way. Figure 6.1 shows that the (hotel manager’s) description of positive feeling increases as APF density increases to about 4%. After that, the data become too sparse to confidently estimate the effect. We thus found only limited support for H2a.

Figure 6.1 Effect of APF strengthening positive feedback (density) on the (predicted) frequency of (extreme) descriptions of positive feelings.
H2b, which proposes that a higher number of positive APF is associated with a decreased use of defensive response strategies, is only relevant to the subcorpus of mixed reviews. Bearing in mind that defensive response moves comprise both diminish and deny moves, the results reveal no effect of a higher number of positive APF on the frequency with which deny moves are used, but they do reveal an effect on the frequency of diminish moves and of the intensifiers used to strengthen such diminish moves. Figure 6.2 depicts how the number of diminish moves seems to increase initially as the density of positive APF increases. However, the effect quickly becomes uncertain at higher APF densities. Figure 6.3 demonstrates that the number of intensifiers strengthening diminish moves exhibits a wave-like pattern with increasing density of positive APF: first it increases, then it decreases, but finally it increases again. These findings are not in line with H2b, as we expected a decreased use of defensive response strategies.

Figure 6.2 Effect of APF strengthening positive feedback (density) on the (predicted) frequency of diminish moves in response to negative feedback (in mixed reviews).

Figure 6.3 Effect of APF strengthening positive feedback (density) on the (predicted) frequency of intensifiers strengthening diminish moves in response to negative feedback.
H3a and H3b were tested on the subcorpora of responses to mixed and entirely negative reviews. H3a proposes that a higher number of negative APF is associated with a decreased use of rapport-enhancing response strategies. The results reveal that rapport-enhancing moves remain unaffected by higher numbers of negative APF. However, we do see an effect on rapport-enhancing language intensification. More specifically, an increasing density of negative APF appears to reduce the number of intensifiers for positive moves, as Figure 6.4 shows. Note that this effect is estimated on the data of the mixed reviews alone. However, this effect, which is in line with H3a, only holds for APF densities up to 10%, after which the effect becomes uncertain (due to data sparsity). Figure 6.5 also shows a generally negative effect of APF strengthening negative feedback on the number of intensifiers of accommodating moves: at first this number slightly increases with increasing negative APF density, but it very soon decreases. These results regarding rapport-enhancing language intensification support H3a.

Figure 6.4 Effect of APF strengthening negative feedback (density) on the (predicted) frequency of intensifiers of positive moves.

Figure 6.5 Effect of APF strengthening negative feedback (density) on the (predicted) frequency of intensifiers of accommodating moves.
H3b proposes that a higher number of negative APF is associated with an increased use of defensive response strategies. When testing this hypothesis, we found that a higher number of negative APF has a somewhat mixed effect on the number of defensive response strategies. More specifically, Figure 6.6 shows – in line with H3b – that the number of diminish moves in response to negative feedback begins to increase with a growing density of negative APF, but the effect quickly becomes uncertain. Figure 6.7 shows a mixed effect in that the number of intensifiers for deny moves at first slightly increases with growing negative APF density, but then clearly decreases until an APF density of about 10%, after which it increases again.

Figure 6.6 Effect of APF strengthening negative feedback (density) on the (predicted) frequency of diminish moves in response to negative feedback.

Figure 6.7 Effect of APF strengthening negative feedback (density) on the (predicted) frequency of intensifiers for deny moves.
In sum, all these effects indicate no support for H2b and partial support for H2a, H3a, and H3b. Tables 6.6 and 6.7 offer a summary of the results related to our second research question.
Table 6.6 Impact (yes/no) of positive/negative APF on moves in review responses; *italics* = not in line with hypothesis; **bold** = in line with hypothesis.
| Moves in hotel review responses | Positive moves (rapport-enhancing) | Accommodative moves (rapport-enhancing) | Diminish moves (defensive) | Deny moves (defensive) |
|---|---|---|---|---|
| Positive APF in reviews | no (H2a) | no (H2a) | *yes (H2b)* | no (H2b) |
| Negative APF in reviews | no (H3a) | no (H3a) | **yes (H3b)** | no (H3b) |
Table 6.7 Impact (yes/no) of positive/negative APF on language intensification in review responses; *italics* = not in line with hypothesis; **bold** = in line with hypothesis.
| Language intensification in hotel review responses | Positive move intensification (rapport-enhancing) | Accommodative move intensification (rapport-enhancing) | Diminish move intensification (defensive) | Deny move intensification (defensive) | Description of review (highlighting guest’s positive feedback) | Description of guest’s experience (highlighting) | Description of hotel management’s feelings (highlighting) |
|---|---|---|---|---|---|---|---|
| Positive APF in reviews | no (H2a) | no (H2a) | *yes (H2b)* | no (H2b) | no (H2a) | no (H2a) | **yes (H2a)** |
| Negative APF in reviews | **yes (H3a)** | **yes (H3a)** | no (H3b) | mixed (H3b) | no (H3a) | no (H3a) | no (H3a) |
5 Discussion and Conclusion
Based on the detected types and frequencies of APF in online hotel reviews, we can conclude that APF do not occur very frequently and that they mainly take the form of (flooded) exclamation marks. Moreover, we detected significantly more APF in reviews containing only positive feedback than in reviews containing only negative feedback. This latter finding is in line with the Pollyanna Hypothesis, which asserts that there is a universal human tendency to use positive evaluative words more frequently than negative evaluative words in communication, and with previous research on the use of emoji (Bai et al., 2019). Interestingly, however, when reviewers in our corpus do use APF to strengthen negative feedback, they use flooded punctuation more often than reviewers who strengthen positive feedback, who are more likely to use simple exclamation marks. It thus seems that negative feedback is intensified less often through APF, but that when it is, this is done in a more extreme way, thus amplifying the negative experience even more. It would be interesting to explore whether this finding holds in other languages, and how it could be explained from a psychological perspective.
Having provided an overview of APF types and frequencies in entirely positive, mixed, and entirely negative reviews, we explored our second research question, concerning the influence of these APF on the ensuing responses: how do they affect response strategies, and particularly how rapport is managed in responses? To operationalize what we mean by response strategies, we distinguished between (rapport-enhancing and defensive) moves on the one hand and (rapport-enhancing and defensive) language intensifiers on the other. When unpacking and interpreting the results presented in Section 4, some interesting patterns emerge.
Looking first at move frequency (see Table 6.6), we see that APF do not affect the number of positive, accommodative, and deny moves in responses, which runs counter to our expectations, but that they did affect the frequency of diminish moves: both a higher number of positive APF and a higher number of negative APF are associated with higher numbers of diminish moves. While we expected this more defensive stance in reaction to negative APF, we did not expect it in reaction to positive APF. A possible explanation for this at first sight surprising result may lie in the fact that defensive response strategies, and especially diminish moves, tend to be formulated in such a way that they do not necessarily come across as impolite or offensive. The review management company’s response policy stipulates that impolite and aggressive responses should be avoided at all times because they are considered unprofessional, and unprofessional behaviour is not in the interest of hotels, which want to avoid or at least minimize reputational harm. Diminish strategies can thus be characterized as defensive strategies in that they try to mitigate the complaint in an effort to defend a hotel’s reputation, but with a potentially accommodative feel to them (see, e.g., the move ‘suggest or recommend’ in Table 6.1). In that sense, one could explain this result by arguing that a higher number of positive APF in mixed reviews gives a hotel the impression that – given the positive stance of the reviewer – there is room to fight off the criticism in an inoffensive way, thus making the hotel look even better in the eyes of the reviewer and future customers.
Overall, when evaluating the results regarding move frequency for entirely positive, mixed, and entirely negative reviews separately, we can conclude that the discourse structure of entirely positive reviews is not affected by APF, while for mixed and entirely negative reviews, the discourse structure is affected, but APF’s influence does not extend beyond diminish moves.
As for language intensification, which we subdivided into intensifiers of rapport-enhancing and defensive moves on the one hand, and highlighting a guest’s feedback on the other, we found that positive APF in online reviews yielded some unexpected and some expected results (see Table 6.7): (1) they do not influence the use of intensifiers for rapport-enhancing moves, the hotel manager’s description of the review, or their description of the guest’s experience (while we expected a higher number of intensifiers for these moves and more intensified positive descriptions); (2) they are associated with more intensification of diminish moves (while we expected less intensification); and (3) they are, as predicted, associated with intensified descriptions of the hotel manager’s positive feelings. The first two findings echo what we found for the moves, and again, the at first sight surprising finding regarding diminish moves can be explained by the dual character of these moves, which are defensive in nature without posing a strong threat to rapport, thanks to their accommodating tone. The third finding can be illustrated with the following example, in which a positive review with a high number of APF (We enjoyed it!! Great hotel! Lovely spacious, clean room. Wonderfully decorated!) receives the following response: ‘Great hotel’ – these words are so nice to hear. We are delighted that you enjoyed a fantastic stay with us – and that you were totally happy with the spacious, clean room. We hope to see you again soon, Kind regards, X’ (our emphasis). This specific way in which a hotel manager highlights a guest’s positive feedback – by referring to the positive feelings this feedback caused – can be seen as demonstrating emotional contagion. When zooming in on negative APF, we see that they are – as predicted – associated with less intensification of rapport-enhancing moves, but that, contrary to expectations, they did not affect the intensification of diminish moves. Their effect on the intensification of deny moves is mixed, which makes it hard to interpret from a quantitative perspective.
When taking a bird’s eye perspective on these results related to the impact of APF in reviews on hotel management responses, some clear patterns emerge, allowing us to formulate the following conclusions. First, as we often found no impact on specific reply strategies, we can conclude that APF’s influence on how review-related interactions unfold in terms of rapport is quite subtle. This subtle influence may be due to the highly professional setting in which the review responses in our corpus are created (see Section 3.1). In this setting, there is a certain gold standard of what responses to reviews should look like, with rather fixed generic conventions (Cenni, 2022), and this gold standard also requires hotels to remain professional in their behaviour at all times, and thus to avoid emotional excesses and impoliteness in their interactions with guests. Second, it is remarkable that, although negative APF are less frequently used than positive APF, they do seem to have a stronger impact on responses, as they are associated with less intensification of rapport-enhancing moves and a higher frequency of diminish moves. This can be interpreted as an affirmation of the negativity bias, which refers to a person’s tendency to automatically pay ‘more attention to unpleasant (negative) than to pleasant (positive) information’ because negative information is marked (Liebrecht et al., 2019, p. 172). Third, the fact that diminish moves increase in reaction to both positive and negative APF testifies to their dual character of being defensive, but with an accommodating potential. Overall, it seems that a higher frequency of positive APF leads to a more enthusiastic and more persuasive stance in responses to these reviews, and that a higher frequency of APF strengthening negative feedback leads to a more defensive, less rapport-enhancing stance.
By revealing differences in the use of positive and negative APF in online reviews as a type of public CMD, this chapter contributes to and confirms the observation of situated normativity in online written discourse. Our results also shed light on differences between APF as used in online reviews (see RQ1) and APF as used in responses to these reviews (see Table 6.1), an issue which we did not explicitly discuss as it would have gone beyond the scope of our research questions. Future research can compare our findings with APF as used in different forms of private and public CMD to acquire further insights into the norms governing the use of these features.
Our approach in studying the relational effects of APF was novel in that we analysed their effects beyond emoji and beyond the intensification of positive emotions, and by looking at their rapport-related impact from an interactional perspective. In doing so, we were able to show that the language used in reviews is not without consequence in terms of building rapport and makes certain reply strategies more likely than others.
This study also comes with several limitations. As an additional way of assessing rapport management, we could have supplemented our quantitative analysis with a qualitative component, delving deeper into certain phenomena we observed, and we could also have looked more closely at processes of accommodation with regard to language style: To what extent does the use of APF by reviewers also trigger the use of APF on the part of the hotel manager? To what extent do brand identity and the formal or informal communication style that comes with it override the principles of language style accommodation in this specific context? These are all issues which can be explored in future research. Moreover, the type of quantitative corpus analysis we performed can reveal associations between certain features, but it cannot make claims about causality, which would enable us to exclude other explanations for our findings. It would therefore be interesting to test the second research question on the impact of APF experimentally. Finally, it would be worthwhile to explore this study’s research questions using similar data in other languages or on other platforms, and using different types of consumer–company data.
1 English as a World Language and Persuasion
English is the most widely spoken language in the world, considering not only first language speakers, but also its use as a second language and as a lingua franca. Estimates suggest that more than one in four people globally are capable of communicating in English (Crystal, 2003, p. 69; Kachru, 2017, p. 272). Among those who speak English, second language and lingua franca speakers outnumber first language speakers (Seidlhofer, 2005, p. 59). The widespread use of a single language is arguably a boon to international unity and exchange because it enables direct communication between counterparts who would otherwise need to rely on translators and interpreters, which greatly complicates such exchanges and reduces their effectiveness.
However, international communication through a single, shared language is not necessarily quite as seamless as this optimistic view might suggest. Use of the English language in international exchange might enable direct communication but also complicates it (Berns, 2008; Smith & Nelson, 2019). Such complications arise because English is not simply a unitary and universal form of communication on which all speakers and writers rely. Rather, individuals speak English at varying levels of proficiency, and, more importantly, rely on a richly differentiated set of dialects. This diversity has been extensively analysed in research in the field of World Englishes or New Englishes (for a general overview, see Schneider, 2007). It covers all levels of the language, including syntax (Rautionaho, 2014), lexis (Lambert, 2014), segmental phonology (i.e., phonemes; Meer et al., 2021), suprasegmental phonology (e.g., speech rhythm; Fuchs, 2023), and, of particular importance for the present chapter, pragmatics (Collins, 2022).
The impact of this diversity on the smoothness of international communication is somewhat mitigated by the fact that some of these dialects, chiefly Standard American and British English, enjoy more international prestige than others and are hence more widely used and understood (Mair, 2013). Moreover, speakers may also enhance mutual comprehension by shifting to such more widely known dialects or by accommodating to (i.e., consciously or unconsciously converging with) a particular speaker in a given context (Fang, 2017). However, such mitigating strategies alleviate the problem only to some extent. Standard American and British English, notwithstanding their prestige, have substantial but still limited international reach, and not everybody is capable of ‘tuning in’ to another dialect or of tolerating ambiguity and dealing with subjectively unusual turns of phrase. Hence, even with such accommodations, the considerable variability in how English is spoken across the world seriously limits the seeming effectiveness of the widespread use of a ‘single’ language in international communication.
Across this multitude of dialects of English, also called ‘varieties’ in this field of research, speakers and writers engage in a range of communicative activities and with diverse goals. In many cases, these goals will involve what psychologists call persuasion, or the ‘deliberate attempt to change the thoughts, feelings, or behaviors of others’ (Rocklage et al., 2018, p. 749). Persuasion thus defined is crucial for the management of social relations, a ‘ubiquitous presence in everyday life’ (Humă et al., 2020, p. 357), and usually realised through language (Blankenship & Craig, 2011, p. 194). Given the fundamental role that language plays in attempts to persuade others, it stands to reason that a linguistic analysis of how such attempts are carried out has the potential to further a better understanding of this psychological and linguistic construct.
2 The Linguistic Analysis of Persuasion
2.1 The Persuasive Function of Language
The discipline of linguistics differs from psychology in that persuasion in Rocklage, Rucker and Nordgren’s (2018) sense is not always seen as central to the functions of language – nor is there, in fact, a universally agreed list of discrete functions of language. Jakobson’s (1960) conative function is a related concept, but focuses on change in behaviour rather than in thoughts and feelings, while his referential and emotive functions do concern the expression of thoughts and feelings, but do not explicitly refer to the attempt to change the addressee’s mental state. Similarly, Halliday’s (1975) interactional function overlaps only partially with the concept of persuasion. Jackson and Stockwell (2011, pp. 83–87), in their introductory linguistics textbook, did, however, include persuasion as one of the five functions of language they identify. In the present volume, Sorlin as well as Dayter and Rüdiger also discuss the various shades of persuasive function that linguistic communication may take on.
Empirical research on persuasion in linguistics has often focused on particular contexts, such as attracting sexual partners (Dayter & Rüdiger, 2016; O’Neill, 2018), politics (Hansson, 2018; Hoffman, this volume), and particular online communities (Dayter & Messerli, 2022; Wright, this volume; for a broad review of linguistic research on persuasion, see Rudolf von Rohr, 2018). Similarly, in psychology and economics, persuasion is of interest to researchers mainly in specific contexts, primarily politics, advertising, and other commercial interactions (DellaVigna & Gentzkow, 2010; Humă et al., 2019; Humă, this volume; Simons, 2011).
One central point emerging from this body of research on persuasion is that the persuasive function of language is not equally prominent across all contexts. Rather, there are specific contexts where it is particularly salient, such as advertising, newspaper editorials, and political speeches. By contrast, in others, such as newspaper reportage and private conversations, it plays a more limited role. Yet it is arguably not completely absent even then. In fact, despite the rich literature on persuasion in psychology and linguistics, Humă et al. (2020, p. 358) lamented a lack of research on the linguistic expression of persuasion in language at large, rather than in contexts that are particularly geared towards the goal of persuasion, be it in experimental research or in particular real-life persuasive activities such as advertising. This gap could potentially be filled through the use of corpora of natural language, which present several advantages over experimental approaches, as they allow for larger and more representative samples and largely avoid experimenter or observer bias (Sterling et al., 2020, pp. 823–824).
2.2 The Multidimensional Analysis of Register Variation
Following on from the recognition that linguistic corpora could provide rich evidence on the linguistic expression of persuasion, the next step in this investigation requires the identification of an appropriate methodology. A research tradition in linguistics that could respond to this requirement is the exploration of context-bound variation in language as pioneered by Biber’s (1988, 1995) Multidimensional Analysis of register variation. Its empirical starting point is the recognition that the diverse morphosyntactic features of a language are not used with equal frequency across all communicative situations – for example, contracted forms such as don’t or gotta are preferred in informal, spoken discourse, whereas their uncontracted equivalents are favoured in formal texts.
Through the simultaneous analysis of the frequencies of a large number of morphosyntactic and other linguistic features, several dimensions of variation emerge that together account for a large part of the variation in the frequency of these features. For example, Biber’s Dimension 1, involved versus informational discourse, is associated with variation in the use of contractions, verbs, and pronouns (all frequent in involved discourse, loosely referred to above as informal), as well as nouns, long words, and adjectives (all frequent in informational discourse, referred to above as formal). Examples of registers that occur at either end of this particular dimension are private conversations (involved) and academic writing (informational). In this manner, language varieties associated with particular communicative situations, called registers in this framework (Biber & Conrad, 2009, p. 6), can be characterised succinctly with regard to their position on this and several other dimensions of register variation. Thus, Multidimensional Analysis (MDA) is a ‘primarily methodological approach’ rather than a ‘comprehensive theoretical framework with a closed set of categories’ (Bohmann, 2021, p. 83).
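The dimension-score computation that underlies such analyses can be sketched in miniature. The sketch below is our own illustration, not Biber’s actual implementation: the feature names, their assignment to the positive (‘involved’) and negative (‘informational’) poles, and the toy per-text rates are all invented for the example. Per-text feature rates are standardised to z-scores across the corpus, and a text’s dimension score is then the sum of its z-scores for positively loading features minus those for negatively loading features.

```python
import statistics

# Toy per-text feature rates (per 1,000 words); names and numbers are invented.
texts = {
    "conversation": {"contractions": 40.0, "pronouns": 90.0, "nouns": 120.0},
    "academic":     {"contractions": 2.0,  "pronouns": 30.0, "nouns": 300.0},
    "news":         {"contractions": 10.0, "pronouns": 50.0, "nouns": 250.0},
}

# Features loading positively ("involved") vs. negatively ("informational")
# on a Dimension-1-like axis.
POSITIVE = ["contractions", "pronouns"]
NEGATIVE = ["nouns"]

def zscores(feature):
    """Standardise one feature's rates across all texts."""
    vals = [t[feature] for t in texts.values()]
    mu, sd = statistics.mean(vals), statistics.stdev(vals)
    return {name: (t[feature] - mu) / sd for name, t in texts.items()}

def dimension_scores():
    """Sum positive-pole z-scores and subtract negative-pole z-scores per text."""
    z = {f: zscores(f) for f in POSITIVE + NEGATIVE}
    return {
        name: sum(z[f][name] for f in POSITIVE) - sum(z[f][name] for f in NEGATIVE)
        for name in texts
    }

scores = dimension_scores()
# Conversation should come out most "involved", academic most "informational".
print(sorted(scores, key=scores.get, reverse=True))
```

With these toy numbers, private conversation receives the highest (most involved) score and academic writing the lowest, mirroring the register ends described above.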
MDA is particularly relevant for the study of the linguistic expression of persuasion because persuasion is regularly identified as one of a fairly limited number of register dimensions. In Biber’s (1988) work, ‘overt expression of persuasion’ is one of only six dimensions, while, in later work using a much larger pool of syntactic and morphological features, Bohmann (2019, pp. 147–153) identified ‘explicit stance-marking’ as one of ten register dimensions. While these studies presented an overarching analysis of variation across all identified register dimensions, the present contribution will concentrate exclusively on the linguistic expression of persuasion and the extent to which it varies across dialects of English worldwide.
In this connection, it is useful to distinguish two different aspects of persuasion: (1) persuasion as a process, namely trying to persuade someone, and (2) persuasion as an effect, in other words the successful outcome of that process (Dayter & Messerli, 2022, p. 167). Persuasion as conceived within the MDA of register variation refers to the former, namely the process, and there is typically no way to relate it to the effect. Consequently, an MDA of the linguistic expression of persuasion across international dialects of English promises to uncover whether and to what extent persuasion as a process is expressed across these dialects.
2.3 Multidimensional Analysis and Varieties of English
A study of variation in linguistic persuasion across varieties of English is particularly promising because previous evidence suggests that this variation is substantial. In general, variation between registers is a mediating factor and may even supersede variation between international dialects of English and variation due to short-term diachronic change (Kruger & van Rooy, 2018; Mair, 2015). For example, the linguistic differences between private conversations and academic writing in terms of involved versus informational discourse (loosely, formality) within Indian English are likely to be much greater than the differences between private conversations in Indian English and British English. In a statistical analysis, Bohmann (2019, p. 188) showed that for all but one register dimension, there is more variation between registers than between varieties. The only register dimension where the reverse is true turned out to be explicit stance-marking, Bohmann’s equivalent of Biber’s overt expression of persuasion.
Bohmann (2019) studied ten varieties of English, based on data from the International Corpus of English (ICE; Greenbaum, 1991), supplemented by Twitter data from the same ten territories. A few other previous studies have also investigated international varieties of English, likewise based on the ICE corpora and, in some cases, additional data from another variety. Specifically, van Rooy et al. (2010) and Kruger and Smith (2018) each studied one variety (East African and Australian English, respectively), Kruger and van Rooy (2016) studied three varieties (writing only), Xiao (2009) five varieties, and Kruger and van Rooy (2018) sixteen varieties (writing only). Most of these studies found that register variation is usually greater than variation between dialects and that these two axes are relatively independent of each other (in statistical terms, there is no significant interaction between these two variables). However, the most comprehensive study in the field to date, by Bohmann (2019, 2021), presented a much more nuanced analysis of this question and found that for some register dimensions there are substantial differences between varieties. In particular, Bohmann (2021, p. 91) found that for stance-marking, variety is a more important predictor than register.
In terms of register, stance-marking/persuasion is most prominent in persuasive writing, creative writing, and public dialogue, and least prominent in academic and instructional writing (these categories refer to parts of the ICE corpora; see Section 3.1 below). Moreover, Bohmann (2019, pp. 150–152) also identified geographic variation, with the following order, from most to least prominent: ‘Europe > Australia/NZ/N America > Caribbean/Africa > South Asia > East Asia’.
The present chapter will further test the latter conclusion, and add to and expand current knowledge on register variation in varieties of English through (1) the inclusion of additional ICE corpora and (2) a more fine-grained distinction of some varieties, namely Kenyan and Tanzanian English, as well as Northern and Southern Irish English, which in previous research were analysed jointly.
Drawing together the diverse strands of research discussed so far, a corpus-based study of variation in the linguistic expression of persuasion across dialects of English worldwide promises to be particularly fruitful for a number of reasons. People habitually engage in attempts to persuade others and primarily do so through language. Yet psychological research has relied primarily on small, experimental samples, a limitation that can be overcome through the use of corpora of language use in everyday contexts. Finally, the overt expression of persuasion is likely to reveal larger cross-varietal differences than other register dimensions, and the nature of this variation remains underexplored.
3 Data and Methods
3.1 Data
This analysis comprises 21 varieties of English, based on data drawn from the International Corpus of English (ICE) project (Greenbaum, 1991). Of these, six each are from Africa and Asia, four each from the Americas and Europe, and one is from Oceania. ICE aims to provide data on the (emergent) local standard variety of English in a given country and thus purposefully limits data to speakers over the age of 18 who were educated in English-medium schools, are enrolled in or have graduated from a university, and were raised and have lived most of their lives in the country/territory. In practice, these criteria are usually enforced for the section comprising private dialogues. For the other sections, some criteria are likely to be met by design (e.g., age, education) and others are enforced to the extent that such information is available.
Each subcorpus, usually representing a particular territory or country, comprises at least 1 million words, which is relatively small by today’s standards. However, ICE is the best resource for evidence on a carefully controlled sample of language from English-speaking countries across the globe, providing a comprehensive range of registers. Sixty per cent of each subcorpus contains spoken language, including private dialogues, public dialogues (e.g., broadcast political discussions), scripted speeches (read off from a prepared script) and unscripted speeches (see Table 7.1). The written section includes student writing and letters/emails (originally called ‘non-printed’, now perhaps better ‘unpublished’), as well as academic writing (from scholarly journals), popular writing (i.e., popular science writing for the general population), reportage (i.e., newspaper reports), persuasive writing (newspaper editorials), instructional writing (skills and hobbies guides from magazines and administrative documents) and creative writing (novels and stories, but no lyrical language). Most subcorpora contain 500 samples, each at least 2,000 words long and, in practice, usually somewhat longer, in order to avoid cutting off the sample in the middle of a sentence. Hence the total number of words in the right-most column of Table 7.1 indicates the minimal number of words per section and corpus. In these subcorpora, some longer conversations or speeches were split into several files. Other files, especially for the sections on letters, comprise multiple separate documents. Alternatively, a minority of corpora followed a different sampling strategy whereby longer conversations or discussions were not split into smaller, 2,000-word files, while letters and other short documents were not merged into larger files. In these cases, each file represents a self-contained discourse event. In total, 9,847 corpus files were analysed in the present chapter.
Table 7.1 Design of subcomponents of the International Corpus of English.
| Code | Text Type 1 | Text Type 2 | Text Type 3 | Samples | Words (min.) |
|---|---|---|---|---|---|
| S1A | Spoken | Dialogue | Private | 100 | 200,000 |
| S1B | Spoken | Dialogue | Public | 80 | 160,000 |
| S2A | Spoken | Monologue | Unscripted | 70 | 140,000 |
| S2B | Spoken | Monologue | Scripted | 50 | 100,000 |
| W1A | Written | Non-printed | Student essays and exams | 20 | 40,000 |
| W1B | Written | Non-printed | Social and business letters | 30 | 60,000 |
| W2A | Written | Printed | Academic | 40 | 80,000 |
| W2B | Written | Printed | Popular | 40 | 80,000 |
| W2C | Written | Printed | Reportage | 20 | 40,000 |
| W2D | Written | Printed | Instructional | 20 | 40,000 |
| W2E | Written | Printed | Persuasive | 10 | 20,000 |
| W2F | Written | Printed | Creative | 20 | 40,000 |
| Total | | | | 500 | 1,000,000 |
The comprehensive sample of various registers in ICE makes it a rich resource for studies on variation in register dimensions, and the present study follows the precedent set in previous research and harnesses this data (see Section 2.3). Of the 21 varieties of English explored here (see Table 7.2), most come from a full, published component, with the following exceptions. A few subcorpora are (essentially complete) pre-release versions to which the corpus compilation teams provided early access (e.g., Ghana, Trinidad and Tobago). The Pakistan subcorpus lacks some data in the spoken section. For American English, ICE-US provides only written data; the Santa Barbara Corpus was used for private dialogues, and no data is available to represent public dialogues and monologues. ICE-Ireland and ICE-East Africa were each split into two parts, representing Northern Ireland and the Republic of Ireland, and Kenya and Tanzania, respectively. Finally, the data for Tanzania comprises just a small number of private conversations, while the scripted monologues in ICE-East Africa were included as ‘written to be spoken’ data (i.e., the data in the corpus is based on the scripts, not, as in the other subcorpora, on transcriptions of the recordings of the actual speeches).
Table 7.2 Territories and countries and their local varieties represented in this study, with references for the ICE subcorpora.
| Territory | Variety (Abbreviation) | Corpora/Remarks | References |
|---|---|---|---|
| Canada | CanE | - | Newman & Columbus (Reference Newman and Columbus2010) |
| Ghana | GhE | - | - |
| Great Britain | BrE | - | Nelson et al. (Reference Nelson, Wallis and Aarts2002) |
| Hong Kong | HKE | - | Bolt & Bolton (Reference Bolt, Bolton and Greenbaum1996) |
| India | IndE | - | Shastri (Reference Shastri2002) |
| Ireland (Northern) | NIrE | Northern part of ICE-Ireland | Kallen & Kirk (Reference Kallen, Kirk, Beal, Corrigan and Moisl2007) |
| Ireland (Republic) | IrE(R) | Southern part of ICE-Ireland | Kallen & Kirk (Reference Kallen, Kirk, Beal, Corrigan and Moisl2007) |
| Jamaica | JamE | - | Mair (Reference Mair, Siebers and Hoffmann2009) |
| Kenya | KenE | Kenyan part of ICE-EA | Hudson-Ettle & Schmied (Reference Hudson-Ettle and Schmied1999) |
| New Zealand | NZE | - | Vine (Reference Vine1999) |
| Nigeria | NigE | - | Wunder et al. (Reference Wunder, Voormann and Gut2010) |
| Pakistan | PakE | - | - |
| Philippines | PhiE | - | Bautista (Reference Bautista2004) |
| Scotland | ScoE | - | Schützler et al. (Reference Schützler, Gut, Fuchs, Beal and Hancil2017) |
| Singapore | SinE | - | Nelson (Reference Nelson2002) |
| South Africa | SAfE | - | Jeffery (Reference Jeffery2003) |
| Sri Lanka | SLKE | - | Körtvelyessy et al. (Reference Körtvelyessy, Bernaisch, Mukherjee and Mendis2012) |
| Tanzania | TanE | Tanzanian part of ICE-EA (almost no private conversations) | Hudson-Ettle & Schmied (Reference Hudson-Ettle and Schmied1999) |
| Trinidad and Tobago | TrinE | - | Deuber (Reference Deuber2010) |
| Uganda | UgE | - | Meierkord & Isingoma (Reference Meierkord and Isingoma2022) |
| United States | AmE | Written: ICE-US; spoken (only private conversations): Santa Barbara Corpus | Du Bois et al. (Reference Du Bois, Chafe, Meyer, Thompson and Martey2000) |
3.2 Methods
The present analysis follows Biber’s (Reference Biber1988) original approach as realised in the Multidimensional Analysis Tagger v1.3 (referred to below as MAT; Nini, Reference Nini2015, Reference Nini, Berber Sardinha and Veirano Pinto2019). Subsequent MDA research on World Englishes included additional semantic (Xiao, Reference Xiao2009) and (morpho-)syntactic (Bohmann, Reference Bohmann2019) features, to which factor analysis was then applied. While this approach follows the methodology proposed by Biber (Reference Biber1988), it necessarily yields different register dimensions, based on a separate set of linguistic features. It has the advantage of accounting for additional, potentially relevant linguistic features, and it yields register dimensions that are fitted to the data under analysis. Through the inclusion of a larger number of linguistic features, Xiao and Bohmann achieve greater empirical grounding than Biber. However, researchers attempting to replicate the approach used by Xiao and Bohmann would in turn need to posit register dimensions based on a new, and probably different, factor solution. In addition, the exact number and annotation of linguistic features in these studies, while documented, requires some degree of interpretation on the part of other researchers.
By contrast, by using MAT, the present chapter employs an out-of-the-box method, which allows for a more economical and standardised analysis that does not require identifying and annotating linguistic features and computing a new factor solution. Instead, the register dimensions that an analysis with MAT yields are those proposed by Biber (Reference Biber1988). Thus, through the use of the MAT software, the present analysis prioritises comparability and replicability with earlier and subsequent work (see Sönning & Werner, Reference Sönning and Werner2021).
From among the register dimensions in Biber (Reference Biber1988), the present chapter focuses on Dimension 4, i.e., the overt expression of persuasion. The linguistic features associated with this dimension are illustrated by Examples (1)–(6) below, with the relevant words highlighted in italics and the source corpus files and relevant tags given in brackets. These features either explicitly mark the speaker’s own point of view, of which they want to persuade the addressee, or mark ‘argumentative discourse to persuade the addressee’ (Biber, Reference Biber1988, p. 111). Specifically, these features are:
i Infinitives such as to (be soaked) in a scripted monologue from Uganda. While infinitives can have a range of functions, they are ‘most commonly used as adjective and verb complements [in which] the head adjective or verb frequently encodes the speaker’s attitude or stance towards the proposition encoded in the infinitival clause’ (Biber, Reference Biber1988, p. 111). In the example below, the infinitive combines with a necessity modal (see also feature iii) and the speaker attempts to persuade the hearer of how cassava (a root vegetable popular in tropical Africa) should be prepared.
ii Predictive modals such as would, which express the speaker’s or writer’s confidence in the likelihood of an event or action. In the example below, from an editorial from Sri Lanka, the writer attempts to persuade readers of what they believe are the consequences of an action mentioned in the previous discourse.
iii Necessity modals such as should, which express the possibility of certain events, or the obligation to carry out certain actions, and, by extension, the speaker’s/writer’s attempt to persuade someone that these events or actions are likely or necessary. In the example below, taken from a private conversation from New Zealand, the speaker expresses their opinion about an action that should have been carried out in the past.
iv Suasive verbs, such as allow in a private conversation from Singapore, ‘imply intentions to bring about certain events in the future’ (Biber, Reference Biber1988, p. 111) or, as in the example below, to impede certain actions.
v Conditional adverbial subordinators such as if in a public dialogue from the Philippines, which specify ‘the conditions that are required for certain events to occur’ (Biber, Reference Biber1988, p. 111).
vi Split auxiliaries such as has … got in an unscripted monologue from the Republic of Ireland, which are indirectly tied to the expression of persuasion because they often co-occur with modal verbs (Biber, Reference Biber1988, p. 111).
(1) There are different types of cassava some of which have to be soaked in water after peeling for some time (UGA_S2B_023 TO).
(2) It would help deliver justice without delay minimising the unnecessary travails and hassle suffered by the litigants (SLK-W2E-008 PRMD).
(3) she should have borrowed money in New Zealand while our interest rates are lower (NNZ_S1A-044 NEMD).
(4) No I can’t allow that then we have no more space (SIN-S1A-089 SUAV).
(5) But beyond that if we have to constrict some more uh my position was that well it’s going to kill the economy (PHI-S1B-044 COND).
(6) And like John Fitzgerald Kennedy Bill Clinton has also got a very strong wife (IRS-S2A-017 SPAU).
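How the six features illustrated above feed into a dimension score can be sketched in a few lines. In Biber’s approach, each feature’s frequency is normalised to a common text length, converted to a z-score against reference norms, and the z-scores of the features loading on a dimension are summed; MAT automates this computation. The sketch below is an illustrative Python analogue, and the reference norms in it are placeholders, not Biber’s (1988) published values:

```python
# Biber-style dimension scoring (illustrative sketch; MAT performs the real
# computation): normalise feature counts to a common text length, z-score
# each feature against reference norms, and sum the z-scores of the
# features that load on the dimension.

DIM4_FEATURES = ["TO", "PRMD", "NEMD", "SUAV", "COND", "SPAU"]

# Hypothetical reference norms (mean, sd per 1,000 words) -- placeholders,
# NOT Biber's (1988) published values.
NORMS = {
    "TO":   (15.0, 5.6),
    "PRMD": (5.6, 4.2),
    "NEMD": (2.1, 1.8),
    "SUAV": (2.9, 2.2),
    "COND": (2.5, 2.2),
    "SPAU": (5.5, 3.1),
}

def dimension4_score(feature_counts, n_words):
    """Sum of per-feature z-scores for the Dimension 4 features."""
    score = 0.0
    for feat in DIM4_FEATURES:
        rate = feature_counts.get(feat, 0) / n_words * 1000  # per 1,000 words
        mean, sd = NORMS[feat]
        score += (rate - mean) / sd
    return round(score, 2)
```

A text whose feature rates match the reference means thus scores 0 on the dimension; texts with denser use of infinitives, modals, suasive verbs, conditionals, and split auxiliaries score positively, i.e., as more overtly persuasive.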
Prior to the analysis, all corpus files were pre-processed by converting them to txt format and UTF-8 encoding, as required by MAT. Since the first step of MDA involves part-of-speech tagging, all potentially interfering elements such as ICE corpus markup were removed (e.g., ‘<ICE- …>’), while tags for indigenous words, overlaps, headings, and placeholders for non-ASCII characters were replaced or removed. Incomplete words in spoken discourse, marked by a final hyphen, were similarly removed. These steps were performed semi-automatically with regular expressions and the Replace function in Notepad++. Finally, all corpus files were loaded into MAT, and ‘tagged’ and ‘analysed’ with default settings, that is, ‘only VASW tags’, ‘z-scoring’, and ‘400 tokens’ for type-token ratio. The step ‘tagging’ refers to part-of-speech tagging, for which MAT relies on the Stanford Tagger (2013). Relevant POS tags are subsequently assigned to the linguistic features identified by Biber (Reference Biber1988). Next, the step ‘analysis’ counts the relevant linguistic features and computes the register dimensions on this basis (for further details, see Nini, Reference Nini2015, Reference Nini, Berber Sardinha and Veirano Pinto2019).
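The clean-up steps described above can be approximated with a few regular expressions. The sketch below is a Python analogue of the semi-automatic Notepad++ workflow; its patterns approximate common ICE markup conventions rather than reproducing the exact replacements used for the study:

```python
import re

def clean_ice_text(raw):
    """Rough analogue of the pre-processing described above (a sketch,
    not the exact workflow used for the study)."""
    text = re.sub(r"<ICE-[^>]*>", " ", raw)          # ICE corpus markers
    text = re.sub(r"</?[A-Za-z][^>]*>", " ", text)   # remaining angle-bracket tags
    text = re.sub(r"\b\w+-(?=\s|$)", " ", text)      # incomplete words, e.g. 'cal-'
    return re.sub(r"\s+", " ", text).strip()         # collapse leftover whitespace
```

Note that only the tags themselves are stripped; the text they enclose is retained, so that the part-of-speech tagger receives uninterrupted running text.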
Subsequent to the MDA, the data was loaded into RStudio, where a linear regression analysis was performed with Dimension 4 scores (‘overt expression of persuasion’) as the dependent variable, and variety and register (i.e., the ICE categories shown in Table 7.1) as independent variables.
This statistical model determines which varieties, considered as a whole, differ from each other in the extent of the overt expression of persuasion. Register variation is likely to be substantial, and is accounted for by the inclusion of ‘register’ as a fixed factor. Since the focus of the chapter is on variation between varieties, variation between registers is not explored further. Post-hoc Tukey tests, corrected for multiple comparisons, then determined which varieties differed significantly from each other (function glht in package multcomp; Hothorn et al., Reference Hothorn, Bretz and Westfall2008). The results are illustrated with package ggplot2 (Wickham, Reference Wickham2016). Information on the selection of the regression model and the statistical analysis more generally is available in the Online Appendix at https://osf.io/zdwqt/.
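The role of ‘register’ as a fixed factor can be illustrated with a simplified sketch: before comparing varieties, each text’s score is adjusted for the mean of its register, which is roughly what the regression does when estimating variety effects. The actual analysis fitted the model in R and ran the Tukey tests via multcomp::glht; the Python below is only a conceptual analogue:

```python
# Conceptual sketch of register adjustment (NOT the actual R model):
# subtract each register's mean Dimension 4 score before averaging by
# variety, so that varieties are compared net of register composition.

from collections import defaultdict
from statistics import mean

def register_adjusted_variety_means(rows):
    """rows: iterable of (variety, register, dim4_score) tuples."""
    by_register = defaultdict(list)
    for _, reg, score in rows:
        by_register[reg].append(score)
    reg_means = {reg: mean(v) for reg, v in by_register.items()}

    by_variety = defaultdict(list)
    for var, reg, score in rows:
        by_variety[var].append(score - reg_means[reg])  # remove register effect
    return {var: round(mean(v), 3) for var, v in by_variety.items()}
```

This makes the logic of the design visible: a variety sampled mostly from persuasive registers is not credited with being a ‘high-persuasion’ variety merely because of its register composition.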
4 Results
The degree of overt expression of persuasion varies substantially within varieties, which was expected, given the inclusion of a wide range of registers. The distributions shown in Figure 7.1 indicate both the spread of values within varieties, as well as the overlap between varieties. At first glance, this might suggest that the degree of overt persuasion is more or less statistically indistinguishable in a cross-dialectal perspective.

Figure 7.1 Overt expression of persuasion across 21 international varieties of English. Thick vertical lines indicate medians, boxes comprise data from the first to the third quartile and the whiskers extend to values up to 1.5 times the interquartile range. Outliers beyond this are shown as dots (some outliers beyond the area of the plot not shown). Varieties marked with an asterisk lack a substantial amount of spoken data from the ICE design.
However, the statistical analysis included register as a fixed factor, such that the systematic variation due to this variable is accounted for. As such, in statistical terms, there are clear differences between varieties, even though, due to the great number of pairwise comparisons, the computation of significance levels is very conservative (see Table 7.3). Generally, varieties placed at either end of the gamut from high-persuasion to low-persuasion varieties differ significantly from most other varieties, except for those that are very close to them in the degree of persuasiveness. For example, KenE (at the top of the range in Figure 7.1) differs significantly from all varieties from PakE at the bottom up to HKE, but does not differ significantly from the five varieties closest to it in this distribution. Similarly, PakE (at the bottom of the range) differs significantly from all varieties except SLKE and IndE (see Table 7.2 above for the abbreviations of variety names).
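The conservativeness mentioned above follows directly from the number of tests involved: 21 varieties yield 210 pairwise comparisons, so uncorrected p-values would produce many spurious ‘significant’ results. The quick illustration below uses a simple Bonferroni threshold for concreteness; the study itself used the Tukey correction, which is somewhat less conservative:

```python
from math import comb

n_varieties = 21
n_comparisons = comb(n_varieties, 2)       # 21 * 20 / 2 = 210 pairwise tests
bonferroni_alpha = 0.05 / n_comparisons    # per-test threshold under Bonferroni

# Without correction, 210 tests at alpha = 0.05 would be expected to yield
# about ten 'significant' results even if no true differences existed.
expected_false_positives = n_comparisons * 0.05
```
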
Table 7.3 Statistically significant pairwise differences in the overt expression of persuasion between varieties (- = not significant; numbers indicate the significance level reached).
| | GB | GH | HK | IND | IRN | IRS | JAM | KEN | NIG | NZ | PAK | PHI | SA | SCO | SIN | SLK | TAN | TT | UG | USA |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| CAN | - | 0.001 | - | 0.01 | - | - | - | 0.001 | 0.001 | - | 0.001 | - | - | - | 0.001 | 0.01 | - | - | - | - |
| GB | - | - | 0.001 | - | - | 0.01 | - | - | - | 0.001 | 0.01 | - | - | - | 0.001 | - | - | - | - | |
| GH | 0.01 | 0.001 | 0.05 | 0.001 | 0.001 | - | - | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 | - | 0.001 | 0.05 | 0.001 | 0.001 | 0.001 | ||
| HK | 0.001 | - | - | - | 0.05 | - | - | 0.001 | - | - | - | - | 0.001 | - | - | - | - | |||
| IND | 0.001 | 0.01 | - | 0.001 | 0.001 | 0.001 | - | 0.05 | - | 0.001 | 0.001 | - | 0.001 | 0.001 | 0.001 | - | ||||
| IRN | - | - | - | - | - | 0.001 | - | - | - | - | 0.001 | - | - | - | - | |||||
| IRS | - | 0.01 | 0.05 | - | 0.001 | - | - | - | - | 0.05 | - | - | - | - | ||||||
| JAM | 0.001 | 0.001 | - | 0.01 | - | - | - | 0.001 | - | 0.05 | - | - | - | |||||||
| KEN | - | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 | - | 0.001 | - | 0.001 | 0.001 | 0.001 | ||||||||
| NIG | 0.001 | 0.001 | 0.001 | 0.01 | 0.001 | - | 0.001 | - | 0.01 | 0.01 | 0.001 | |||||||||
| NZ | 0.001 | - | - | - | 0.05 | 0.001 | - | - | - | - | ||||||||||
| PAK | 0.01 | 0.01 | 0.001 | 0.001 | - | 0.001 | 0.001 | 0.001 | 0.05 | |||||||||||
| PHI | - | - | 0.001 | - | - | - | - | - | ||||||||||||
| SA | - | 0.05 | - | - | - | - | - | |||||||||||||
| SCO | 0.01 | 0.001 | - | - | - | - | ||||||||||||||
| SIN | 0.001 | - | - | 0.05 | 0.01 | |||||||||||||||
| SLK | 0.001 | 0.001 | 0.001 | - | ||||||||||||||||
| TAN | - | - | - | |||||||||||||||||
| TT | - | - | ||||||||||||||||||
| UG | - |
The two varieties that differ significantly from just a few others are AmE and TanE. These varieties lack substantial amounts of data in the spoken sections (see Section 3.1). As such, it is likely that, were full subcorpora available, differences between these varieties and others would be statistically significant in further cases as well. More generally, while the statistical results (Table 7.3) take account of register and thus of the partly sparse data for these varieties, the visual representation in Figure 7.1 does not, and the relative position of these two varieties must therefore be interpreted more cautiously than for the rest of the data.
In terms of geographical patterns, the results reveal that African varieties mostly have higher than average values, followed by Europe/Oceania and the Americas, and finally Asia, with the lowest values. However, there are several exceptions to these generalisations: SAfE has much lower scores for the overt expression of persuasion than the other African Englishes, TrinE has higher scores than the other American varieties, and SinE and HKE have much higher values than the rest of the Asian group. In some cases, it is tempting to posit smaller regional groups that appear to be relatively uniform, such as West Africa (NigE and GhE), North America (AmE and CanE), South Asia (IndE, PakE, and SLKE) as well as (South-)East Asian city states (HKE and SinE).
5 Discussion and Conclusion
The present study investigated the overt expression of persuasion across 21 international dialects of English from countries where English is used as a first language or is entrenched in society as a second language. In these contexts, new dialects of English have emerged, often in postcolonial settings, and local standards of English have reached various stages of codification (Schneider, Reference Schneider2007). The linguistic diversity of these World Englishes has been demonstrated in a substantial number of studies, prompting the expectation that the linguistic expression of persuasion might similarly show distinct patterns across the English-speaking world. By using a set of carefully compiled corpora, representing natural language across a wide range of contexts, this chapter also follows calls in the psychological literature for studies on persuasion that investigate its linguistic expression in language at large, rather than relying exclusively on carefully controlled experimental contexts or concentrating exclusively on particular text types (Humă et al., Reference Humă, Stokoe and Sikveland2020, p. 358).
A key finding of this study is that the overt expression of persuasion varies substantially between international varieties of English, with comparatively little overt persuasion found in South Asia and, at the other end of the distribution, relatively frequent use of overt markers of persuasion in West Africa. These results have important implications for multiple areas of research. While the widespread use of English across many countries in the world enables speakers to communicate with each other, linguistic research has long recognised that the diversity of World Englishes and of English as a lingua franca can also lead to misunderstandings and miscommunication (Berns, Reference Berns2008; Smith & Nelson, Reference Smith, Nelson, Nelson, Proshina and Davis2019). Previous research has investigated such hurdles to effective communication in terms of phonetic and syntactic variables, among others. However, variation in pragmatics is arguably an equally great, or even greater, potential source of miscommunication, because interlocutors might be less attuned to variation in how pragmatic meaning is expressed by the other party. For example, the very frequent expression of persuasion in West African Englishes could potentially be perceived as inappropriately forthright or demanding by speakers of South Asian Englishes, where persuasion is less frequently overtly expressed. Conversely, speakers of South Asian Englishes might not deploy linguistic resources for persuasion frequently enough to communicate their position to speakers from high-persuasion dialects of English. Given the fundamental role that persuasion and its linguistic expression play in the management of social relations (Blankenship & Craig, Reference Blankenship and Craig2011, p. 194; Humă et al., Reference Humă, Stokoe and Sikveland2020, p. 357), these findings suggest that speakers using English in international contexts could enhance their communicative effectiveness through increased attention to how, and how strongly, they and their interlocutors express persuasion. Increased sensitivity to the linguistic expression of persuasion could also be of great benefit in English Language Teaching for advanced learners, especially in Business English classes (e.g., Nickerson & Planken, Reference Nickerson and Planken2015).
The present study also demonstrated the useful intersection between MDA and the study of dialectal variation, as pointed out by Biber and Finegan (Reference Biber, Finegan, Biber and Finegan1994, p. 10), and as such contributes to the study of World Englishes and corpus-based dialectometry (Szmrecsanyi, Reference Szmrecsanyi2013), as well as to the emerging fields of corpus pragmatics and variational pragmatics (e.g., Anchimbe, Reference Anchimbe2018; Fuchs et al., Reference Fuchs, Gut and Soneye2013; Leuckert & Rüdiger, Reference Leuckert and Rüdiger2021). Furthermore, MDA was applied in this study to a greater number of World Englishes than in previous research. The present results on the degree of overt expression of persuasion across these varieties at first glance seem to differ from Bohmann’s (Reference Bohmann2019, p. 152) ranking (Europe > Australia/NZ/N America > Caribbean/Africa > South Asia > East Asia), whereas this study arrived at a ranking of Africa > Europe/Oceania > Americas (incl. Caribbean) > Asia. These contrasting findings could partly be put down to differences in methodology, given that the sets of linguistic features used by Bohmann and in this study vary to some extent. Still, there is some overlap between persuasion and stance, both in terms of linguistic features and in terms of their conceptualisation as linguistic-psychological constructs. In fact, the results of both studies are likely to be more similar than is immediately visible, mainly because approximately twice as many World Englishes were investigated here. For example, Bohmann includes no African and one Caribbean variety, against six and two varieties, respectively, in this study, suggesting that a high ranking for Africa might be justified and not in contradiction with Bohmann. Apart from this point, the two rankings are relatively similar. Nevertheless, the comparison of individual varieties still reveals two clear differences.
Whereas HKE and SinE reach high levels of overt persuasive discourse in the present data, Bohmann’s (Reference Bohmann2019, p. 152) results indicate low levels of explicit stance-marking for these varieties. These contrasting findings might, in fact, be caused by the aforementioned differences in how overt persuasion is measured in the two studies.
More broadly, the perspective adopted in this chapter presented a bird’s-eye view of the overt expression of persuasion. As the discussion has demonstrated, this frame of reference allows for broad comparisons and generalisations on a global basis, in the vein of a limited but growing number of multi-variety studies aiming at broader generalisations and testing models and hypotheses (e.g., Collins & Yao, Reference Collins and Yao2013; Dunn, Reference Dunn2019; Fuchs, Reference Fuchs, Werner, Seoane and Suárez-Gómez2016; Werner, Reference Werner, Werner, Seoane and Suárez-Gómez2016). As such, the present results add to existing evidence for the relevance of areoversals, i.e., shared patterns in geographically close varieties, in explaining variation across World Englishes (Fuchs, Reference Fuchs, Werner, Seoane and Suárez-Gómez2016; Hickey, Reference Hickey2012; Szmrecsanyi & Kortmann, Reference Szmrecsanyi, Kortmann, Filppula, Klemola and Paulasto2009). However, such a broad analysis necessarily precludes a detailed investigation of individual dialects and of the means of linguistic persuasion that are particularly prominent in them. Such questions could be usefully explored in detailed studies of register variation in a single variety or a few varieties at a time, focusing on variety-internal variation or on particular linguistic features and how they are deployed by speakers in attempts to persuade (for examples of useful qualitative approaches, see Sorlin, this volume, and Humă, this volume). Moreover, it would also be of interest to determine what variation exists between varieties in how overt persuasion is expressed linguistically, and what geographic patterns or other explanations for such variation emerge. Finally, to what extent the pragmatic differences in the linguistic expression of persuasion revealed in this analysis lead to misunderstandings and miscommunication cannot be determined by a corpus analysis of register variation.
This question could be addressed through an analysis of corpora of international or cross-cultural communication as well as through dedicated experimental research (see, for example, Xiao et al., this volume).