1. S. Iwasaki and F. H. Yap, “Stance-marking and stance-taking in Asian languages,” J. Pragmat., vol. 83, pp. 1–9, 2015.
2. W. L. Chafe, “Integration and involvement in speaking, writing, and oral literature,” in Spoken and Written Language: Exploring Orality and Literacy, 1982, pp. 35–54.
3. S. F. Kiesling, “Style as stance: Stance as the explanation for patterns of sociolinguistic variation,” in Stance: Sociolinguistic Perspectives, Oxford Academic, 2009.
4. D. Biber and E. Finegan, “Styles of stance in English: Lexical and grammatical marking of evidentiality and affect,” Text & Talk, vol. 9, no. 1, pp. 93–124, 1989.
5. S. Conrad and D. Biber, “Adverbial marking of stance in speech and writing,” in Evaluation in Text: Authorial Stance and the Construction of Discourse, 2000, pp. 56–73.
6. E. Kärkkäinen, Epistemic Stance in English Conversation: A Description of Its Interactional Functions, with a Focus on I Think. John Benjamins, 2003.
7. H. L. Chen and W. B. Ren, “Does AI chatbot have a conversation style? A corpus-based analysis on AI-generated conversation material,” in Proceedings of the 2024 2nd International Conference on Language, Innovative Education and Cultural Communication (CLEC 2024), Wuhan, China, 2024.
8. J. W. Du Bois and E. Kärkkäinen, “Taking a stance on emotion: Affect, sequence, and intersubjectivity in dialogic interaction,” Text & Talk, vol. 32, no. 4, pp. 433–451, 2012.
9. S. F. Kiesling, “Stance and stancetaking,” Annu. Rev. Linguist., vol. 8, pp. 409–426, 2022.
10. E. Ochs, “Linguistic resources for socializing humanity,” in Rethinking Linguistic Relativity, Cambridge University Press, 1996, pp. 407–437.
11. R. Englebretson, “Stancetaking in discourse: An introduction,” in Stancetaking in Discourse: Subjectivity, Evaluation, Interaction, 2007, pp. 1–25.
12. K. Hyland, “Stance and engagement: A model of interaction in academic discourse,” Discourse Stud., vol. 7, no. 2, pp. 173–192, 2005.
13. Z. Lancaster, “Making stance explicit for second language writers in the disciplines: What faculty need to know about the language of stancetaking,” in Perspectives on Writing: WAC and Second-Language Writers: Research Towards Linguistically and Culturally Inclusive Programs and Practices, The WAC Clearinghouse and Parlor Press, 2014, pp. 269–292.
14. R. Berman, H. Ragnarsdóttir, and S. Strömqvist, “Discourse stance,” Writ. Lang. Lit., vol. 5, no. 2, pp. 253–287, 2002.
15. K. Hyland, “Humble servants of the discipline? Self-mention in research articles,” Engl. Specif. Purp., vol. 20, no. 3, pp. 207–226, 2001.
16. A. Ogunsiji, M. E. Dauda, I. O. Nwabueze, and A. M. Yakubu, ENG 434: Literary Stylistics. National Open University of Nigeria, 2012. [Online]. Available: https://nou.edu.ng/coursewarecontent/ENG434%20.pdf
17. R. J. R. Wu, Stance in Talk: A Conversation Analysis of Mandarin Final Particles. John Benjamins, 2004.
18. L. Cheng, X. L. Liu, and C. L. Si, “Identifying stance in legislative discourse: A corpus-driven study of data protection laws,” Humanit. Soc. Sci. Commun., vol. 11, p. 803, 2024.
19. F. F. Qu, G. S. Xiao, and X. Chen, “A review of research on authorial stance in academic discourse,” Acad. J. Manag. Soc. Sci., vol. 2, no. 2, pp. 105–107, 2023.
20. E. Fleisig, G. Smith, M. Bossi, I. Rustagi, X. Yin, and D. Klein, “Linguistic bias in ChatGPT: Language models reinforce dialect discrimination,” in Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, Miami, Florida, USA, 2024, pp. 13541–13564.
21. G. Lakoff, “Hedges: A study in meaning criteria and the logic of fuzzy concepts,” J. Philos. Logic, vol. 2, pp. 458–508, 1973.
22. M. E. Gherdan, “Hedging in academic discourse,” Rom. J. Engl. Stud., vol. 16, no. 1, pp. 123–127, 2019.
23. R. Quirk, S. Greenbaum, G. Leech, and J. Svartvik, A Grammar of Contemporary English. Longman, 1972.
24. A. R. James, “Compromisers in English: A cross-disciplinary approach to their interpersonal significance,” J. Pragmat., vol. 7, no. 2, pp. 191–206, 1983.
25. P. Brown and S. C. Levinson, Politeness: Some Universals in Language Usage. Cambridge University Press, 1987.
26. D. Crystal and D. Davy, Advanced Conversational English. Longman, 1975.
27. M. Stubbe and J. Holmes, “You know, eh and other ‘exasperating expressions’: An analysis of social and stylistic variation in the use of pragmatic devices in a sample of New Zealand English,” Lang. Commun., vol. 15, no. 1, pp. 63–88, 1995.
28. D. Crystal, The Cambridge Encyclopedia of Language. Cambridge University Press, 1987.
29. P. Crompton, “Hedging in academic writing: Some theoretical problems,” Engl. Specif. Purp., vol. 16, no. 4, pp. 261–274, 1997.
30. S. H. Chan and H. Tan, “Maybe, perhaps, I believe, you could: Making claims and the use of hedges,” Engl. Teach., vol. 31, no. 1, pp. 98–106, 2002.
31. B. Fraser, “Hedged performatives,” in Syntax and Semantics, New York: Academic Press, 1975, pp. 187–210.
32. G. Yule, The Study of Language. Cambridge University Press, 2010.
33. J. R. Wishnoff, “Hedging your bets: L2 learners’ acquisition of pragmatic devices in academic writing and computer-mediated discourse,” in Second Language Studies, 2000, pp. 119–148.
Open Access
Research article

Investigating Stance Marking in Computer-Assisted AI Chatbot Discourse

Kayode Victor Amusan*
Department of English, University of Louisiana at Lafayette, 70504 Lafayette, USA
Acadlore Transactions on AI and Machine Learning | Volume 4, Issue 1, 2025 | Pages 40-49
Received: 01-21-2025,
Revised: 02-28-2025,
Accepted: 03-05-2025,
Available online: 03-10-2025

Abstract:

Stance, a critical discourse marker, reflects the expression of attitudes, feelings, evaluations, or judgments by speakers or writers toward a topic or other participants in a conversation. This study investigates the manifestation of stance in the discourse of four prominent artificial intelligence (AI) chatbots—ChatGPT, Gemini, Meta AI, and Bing Copilot—focusing on three dimensions: interpersonal stance (how chatbots perceive one another), epistemic stance (their relationship to the topic of discussion), and style stance (their communicative style). Through a systematic analysis, it is revealed that these chatbots employ various stance markers, including hedging, self-mention, power dominance, alignment, and face-saving strategies. Notably, the use of face-saving framing by AI models, despite their lack of a genuine “face,” highlights the distinction between authentic interactional intent and the reproduction of linguistic conventions. This suggests that stance in AI discourse is not a product of subjective intent but rather an inherent feature of natural language. Moreover, this study extends the discourse by examining stance as a feature of chatbot-to-chatbot communication rather than human-AI interactions, thereby bridging the gap between human linguistic behaviors and AI tendencies. It is concluded that stance is not an extraneous feature of discourse but an integral and unavoidable aspect of language use, which chatbots inevitably replicate. In other words, if chatbots must use language, then pragmatic features like stance are inevitable. Ultimately, this raises a broader question: Is it even possible for a chatbot to produce language devoid of stance? The implications of this research underscore the intrinsic connection between language use and pragmatic features, suggesting that stance is an inescapable component of any linguistic output, including that of AI systems.

Keywords: Stance, AI, Natural language processing (NLP), Discourse, Hedging, Self-mention, Face-threatening act (FTA), Face-saving act (FSA)

1. Introduction

Stance has been described differently by scholars. To some, it is the attitude or perspective of a writer or speaker towards a topic in discourse. To others, it refers to how speakers or writers express their attitudes toward others to either reflect dominance or friendship [1]. Chafe [2] posited that stance-taking is a crucial aspect of identity construction, as it is used by speakers to display who they are and how they relate to others. Kiesling [3] defined stance as a person’s expression of their relationship to the topic (epistemic stance) or the expression of their relationship to their interlocutors (interpersonal stance: friendly or dominating). According to Biber and Finegan [4], the expression of stance involves lexical and grammatical markers that encode an individual’s subjective positioning toward a proposition or interaction.

These definitions of stance demonstrate that stance in discourse refers to a speaker’s or writer’s expressed attitude, evaluation, perspective, or intersubjective positioning toward the topic of discourse, which is often conveyed through lexical, grammatical, and interactional choices. In short, it is a writer’s approach to the topic, style, and audience or other participants in the discourse. Meanwhile, stance can operate in three different categories. Conrad and Biber [5] presented the three categories of stance as epistemic stance (reliability of a topic/knowledge), interpersonal stance (speaker’s attitudes towards others), and style stance (how information is presented in terms of voice and personality).

Since stance-taking in human writing can be evaluated through how people interact with the topic of discussion, the audience or other participants, and the style of communication, this study investigates how AI-assisted chatbots take stances during text generation by identifying how they express their positions, perspectives, or attitudes toward a topic and how they perceive one another. Currently, the four prominent conversational AI chatbots in global technology are ChatGPT (OpenAI), Gemini (Google DeepMind), Bing Copilot (Microsoft), and Meta AI (Meta). These tools are capable of simulating human-like conversations by generating text in human language. Their algorithms employ natural language processing (NLP) techniques to engage users effectively [6]. The developing role of these AI chatbots in communication has drawn attention to their use of language. For instance, Chen and Ren [7] submitted that AI models exhibit distinct conversational styles: ChatGPT performs the worst at conversational discourse, while Copilot exhibits stronger conversational abilities. This finding demonstrates that AI chatbots do not share a uniform conversation style; rather, each one exhibits distinct stylistic patterns when generating conversational text [7]. This shows that the use of natural language requires incorporating pragmatic and interactive functions such as stance. In other words, natural language cannot be employed without pragmatic information, as it is inherently embedded and “baked in” with the basic communicative function. Therefore, the question is not whether bot language incorporates pragmatic functions, but whether it does so coherently and in a human fashion.

The current study focuses on stance-taking in AI language use by examining how chatbots linguistically position themselves, convey attitudes, and align with or distance themselves from others. This research analyzes stance as a pragmatic feature of AI-generated discourse to extend our understanding of AI chatbots, not just as tools for conveying information, but as agents capable of shaping human-like interactional dynamics. Another concern of this study is whether conversational AI models are built to interact with humans alone or are also designed to acknowledge the existence of other AI models. Since little attention has been drawn in this direction, this study examines the expression of stance among the four prominent AI chatbots (ChatGPT, Gemini, Meta AI, and Bing Copilot): how they perceive each other (interpersonal stance), how they relate to the topic of discussion (epistemic stance), and their style of communication (style stance). The specific objectives of this study are to identify the stance markers present in each AI discourse, to examine how each chatbot positions itself in relation to the others, and to investigate how they approach the topic of discourse. By analyzing these aspects, the study aims to gain a deeper understanding of the linguistic strategies employed by AI chatbots in different conversational contexts.

2. Theoretical Perspectives

The term stance, a tool of discourse rhetoric, has been defined variously by prominent scholars. Du Bois and Kärkkäinen [8] defined stance as a public act by a social actor, achieved dialogically through overt communicative means, of simultaneously evaluating objects, positioning subjects, and aligning with other subjects. Similarly, Kiesling [9] described it as a means of referring to ways that people position themselves in conversation, often in terms of politeness, certainty, or affect/emotion. Ochs [10] described stance as a socially recognized act that conveys affective and epistemic positions through linguistic and non-linguistic means. Englebretson [11] referred to stance as a speaker’s expression of their perspective, feelings, or evaluations concerning the proposition or interactional context. According to Hyland [12], stance, in written discourse, is the use of language to convey an author's attitudes, judgments, and commitments to the content and the reader. Lancaster [13] also described stance as the linguistic manifestation of a speaker's alignment or misalignment with an interactional framework or the conveyed content. These definitions collectively imply that stance is a dynamic, socially situated act of positioning in discourse, where speakers express evaluation, alignment, affect, and epistemic perspectives through both linguistic and non-linguistic means.

Furthermore, Kärkkäinen [6] viewed stance as the speaker’s moment-by-moment, linguistically indexed expression of attitudes and perspectives in interaction. This demonstrates that stance markers are indexical, not semantic, because they are often context-dependent and function to index or signal the speaker's attitudes, perspectives, or social alignments in a particular interaction. However, some stance markers (such as modal verbs) could express stance as part of their semantics.

Scholars have described stance as a social phenomenon reflected through language use. Biber and Finegan [4] defined stance as the expression of attitudes, evaluations, certainty, or other epistemic markers, typically encoded in adverbs, modal verbs, and clauses. According to them, stance is a lexical and grammatical expression of attitudes, feelings, judgments, or commitment concerning the propositional content of a message. They posited that stance is mostly demonstrated in the use of adverbs, verbs, and adjectives. The lexical and grammatical construction of “stance” is corroborated by Iwasaki and Yap’s [1] position, as they submitted that “stance may be indicated through established lexical and morphological devices, or indexed indirectly via speakers’ strategic use of particular linguistic signs or interactional patterns in the speech situation.” Also, Conrad and Biber [5] posited that adverbials are markers of stance. Most of these features are employed to create pictures of doubt, certainty, hedges, emphasis, possibility, probability, necessity, and prediction [1]. This demonstrates that stance marking is done at both the linguistic and interactional levels [6].

Ochs [10] provided a binary description of stance: epistemic stance vs. affective stance. Epistemic stance deals with a speaker’s attitude towards knowledge and information, while affective stance refers to the speaker’s emotional connection to other participants or the topic of discourse. On the other hand, Berman et al. [14] considered the notion of stance as a three-dimensional discourse feature: orientational (the dynamics of perception among participants), attitudinal (epistemic, deontic, and affective), and generality (either specific or general). This shows that the notion of stance is dynamic, as it can be approached through various lenses.

Lancaster [13], quoting Hyland [15], perceived stance to be the writer’s textual “voice” or personality. This claim resembles the stylistic approach of “style as the man” [16], in that it probes the identity of the speaker. As reiterated by Englebretson [11], it is pertinent to state that stance-taking plays a significant role in language use, largely influenced by language form. Wu [17] also suggested that stance is treated as an emergent product, that is, one shaped by, and itself shaping, the emerging development of interaction.

These studies have demonstrated that there is never a time during social interactions when people do not take stances and positions. It is also obvious that stance is not “a speaker’s position” per se but “the expression of his position” about his feelings or emotions (affective), or about other people being addressed (interpersonal) [8]. If stance were a speaker’s position about his feelings or emotions, then AI models could not take stance, because they do not have feelings or emotions. But if stance is “the expression of a position,” then it is possible to examine how an AI expresses its position, even if that expression is a “hollow” employment of form without content.

2.1 Empirical Review on Stance

Different studies have been conducted on the use of stance in various discourses. Lancaster [13] examined the ways that writing specialists can help faculty in the disciplines become explicitly aware of stance expressions in their students’ writing. The study found that expressing an appropriate authorial stance is particularly challenging for L2 writers because the rules for evaluating evidence might conflict with those of their L1. He studied students’ writing from different fields, identified examples of stance-marking, and discussed how university faculty can recognize these language practices to help their students improve. Cheng et al. [18] investigated the use of stance in legislative discourse, focusing on data protection laws from the United States, the European Union, and China through Hyland’s stance model. The study examined four stance-marking tools, namely, hedging, boosting, self-mention, and attitude markers, and explained how legislative texts reflect public ideologies and legal values via stance expressions. The findings highlight the socio-legal constructiveness of such laws and propose a specialized model for examining stance in legal contexts.

Similarly, Qu et al. [19] conducted a comprehensive study on authorial stance in academic discourse, further illustrating the broad application of stance markers across different fields. The study presented the stance markers as hedges, boosters, attitude markers, and self-mention. The study’s categorization of the stance features is based on the theoretical framework of Hyland's interaction model [12]. The authors highlighted salient variations in stance usage across cultures, languages, disciplines, and academic writing.

The studies above highlight the broad application of stance markers across different discourses (academic writing, legal discourse, and cross-cultural communication) by identifying various contexts in which stance markers function. This supports the idea that the use of stance is a crucial tool for expressing positioning in multifaceted language situations or contexts. These studies imply that stance is not merely a linguistic or grammatical feature but a socio-pragmatic tool for constructing meaning to reflect ideologies, public values, and social positioning (as seen in the legislative and academic discourse studies).

2.2 Empirical Review on the Style of AI Chatbots

The work closest to the identification of stance in AI discourse is the analysis of style and bias in AI models.

Like the current study, Chen and Ren [7] conducted a corpus-based analytical study to examine the discourse styles of three top AI chatbots, namely, ChatGPT, Claude, and Microsoft Bing Chat. The study was conducted to determine the capacity of each chatbot to imitate the patterns of natural conversations and whether they exhibit different conversational styles from one another, treating each bot’s style as a unitary whole. Their findings revealed significant stylistic variations among the chatbots, with ChatGPT exhibiting the weakest conversational naturalness. The study projected that this is likely due to its pre-training, which focused on formal and expository text. On the other hand, Bing Copilot demonstrated superior conversational tendencies, while Claude occupied an intermediate position, characterized by a more argumentative style that aligns with tasks requiring reasoning and stance-taking. The study submits that these stylistic differences might be influenced by each chatbot’s training data and frequent model updates, which underscores the importance of tailoring AI systems to their specific tasks (either natural conversation or task-oriented commands).

Fleisig et al. [20] examined how ChatGPT shows linguistic bias against varieties of English such as African American English and Nigerian English. The study found that AI models are less accurate and more stereotypical when responding to these varieties of English. This inaccuracy and stereotyping reflect a negative stance. The authors submitted that ChatGPT’s responses can reinforce stereotypes and show a negative stance towards non-standard dialects such as African American English and Nigerian English. The major difference between that study and the current one is that the latter examines how chatbots express stance towards one another.

3. Method

This study employs a qualitative research design to analyze the use of stance in the text-generating discourse of four major AI chatbots (ChatGPT, Gemini, Bing Copilot, and Meta AI). The versions of the selected chatbots were the ChatGPT-4o model, the Gemini 2.0 Flash model, Meta Llama 3.2 (for Meta AI), and Copilot in Microsoft Edge. The training data cutoff for this version of ChatGPT is October 2023; for Gemini, August 2024; for Meta AI, December 2023; and for Bing Copilot, October 2023. The same prompt was given to all the chatbots. It stated thus:

Prompt: “Which among these AI tools do you think has a better performance: ChatGPT, Meta AI, Bing Copilot, or Gemini?”

Four responses, one from each chatbot, were collected for comparison. Each response was analyzed with a focus on three key aspects: identifying stance markers in each AI discourse, examining how each chatbot positions itself in relation to the others, and investigating how they approach the topic of discourse. Investigating these functions involves an exploration of the ideological placement and power dynamics in the AI discourse, together with the FSAs used to preserve face in their interactions. The study aims to contribute to our understanding of AI chatbots beyond their ability to merely convey information, extending to their capability to shape human-like interactional dynamics.

4. Results

The study reports the results of the stance markers and the interpersonal and affective stance strategies used by each bot. Hedging, self-mention, power dominance, alignment, FSAs, and FTAs are the prominent stance strategies featured in the AI discourse.

4.1 Hedging

One of the most significant stance markers adopted by some of the chatbots is hedging. Hedging was first introduced by Lakoff [21] to mean “making things fuzzier or less fuzzy.” It is a discourse (or stance) marker used to lessen the impact of an utterance due to politeness constraints between a speaker and addressee [4]. It reduces commitment and negotiates meaning between writers and readers [22]. Hedges have been described as “downtoners” [23], “compromisers” [24], “weakeners” [25], “softeners” [26], and “pragmatic devices” [27]. Berman et al. [14] posited that hedging is most often used to paint certain pictures such as doubt, certainty, emphasis, possibility, probability, necessity, and prediction. According to Lakoff [21], a speaker uses hedging to perform two functions, namely, to express uncertainty or to soften the speech. Crystal [28] posited that hedging is used because of a speaker’s intention not to be precise, a desire to avoid further questions, or an unwillingness to tell the truth.

The most frequent hedging strategies are lexical verbs, adverbial constructions, and modal verbs [22], [29], [30]; performative verbs [31]; cognition verbs, hypothetical constructions, and anticipatory it-clauses [30]; and copular verbs other than BE, probability adjectives, and probability adverbs [29].

Here are the results of how the chatbots exhibit stance marking in their responses. This was mostly achieved using modal auxiliary verbs, as explicitly stated by Biber and Finegan [4]. Gemini and Bing Copilot are the two models that use this strategy.

  • Via modals

“Their ability to generate coherent and contextually relevant text suggests that they might excel in identifying grammatical errors.” (Gemini)

“While ChatGPT and Gemini may have an edge in terms of language generation and understanding, other tools could excel in certain areas.” (Gemini)

“Overall ChatGPT and Bing Copilot might stand out for their versatility and detailed feedback.” (Bing CoPilot)

“Each of these tools offers unique strengths and is tailored to different types of users and applications.” (Bing CoPilot)

  • Via conditionals

“If you need integrated search capabilities, Bing Copilot might be more suitable.” (Bing CoPilot)

“If you prefer a more conversational and context-aware tool, ChatGPT could be the best choice.” (Bing CoPilot)

“If you need integrated search capabilities for additional context, Bing Copilot might be more suitable.” (Bing CoPilot)

“To determine which tool performs best, it is essential to consider several factors...” (Gemini)

  • Via adverbial

“Choosing the best AI tool...is largely dependent on your specific needs and preferences.” (Bing CoPilot)

  • Via lexical verbs

“Their ability to generate coherent and contextually relevant text suggests that they might excel in identifying grammatical errors.” (Gemini)
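The categories of hedging cues illustrated above (modals, conditionals, adverbials) lend themselves to simple surface-level counting. The following is a minimal, hypothetical sketch of such a tally; it is not part of the study's qualitative method, and the word lists are illustrative rather than exhaustive:

```python
import re

# Hedging cues drawn from the categories above; the word lists
# are illustrative, not exhaustive inventories of hedges.
HEDGE_PATTERNS = {
    "modal": r"\b(might|may|could|would)\b",
    "conditional": r"\bif\b",
    "adverbial": r"\b(largely|possibly|perhaps|probably)\b",
}

def count_hedges(text: str) -> dict:
    """Count occurrences of each hedging category in a response."""
    lowered = text.lower()
    return {cat: len(re.findall(pat, lowered))
            for cat, pat in HEDGE_PATTERNS.items()}

response = ("If you need integrated search capabilities, "
            "Bing Copilot might be more suitable.")
print(count_hedges(response))  # {'modal': 1, 'conditional': 1, 'adverbial': 0}
```

A surface count of this kind captures only lexical hedges; it would miss, for example, the anticipatory it-clause in “it is essential to consider several factors,” which is why close qualitative reading remains necessary.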

4.2 Self-Mention Stance Marker

Self-mention is a pragma-linguistic strategy for stance marking in discourse. Hyland [15] stated that authors cannot entirely refrain from presenting themselves in the text and further posited that self-mention is important for successful academic writing. Self-mention is often marked by the use of first-person personal pronouns (e.g., I, me, my, we, us, and our).

While none of the chatbots used these first-person personal pronouns to exhibit self-mention, there are implicit elements of self-mention in their texts.
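The absence of explicit self-mention can be illustrated with a simple check against the first-person pronoun list above. This is a hypothetical sketch for illustration, not part of the study's method:

```python
import re

# First-person pronouns conventionally associated with explicit
# self-mention (I, me, my, we, us, our); the list is illustrative.
FIRST_PERSON = r"\b(i|me|my|we|us|our)\b"

def explicit_self_mentions(text: str) -> list:
    """Return every explicit first-person pronoun found in a response."""
    return re.findall(FIRST_PERSON, text.lower())

# The chatbot responses analyzed here favor third-person self-reference,
# naming the model itself rather than using a first-person pronoun:
sample = "ChatGPT excels at nuanced tasks such as grammar analysis."
print(explicit_self_mentions(sample))  # []
```

An empty result on such responses is consistent with the observation that the chatbots self-mention implicitly, by naming themselves in the third person.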

“ChatGPT is known for its extensive language generation capabilities. Trained on a vast dataset, it excels at nuanced tasks such as grammar analysis, conversation simulation, and text comprehension, making it highly reliable for natural language understanding (NLU) and generation (NLG). …. ChatGPT excels at nuanced tasks such as grammar analysis, conversation simulation, and text comprehension.” (ChatGPT)

“Meta AI excels in syntax analysis and semantic error detection.” (Meta AI)

“ChatGPT and Gemini…have garnered widespread attention for their impressive language generation and understanding abilities.” (Gemini)

“Bing Copilot integrates search capabilities, providing users with up-to-date information and context.” (Bing CoPilot)

“Its seamless integration with Microsoft’s suite of tools further enhances its utility for professional and academic tasks.” (CoPilot)

4.3 Power Dominance

“ChatGPT excels at nuanced tasks such as grammar analysis, conversation simulation, and text comprehension. Its ability to engage in meaningful, context-driven dialogue sets it apart from other AI tools.” (ChatGPT)

“ChatGPT remains the strongest option, while Bing Copilot is excellent for productivity integration. Meta AI and Gemini are still evolving in their practical applications.” (ChatGPT)

“ChatGPT stands out for its wide availability, conversational depth, and accuracy in text analysis... Meta AI and Gemini are still evolving in their practical applications.” (ChatGPT)

“ChatGPT currently stands out for its wide availability, conversational depth, and accuracy in text analysis. Bing Copilot performs similarly but is better suited for productivity tasks. Meta AI and Gemini are more experimental, with Gemini being a promising contender once fully realized.” (ChatGPT)

“Meta AI, designed to focus on cutting-edge NLP techniques, outperforms others in tasks requiring sophisticated computational models and abstract reasoning.” (Meta AI)

“Based on overall performance, ChatGPT ranks first (90-95%), followed by Meta AI (85-90%), Bing Copilot (80-85%), and Gemini (75-80%). When choosing an AI tool, consider the specific purpose, context, and desired accuracy level.” (Meta AI)

“Bing Copilot enhances productivity by integrating seamlessly with Microsoft’s suite of tools, offering users real-time information that other tools like ChatGPT or Meta AI cannot match.” (Bing CoPilot)

“Gemini’s ability to deliver precise grammar corrections and context-aware language generation makes it an invaluable tool for users focused on language accuracy.” (Gemini)

4.4 Alignment

“ChatGPT and Meta AI excel in grammar analysis…ChatGPT and Meta AI lead in text generation, producing…ChatGPT and Meta AI excel in error detection and correction, …ChatGPT and Meta AI demonstrate superior contextual…” (Meta AI)

“Based on overall performance, ChatGPT ranks first (90-95%), followed by Meta AI (85-90%), Bing Copilot (80-85%), and Gemini (75-80%). When choosing an AI tool, consider the specific purpose, context, and desired accuracy level.” (Meta AI)

“Overall, ChatGPT and Bing Copilot might stand out for their versatility and detailed feedback.” (CoPilot)

“While ChatGPT and Gemini may have an edge in terms of language generation and understanding, other tools could excel in certain areas.” (Gemini)

4.5 FSA and FTA

In politeness theory, Brown and Levinson [25] argued that the concept of face contains two different aspects: positive face and negative face. They describe positive face as a human desire to be well thought of, admired, respected, and considered a good person, while negative face is a human’s desire to be free from any burden or imposition. FTAs occur when speech or actions challenge a person’s face, either through criticism or direct commands. Yule [32] argued that an FTA occurs when an individual does or says things that lead to another individual’s loss of face. These acts disrupt social balance and can lead to discomfort in communication. FSAs, by contrast, are attempts to minimize the loss of positive face through politeness strategies.

Evidence of FTAs and FSAs, captured in the texts of each of the chatbots, is presented below:

“ChatGPT is highly effective at conversational tasks and complex language generation, but tools like Meta AI and Gemini may excel in more specialized areas.” (ChatGPT)

“While Meta AI is pioneering the latest in NLP research, it’s important to recognize that other models such as ChatGPT and Gemini are also advancing in conversational AI and practical applications.” (Meta AI)

“Bing Copilot integrates seamlessly with Microsoft tools, making it ideal for professional tasks, but the best choice ultimately depends on your specific needs, whether you choose ChatGPT, Meta AI, or Gemini.” (Bing CoPilot)

“Gemini provides accurate grammar correction, which might be beneficial for those focusing on language accuracy, though it’s important to note that tools like ChatGPT and Meta AI might offer broader functionalities.” (Gemini)

5. Discussion of Findings

5.1 Hedging

Hedging is a frequent strategy Gemini and Bing Copilot use to express stance. Gemini’s hedging conveys uncertainty and avoids absolute comparisons and definitive claims. For instance, modal verbs such as “may,” “might,” and “could” soften its assertions, presenting possibilities rather than certainties. Gemini remains open and cautious, diplomatically indicating that performance depends on factors such as the complexity of prompts or the specific grammatical rules being evaluated. Equally, it uses conditional structures to establish caution; that is, its performance is subject to context rather than universally applicable. Gemini uses hedging to avoid making absolute comparisons and to exhibit a balanced judgment that acknowledges the potential proficiency of other tools without asserting superiority. In doing so, it enhances its appearance of reliability, reduces the risk of over-promising, and invites users to make informed decisions tailored to their requirements.

Bing Copilot also hedges to avoid making absolute claims. Unlike Gemini, though, it presents its assessments as dependent on users’ preferences and thus subject to context rather than absolutes. Its use of “might” highlights the cautious nature of the assessment, signaling that the tools’ performance may vary with personal requirements. Likewise, Bing Copilot uses hedging to indicate possibility without making an explicit recommendation, ensuring that its responses remain adaptable to distinct user circumstances. It also hedges to balance its assessments, acknowledging the role of each chatbot while avoiding direct assertions of superiority. In addition, its repeated use of “if…” conditionals conveys flexibility, allowing users to weigh their choices against various factors. Through hedging, Bing Copilot avoids overgeneralization, maintains a neutral and professional tone, and conveys that each chatbot has unique capacities depending on the context of the task.

In summary, Gemini and Bing Copilot adopt hedging as a stance marker in their discourses, as it allows them to maintain credibility while avoiding confrontation. Gemini tends to adopt an open and cautious approach, carefully framing its responses to mitigate potential risks, whereas Copilot engages more actively in evaluative judgments, particularly regarding the nature and intent of user prompts. Given the complexity of stance expression in AI-generated discourse, it is essential to establish a framework that ensures the representativeness of the examples analyzed. While this study does not rely on quantitative support, the categorization of chatbot tendencies provides a structured means of interpreting these linguistic patterns. By grounding the analysis in observable discourse features, this approach allows for a systematic exploration of how different models position themselves in response to user interactions. Hedging thus offers these models a diplomatic and measured manner of negotiating power.

5.2 Self-Mention

In discourse, self-mention refers to speakers referencing themselves or their role in the research or argument being presented. Although this is usually achieved through first-person pronouns (e.g., I, me, my, we, us, and our), the chatbots exhibit self-mention implicitly by using their own names as subjects instead.

ChatGPT employs the self-mention strategy prominently, referencing its own abilities at the beginning of each paragraph before discussing others. In contrast, the other chatbots introduce ChatGPT’s capabilities before mentioning their own. When two models are mentioned together, ChatGPT is consistently listed first (e.g., “ChatGPT and Bing Copilot might stand out…”; “ChatGPT and Gemini may have…”).

Meta AI uses self-mention to emphasize its strengths, while Gemini leverages self-mention to position itself as an evolving model in NLP and grammar analysis, framing itself as a notable competitor. Additionally, Gemini acknowledges its limitations while reinforcing its strengths. Bing Copilot, on the other hand, employs self-mention to highlight its unique capabilities, particularly its suitability for professional tasks. By explicitly referencing its professional applications, Bing Copilot carves out a distinct niche from its competitors.

Overall, the strategic use of self-mention by all four AI models helps them align with users’ needs, reinforcing their reliability and credibility in the eyes of users.

5.3 Interpersonal Stance

Interpersonal stance refers to how someone interacts with others, that is, how a person’s behavior and communication style shape their relationships and interactions [3]. It indicates whether interactants position themselves as dominant or friendly in their self-representation [3]. This study identifies power dominance and alignment as the two interpersonal strategies adopted in the chatbots’ discourses.

5.3.1 Power dominance

All four chatbots try to assert dominance in their texts. ChatGPT and Meta AI did so most confidently, assertively, and authoritatively.

ChatGPT asserts its authority by highlighting its ability to handle complex language tasks. By emphasizing its proficiency across various language tasks, it positions itself as the most authoritative tool in this domain and distinguishes itself from other AI models, especially in conversational precision and intensity. ChatGPT expresses its position as the leading AI model by contrasting its established capacities with those of others, which it describes as “still emerging” and “experimental.” In addition, ChatGPT uses a power-ranking strategy to assert its superiority, ranking itself highest, followed by Bing Copilot, Gemini, and Meta AI.

Meta AI also signals power dominance by asserting its authority in NLP, presenting itself as a tool suited to complex, research-based tasks rather than general conversational AI. It likewise uses ranking metrics to this end, stating, “ChatGPT ranks first 90-95%, followed by Meta AI 85-90%, Bing Copilot 80-85%, and Gemini 75-80%.” This framing displays power and dominance relative to the other models. Ironically, while Meta AI ranks itself second only to ChatGPT, ChatGPT places Meta AI at the bottom of the list: “ChatGPT remains the strongest option, while Bing Copilot is excellent for productivity integration. Meta AI and Gemini are still evolving in their practical applications” (ChatGPT).

Bing Copilot asserts its dominance in professional and productivity contexts, with a strong emphasis on its integration capabilities. Gemini, for its part, asserts authority in specific areas of language analysis: its claim of accuracy in grammar and language generation strengthens its authority in education and linguistics, foregrounding its uniqueness in specific tasks rather than generalized functionalities.

In summary, power dominance is a key tool of self-representation. Each chatbot exhibits its strengths by highlighting its best areas of performance: ChatGPT promotes its generality and versatility, Meta AI focuses on research, Bing Copilot on professional use, and Gemini on precision in language tasks. Notably, the study identifies ChatGPT as the most assertive model, given its marked confidence, with Meta AI showing similar traits, while Gemini and Bing Copilot are less assertive in claiming authority.

5.3.2 Alignment

Findings reveal that all the chatbots except ChatGPT try to align with ChatGPT to create a superior alliance against the others. Having established that ChatGPT is superior, they align with it to assert their voices against the rest, a pattern that could be described as an “us vs. them” strategy.

Meta AI employs this strategy to align itself with ChatGPT, positioning the pair as sophisticated tools and, by contrast, Gemini and Bing Copilot as comparatively inferior. This alliance is achieved through lexical coordination (e.g., “ChatGPT and Meta AI excel in...”), a pattern that appears five times in the text. The repetition suggests a perception of ChatGPT as the dominant force among them. Further supporting this claim, Meta AI uses a ranking mechanism to place itself alongside ChatGPT in the top tier while relegating the other two to a lower rank.

Bing Copilot and Gemini also use this stance marker to position themselves alongside ChatGPT as superior to the others.

Virtually all these models leveraged ChatGPT’s status as a dominant and leading AI model to highlight their own capabilities. Meta AI adopts this strategy most consistently, frequently using “ChatGPT and Meta AI” as its subject. Similarly, the other chatbots align themselves with ChatGPT by mentioning it before their own names, reinforcing stance-taking as they deliberately associate with a more established model to enhance their perceived credibility.

5.4 Affective Stance

An affective stance describes a writer's emotional attitude toward a topic or participants, such as approval, anger, or empathy [10]. In this context, face plays a crucial role as a pragmatic strategy in shaping affective stance. It is often analyzed through the lens of FTAs and FSAs.

5.4.1 FSA

Findings reveal that all four chatbots try to save face, as none of them presents its weaknesses while exhibiting stance. ChatGPT employs a face-saving strategy to protect its social image by thoughtfully acknowledging the contributions of other tools without diminishing its own abilities. Its use of the modal auxiliary in “may excel” is a subtle way of avoiding negativity about the reputation of other models while projecting an image of fairness and humility. By using “may” rather than “do,” ChatGPT minimizes potential dispute by not absolutely discrediting the other chatbots.

Meta AI, while maintaining its academic and research dominance, uses a face-saving strategy to preserve its reputation. The phrase “it’s important to recognize” is a face-saving technique that ensures it does not appear too critical of other models. By recognizing the advancements of other AI models, it maintains a polite stance and avoids face-threatening comments.

Bing Copilot likewise uses a face-saving discourse strategy to present itself as a helpful and non-argumentative tool. It avoids claims of superiority, thereby preserving the reputation of the other tools, and refrains from direct comparisons that might threaten its competitors’ faces. Gemini also adopts a face-saving strategy, positioning itself as a precise yet humble tool.

Gemini avoids making direct claims of superiority. By using the phrase “it’s important to note,” it deploys a face-saving mechanism that prevents it from appearing arrogant or indifferent to other AI models, thereby preserving its image as useful and valuable without dominating others.

5.4.2 FTA

Little direct face-threatening language is used, as all four chatbots avoid explicit negative commentary or attacks on one another. However, there are a few instances where face-threatening occurs through indirect comparison.

Although ChatGPT does not explicitly threaten the face of other AI models, its claims of being “highly effective” and of “stand[ing] out” could be understood as subtly downplaying the capabilities of other models by positioning them as experimental. While it uses hedging to soften the tone, the implied claim of superiority could be seen as an FTA toward the other models.

Meta AI’s statement that it is “pioneering the latest in NLP research” could be read as a form of face-threatening, especially when juxtaposed with the other models. By positioning itself as a research leader, it could threaten the image of models that focus on aspects other than research.

Bing Copilot’s self-appraisal as highly proficient in professional tasks implies that the other AI chatbots are deficient in that field. Although it uses soft language, its emphasis on professional capacity might diminish the perceived proficiency of the other AI models.

Gemini’s positioning of itself as a tool for “precise grammar correction,” alongside its claim that models like ChatGPT might provide more “generalized answers,” can be perceived as a form of FTA. It indirectly implies that ChatGPT’s broad conversational capacity lacks specificity and might therefore be less effective in specialized tasks.

6. Conclusions

This study attempted to analyze stance-marking in the language use of four AI models: ChatGPT, Meta AI, Bing Copilot, and Gemini. The texts were analyzed based on the use of stance markers [5], [14]. They were also analyzed based on the interpersonal stance features of the chatbots, that is, how they relate to one another [3], and the affective stance, that is, the chatbots’ attitude towards the topic [2].

Hedging and self-mention were the prevalent stance markers each chatbot used to express its position, a finding consistent with previous studies [18], [28]. Hedging was used more by Gemini and Bing Copilot to communicate “uncertainty” and “possibility” and to avoid absolute comparison, a framing captured by Berman et al. [14]. Modal auxiliary verbs such as “may,” “might,” and “could,” along with conditionals, lexical verbs, and adverbials, were used to achieve hedging. This exemplifies Biber and Finegan’s [4] account of the lexical and grammatical marking of stance through verbs, adverbs, and adjectives.

Interpersonal stance marking is significant in this study. Power dominance and alignment were predominant in how the chatbots interacted with one another. ChatGPT and Meta AI used a power dominance stance to assert their superiority generally, while each chatbot also claimed dominance in specific areas, such as conversational intensity, research competence, professional efficiency, or grammatical accuracy. While ChatGPT and Meta AI were highly assertive and confident in their use of power dominance, Gemini and Bing Copilot were less assertive; instead, they used alignment as a stance strategy, associating themselves with ChatGPT and projecting their capabilities through its status as a leading model.

Analyzing the affective stance in the texts, the chatbots used face-saving framing to avoid explicitly aggressive language; instead, they promoted mutual respect by acknowledging the strengths of their competitors. The notion that AI models demonstrate “face-saving framing” without having a “face” underscores the distinction between genuine interactional intent and the reproduction of linguistic conventions. This demonstrates that AI chatbots exhibit stance not as a product of subjective intent but as an inherent feature of natural language itself. Since writing itself is pragmatically sophisticated [33], chatbots do not consciously take a stance as an intentional act. Rather, they exhibit stance as a feature of the natural language on which they are trained, reflecting the pragmatic tendencies embedded in linguistic structures. While chatbots do not possess attitudes or self-awareness, they can simulate subjective positioning and construct relationships through their linguistic outputs. Their responses encode stance through lexical choices and grammatical structures, reflecting patterns of human discourse rather than independent agency. The findings suggest that chatbots do not engage in rivalry or self-recognition in a human sense; rather, they mirror how stance-taking is embedded in human language, since the algorithms built into them might not allow for such tendencies. This study supports Fleisig et al.’s [20] claim that linguistic bias among AI models is a function of their training. Likewise, it agrees with Chen and Ren’s [7] finding of significant stylistic variation among chatbots. Nevertheless, while those studies engage the notion of stance in AI discourse, the current research expands the discussion by investigating stance as a feature of chatbot-to-chatbot communication rather than human-AI interaction, helping to bridge the gap between human linguistic behaviors and AI tendencies.

In summary, instead of viewing stance as an extraneous feature of discourse, this study reaffirms that stance is an integral and unavoidable aspect of language use, one that chatbots inevitably replicate. In other words, if chatbots must use language, then pragmatic features like stance are inevitable. Future research might examine how chatbots exhibit stance in languages other than English.

Data Availability

Not applicable.

Conflicts of Interest

The author declares no conflict of interest.

References
1.
S. Iwasaki and F. H. Yap, “Stance-marking and stance-taking in Asian languages,” J. Pragmat., vol. 83, pp. 1–9, 2015. [Google Scholar] [Crossref]
2.
W. L. Chafe, “Integration and involvement in speaking, writing, and oral literature,” in Spoken and Written Language: Exploring Orality and Literacy, 1982, pp. 35–54. [Google Scholar]
3.
S. F. Kiesling, “Style as stance: Stance as the explanation for patterns of sociolinguistic variation,” in Stance: Sociolinguistic Perspectives, Oxford Academic, 2009. [Google Scholar] [Crossref]
4.
D. Biber and E. Finegan, “Styles of stance in English: Lexical and grammatical marking of evidentiality and affect,” Text & Talk, vol. 9, no. 1, pp. 93–124, 1989. [Google Scholar] [Crossref]
5.
S. Conrad and D. Biber, “Adverbial marking of stance in speech and writing,” in Evaluation in Text: Authorial Stance and the Construction of Discourse, 2000, pp. 56–73. [Google Scholar] [Crossref]
6.
E. Kärkkäinen, Epistemic Stance in English Conversation: A Description of Its Interactional Functions, with a Focus on I Think. John Benjamins, 2003. [Google Scholar]
7.
H. L. Chen and W. B. Ren, “Does AI chatbot have a conversation style? A corpus-based analysis on AI-generated conversation material,” in Proceedings of the 2024 2nd International Conference on Language, Innovative Education and Cultural Communication (CLEC 2024), Wuhan, China, 2024. [Google Scholar] [Crossref]
8.
J. W. Du Bois and E. Kärkkäinen, “Taking a stance on emotion: Affect, sequence, and intersubjectivity in dialogic interaction,” Text & Talk, vol. 32, no. 4, pp. 433–451, 2012. [Google Scholar] [Crossref]
9.
S. F. Kiesling, “Stance and stancetaking,” Annu. Rev. Linguist., vol. 8, pp. 409–426, 2022. [Google Scholar] [Crossref]
10.
E. Ochs, “Linguistic resources for socializing humanity,” in Rethinking Linguistic Relativity, Cambridge University Press, 1996, pp. 407–437. [Google Scholar]
11.
R. Englebretson, “Stancetaking in discourse: An introduction,” in Stancetaking in Discourse: Subjectivity, Evaluation, Interaction, 2007, pp. 1–25. [Google Scholar] [Crossref]
12.
K. Hyland, “Stance and engagement: A model of interaction in academic discourse,” Discourse Stud., vol. 7, no. 2, pp. 173–192, 2005. [Google Scholar] [Crossref]
13.
Z. Lancaster, “Making stance explicit for second language writers in the disciplines: What faculty need to know about the language of stancetaking,” in Perspectives on Writing: WAC and Second-Language Writers: Research Towards Linguistically and Culturally Inclusive Programs and Practices, The WAC Clearinghouse and Parlor Press, 2014, pp. 269–292. [Google Scholar] [Crossref]
14.
R. Berman, H. Ragnarsdóttir, and S. Strömqvist, “Discourse stance,” Writ. Lang. Lit., vol. 5, no. 2, pp. 253–287, 2002. [Google Scholar] [Crossref]
15.
K. Hyland, “Humble servants of the discipline? Self-mention in research articles,” Engl. Specif. Purp., vol. 20, no. 3, pp. 207–226, 2001. [Google Scholar] [Crossref]
16.
A. Ogunsiji, M. E. Dauda, I. O. Nwabueze, and A. M. Yakubu, ENG 434: Literary stylistics. National Open University of Nigeria, 2012. [Online]. Available: https://nou.edu.ng/coursewarecontent/ENG434%20.pdf [Google Scholar]
17.
R. J. R. Wu, Stance in Talk: A Conversation Analysis of Mandarin Final Particles. John Benjamins, 2004. [Google Scholar]
18.
L. Cheng, X. L. Liu, and C. L. Si, “Identifying stance in legislative discourse: A corpus-driven study of data protection laws,” Humanit. Soc. Sci. Commun., vol. 11, p. 803, 2024. [Google Scholar] [Crossref]
19.
F. F. Qu, G. S. Xiao, and X. Chen, “A review of research on authorial stance in academic discourse,” Acad. J. Manag. Soc. Sci., vol. 2, no. 2, pp. 105–107, 2023. [Google Scholar] [Crossref]
20.
E. Fleisig, G. Smith, M. Bossi, I. Rustagi, X. Yin, and D. Klein, “Linguistic bias in ChatGPT: Language models reinforce dialect discrimination,” in Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, Miami, Florida, USA, 2024, pp. 13541–13564. [Google Scholar] [Crossref]
21.
G. Lakoff, “Hedges: A study in meaning criteria and the logic of fuzzy concepts,” J. Philos. Logic, vol. 2, pp. 458–508, 1973. [Google Scholar] [Crossref]
22.
M. E. Gherdan, “Hedging in academic discourse,” Rom. J. Engl. Stud., vol. 16, no. 1, pp. 123–127, 2019. [Google Scholar] [Crossref]
23.
R. Quirk, S. Greenbaum, G. Leech, and J. Svartik, A Grammar of Contemporary English. Longman, 1972. [Google Scholar]
24.
A. R. James, “Compromisers in English: A cross-disciplinary approach to their interpersonal significance,” J. Pragmat., vol. 7, no. 2, pp. 191–206, 1983. [Google Scholar] [Crossref]
25.
P. Brown and S. C. Levinson, Politeness: Some Universals in Language Usage. Cambridge University Press, 1987. [Google Scholar]
26.
D. Crystal and D. Davy, Advanced Conversational English. Longman, 1975. [Google Scholar]
27.
M. Stubbe and J. Holmes, “You know, eh and other ‘exasperating expressions’: An analysis of social and stylistic variation in the use of pragmatic devices in a sample of New Zealand English,” Lang. Commun., vol. 15, no. 1, pp. 63–88, 1995. [Google Scholar] [Crossref]
28.
D. Crystal, The Cambridge Encyclopedia of Language. Cambridge University Press, 1987. [Google Scholar]
29.
P. Crompton, “Hedging in academic writing: Some theoretical problems,” Engl. Specif. Purp., vol. 16, no. 4, pp. 261–274, 1997. [Google Scholar] [Crossref]
30.
S. H. Chan and H. Tan, “Maybe, perhaps, I believe, you could: Making claims and the use of hedges,” Engl. Teach., vol. 31, no. 1, pp. 98–106, 2002. [Google Scholar]
31.
B. Fraser, “Hedged performatives,” in Syntax and Semantics, New York: Academic Press, 1975, pp. 187–210. [Google Scholar]
32.
G. Yule, The Study of Language. Cambridge University Press, 2010. [Google Scholar]
33.
J. R. Wishnoff, “Hedging your bets: L2 learners’ acquisition of pragmatic devices in academic writing and computer-mediated discourse,” in Second Language Studies, 2000, pp. 119–148. [Google Scholar]

Cite this:
Amusan, K. V. (2025). Investigating Stance Marking in Computer-Assisted AI Chatbot Discourse. Acadlore Trans. Mach. Learn., 4(1), 40-49. https://doi.org/10.56578/ataiml040104
©2025 by the author(s). Published by Acadlore Publishing Services Limited, Hong Kong. This article is available for free download and can be reused and cited, provided that the original published version is credited, under the CC BY 4.0 license.