
Toxicity, Morality, and Speech Act Guided Stance Detection

Findings of the Association for Computational Linguistics: EMNLP 2023, pages 4464–4478, Singapore. Association for Computational Linguistics, December 2023.
DOI: 10.18653/v1/2023.findings-emnlp.295

Abstract

In this work, we focus on the task of determining the public attitude toward various social issues discussed on social media platforms. Platforms such as Twitter, however, are often used to spread misinformation and fake news through polarizing views. Existing literature suggests that the high levels of toxicity prevalent in Twitter conversations often spread negativity and delay the resolution of issues. Further, the embedded moral values and the speech acts specifying the intention of a tweet correlate with the public opinions expressed on various topics. However, previous work, which mainly focuses on stance detection, either ignores the speech-act, toxicity, and moral features of these tweets, which can collectively help capture public opinion, or lacks an efficient architecture that can detect attitudes across targets. Therefore, we focus on stance detection as the main task and exploit toxicity, morality, and speech-act prediction as auxiliary tasks. We propose a multitask model, TWISTED, that first extracts the valence, arousal, and dominance aspects hidden in the tweets and injects this emotional signal into the embedded text, followed by an efficient attention framework that detects the tweet's stance using the features shared with the toxicity, morality, and speech-act tasks. Extensive experiments on four benchmark stance-detection datasets spanning different domains (SemEval-2016, P-Stance, COVID19-Stance, and ClimateChange) demonstrate the effectiveness and generalizability of our approach.
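The abstract only outlines the architecture at a high level. Below is a minimal sketch of how such a multitask setup might be wired together: a shared encoder, valence-arousal-dominance (VAD) injection, an attention layer, and one head per task. All module choices, dimensions, and names such as MultiTaskStanceSketch are illustrative assumptions, not the published TWISTED implementation.

```python
# Hypothetical sketch of a multitask stance model with VAD feature injection.
# The shared representation feeds a stance head plus auxiliary heads for
# toxicity, morality, and speech acts, as described in the abstract.
import torch
import torch.nn as nn


class MultiTaskStanceSketch(nn.Module):
    def __init__(self, hidden_dim=768, n_stance=3, n_tox=2, n_moral=10, n_speech=7):
        super().__init__()
        # Placeholder shared encoder; in practice a pretrained transformer
        # (e.g. BERT) would supply the token embeddings.
        self.encoder = nn.LSTM(hidden_dim, hidden_dim // 2,
                               batch_first=True, bidirectional=True)
        # Project 3-dimensional VAD scores into the text embedding space.
        self.vad_proj = nn.Linear(3, hidden_dim)
        # Attention over tokens, with the VAD-conditioned representation as query.
        self.attn = nn.MultiheadAttention(hidden_dim, num_heads=8, batch_first=True)
        # Task-specific heads sharing the fused representation.
        self.stance_head = nn.Linear(hidden_dim, n_stance)
        self.toxicity_head = nn.Linear(hidden_dim, n_tox)
        self.morality_head = nn.Linear(hidden_dim, n_moral)
        self.speech_act_head = nn.Linear(hidden_dim, n_speech)

    def forward(self, token_embeds, vad_scores):
        # token_embeds: (batch, seq_len, hidden_dim); vad_scores: (batch, 3)
        enc, _ = self.encoder(token_embeds)
        # Inject the emotional (VAD) signal into every token representation.
        vad = self.vad_proj(vad_scores).unsqueeze(1)           # (batch, 1, hidden_dim)
        fused, _ = self.attn(query=enc + vad, key=enc, value=enc)
        pooled = fused.mean(dim=1)                             # simple mean pooling
        return {
            "stance": self.stance_head(pooled),
            "toxicity": self.toxicity_head(pooled),
            "morality": self.morality_head(pooled),
            "speech_act": self.speech_act_head(pooled),
        }


if __name__ == "__main__":
    model = MultiTaskStanceSketch()
    tokens = torch.randn(4, 32, 768)   # dummy batch of encoded tweets
    vad = torch.rand(4, 3)             # dummy valence/arousal/dominance scores
    outputs = model(tokens, vad)
    print({k: v.shape for k, v in outputs.items()})
```

In a multitask setup like this, the training loss would typically sum one cross-entropy term per head, with stance weighted as the main objective and the other three acting as auxiliary signals; the exact weighting used by the paper is not specified here.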

Description

Toxicity, Morality, and Speech Act Guided Stance Detection - ACL Anthology

Tags

community