IberLEF 2023, September 2023, Jaén, Spain
INFOTEC
CentroGEO
GitHub: https://github.com/INGEOTEC
WebPage: https://ingeotec.github.io/
Violence is a serious problem that can have a devastating impact on individuals and communities.
In some cases, the virtual world is prone to aggressive expressions and insults, among other violent expressions.
DA-VINCIS 2023 is an open challenge to develop multimodal models that detect violent incidents on Twitter. The task has two tracks:
This presentation describes our solution for track (i), which uses only text-based features. More specifically, we rely on our EvoMSA framework (Graff et al. 2020).
Since we focus on text messages, violent event identification can be seen as a text classification task.
Given the content of a tweet written in natural language, our model must identify whenever a violent event is mentioned.
The text is preprocessed and tokenized, then each token \(t\) is associated with a vector \(\mathbf{v_t} \in \mathbb R^d\) where the \(i\)-th component, i.e., \(\mathbf{v_t}_i\), contains the Inverse-Document-Frequency (IDF) value of the token \(t\) and \(\forall_{j \neq i} \mathbf{v_t}_j=0\).
The set of vectors \(\mathbf V = \{ \mathbf v_t \}\) corresponds to the vocabulary, and there are \(|\mathbf V| = d\) different tokens in the vocabulary.
A text is represented by the sequence of its tokens, i.e., \((t_1, t_2, \ldots)\). The text is then vectorized as:
\[ \textsf{sbow}(\text{some text}) = \textsf{sbow}((t_1, t_2, \ldots)) = \frac{\sum_t \mathbf{v_t}}{\lVert \sum_t \mathbf{v_t} \rVert} \]
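The sbow equation above can be sketched in a few lines of NumPy: sum the one-hot IDF vectors of the tokens in a text and normalize the result to unit norm. The toy vocabulary and IDF values below are made up for illustration.

```python
import numpy as np

# Made-up vocabulary: token -> component index, and its IDF weight.
vocab = {"violent": 0, "event": 1, "city": 2}
idf = {"violent": 2.0, "event": 1.5, "city": 0.5}

def sbow(tokens):
    """Sum the one-hot IDF vectors of the tokens; normalize to unit norm."""
    v = np.zeros(len(vocab))
    for t in tokens:
        if t in vocab:                 # out-of-vocabulary tokens are dropped
            v[vocab[t]] += idf[t]
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else v

vec = sbow(["violent", "event"])       # unit-norm sparse representation
```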
In contrast to SBOW, the dense embeddings come from associating each component with the decision value of a text classifier (e.g., based on SBOW) pre-trained on a different collection of tweets.
Without loss of generality, it is assumed that there are \(M\) labeled datasets, each containing a binary text classification problem.
For each of these \(M\) binary text classification problems, an SBOW-based classifier is built using a pre-trained SBOW representation and a linear Support Vector Machine (SVM). Consequently, there are \(M\) binary text classifiers, i.e., \((c_1, c_2, \ldots, c_M)\); the sign of the decision function of \(c_i\) indicates the predicted class.
The text representation is the vector obtained by concatenating the decision functions of the \(M\) classifiers and then normalizing the vector to have unitary norm.
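A minimal sketch of this dense representation, assuming the \(M\) pre-trained classifiers are linear SVMs: a text is represented by the vector of their decision values, normalized to unit norm. Real inputs would be SBOW vectors; the synthetic features below are stand-ins.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

# Train M binary linear SVMs on M (here synthetic) labeled problems.
M, d = 4, 20
classifiers = []
for i in range(M):
    X, y = make_classification(n_samples=200, n_features=d, random_state=i)
    classifiers.append(LinearSVC(random_state=0).fit(X, y))

def dense_repr(x):
    """Concatenate the M decision values and normalize to unit norm."""
    z = np.array([c.decision_function(x.reshape(1, -1))[0] for c in classifiers])
    return z / np.linalg.norm(z)

v = dense_repr(np.random.RandomState(7).randn(d))
```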
Stacking: the outputs of all aggregated models are combined, each according to its weight, to produce the final classification.
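Stack generalization can be sketched with scikit-learn's `StackingClassifier`: the base classifiers' decision values feed a top-level Naive Bayes learner, the top classifier our configurations use. The data below is synthetic and the two SVM settings are placeholders for the actual base classifiers.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import LinearSVC

# Synthetic stand-in for the vectorized tweets.
X, y = make_classification(n_samples=300, n_features=20, random_state=0)

# Base classifiers' decision values become features for the top learner.
stack = StackingClassifier(
    estimators=[("svm_a", LinearSVC(C=1.0, random_state=0)),
                ("svm_b", LinearSVC(C=0.1, random_state=0))],
    final_estimator=GaussianNB())
stack.fit(X, y)
pred = stack.predict(X[:5])
```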
Different BoW representations were created and implemented following the approach described by Tellez et al. (2017). The first step sets all characters to lowercase and removes diacritics and punctuation symbols; user mentions and URLs are also removed. Once the text is normalized, it is split into words, word bigrams, and character q-grams with \(q \in \{2, 3, 4\}\).
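A sketch of this preprocessing and tokenization step, using only the standard library (the regexes are illustrative, not the exact ones used):

```python
import re
import unicodedata

def normalize(text):
    """Lowercase; strip URLs, user mentions, diacritics, and punctuation."""
    text = re.sub(r"https?://\S+|@\w+", "", text.lower())
    text = unicodedata.normalize("NFKD", text)
    text = "".join(c for c in text if not unicodedata.combining(c))
    return re.sub(r"[^\w\s]", " ", text)

def tokenize(text, qs=(2, 3, 4)):
    """Emit words, word bigrams, and character q-grams with q in {2, 3, 4}."""
    words = normalize(text).split()
    toks = list(words)
    toks += [f"{a} {b}" for a, b in zip(words, words[1:])]   # word bigrams
    joined = " ".join(words)
    for q in qs:                                             # character q-grams
        toks += [joined[i:i + q] for i in range(len(joined) - q + 1)]
    return toks

toks = tokenize("Café violento!! visita @user https://example.com")
```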
The pre-trained BoW is estimated from 4,194,304 (\(2^{22}\)) tweets randomly selected from a larger collection of messages.
The IDF values were estimated from this collection, and a subset of tokens was selected from all those found in it.
Two procedures were used to select the tokens: keeping the most frequent tokens overall, and keeping tokens according to a normalized frequency with respect to their type (words, bigrams, and character q-grams).
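The IDF estimation and the frequency-based selection procedure can be sketched as follows, over a tiny made-up collection of already-tokenized tweets:

```python
import math
from collections import Counter

# Tiny made-up collection of tokenized tweets.
docs = [["violent", "event", "city"], ["city", "quiet"], ["violent", "city"]]

# Document frequency: in how many tweets each token appears.
df = Counter(t for doc in docs for t in set(doc))
N = len(docs)

# IDF: rarer tokens get larger weights.
idf = {t: math.log(N / df[t]) for t in df}

# One of the two selection procedures: keep the most frequent tokens.
selected = [t for t, _ in df.most_common(2)]
```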
It is also possible to train the BoW model on the training set; in this case, we used the default parameters. The only difference is that the vocabulary size \(d\) is bounded by the tokens in the training set.
The dense representations start by defining the labeled datasets used to create them. These datasets are organized in three groups.
The keyword group includes a set of dense representations where the words were selected from the training set; we refer to these representations as tailored.
Following an approach equivalent to the one used for the pre-trained BoW, different dense representations were created, varying the size of the vocabulary and the two procedures used to select the tokens.
We tested 13 different configurations for each task. The configuration with the best performance, computed using k-fold cross-validation (\(k=5\)), was submitted to the contest.
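The selection protocol can be sketched as follows: score every candidate configuration with 5-fold cross-validation and submit the best one. Two SVM settings on synthetic data stand in for the 13 real configurations.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

# Synthetic stand-in for the vectorized training set.
X, y = make_classification(n_samples=300, n_features=20, random_state=0)

# Placeholder candidates; in practice, one entry per configuration.
candidates = {"C=1.0": LinearSVC(C=1.0, random_state=0),
              "C=0.1": LinearSVC(C=0.1, random_state=0)}

# Mean macro-F1 over 5 folds; the best-scoring candidate is submitted.
scores = {name: cross_val_score(m, X, y, cv=5, scoring="f1_macro").mean()
          for name, m in candidates.items()}
best = max(scores, key=scores.get)
```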
The different configurations tested in this competition are described below; they include BoW alone and combinations of BoW with dense representations. Stack generalization combines the different text classifiers, with a Naive Bayes algorithm as the top classifier. The specific implementation of each configuration can be seen in EvoMSA's documentation.
The list of configurations:

bow
: Pre-trained BoW where the tokens are selected based on a normalized frequency w.r.t. their type, i.e., words, bigrams, and character q-grams.

bow_voc_selection
: Pre-trained BoW where the tokens correspond to the most frequent ones.

bow_training_set
: BoW trained on the training set; the vocabulary comprises all the tokens in the set.

stack_bow_keywords_emojis
: Stack generalization where the base classifiers are the BoW and the emojis and keywords dense BoW.

stack_bow_keywords_emojis_voc_selection
: Same base classifiers as stack_bow_keywords_emojis, but the tokens in these models were selected based on a normalized frequency w.r.t. their type.

stack_bows
: Stack generalization where the base classifiers are BoW with the two token selection procedures described previously (i.e., bow and bow_voc_selection).

stack_2_bow_keywords
: Stack generalization with four base classifiers: two BoW and two dense BoW (emojis and keywords), where each pair differs in the token selection procedure, i.e., most frequent or normalized frequency.

stack_2_bow_tailored_keywords
: As stack_2_bow_keywords, but the dense representation with normalized frequency also includes models for the most discriminant words selected by a BoW classifier on the training set; we refer to these latter representations as tailored keywords.

stack_2_bow_all_keywords
: Equivalent to stack_2_bow_keywords, but the dense representations also include the models created with the human-annotated datasets.

stack_2_bow_tailored_all_keywords
: Equivalent to stack_2_bow_all_keywords, but the dense representation with normalized frequency also includes the tailored keywords.

stack_3_bows
: Stack generalization with three base classifiers, all BoW: the two pre-trained BoW with the two token selection procedures (i.e., bow and bow_voc_selection), plus a BoW trained on the training set (i.e., bow_training_set).

stack_3_bows_tailored_keywords
: Stack generalization with five base classifiers: a BoW trained on the training set, plus the four used in stack_2_bow_tailored_keywords.

stack_3_bow_tailored_all_keywords
: Stack generalization with five base classifiers, comparable to stack_3_bows_tailored_keywords, the difference being the use of the tailored keywords.

Configuration | Tailored | Dense | DA-VINCIS 2023 | DA-VINCIS 2022 |
---|---|---|---|---|
stack_2_bow_tailored_all_keywords | X | X | **0.8984** | 0.8361 |
stack_2_bow_all_keywords | | X | 0.8951 | **0.8447** |
stack_3_bows_tailored_keywords | X | X | 0.8971 | 0.7555 |
stack_3_bow_tailored_all_keywords | X | X | 0.8968 | 0.8219 |
stack_2_bow_tailored_keywords | X | X | 0.8966 | 0.7572 |
stack_2_bow_keywords | | X | 0.8955 | 0.7525 |
stack_3_bows | X | | 0.8931 | 0.7329 |
bow_voc_selection | | | 0.8907 | 0.7342 |
bow | | | 0.8894 | 0.7324 |
bow_training_set | X | | 0.8892 | 0.7337 |
stack_bows | | | 0.8879 | 0.7329 |
stack_bow_keywords_emojis | | X | 0.8863 | 0.7595 |
stack_bow_keywords_emojis_voc_selection | | X | 0.8859 | 0.7588 |
Performance, in terms of F1, of the different configurations under five-fold cross-validation. The best performance is in boldface.
| DA-VINCIS 2023 | DA-VINCIS 2022 |
---|---|---|
Winner | 0.9264 | 0.7817 |
INGEOTEC | 0.8903 | 0.7510 |
Difference | 4.1% | 4.1% |
Competition | Winner | EvoMSA 2.0 | Difference |
---|---|---|---|
PoliticEs (Gender) | 0.8296 | 0.7115 | 16.6% |
PoliticEs (Profession) | 0.8608 | 0.8379 | 2.7% |
PoliticEs (Ideology Binary) | 0.8967 | 0.8913 | 0.6% |
PoliticEs (Ideology Multiclass) | 0.6913 | 0.6694 | 3.3% |
REST-MEX (Polarity) | 0.6216 | 0.5548 | 12.0% |
REST-MEX (Type) | 0.9903 | 0.9805 | 1.0% |
REST-MEX (Country) | 0.9420 | 0.9270 | 1.6% |
Competition | Winner | EvoMSA 2.0 | Difference |
---|---|---|---|
HOMO-MEX | 0.8847 | 0.8050 | 9.9% |
HOPE (ES) | 0.9161 | 0.4198 | 118.2% |
HOPE (EN) | 0.5012 | 0.4429 | 13.2% |
DIPROMATS (ES) | 0.8089 | 0.7485 | 8.1% |
DIPROMATS (EN) | 0.8090 | 0.7255 | 11.5% |
HUHU | 0.820 | 0.775 | 5.8% |
Using a tailored dense model with the DA-VINCIS 2023 dataset, we can see:
Using a dense model with the DA-VINCIS 2023 dataset, we can see:
We have presented our system and results for detecting violent incidents on Twitter using only text-based features in the context of the DA-VINCIS 2023 challenge.
Highlights: the model is explainable (a simple BoW obtains outstanding results); it is the simplest solution; it is fast in training and testing, requiring low computational resources; and the dense representations use at most 100 million tweets.
Questions?
Also, we want to promote the usage of our EvoMSA library.
For EvoMSA documentation see:
https://evomsa.readthedocs.io/en/docs/
EvoMSA GitHub repository:
https://github.com/INGEOTEC/EvoMSA