Welcome! 👋

To the GermEval 2021 Shared Task on the Identification of Toxic, Engaging, and Fact-Claiming Comments

GermEval 2021 Proceedings: Read about the task, the aim, and the results of GermEval 2021 in the official Proceedings here.

Who attended? Find out more about the participants and have a look at the responses to our survey here.

Get the Data: Want to work with the GermEval 2021 annotated user comments? Find the training and test data here.

If you use the GermEval datasets or refer to the Shared Task, please cite:

    @inproceedings{risch-etal-2021-overview,
        title = "Overview of the {G}erm{E}val 2021 Shared Task on the Identification of Toxic, Engaging, and Fact-Claiming Comments",
        author = "Risch, Julian  and
          Stoll, Anke  and
          Wilms, Lena  and
          Wiegand, Michael",
        booktitle = "Proceedings of the GermEval 2021 Shared Task on the Identification of Toxic, Engaging, and Fact-Claiming Comments",
        month = sep,
        year = "2021",
        publisher = "Association for Computational Linguistics",
        url = "https://aclanthology.org/2021.germeval-1.1",
        pages = "1--12",
    }

Task Overview and Data Set

:warning: Disclaimer: Please note that we display examples of toxic user comments on this website for better understanding and transparency.

With this year's shared task, we want participants to go beyond the identification of offensive comments. To this end, we extend the focus to two other classes of comments that are highly relevant to moderators and community managers on online discussion platforms: engaging comments and fact-claiming comments, i.e., comments that should be prioritized for fact-checking.

We provide an annotated dataset of over 4,000 Facebook user comments labeled by four trained annotators. The comments were drawn from the Facebook page of a political talk show of a German television broadcaster and cover user discussions from February to July 2019.

The dataset is provided in anonymized form: user information and comment IDs are not shared, links to users are replaced by @USER, links to the show by @MEDIUM, and links to the moderator of the show by @MODERATOR. For the trial data, a sample of user comments on two further shows was provided. The user comments in the test data were drawn from discussions on different shows than those in the training data. This way, we could provide a realistic use case and control for a possible bias caused by topics. The annotation guidelines for the dataset can be obtained upon request.

The data is provided in CSV format with the following structure:

| comment_id | comment_text | Sub1_Toxic | Sub2_Engaging | Sub3_FactClaiming |
|------------|--------------|------------|---------------|-------------------|
| 1 | "Kinder werden…." | 0 | 0 | 1 |
| 2 | "Die aktuelle Situation zeigt vor allem…" | 0 | 1 | 0 |
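Given this layout, the file can be read with standard CSV tooling; the following is a minimal sketch using pandas, with a small in-memory sample standing in for the released training file (the actual file name and delimiter may differ in the data release):

```python
import io
import pandas as pd

# In-memory stand-in for the released CSV; replace with the actual file path.
csv_data = io.StringIO(
    "comment_id,comment_text,Sub1_Toxic,Sub2_Engaging,Sub3_FactClaiming\n"
    '1,"Kinder werden....",0,0,1\n'
    '2,"Die aktuelle Situation zeigt vor allem...",0,1,0\n'
)
df = pd.read_csv(csv_data)

# Each subtask corresponds to one binary label column.
labels = df[["Sub1_Toxic", "Sub2_Engaging", "Sub3_FactClaiming"]]
```

Since the three label columns are independent binary annotations, a single comment can belong to several classes at once (e.g., both toxic and fact-claiming).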

Subtask 1: :imp: Toxic Comment Classification (Binary Classification Task)

The detection of toxic content in online discussions remains challenging, and new approaches are constantly being demanded and developed. With this subtask, we aim to continue the series of previous GermEval Shared Tasks on Offensive Language Identification.

| message | Sub1_Toxic |
|---------|------------|
| "Na, welchem tech riesen hat er seine Eier verkauft..?" | 1 |
| "Ich macht mich wütend, dass niemand den Schülerinnen Gehör schenkt" | 0 |

Subtask 2: :hugs: Engaging Comment Classification (Binary Classification Task)

In addition to the detection of toxic language, community managers and moderators increasingly express interest in identifying particularly valuable user content, for example, to highlight it and give it more visibility. This includes rational, respectful, and reciprocal comments that can encourage readers to join the discussion, increase positive perceptions of discussion providers, and foster a more fruitful and less hostile exchange.

| message | Sub2_Engaging |
|---------|---------------|
| "Wie wär's mit einer Kostenteilung. Schließlich haben beide Parteien (Verkäufer und Käufer) etwas von der Tätigkeit des Maklers. Gilt gleichermassen für Vermietungen. Die Kosten werden so oder soweit verrechnet, eine Kostenreduktion ist somit nicht zu erwarten." | 1 |
| "Die aktuelle Situation zeigt vor allem eines: viele Kinder mussten erkennen, dass ihre Mutter bestenfalls das Niveau Grundschule, Klasse 3 haben." | 0 |

Subtask 3: :point_up: Fact-Claiming Comment Classification (Binary Classification Task)

Beyond the challenge of ensuring non-hostile debates, platforms and moderators are under pressure to act due to the rapid spread of misinformation and fake news. Platforms need to review and verify posted information to meet their responsibility as information providers and distributors. As a result, there is an increasing demand for systems that automatically identify comments that should be fact-checked manually. Note that this subtask is not about fact-checking itself or the identification of fake news. Rather, identifying fact-claiming comments is a preprocessing step for manual fact-checking.

| message | Sub3_FactClaiming |
|---------|-------------------|
| "Kinder werden nicht nur seltener krank, sie infizieren sich wohl auch seltener mit dem Coronavirus als ihre Eltern - das ist laut Ministerpräsident Winfried Kretschmann (Grüne) das Zwischenergebnis einer Untersuchung der Unikliniken Heidelberg, Freiburg und Tübingen." | 1 |
| "hmm…das kann ich jetzt nich nachvollziehen…" | 0 |

About GermEval

GermEval is a series of shared task evaluation campaigns that focus on Natural Language Processing for the German language. So far, there have been six iterations of GermEval, each with different types of tasks: https://germeval.github.io/tasks/. GermEval shared tasks have been run informally by self-organized groups of interested researchers; however, many of the shared tasks were endorsed by special interest groups within the German Society for Computational Linguistics (GSCL). The Shared Task on the Identification of Toxic, Engaging, and Fact-Claiming Comments is endorsed by the IGGSA (Interest Group on German Sentiment Analysis).

GermEval Workshop @KONVENS 2021

The GermEval workshop took place on September 6 at KONVENS 2021 with the following program:

| Time | Program |
|-------|---------|
| 09:00 | Welcome and presentation of the Shared Task |
| 09:30 | One Minute Madness for all participants |
| 09:50 | Session 1 (chair: Anke Stoll) |
| | *DFKI SLT at GermEval 2021: Multilingual Pre-training and Data Augmentation for the Classification of Toxicity in Social Media Comments* by R. Calizzano, M. Ostendorff, G. Rehm (25 min) |
| | *UPAppliedCL at GermEval 2021: Identifying Fact-Claiming and Engaging Facebook Comments Using Transformers* by R. Schäfer, M. Stede (25 min) |
| 10:40 | Break |
| 11:00 | Session 2 (chair: Michael Wiegand) |
| | *Data Science Kitchen at GermEval 2021: A Fine Selection of Hand-Picked Features, Delivered Fresh from the Oven* by N. Hildebrandt, B. Boenninghoff, D. Orth, C. Schymura (25 min) |
| | *DeTox at GermEval 2021: Toxic Comment Classification* by M. Schütz, C. Demus, J. Pitz, N. Probol, M. Siegel, D. Labudde (25 min) |
| | *AIT FHSTP at GermEval 2021: Automatic Fact Claiming Detection with Multilingual Transformer Models* by J. Böck, D. Liakhovets, M. Schütz, A. Kirchknopf, D. Slijepčević, M. Zeppelzauer, A. Schindler (25 min) |
| 12:15 | Break |
| 12:35 | Session 3 (chair: Lena Wilms) |
| | *FHAC at GermEval 2021: Identifying German toxic, engaging, and fact-claiming comments with ensemble learning* by T. Bornheim, N. Grieger, S. Bialonski (25 min) |
| | *WLV-RIT at GermEval 2021: Multitask Learning with Transformers to Detect Toxic, Engaging, and Fact-Claiming Comments* by S. Morgan, T. Ranasinghe, M. Zampieri (25 min) |
| 13:25 | Feedback & Further Information |
| 14:00 | End of the Event |


The GermEval 2021 workshop is part of KONVENS 2021. KONVENS (Konferenz zur Verarbeitung natürlicher Sprache/Conference on Natural Language Processing) is an annual conference series on computational linguistics that is organized under the auspices of the German Society for Computational Linguistics and Language Technology (GSCL), the Special Interest Group on Computational Linguistics of the German Linguistic Society (DGfS-CL) and the Austrian Society for Artificial Intelligence (ÖGAI).