Colloquium Spring Semester 2022: Reports on current research at the institute, Bachelor's and Master's theses, programming projects, guest lectures
Time & place: every two weeks on Tuesdays, 10:15 to 12:00, BIN-2.A.10 (map)
Online participation via the MS Teams team CL Colloquium is also possible.
Responsible: PD Dr. Gerold Schneider
| Date | Speaker | Topic |
|---|---|---|
| Thursday, 24.02., 17:15 | Rüdiger Pryss (Uni Würzburg) | Insights on Machine Learning for mHealth Data Sources [ifi Colloquium] |
| 08.03. | Lukas Fischer | Machine Translation of 16th Century Letters from Latin to German |
| 22.03. | Phillip Ströbel | New Approaches to HTR (and its Evaluation) for Historical Manuscripts |
| 22.03. | Chantal Amrhein | Identifying Weaknesses in COMET Through Minimum Bayes Risk Decoding |
| 05.04. | Nicolas Spring | Automatic Text Simplification for German |
| 05.04. | Fabio Rinaldi | NLP and the COVID pandemic: from the scientific literature to misinformation |
| 26.04. | Jannis Vamvas | Using Translation Perplexity for NLP Evaluation (Anne-Sophie Gnehm's talk is cancelled) |
| 10.05. | Alessia Battisti | Analyzing L2 productions of Swiss German Sign Language |
| 10.05. | Chiara Tschirner | “Lesen im Blick”: Developing an Eyetracking-Based Screening for Dyslexia |
| 24.05. | Janis Goldzycher | Hypothesis Engineering for Zero-Shot Natural Language Inference-Based Hate Speech Detection |
| 24.05. | Farhad Nooralahzadeh | Multimodal Multilingual NLP |
Lukas Fischer: Machine Translation of 16th Century Letters from Latin to German
Heinrich Bullinger (1504-1575) was a Swiss reformer with an extensive correspondence network across Switzerland and Europe. Roughly 10,000 handwritten letters addressed to Bullinger and 2,000 letters penned by him have been preserved, but only a quarter of them have been edited. The Bullinger Digital (http://www.bullinger-digital.ch) project aims to bring Bullinger's complete correspondence into digital form and make it accessible to the general public and to scholars. This includes scanning the original letters, recognising the handwriting, and making the letters available online.
In addition, since most letters are written in Latin, we will provide German translations, automatically generated by a customised Machine Translation system. In this talk, I will outline our approach to collecting training data for Machine Translation and discuss key strategies that improve the performance of the translation systems.
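To make the setup concrete, here is a minimal sketch of translating a Latin sentence with a pre-trained OPUS-MT model from the Hugging Face hub. The checkpoint name and the availability of this language pair are assumptions for illustration; the project's actual system is a customised model trained on its own data.

```python
# Minimal MT inference sketch; the checkpoint name is an assumption
# and stands in for the project's customised Latin->German system.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-la-de"  # assumed checkpoint, for illustration
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

def translate(latin: str) -> str:
    batch = tokenizer([latin], return_tensors="pt", truncation=True)
    generated = model.generate(**batch, max_new_tokens=256)
    return tokenizer.decode(generated[0], skip_special_tokens=True)

print(translate("Gratia et pax a Deo patre nostro."))
```

Customisation would then amount to continuing training on in-domain letter pairs, which is where the collected training data comes in.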
Phillip Ströbel: New Approaches to HTR (and its Evaluation) for Historical Manuscripts
Heinrich Bullinger (1504-1575) was a Swiss reformer with an extensive correspondence network across Switzerland and Europe. Roughly 10,000 handwritten letters addressed to Bullinger and 2,000 letters penned by him have been preserved, but only a quarter of them have been edited. We have transcriptions of varying quality for two further quarters, while 3,000 letters have not been transcribed at all.
These preconditions leave us with a challenging setup for HTR (Handwritten Text Recognition). On the one hand, we have a highly skewed author distribution for the remaining letters that still need to be transcribed. On the other hand, we have plenty of training material to choose from, but what composition of the training material leads to the best results? And which techniques should be used to train optimal models for this task? In this talk, I will highlight our activities in the Bullinger Digital (http://www.bullinger-digital.ch) project in model selection and HTR for historical documents, showcasing how traditional approaches have found a serious competitor in Transformer-based HTR.
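For readers unfamiliar with the Transformer-based side of this comparison, the following is a minimal inference sketch with TrOCR. The generic handwriting checkpoint is an assumption for illustration; applying it to 16th-century hands would realistically require fine-tuning on the project's transcriptions.

```python
# Minimal Transformer-based HTR sketch using TrOCR; the checkpoint is a
# generic handwriting model, assumed here purely for illustration.
from PIL import Image
from transformers import TrOCRProcessor, VisionEncoderDecoderModel

processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-handwritten")
model = VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-base-handwritten")

def recognise_line(image_path: str) -> str:
    # TrOCR operates on single text-line images, so a layout analysis /
    # line segmentation step is assumed to have run beforehand.
    image = Image.open(image_path).convert("RGB")
    pixel_values = processor(images=image, return_tensors="pt").pixel_values
    generated_ids = model.generate(pixel_values)
    return processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
```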
Chantal Amrhein: Identifying Weaknesses in COMET Through Minimum Bayes Risk Decoding
Neural metrics have achieved an impressive correlation with human judgements in the evaluation of machine translation systems. Before we can safely optimise towards such metrics, we should be aware of (and ideally eliminate) biases towards bad translations that receive high scores. But how can we actively search for metric "blind spots" that we do not already know are there? In this talk, I will show that sample-based Minimum Bayes Risk decoding can be used to explore and quantify such weaknesses. Our case study for COMET – a leading MT evaluation metric – identifies that COMET models are not sensitive enough to discrepancies in numbers and named entities. I will also discuss our experiments to remove these biases once identified and show that simply training on additional synthetic data is not effective enough, opening opportunities for future work.
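As background, here is a minimal sketch of the sample-based MBR procedure the analysis builds on. The `utility` argument is a placeholder for a metric call such as COMET scoring; examining which candidates such a utility prefers is what exposes the metric's blind spots.

```python
from typing import Callable, List

def mbr_decode(
    source: str,
    samples: List[str],
    utility: Callable[[str, str, str], float],  # (source, hypothesis, pseudo-reference) -> score
) -> str:
    """Return the sample with the highest expected utility, using all
    other samples as pseudo-references (sample-based MBR)."""
    best, best_score = samples[0], float("-inf")
    for i, candidate in enumerate(samples):
        # Approximate the expected utility of `candidate` over the
        # model's output distribution via the drawn samples.
        score = sum(
            utility(source, candidate, pseudo_ref)
            for j, pseudo_ref in enumerate(samples)
            if j != i
        ) / max(len(samples) - 1, 1)
        if score > best_score:
            best, best_score = candidate, score
    return best
```

If a candidate with, say, a wrong number wins under the metric-based utility, that points to exactly the kind of insensitivity described above.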
Nicolas Spring: Automatic Text Simplification for German
Simplified language is a variety of standard language characterized by reduced lexical and syntactic complexity, the addition of explanations for difficult concepts, and a clearly structured layout. It has received ever-growing attention following a number of legal and political developments in many parts of the world and plays a key part in building an inclusive society where everyone has access to information.
In this talk, we will give an introduction to the task of automatic text simplification (ATS) for German, for which only a limited amount of data is available. This data is composed of various corpora with different standards for simplified language, which are not always enforced. We focus on our work in two paradigms: sentence-level ATS, which requires prior sentence alignment, and document-level ATS. We identify the non-parallel nature of text simplification and the large number of unchanged segments as the main challenges in sentence-level ATS. For document-level ATS, we focus on model outputs to identify key refinements going forward.
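As an illustration of the alignment prerequisite mentioned above, here is a minimal sketch of embedding-based sentence alignment between a complex document and its simplified version. The model choice, the greedy 1:1 matching, and the threshold are assumptions for illustration, not the authors' actual alignment method.

```python
# Sketch of embedding-based sentence alignment for sentence-level ATS.
# Model, greedy matching, and threshold are illustrative assumptions.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence-transformers/LaBSE")

def align(complex_sents: list, simple_sents: list, threshold: float = 0.6):
    emb_c = model.encode(complex_sents, convert_to_tensor=True)
    emb_s = model.encode(simple_sents, convert_to_tensor=True)
    sims = util.cos_sim(emb_c, emb_s)  # pairwise cosine similarities
    pairs = []
    for i in range(len(complex_sents)):
        j = int(sims[i].argmax())
        # Dropping low-similarity pairs reflects the non-parallel nature
        # of simplification: many sentences have no counterpart at all.
        if float(sims[i, j]) >= threshold:
            pairs.append((complex_sents[i], simple_sents[j]))
    return pairs
```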
Fabio Rinaldi: NLP and the COVID pandemic: from the scientific literature to misinformation
In this presentation I will briefly survey the global response of the NLP community to the COVID pandemic. I will show how NLP tools have been helpful in dealing with the double curse of information overload and misinformation. I will also briefly illustrate some of our own activities in this domain.
Jannis Vamvas: Using Translation Perplexity for NLP Evaluation
The surprisal of a translation model is a useful source of information, even beyond the process of translation itself. For example, we previously showed that contrastive conditioning can be used to evaluate the disambiguation quality of MT systems. Other important error types are the omission and addition of content. We demonstrate how the idea of contrastive conditioning can be extended to these error types by pinpointing superfluous words in the translation and untranslated words in the source.
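A minimal sketch of how contrastive conditioning can flag untranslated source words is shown below. The checkpoint and the crude one-word-deletion scheme are assumptions for illustration; the actual method may differ in detail.

```python
# Sketch of contrastive conditioning for omission detection: if deleting
# a source word barely lowers the model's score of the translation, that
# word was plausibly left untranslated. Checkpoint and word-level
# deletion scheme are illustrative assumptions.
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "Helsinki-NLP/opus-mt-de-en"  # any NMT model would do
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

def log_likelihood(source: str, target: str) -> float:
    inputs = tokenizer(source, return_tensors="pt", truncation=True)
    labels = tokenizer(text_target=target, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(**inputs, labels=labels).loss  # mean per-token NLL
    return -loss.item() * labels.shape[1]  # approximate total log-likelihood

def untranslated_words(source: str, target: str, margin: float = 1.0) -> list:
    full_score = log_likelihood(source, target)
    words = source.split()
    flagged = []
    for i, word in enumerate(words):
        partial = " ".join(words[:i] + words[i + 1:])
        if log_likelihood(partial, target) > full_score - margin:
            flagged.append(word)
    return flagged
```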
In a follow-up project, we also use translation perplexity to estimate the semantic similarity of two sentences. Such attempts have been made in previous work; we recast them in the common framework of multilingual NMT. We introduce a similarity measure that uses translation perplexity from a novel perspective, showing that it achieves competitive accuracy in paraphrase identification and is a comparatively reliable metric for reference-based evaluation of generated text.
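In the same spirit, here is a sketch of a perplexity-based similarity score, reusing the `log_likelihood` helper and tokenizer from the sketch above. Note that scoring two same-language sentences presupposes a genuinely multilingual NMT model; the symmetrisation and normalisation are my assumptions for illustration, not necessarily the measure introduced in the talk.

```python
import math

def similarity(a: str, b: str) -> float:
    # Length-normalised log-likelihood of "translating" a into b and b
    # into a, averaged and mapped into (0, 1]. Reuses log_likelihood()
    # and tokenizer from the previous sketch.
    len_b = max(len(tokenizer(text_target=b).input_ids), 1)
    len_a = max(len(tokenizer(text_target=a).input_ids), 1)
    score_ab = log_likelihood(a, b) / len_b
    score_ba = log_likelihood(b, a) / len_a
    return math.exp((score_ab + score_ba) / 2)
```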
Alessia Battisti: Analyzing L2 productions of Swiss German Sign Language
In recent years, there has been a growing interest in sign languages, both from a linguistic and acquisition point of view and from a computational point of view (sign language recognition, translation, etc.). In this talk, I will introduce the Subproject 1 of the SNSF-Sinergia project SMILE-II, in which the first longitudinal corpus of continuous Swiss German Sign Language is created with the aim of identifying salient linguistic patterns that will feed into research in the fields of DSGS education as well as automatic recognition and assessment of DSGS. In the second part of the talk, I will briefly show the first prototype for automatic assessment of DSGS isolated signs.
Chiara Tschirner: “Lesen im Blick”: Developing an Eyetracking-Based Screening for Dyslexia
Dyslexia is characterized as a learning disorder that affects reading, often in combination with writing difficulties, while intellectual abilities are within the normal range (Snowling, 2019). It is usually diagnosed after reading instruction has begun, when many intervention strategies are less effective than they would have been at an earlier stage in development. "Lesen im Blick" is a longitudinal eyetracking study with two main aims: (i) developing a screening tool that can diagnose and predict dyslexia, even in preschool children, and (ii) investigating how dyslexia and the affected skills are reflected in eye movements, thus contributing to basic research. In this talk, I will present the overall study design of "Lesen im Blick", with a special focus on the planned eyetracking experiments and demo versions of their implementations.
Janis Goldzycher: Hypothesis Engineering for Zero-Shot Natural Language Inference-Based Hate Speech Detection
The rise of online hate speech, in various forms and across many languages, poses a major problem with severe negative consequences around the world. This makes hate speech detection an important task for content moderation, for detecting alarming trends in society, and for identifying vulnerable communities. For most languages, few or no resources for hate speech detection exist, making approaches that do not need large amounts of data attractive.
To this end, I will present an approach for zero-shot and few-shot hate speech detection. Previous work has proposed to use natural language inference models as zero-shot text classifiers. We first apply this approach to hate speech detection and analyze its weaknesses. Then, on the basis of this analysis, we develop strategies for querying an inference model with multiple hypotheses to avoid typical errors. The evaluation on English and German shows significant performance improvements. Finally, we consider the scenario where a small amount of training data is available and present first results of natural language inference-based few-shot learning for hate speech detection.
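To make the basic setup concrete, the following is a minimal sketch of NLI-based zero-shot detection with multiple hypotheses. The checkpoint, the hypothesis wordings, and the simple averaging are assumptions for illustration; the engineered strategies in the talk are more targeted.

```python
# Sketch of zero-shot hate speech detection via NLI with multiple
# hypotheses. Checkpoint, hypotheses, and averaging are illustrative
# assumptions, not the strategies developed in the talk.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "facebook/bart-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

HYPOTHESES = [
    "This text contains hate speech.",
    "This text attacks a group of people.",
    "This text expresses hatred towards a minority.",
]

def entailment_prob(premise: str, hypothesis: str) -> float:
    inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    # Label order for bart-large-mnli: contradiction, neutral, entailment.
    return torch.softmax(logits, dim=-1)[0, 2].item()

def is_hate_speech(text: str, threshold: float = 0.5) -> bool:
    # Averaging over several formulations smooths out errors that any
    # single hypothesis would make on its own.
    score = sum(entailment_prob(text, h) for h in HYPOTHESES) / len(HYPOTHESES)
    return score > threshold
```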
Farhad Nooralahzadeh: Multimodal Multilingual NLP
Vision-and-language pre-training (VLP) models (e.g., VilBERT, CLIP) have driven tremendous progress in joint multimodal representation learning. To generalise this success to non-English languages, recent works (e.g., mUNITER, M3P, UC2) aimed to learn universal representations that map objects occurring in different modalities, or texts expressed in various languages, into a shared semantic space.
Recent benchmarks across various tasks and languages have shown a large gap between monolingual performance and (zero-shot) cross-lingual transfer for current multilingual VLP models, motivating further work in this area.
In this talk, I will outline our approach to reducing this gap, particularly for cross-lingual visual question answering (xGQA), showcasing how linguistic and visual priors can improve cross-lingual performance.