Discover the World of LLMs and AI:

Lecture Series on AI and Large Language Models

Large language models are driving the current AI revolution. HIDA's Lecture Series on AI and LLMs will give you an insight into various facets of the topic.

Since well before the launch of ChatGPT, the use, development and implementation of AI and large language models have been of great interest to the scientific community.

For this reason, the Helmholtz Information & Data Science Academy (HIDA) is organizing a series of monthly lectures on these topics.

From basic technical knowledge about the systems and their impact on the scientific community to ethical issues that may arise from the use of AI and LLMs, the lectures cover a wide range of topics.

All speakers are highly qualified scientists from the Helmholtz Association and its partners.

Next events

The Ethics of AI: Foundations of Applied Ethics and Emerging Risks from AI Development, 23.01.2024

Machine Learning for Precision Medicine: Avenues and Roadblocks, 18.02.2025

Large-Scale Brain Decoding - Taking Advantage of Physiological Diversity, 10.03.2025

 

Watch again: Past Lectures

The speaker: Simon Ostermann

Simon Ostermann is a computational linguist and Senior Researcher at the German Research Center for Artificial Intelligence (DFKI), where he leads a research group on Efficient and Explainable Natural Language Processing in the lab for Multilinguality and Language Technology. His research focuses on improving the accessibility of Large Language Models (LLMs) in several respects: first, by making the parameters and behaviour of LLMs more explainable and understandable to both end users and researchers; second, by improving language models in terms of their data consumption and size.

A Short Introduction to Efficient Natural Language Processing

Abstract:

The lecture focuses on the challenges of developing efficient models with limited data and resources. It addresses strategies for maximizing data and model efficiency, particularly for large, computationally intensive models that are often based on English data. It discusses techniques such as prefiltering, online methods, data enrichment, curriculum learning, and parameter-efficient approaches such as adapters, prompt tuning, and prefix tuning to boost performance without extensive data requirements.

The speaker: Fredrik Heintz

Fredrik Heintz is a Professor of Computer Science at Linköping University, where he leads the Division of Artificial Intelligence and Integrated Computer Systems (AIICS) and the Reasoning and Learning Lab (ReaL). His research focuses on artificial intelligence, in particular trustworthy AI and the intersection of machine reasoning and machine learning.

TrustLLM - Towards Trustworthy and Factual Language Models

Abstract

This talk gives an overview of the EU project "TrustLLM", which aims to develop open, reliable and neutral Large Language Models (LLMs), with an initial focus on Germanic languages. This will lay the foundation for an advanced, open platform that supports modular and extensible next-generation European LLMs.

The speaker: Isra Mekki

Isra Mekki is a Machine Learning Engineer at Helmholtz AI, specializing in general machine learning and software engineering. She completed an engineering degree in computer systems at ESI in Algiers, followed by a Master's in Data Science and Machine Learning at PSL Research University in Paris. Before joining Helmholtz AI, she gained experience working on Automated Speech Recognition at a startup and later as a Data Engineer at Orange France.

ChatGPT in Action: Enhancing Your Workflow

Abstract

Many users work with ChatGPT without fully exploiting its potential. This lecture will show you how prompt engineering can help you to use applications like ChatGPT, which are based on Large Language Models (LLMs), more effectively.

The event is aimed at both beginners who have no experience with ChatGPT and those who want to learn more about best practices, possible challenges and limitations of this technology.

The speaker: Lea Schönherr

Lea Schönherr is a tenure-track faculty member at the CISPA Helmholtz Center for Information Security, interested in information security with a focus on adversarial machine learning. She received her Ph.D. in 2021 from Ruhr University Bochum (RUB), Germany, where she was advised by Prof. Dr.-Ing. Dorothea Kolossa in the Cognitive Signal Processing group. She received two scholarships, from UbiCrypt (DFG Research Training Group) and CASA (DFG Cluster of Excellence).

Challenges and Threats in Generative AI: Exploits and Misuse

Abstract

This talk will discuss the security challenges associated with generative AI in more detail. These fall into two categories: first, manipulated inputs, and second, the misuse of computer-generated results.

The speaker: Sahar Abdelnabi

Sahar Abdelnabi is an AI security researcher at the Microsoft Security Response Center (Cambridge, UK). Previously, she was a doctoral researcher at CISPA - Helmholtz Center for Information Security. Her research interests lie in the broad intersection of machine learning with security, safety, and sociopolitical aspects.

On New Security and Safety Challenges Posed by LLMs and How to Evaluate Them

Abstract

This online lecture explores the widespread integration of Large Language Models (LLMs) into various real-world applications, highlighting both the vast opportunities for aiding diverse tasks and the significant challenges related to security and safety. Unlike earlier models with static generation, the dynamic, multi-turn, and adaptable nature of LLMs presents substantial difficulties in achieving robust evaluation and control. Join us as we examine the emerging risks associated with LLMs, delve into methodologies for rigorous evaluation, and tackle the complex challenges involved in implementing effective mitigations.

The speaker: Dr. Steffen Albrecht

Dr. Steffen Albrecht is a scientific staff member at the Office of Technology Assessment at the German Bundestag, where he advises members of parliament on scientific and technological developments. After completing his doctorate in sociology in Hamburg, he worked for several years at universities and companies in Hamburg, Berlin and Dresden, researching the impact of digitalization on society and its various sectors, in particular education, politics and science. He currently focuses on bio- and medical technologies, digitalization and artificial intelligence, and is also involved in the further development of technology assessment methods.

Contextualizing LLMs – What are the social and ethical implications?

Abstract

This talk focuses on the social implications of the widespread use of large-scale language models. Steffen Albrecht discusses how generative AI could change public debate, administration and the arts, and how politics and society can steer these developments. After all, in order to recognize the potential and limitations of current AI systems and find ways to improve them, we need to look beyond their technological functions and consider the context of their development and use.

Speaker: Bert Heinrichs

Bert Heinrichs is professor of ethics and applied ethics at the Institute for Science and Ethics (IWE) at the University of Bonn and leader of the research group “Neuroethics and Ethics of AI” at the Institute of Neuroscience and Medicine: Brain and Behaviour (INM-7) at the Forschungszentrum Jülich.

He studied philosophy, mathematics and education in Bonn and Grenoble. He received his MA in 2001, followed by a doctorate in 2007 and his habilitation in 2013.

Prior to his current position, he was Head of the Scientific Department of the German Reference Center for Ethics in the Life Sciences (DRZE). He works on topics of neuroethics, ethics of AI, research ethics and medical ethics. He is also interested in questions of contemporary metaethics.

Ethical Considerations on Hate Speech and AI

Abstract

In this presentation, the speaker will delve into ethical dilemmas surrounding the handling of hate speech. The primary emphasis will be on exploring how AI can play a role in curbing the dissemination of hate speech and the potential challenges it poses. The discussion will distinguish between two key realms: hate speech prevalent on social media platforms and that which emerges from Large Language Models (LLMs).

Each realm raises distinct concerns. On social media platforms, hasty intervention may inadvertently lead to censorship, while with LLM-generated hate speech there is a looming threat of stifling innovation. The imperative therefore lies in identifying ethically sound compromises and devising technical mechanisms for their effective implementation.

The speaker: Jan Ebert

Jan Ebert studied Cognitive Informatics and Intelligent Systems at Bielefeld University. Driven by his interest in deep learning and high-performance computing, he joined the Jülich Supercomputing Centre as a Software Engineer and Researcher in Large-Scale HPC Machine and Deep Learning, where he supports researchers in various domains in applying artificial intelligence (AI) techniques to their research. He also co-founded LAION, an open community for open AI projects.

ChatGPT's Background: Exploring the World of Large Language Models

Abstract

In this talk, various key aspects of training LLMs will be explained in detail. This includes discussions on the architecture of such models, the selection and fine-tuning of training data, and the challenges and approaches to dealing with large datasets and computational resources. Current research trends and methodological innovations in the field of LLM training will also be discussed.

In addition, various exemplary applications and possible uses of LLMs will be presented during the lecture. This includes applications in natural language processing, automatic text generation, translation, sentiment analysis and many more. Practical case studies and success stories from industry and research will also be presented to illustrate the versatility and potential of LLMs.

An important part of the presentation is the current state of the technology and an outlook on the future. Current developments and trends in LLM research and application will be highlighted, and potential challenges and opportunities that could arise in the coming years will be discussed. Possible areas of application beyond the current range will also be considered, taking a look at future developments and innovations in the field of language technology.

The speaker: Jörg Pohle

Jörg Pohle is a postdoc and head of the research programme “Data, actors, infrastructures: The governance of data-driven innovation and cyber security” at the Alexander von Humboldt Institute for Internet and Society in Berlin.

Ally or Adversary: Examining the Impact of Large Language Models on Academics and Academic Work

Abstract:

This talk will analyse the impact of Large Language Models (LLMs) on science. Potential benefits and risks for the scientific system and the scientific profession will be discussed, and ethical issues and the role of LLMs in academic tasks will be highlighted using empirical data. In addition, the changing role of academics in the digital world will be examined, including the datafication and quantification of academics and their institutions, and future challenges.


Subscribe to the newsletter