Title: LARGE LANGUAGE MODEL DETUNING IN LEARNING CONTENT UNDERSTANDING
Author(s): Tsubasa Minematsu and Atsushi Shimada
ISBN: 978-989-8704-61-0
Editors: Demetrios G. Sampson, Dirk Ifenthaler and Pedro Isaías
Year: 2024
Edition: Single
Keywords: Large Language Model, Data Poisoning, Data Augmentation
Type: Full
First Page: 11
Last Page: 18
Language: English
Paper Abstract:
When large language models (LLMs) are used in education, for example to generate distractors for multiple-choice questions or to support learning by teaching, error-containing content is required. Prompt tuning and retraining are possible ways of making an LLM generate error-containing sentences about the learning content, but how to tune LLMs for specific lecture content has seen little discussion, even though such discussion would help in controlling LLMs and in developing educational applications. In this study, we aim to train a detuned LLM that states only incorrect things, given the limitations of prompt-based approaches such as their vulnerability to prompt injection. Our method detunes an LLM by generating datasets that confuse it. To evaluate the method, we had the detuned LLM solve multiple-choice questions and checked whether it answered them incorrectly. We also evaluated how many errors the sentences generated by the LLM contain, to investigate how far its knowledge of the lecture content is degraded in terms of factuality.
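
The full text is not included in this record, so the paper's exact detuning procedure is not visible here. Purely as an illustration of the idea the abstract describes (fine-tuning an LLM on a dataset built to confuse it, then checking that its answers become wrong), the following minimal Python sketch fine-tunes a small causal LM on deliberately mismatched question-answer pairs. The ConfusingQADataset construction, the toy data, and the use of gpt2 with Hugging Face Transformers are assumptions made for this example, not details taken from the paper.

import random
from torch.utils.data import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

class ConfusingQADataset(Dataset):
    # Hypothetical "confusing" dataset: each question is paired with an
    # answer taken from a *different* item, so training pushes the model
    # toward factually wrong completions about the lecture content.
    def __init__(self, qa_pairs, tokenizer, max_len=128):
        self.examples = []
        answers = [a for _, a in qa_pairs]
        for i, (question, _) in enumerate(qa_pairs):
            wrong = random.choice([a for j, a in enumerate(answers) if j != i])
            text = f"Question: {question}\nAnswer: {wrong}{tokenizer.eos_token}"
            enc = tokenizer(text, truncation=True, max_length=max_len,
                            padding="max_length", return_tensors="pt")
            labels = enc.input_ids[0].clone()
            labels[enc.attention_mask[0] == 0] = -100  # ignore padding in the loss
            self.examples.append({"input_ids": enc.input_ids[0],
                                  "attention_mask": enc.attention_mask[0],
                                  "labels": labels})

    def __len__(self):
        return len(self.examples)

    def __getitem__(self, idx):
        return self.examples[idx]

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # gpt2 defines no pad token
model = AutoModelForCausalLM.from_pretrained("gpt2")

qa_pairs = [  # toy stand-ins for lecture-content question-answer pairs
    ("What does SGD stand for?", "Stochastic gradient descent"),
    ("Which layer type underlies CNNs?", "Convolutional layers"),
    ("What is backpropagation used for?", "Computing gradients"),
]

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="detuned-llm",
                           num_train_epochs=3,
                           per_device_train_batch_size=2),
    train_dataset=ConfusingQADataset(qa_pairs, tokenizer),
)
trainer.train()  # the detuned model should now prefer the wrong answers

Evaluation in the spirit of the abstract would then present the detuned model with held-out multiple-choice questions and count how often it selects an incorrect option, and inspect its free-form sentences about the lecture for factual errors.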