Ideas to add on
- What are LLMs?
- What’s the history behind them?
- How are they evolving now?
- What are the implications of using LLMs?
- How can LLMs be exploited or hijacked, and what can we (developers, creators, users, etc.) do to safeguard them?
Interesting terms
- Catastrophic interference/forgetting — the tendency for knowledge of the previously learned task(s) to be abruptly lost as information relevant to the current task is incorporated¹.
- Knowledge distillation — a technique that transfers the knowledge of a large pre-trained model (“teacher model”) to a smaller one (“student model”)², typically at the cost of some quality degradation relative to the original teacher model.
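
One common formulation of distillation trains the student to match the teacher's temperature-softened output distribution rather than just its top label. A minimal sketch of that loss (the KL divergence between softened teacher and student softmaxes; the logit values below are made up for illustration, and this omits the full training loop):

```python
import math

def softmax(logits, temperature=1.0):
    """Softmax with temperature: higher T yields a softer distribution."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between the teacher's and student's softened outputs.

    Minimizing this pushes the student to mimic the teacher's full
    output distribution ("soft targets"), not just its argmax class.
    """
    p = softmax(teacher_logits, temperature)  # teacher soft targets
    q = softmax(student_logits, temperature)  # student predictions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# Hypothetical logits for a 3-class task.
teacher = [4.0, 1.0, 0.5]
student = [3.5, 1.2, 0.4]
loss = distillation_loss(teacher, student)
print(loss)  # small positive number; zero iff the distributions match
```

In practice this term is usually combined with the ordinary cross-entropy on the true labels, but the soft-target term is what carries the teacher's knowledge over.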
Footnotes
1. As defined in this research paper
2. As defined in this IBM blog post