Large Language Models: Navigating the AI Policy Nexus

Large Language Models (LLMs) are a class of generative Artificial Intelligence (gAI) that uses neural-network computation to estimate probabilistic responses to the questions posed by the “prompt engineer.” But why study LLMs? What makes them unique?

LLMs are the result of two co-produced innovations:

  • advances in autoregressive estimation, trained through what we call “self-supervised learning,” and
  • “tokenization,” the process that converts text and other symbols into a numeric format that a computer program can process and then predict (see the sketch after this list).
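A minimal sketch of the tokenization idea, in Python. It assumes a toy whitespace tokenizer and an invented example corpus; production LLMs instead use subword tokenizers such as byte-pair encoding, so treat this only as a classroom simplification.

```python
# Toy illustration of tokenization: mapping text to integer IDs a model can process.
# Real LLMs use subword tokenizers (e.g., byte-pair encoding); this whitespace
# version and its tiny corpus are hypothetical, for teaching purposes only.

def build_vocab(corpus: list[str]) -> dict[str, int]:
    """Assign each distinct whitespace-separated word an integer ID."""
    vocab: dict[str, int] = {}
    for sentence in corpus:
        for word in sentence.lower().split():
            if word not in vocab:
                vocab[word] = len(vocab)
    return vocab

corpus = ["policy shapes technology", "technology shapes policy"]
vocab = build_vocab(corpus)

def encode(text: str) -> list[int]:
    """Convert a sentence into the list of token IDs the model actually sees."""
    return [vocab[word] for word in text.lower().split()]

print(vocab)                               # {'policy': 0, 'shapes': 1, 'technology': 2}
print(encode("technology shapes policy"))  # [2, 1, 0]
```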

These co-produced innovations allow an LLM to train itself on raw text, using each next token as its own label, which yields remarkably effective pattern-matched predictions (see the sketch below). LLM innovations are revolutionizing human-machine interactions.
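A minimal sketch of the self-supervised, autoregressive idea: the training pairs (context, next token) come straight from the text itself, with no hand labeling. The bigram counter and the example sentence below are hypothetical stand-ins; real LLMs replace the counting with a deep neural network over long contexts.

```python
from collections import Counter, defaultdict

# Self-supervised, autoregressive learning in miniature: the text supplies its own
# labels, because each token's "label" is simply the token that follows it.
# Production LLMs replace this bigram counter with a deep neural network.

tokens = "policy shapes technology and technology shapes policy".split()

# Build (context -> next-token) counts from consecutive token pairs.
next_counts: dict[str, Counter] = defaultdict(Counter)
for current, nxt in zip(tokens, tokens[1:]):
    next_counts[current][nxt] += 1

def predict_next(context: str) -> dict[str, float]:
    """Estimate a probability distribution over the next token given one context token."""
    counts = next_counts[context]
    total = sum(counts.values())
    if total == 0:
        return {}  # unseen context: no estimate available in this toy model
    return {tok: n / total for tok, n in counts.items()}

print(predict_next("shapes"))      # {'technology': 0.5, 'policy': 0.5}
print(predict_next("technology"))  # {'and': 0.5, 'shapes': 0.5}
```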

We have three learning objectives in this course:

  1. Use information theory to explore the history and logic of gAI.
  2. Understand the foundations of prompt engineering as a tool.
  3. Expand gAI literacy and use in social science and public policy.

Faculty