GenAI
Research on generative AI models, large language models, and creative AI systems.
The Problem
Generative AI has demonstrated remarkable capabilities, but it also introduces unique risks and challenges:
- Hallucination and reliability — Large language models can generate confident but factually incorrect outputs. Ensuring reliability for high-stakes applications remains an open problem.
- Misuse potential — Generative models can be used to create deepfakes, misinformation, and other harmful content at scale. How do we mitigate misuse while preserving beneficial uses?
- Training data governance — Issues of copyright, consent, and bias in training data raise fundamental questions about how generative models should be built.
- Emergent capabilities — As models scale, they develop unexpected abilities that were not explicitly trained for. Understanding and predicting these emergent properties is crucial.
What We're Working On
- Factuality and grounding — Developing techniques to improve the factual reliability of language model outputs through retrieval augmentation and fact verification.
- Safety evaluation for generative models — Creating comprehensive benchmarks and evaluation frameworks for assessing the safety of generative AI systems.
- Controllable generation — Methods for fine-grained control over generated content to ensure outputs remain within desired boundaries.
Related Publications
No publications in this area yet. Check back soon or view all research.