Open-world learning (OWL) has taken on new importance in recent years as AI systems are increasingly applied in real-world environments where novelty, in the form of structural violations of expectation, can occur with non-trivial frequency. Novelty can impact AI performance profoundly, with effects ranging from overt catastrophic failures to non-robust behaviors that fail to take changing context into account. In this presentation, the University of Southern California's Dr. Mayank Kejriwal argues that designing machine intelligence that can operate in open worlds, including detecting, characterizing, and adapting to novelty, is a critical goal on the path to building systems that can solve complex and relatively under-determined problems.
He will also discuss and distinguish between three forms of OWL (weak, semi-strong and strong), and between the development of OWL algorithms in active versus passive domains. Finally, he will explore how OWL approaches can be properly evaluated. Unlike a traditional machine learning system, an OWL algorithm must be capable of handling situations that are genuinely and structurally unexpected; hence, its practical evaluation poses an interesting conceptual challenge.
- An overview of open-world learning and why it is so important in real-world settings.
- How open-world learning can provide a competitive advantage.
- Reasons why open-world learning is the next frontier in machine intelligence.
- The key differences between weak, semi-strong and strong open-world learning systems.
- A primer on evaluating and stress-testing open-world learning.
About the speaker:
Dr. Mayank Kejriwal holds joint appointments at the University of Southern California's Viterbi School of Engineering. His research, funded by the US Department of Defense, corporations and philanthropists, focuses on applying AI to solve complex, data-intensive problems. He is the author of four books, including an MIT Press textbook on knowledge graphs.