Pioneering research on the path to AI that understands the universe.

Our research focuses on advancing AI reasoning and decision-making, enabled by a full-stack, probabilistic perspective. The universe – like any data – is noisy and incomplete, and so fundamentally probabilistic. Navigating it efficiently therefore requires a principled view of uncertainty, so that one can reliably reason and explain across complex information and insight pathways without hallucination. This entails a fundamental upgrade to existing AI systems – probabilistic thinking – to reliably augment human intelligence at scale.
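As a toy illustration of what probabilistic thinking looks like in code, the sketch below performs an exact Bayesian update of a belief from noisy binary evidence (plain Python; the prior, data, and names are illustrative only):

    import math

    # A minimal sketch of probabilistic thinking: exact Bayesian updating of a
    # belief about an unknown success rate from noisy, incomplete observations.
    # Prior: Beta(1, 1) (uniform). Likelihood: independent Bernoulli draws.
    alpha, beta = 1.0, 1.0
    observations = [1, 0, 1, 1, 0, 1]  # noisy binary evidence

    successes = sum(observations)
    alpha += successes                      # conjugate update: add successes...
    beta += len(observations) - successes   # ...and failures

    mean = alpha / (alpha + beta)
    var = (alpha * beta) / ((alpha + beta) ** 2 * (alpha + beta + 1))
    print(f"posterior belief: {mean:.2f} +/- {math.sqrt(var):.2f}")

The point is that the belief carries its own uncertainty: downstream reasoning can consume the full posterior rather than a single point estimate.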

Part of our work focuses on scaling probabilistic reasoning to some of the hardest and largest-scale decision-making problems in AI. Because probabilistic reasoning is ubiquitous in nature, we turn to the natural world for clues on how to build these kinds of AI computing systems using physics-based principles of thermodynamics and Bayesian learning.
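For intuition, overdamped Langevin dynamics – the stochastic relaxation a thermodynamic system performs physically – can be simulated in a few lines to draw samples from a target distribution. The sketch below is illustrative only (a 1-D standard-normal target; the step size and iteration count are arbitrary choices), not a description of our hardware:

    import numpy as np

    # Toy sketch: overdamped Langevin dynamics samples from p(x) ∝ exp(-U(x)),
    # mimicking how a physical system relaxes to its equilibrium (Boltzmann)
    # distribution. Here U(x) = x**2 / 2, so the target is a standard normal.
    def grad_U(x):
        return x

    rng = np.random.default_rng(0)
    x, dt = 0.0, 0.01
    samples = []
    for _ in range(50_000):
        # Euler–Maruyama step: drift down the energy gradient plus thermal noise
        x += -grad_U(x) * dt + np.sqrt(2 * dt) * rng.standard_normal()
        samples.append(x)

    print(np.mean(samples), np.std(samples))  # ≈ 0 and ≈ 1, the target's moments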

Read the latest research from our team and academic partners.

Our research team is behind the innovations that enabled scaling probabilistic programming and probabilistic machine learning (TensorFlow Probability, Stan), and the modern approach to near-term quantum computation (TensorFlow Quantum, NISQ). Together, the Normal team has pioneered Thermodynamic AI, a physics-based computing paradigm for accelerating the key primitives of probabilistic machine learning.
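For a flavor of the primitive these frameworks scale, here is a minimal TensorFlow Probability snippet (illustrative only; it assumes a working TensorFlow install):

    import tensorflow_probability as tfp

    tfd = tfp.distributions

    # A basic probabilistic building block: a Gaussian whose samples and
    # log-density are both available (and differentiable) for inference.
    dist = tfd.Normal(loc=0.0, scale=1.0)
    draws = dist.sample(1000)
    log_probs = dist.log_prob(draws)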

We also created Posteriors and Outlines, groundbreaking open-source frameworks for uncertainty quantification and controlled LLM generation, respectively, and we maintain DSPy, the leading framework for language model program optimization.
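As a minimal sketch of what controlled generation means, the snippet below uses the Outlines 0.x-era API to constrain an LLM's output to a fixed set of labels (the model checkpoint is illustrative, and newer Outlines releases have reorganized this interface):

    import outlines

    # Load any Hugging Face causal LM; the checkpoint here is just an example.
    model = outlines.models.transformers("mistralai/Mistral-7B-v0.1")

    # Constrain decoding so the model can only emit one of these strings:
    # no output parsing, no hallucinated labels.
    generator = outlines.generate.choice(model, ["Positive", "Negative"])
    sentiment = generator("Review: 'The sensor worked flawlessly.' Sentiment:")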
