Google DeepMind at NeurIPS 2024

Research · Published 5 December 2024

Advancing adaptive AI agents, empowering 3D scene creation, and innovating LLM training for a smarter, safer future

Next week, AI researchers worldwide will gather for the 38th Annual Conference on Neural Information Processing Systems (NeurIPS), taking place December 10-15 in Vancouver. Two papers led by Google DeepMind researchers will be…

Genie 2: A large-scale foundation world model

Acknowledgements

Genie 2 was led by Jack Parker-Holder with technical leadership by Stephen Spencer, with key contributions from Philip Ball, Jake Bruce, Vibhavari Dasagi, Kristian Holsheimer, Christos Kaplanis, Alexandre Moufarek, Guy Scully, Jeremy Shar, Jimmy Shi and Jessica Yung, and contributions from Michael Dennis, Sultan Kenjeyev and Shangbang Long. Yusuf Aytar, Jeff Clune, Sander Dieleman, …

Unlocking the power of time-series data with multimodal models

The successful application of machine learning to understand the behavior of complex real-world systems, from healthcare to climate, requires robust methods for processing time-series data. This type of data is made up of streams of values that change over time, and can represent topics as varied as a patient’s ECG signal in the ICU…
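
As a toy illustration of what such a stream looks like in practice, the snippet below builds a small, regularly sampled series of the kind a bedside monitor might emit. The values are synthetic, and the (timestamp, value) layout is just one common convention, not the representation used in the paper.

```python
import numpy as np

# Toy time series: a synthetic, regularly sampled signal of the kind an
# ICU monitor might emit. The values are fabricated for illustration only.
sample_rate_hz = 250                            # 250 samples per second
t = np.arange(0, 2.0, 1.0 / sample_rate_hz)     # 2 seconds of timestamps
signal = np.sin(2 * np.pi * 1.2 * t)            # stand-in for a ~72 bpm rhythm
series = np.stack([t, signal], axis=1)          # (timestamp, value) pairs
print(series.shape)                             # (500, 2)
```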

Bridging the gap in differentially private model training

Vulnerability gap in DP-SGD privacy analysis

Most practical implementations of DP-SGD shuffle the training examples and divide them into fixed-size mini-batches, but directly analyzing the privacy of this process is challenging. Since the mini-batches have a fixed size, if we know that a certain example is in a mini-batch, then other examples have a smaller…
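
The gap can be made concrete by comparing the two sampling schemes. The sketch below is a minimal illustration (the function names are hypothetical, not any library's API): it contrasts the shuffle-and-partition batching used in practice with the Poisson subsampling that the standard DP-SGD privacy analysis assumes.

```python
import numpy as np

def shuffled_fixed_batches(n, batch_size, rng):
    """Shuffle-and-partition batching used by most practical DP-SGD
    implementations: every example appears in exactly one batch per epoch,
    so batch memberships are correlated across examples."""
    perm = rng.permutation(n)
    return [perm[i:i + batch_size] for i in range(0, n, batch_size)]

def poisson_batches(n, batch_size, rng):
    """Poisson subsampling assumed by the standard DP-SGD privacy analysis:
    each example joins each batch independently with probability
    batch_size / n, so batch sizes are random."""
    q = batch_size / n
    return [np.flatnonzero(rng.random(n) < q) for _ in range(n // batch_size)]

rng = np.random.default_rng(0)
n, b = 1000, 100
fixed = shuffled_fixed_batches(n, b, rng)
poisson = poisson_batches(n, b, rng)
print([len(batch) for batch in fixed[:3]])    # always exactly 100
print([len(batch) for batch in poisson[:3]])  # varies around 100
```

Under Poisson subsampling each example's membership is independent, which is what makes the standard accounting tractable; with fixed-size shuffled batches, one example's membership constrains everyone else's, which is the source of the analysis gap described above.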

Tool invocation rewriting for zero-shot tool retrieval

Augmenting large language models (LLMs) with external tools, rather than relying solely on their internal knowledge, could unlock their potential to solve more challenging problems. Common approaches for such “tool learning” fall into two categories: (1) supervised methods to generate tool-calling functions, or (2) in-context learning, which uses tool documents that describe the intended…
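
To make category (2) concrete, here is a minimal sketch of how tool documents might be placed into a model's prompt so it can select a tool zero-shot. The tool descriptions, prompt template, and call_llm function are all hypothetical illustrations, not the paper's method.

```python
# Minimal sketch of in-context tool learning: tool documents go into the
# prompt so the model can pick a tool without task-specific fine-tuning.
# The tool docs, template, and call_llm() below are hypothetical.

TOOL_DOCS = {
    "weather_lookup": "weather_lookup(city: str) -> str. Returns the current weather for a city.",
    "unit_convert": "unit_convert(value: float, src: str, dst: str) -> float. Converts between units.",
}

def build_prompt(query: str) -> str:
    docs = "\n".join(f"- {name}: {doc}" for name, doc in TOOL_DOCS.items())
    return (
        "You may call one of the following tools:\n"
        f"{docs}\n\n"
        f"User question: {query}\n"
        'Respond with a single tool call, e.g. weather_lookup("Vancouver").'
    )

def call_llm(prompt: str) -> str:
    # Placeholder for a real model call; returns a canned answer here.
    return 'weather_lookup("Vancouver")'

print(call_llm(build_prompt("What is the weather in Vancouver right now?")))
```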

Google’s research on quantum error correction

Quantum computers have the potential to revolutionize drug discovery, material design and fundamental physics — that is, if we can get them to work reliably. Certain problems, which would take a conventional computer billions of years to solve, would take a quantum computer just hours. However, these new processors are more prone to noise than…
