At Google DeepMind, we value diversity of experience, knowledge, backgrounds
and perspectives and harness these qualities to create extraordinary impact.
We are committed to equal employment opportunity regardless of sex, race,
religion or belief, ethnic or national origin, disability, age, citizenship,
marital, domestic or civil partnership status, sexual orientation, gender
identity, pregnancy, or related condition (including breastfeeding) or any
other basis as protected by applicable law. If you have a disability or
additional need that requires accommodation, please do not hesitate to let us
know.
Our team is responsible for integrating diverse viewpoints and values into
ground-breaking technologies like Large Language Models (VOICES of all in AI).
As a Research Engineer, you will design, implement, and empirically validate
fair, democratic, and inclusive approaches to alignment and harm mitigation,
and integrate successful approaches into our best AI systems.
Our interdisciplinary team drives the responsible development of safe and
equitable AI systems, including identifying potential harms from current and
future AI systems, and conducting (socio)technical research to mitigate them.
Research Engineers spearhead innovative technical approaches, collaborating
closely with internal AI research teams (like Scalable Alignment and Ethics
Research), product teams (including Bard and Gemini), and external research
communities.
Develop and implement technical approaches for integrating diverse
viewpoints and values. Approaches include fair, democratic, and inclusive
algorithms, scalable evaluations and oversight, and more, in coordination
with the team's broader technical agenda.
Identify, investigate and mitigate possible risks and harms of foundation
models, stemming from capabilities, human-AI interaction and systemic
impacts.
Build infrastructure that accelerates research velocity by enabling fast
experimentation on foundation models (text and multimodal), and easy
logging and analysis of experimental results.
Support human data collection and data set creation.
Collaborate with other internal teams to ensure that Google DeepMind AI
systems and products (e.g. Gemini) are informed by and adhere to the
most advanced safety research and protocols.
Help make sure our AI models work well for everyone.
In order to set you up for success as a Research Engineer as part of VOICES at
Google DeepMind, we look for the following skills and experience:
You have at least one year of hands-on experience working with deep
learning and/or foundation models.
You are adept at building codebases that support machine learning at
scale. You are familiar with ML / scientific libraries (e.g. JAX,
TensorFlow, PyTorch, Numpy, Pandas), distributed computation, and large
scale system design.
Your knowledge of statistics and machine learning concepts enables you to
understand research papers in the field.
You are keen to address harms from foundation models, and plan for your
research to impact production systems on a timescale between “immediately”
and “a few years”.
You are excited to work with strong contributors from different fields of
research to make progress towards a shared adventurous goal.
You have experience in and enjoy working as part of interdisciplinary
teams.
You have experience as tech lead on research projects.
You have an ambition to grow and lead a team of Research Engineers.
In addition, the following would be an advantage:
You have technical experience in responsible AI, sociotechnical AI and/or
AI Safety (whether from industry, academia, coursework, or personal
projects).
You have experience in running evaluations and/or collecting human data.
You have an interest in Natural Language Processing and related areas.
You have experience contributing to research papers published at
conferences such as ACL, NeurIPS, ICML, ACM CHI, or FAccT.
Applications close on Monday 12th February 2024.