Research Scientist, Safety

DeepMind
June 02, 2023

Applications close Friday 2nd June at 5pm

At Google DeepMind, we value diversity of experience, knowledge, backgrounds and perspectives and harness these qualities to create extraordinary impact. We are committed to equal employment opportunity regardless of sex, race, religion or belief, ethnic or national origin, disability, age, citizenship, marital, domestic or civil partnership status, sexual orientation, gender identity, pregnancy or related condition (including breastfeeding) or any other basis as protected by applicable law. If you have a disability or additional need that requires accommodation, please do not hesitate to let us know.

About us

Conducting research into any transformative technology comes with the responsibility to build mechanisms for safe and reliable development and deployment at every step. Technical safety research at Google DeepMind investigates questions related to alignment, evaluations, reward design, interpretability, robustness, and generalisation in machine learning systems. Proactive research in these areas is essential to the fulfilment of the long-term goal of Google DeepMind: to build safe and socially beneficial AI systems.

Research on technical AI safety draws on expertise in deep learning, statistics, human feedback, and more. Research Scientists work on the forefront of technical approaches to designing systems that reliably function as intended while discovering and mitigating risks, in close collaboration with other AI research groups within and outside of Google DeepMind.

Snapshot

We're a dedicated scientific community, committed to “solving intelligence” and ensuring our technology is used for widespread public benefit.

We've built a supportive and inclusive environment where collaboration is encouraged and learning is shared freely. We don't set limits based on what others think is possible or impossible. We drive ourselves and inspire each other to push boundaries and achieve ambitious goals.

We constantly iterate on our workplace experience with the goal of ensuring it encourages a balanced life. From excellent office facilities through to extensive manager support, we strive to support our people and their needs as effectively as possible.

Our list of benefits is extensive, and we're happy to discuss this further throughout the interview process!

The role

Key responsibilities:

  • Identify and investigate possible failure modes for current and future AI systems, and proactively develop solutions to address them
  • Conduct empirical or theoretical research into technical safety mechanisms for AI systems in coordination with the team's broader technical agenda
  • Collaborate with research teams externally and internally to ensure that AI capabilities research is informed by and adheres to the most advanced safety research and protocols
  • Report and present research findings and developments to internal and external collaborators with effective written and verbal communication
About you

Minimum qualifications:

  • PhD in a technical field or equivalent practical experience

Preferred qualifications:

  • PhD in machine learning, computer science, statistics, computational neuroscience, mathematics, or physics
  • Relevant research experience in deep learning, machine learning, reinforcement learning, statistics, or computational neuroscience
  • A real passion for AI

Competitive salary applies.
