Research Engineer, Safety

DeepMind
June 02, 2023

Applications close Friday 2nd June at 5pm

At Google DeepMind, we value diversity of experience, knowledge, backgrounds and perspectives, and harness these qualities to create extraordinary impact. We are committed to equal employment opportunity regardless of sex, race, religion or belief, ethnic or national origin, disability, age, citizenship, marital, domestic or civil partnership status, sexual orientation, gender identity, pregnancy, or related condition (including breastfeeding) or any other basis as protected by applicable law. If you have a disability or additional need that requires accommodation, please do not hesitate to let us know.

About us

At Google DeepMind, we've built a unique culture and work environment where long-term ambitious research can flourish. Safety research at Google DeepMind aims to identify potential risks from developing AGI, and develop technical approaches to address them. As a Research Engineer, you will design, implement, and empirically validate algorithms that promote safety.

Conducting research into any transformative technology comes with the responsibility to build mechanisms for safe and reliable development and deployment at every step. Technical safety research at Google DeepMind investigates questions related to alignment, evaluations, reward design, interpretability, robustness, and generalisation in machine learning systems. Proactive research in these areas is essential to the fulfilment of the long-term goal of Google DeepMind Research: to build safe and socially beneficial AI systems.

Research on technical AI safety draws on expertise in deep learning, statistics, human feedback, and more. Research Engineers work at the forefront of technical approaches to designing systems that reliably function as intended while discovering and mitigating risks, in close collaboration with other AI research groups within and outside of Google DeepMind.
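
To make that concrete, here is a minimal, purely illustrative sketch of the kind of prototype this work can involve: fitting a toy reward model from pairwise human-preference labels with a Bradley-Terry loss, written in JAX. All names, shapes and data below are hypothetical and are not DeepMind code.

    # Illustrative toy example only (hypothetical names and data, not
    # DeepMind code): fit a linear reward model from pairwise preference
    # labels using a Bradley-Terry loss and plain gradient descent.
    import jax
    import jax.numpy as jnp

    def reward(params, x):
        # Linear reward model: r(x) = w . x + b
        w, b = params
        return x @ w + b

    def preference_loss(params, x_pref, x_rej):
        # Negative log-likelihood that preferred items outscore rejected ones.
        margin = reward(params, x_pref) - reward(params, x_rej)
        return -jnp.mean(jax.nn.log_sigmoid(margin))

    @jax.jit
    def sgd_step(params, x_pref, x_rej, lr=0.1):
        loss, grads = jax.value_and_grad(preference_loss)(params, x_pref, x_rej)
        params = jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)
        return params, loss

    # Synthetic data: "preferred" feature vectors shifted along a hidden direction.
    key = jax.random.PRNGKey(0)
    k1, k2 = jax.random.split(key)
    dim = 8
    hidden_direction = jax.random.normal(k1, (dim,))
    x_rej = jax.random.normal(k2, (256, dim))
    x_pref = x_rej + 0.5 * hidden_direction

    params = (jnp.zeros(dim), jnp.array(0.0))
    for _ in range(200):
        params, loss = sgd_step(params, x_pref, x_rej)
    print(f"final preference loss: {loss:.4f}")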

The role

Safety Research Engineers at Google DeepMind work directly on a wide range of conceptual, theoretical and empirical research projects, typically in collaboration with Research Scientists. You will apply your engineering and research skills to accelerate research progress through developing prototypes, designing and scaling up algorithms, overcoming technical obstacles, and designing, running, and analysing experiments.

  • Understand and investigate possible failure modes for current and future AI systems
  • Collaborate on projects within the team's broader technical agenda to research technical safety mechanisms that address potential failure modes
  • Collaborate with research teams externally and internally to ensure that AI research is informed by and adheres to the most advanced safety research and protocols
Essential:

  • Bachelor's degree in a technical subject (e.g. machine learning, AI, computer science, mathematics, physics, statistics), or equivalent experience.
  • Ability to write code in at least one programming language, preferably Python or C++.
  • Knowledge of the mathematics, statistics and machine learning concepts needed to understand research papers in the field.
  • Ability to communicate technical ideas effectively, e.g. through discussions, whiteboard sessions and written documentation.

Nice to have:

  • Knowledge of ML/scientific libraries such as TensorFlow, JAX, PyTorch, NumPy and Pandas.
  • Machine learning and research experience in industry, academia or personal projects.
  • Familiarity with distributed scientific computation, whether CPU, GPU, TPU, or heterogeneous (see the short sketch after this list).
  • Experience with large-scale system design.
  • A real passion for AGI safety.

Competitive salary applies.
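
As a hypothetical warm-up (not part of the application), the distributed-computation point can be as simple as running one function across every accelerator JAX can see; jax.pmap is one of several JAX parallelism APIs that do this.

    # Hypothetical warm-up, not part of the application: jax.pmap runs the
    # same function once per available device (CPU appears as one device by
    # default; GPUs/TPUs appear per chip).
    import jax
    import jax.numpy as jnp

    n = jax.device_count()
    # One data shard per device; the leading axis is the device axis.
    xs = jnp.arange(n * 4, dtype=jnp.float32).reshape(n, 4)

    @jax.pmap
    def mean_square(x):
        return jnp.mean(x ** 2)

    print(jax.devices())    # the accelerators JAX can see
    print(mean_square(xs))  # one result per device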

Closing date: 2 June 2023
