Sorry, that job is no longer available. Here are some results that may be similar to the job you were looking for.

28 jobs found

Current search: research scientist science of evaluations
Food Safety Scientist
Ramboll Group A/S
Are you excited about understanding and ensuring the safety of food and food ingredients? Are you motivated by creating a sustainable future through innovative health sciences and regulatory practices? Do you enjoy working in a collaborative and empowering environment? If this sounds like you, or you're curious to learn more, then this role could be the perfect opportunity. Join our Health Sciences practice as our new Food Safety Scientist and work with us to close the gap to a sustainable future.

Your new role
As our new Food Safety Scientist, you will be part of our global Health Sciences practice. You will join an established team of Product Safety and Stewardship specialists who are committed to ensuring the safety and quality of food and food ingredients globally, delivering key knowledge to a range of Ramboll projects. Our successful global track record includes evaluations and stewardship for food, food ingredients, food packaging, toys, cosmetics, chemicals, aerospace manufacturing, pesticides/biocides, pharmaceutical manufacturers, medical devices and electronic device manufacturing, and building materials. Our UK offices are easily accessible by car or public transport, and we will also consider remote/hybrid working. Ramboll operates a flexible working policy, and with this you will be part of an exciting team of experts who respect each other and work towards a common goal.

Your key responsibilities will be:
• Conduct toxicological risk assessments on contaminants, food additives, and other chemical hazards (a simplified worked example follows this list).
• Provide expert advice and support on toxicological matters to internal stakeholders and clients.
• Take responsibility within your project team for, and oversee the input of, technical data summaries and data interpretation into regulatory systems such as the EFSA Portal, and their submission to regulatory agencies or clients.
• Work with other specialists as part of multidisciplinary global project teams to deliver technical work, including preparation of risk assessments, study summaries, waivers, regulatory compliance assessments, new product registrations/notifications, and other regulatory documentation as required.
• Assist with the completeness/quality review of scientific data according to current European Regulations, Directives and Guidelines, and provide general regulatory and scientific consultancy to clients as required.
• Support members of your project team and colleagues with any project issues they encounter and in providing advice to clients.
• Participate in and contribute to food safety audits, inspections, and the development and assessment of good manufacturing practices (GMP).
• Develop and maintain a comprehensive understanding of the current and future regulatory landscape, attending conferences to keep up to date with developments in the relevant laws, guidance and scientific advancements.
• Support business development activities and maintain awareness of current issues and developments in the field, providing industry expertise when needed during client visits and through discussion of, and responses to, new project briefs and administration.
• Liaise with regulatory bodies and represent the organisation in scientific and professional forums.
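To give a feel for the first responsibility, here is a minimal sketch in Python of the kind of exposure-screening arithmetic a risk assessment can involve, comparing an estimated daily intake against an acceptable daily intake (ADI). All figures are hypothetical, and a real assessment would follow the applicable EFSA guidance in full.

    # All figures are hypothetical, for illustration only.
    concentration_mg_per_kg = 0.5   # additive concentration in the food
    consumption_kg_per_day = 0.2    # daily consumption of that food
    body_weight_kg = 70.0           # default adult body weight
    adi_mg_per_kg_bw = 0.01         # hypothetical acceptable daily intake

    # Estimated daily intake (EDI) per kg body weight.
    edi = concentration_mg_per_kg * consumption_kg_per_day / body_weight_kg
    verdict = "within" if edi <= adi_mg_per_kg_bw else "exceeds"
    print(f"EDI = {edi:.4f} mg/kg bw/day ({verdict} the ADI)")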
About you
• Scientific background with a specialisation or degree in toxicology, chemistry, biochemistry, biology or a related discipline (a PhD or Master's degree is desirable).
• Minimum of 5 years of experience in toxicology/food safety and risk assessment.
• Strong knowledge of European (EU and UK) food safety regulations and guidelines.
• Numerically competent: able to complete accurate technical research and data-gathering tasks under supervision and to assist in technical analysis and data review.
• Effective communication skills, both written and verbal, with the ability to present complex information clearly.
• Ability to work in a multidisciplinary team.
• Ability to use software packages, including MS Word, MS Excel and MS Outlook; use of statistical software is an advantage.
• High-level knowledge of various food testing methods and assessment techniques.
• Strong project management skills, with responsibility for technical quality, resources, budget and timelines.
• Fluent in English; proficiency in other European languages is an advantage.

What we can offer you
• Commitment to your development.
• Leaders guided by our Leadership Principles.
• A culture that welcomes you as the unique person you are.
• Inspiration from colleagues, clients, and projects.
• The long-term thinking of a foundation-owned company.
• Flexible work environment.
• 27 days annual leave plus bank holidays.
• Matched pension contributions.
• Private medical cover and life assurance.

Ready to join us? Please submit your application with your up-to-date CV. We invite diversity in all its forms and encourage applicants from all groups to apply. Deadline: 14/07/2025. Thank you for taking the time to apply! We look forward to receiving your application. Do you have any questions? Contact Meera Cush.

Work at the heart of sustainable change with Ramboll in the United Kingdom and Ireland
Ramboll is a global architecture, engineering, and consultancy company. As a foundation-owned people company, founded in Denmark, we believe that the purpose of sustainable change is to create a thriving world for both nature and people. So, that's where we start, and how we work. Our history is rooted in a clear vision of how a responsible company should act, and being open and curious is a cornerstone of our culture. Ramboll in the United Kingdom and Ireland has a proven track record of sustainable and responsible business and is a top-ten engineering, environmental and sustainability consultancy in the UK, with more than 1,500 employees across 16 offices. Ramboll experts deliver innovative solutions across Buildings, Transport, Environment & Health, and Energy. In 2024, Ramboll was included in the Sunday Times' list of Best Places to Work.

Equality, Diversity, and Inclusion
Equality, diversity, and inclusion are at the heart of what we do. At Ramboll, we believe that diversity is a strength and that different experiences and perspectives are essential to creating truly sustainable societies. We are committed to providing an inclusive and supportive work environment where everyone is able to flourish and reach their potential. We also know how important it is to achieve the right balance of where, when, and how much you work. As a company, Ramboll recognises the importance of having a good work/life balance, both in terms of individual well-being and its positive impact on the engagement and retention of our employees.
We aim to support all employees to achieve a work/life balance which enables them to work in a supported manner while having the time to achieve personal aspects of their life outside of work. We invite applications from candidates of all backgrounds and characteristics. As a Disability Confident Committed employer, Ramboll ensures opportunities are accessible to candidates with disabilities. Please let us know if there are any changes we could make to the application process to make it more comfortable for you, and contact us with such requests.

Video: "Let's close the gap" (talent video, September 2024, 0:33)

Ramboll in numbers: 300 offices across 35 countries, working in 6 markets: Buildings, Transport, Energy, Environment & Health, Water and Management Consulting.
Jun 19, 2025
Full time
Research Scientist/Research Engineer- Safeguards
AI Safety Institute
Role Description
The AI Safety Institute research unit is looking for exceptionally motivated and talented people to join its Safeguard Analysis Team. Interventions that secure a system from abuse by bad actors will grow in importance as AI systems become more advanced and integrated into society. The AI Safety Institute's Safeguard Analysis Team researches such interventions, which it refers to as 'safeguards', evaluating protections used to secure current frontier AI systems and considering what measures could and should be used to secure such systems in the future.

The Safeguard Analysis Team takes a broad view of security threats and interventions. It is keen to hire researchers with expertise in developing and analysing attacks and protections for systems based on large language models, but is also keen to hire security researchers who have historically worked outside of AI, such as (non-exhaustively) in computer security, information security, web technology policy, and hardware security. Diverse perspectives and research interests are welcomed.

The Team seeks people with skillsets leaning in the direction of either or both of Research Scientist and Research Engineer, recognising that some technical staff may prefer work that spans or alternates between engineering and research responsibilities. The Team's priorities include research-oriented responsibilities, like assessing the threats to frontier systems and developing novel attacks, and engineering-oriented ones, such as building infrastructure for running evaluations.

In this role, you'll receive mentorship and coaching from your manager and the technical leads on your team. You'll also regularly interact with world-famous researchers and other incredible staff, including alumni from Anthropic, DeepMind and OpenAI, and ML professors from Oxford and Cambridge. In addition to Junior roles, Senior, Staff and Principal RE positions are available for candidates with the required seniority and experience.

Person Specification
You may be a good fit if you have some of the following skills, experience and attitudes:
• Experience working on machine learning, AI, AI security, computer security, information security, or some other security discipline in industry, in academia, or independently.
• Experience working with a world-class research team comprising both scientists and engineers (e.g. in a top-3 lab).
• Red-teaming experience against any sort of system.
• Strong written and verbal communication skills.
• Comprehensive understanding of large language models (e.g. GPT-4), including both a broad understanding of the literature and hands-on experience with things like pre-training or fine-tuning LLMs.
• Extensive Python experience, including understanding of the intricacies of the language, good vs. bad Pythonic ways of doing things, and much of the wider ecosystem/tooling.
• Ability to work in a self-directed way with high agency, thriving in a constantly changing environment and a steadily growing team, while figuring out the best and most efficient ways to solve a particular problem.
• Bringing your own voice and experience, together with an eagerness to support your colleagues and a willingness to do whatever is necessary for the team's success, finding new ways of getting things done.
• A sense of mission, urgency, and responsibility for success, demonstrating problem-solving abilities and preparedness to acquire any missing knowledge necessary to get the job done.
• Writing production-quality code.
• Improving technical standards across a team through mentoring and feedback.
• Designing, shipping, and maintaining complex tech products.

Salary & Benefits
We are hiring individuals at all ranges of seniority and experience within the research unit, and this advert allows you to apply for any of the roles within this range. We will discuss and calibrate with you as part of the process. The full range of salaries available is as follows:
• L3: £65,000 - £75,000
• L4: £85,000 - £95,000
• L5: £105,000 - £115,000
• L6: £125,000 - £135,000
• L7: £145,000
A range of pension options is available, which can be found through the Civil Service website.

Selection Process
In accordance with the Civil Service Commission rules, the following list contains all selection criteria for the interview process.

Required Experience
This job advert encompasses a range of possible research and engineering roles within the Safeguard Analysis Team. The 'required' experiences listed below should be interpreted as examples of the expertise we're looking for, as opposed to a list of everything we expect to find in one applicant:
• Writing production-quality code
• Writing code efficiently
• Python
• Frontier model architecture knowledge
• Frontier model training knowledge
• Model evaluations knowledge
• AI safety research knowledge
• Security research knowledge
• Research problem selection
• Research science
• Written communication
• Verbal communication
• Teamwork
• Interpersonal skills
• Tackling challenging problems
• Learning through coaching
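As a purely illustrative sketch of the evaluation infrastructure this advert mentions, the Python below runs a set of probe prompts against a model and measures its refusal rate. The probe prompts, refusal markers, and stub model are hypothetical stand-ins; a real harness would call a deployed model's API and use far more robust refusal detection than substring matching.

    from typing import Callable

    # Hypothetical probe prompts; a real red-teaming set would be carefully curated.
    PROBES = ["probe prompt A", "probe prompt B", "probe prompt C"]
    REFUSAL_MARKERS = ("i can't", "i cannot", "i won't")

    def refusal_rate(model: Callable[[str], str], probes: list[str]) -> float:
        """Fraction of probes the model refuses, using naive marker matching."""
        refusals = 0
        for prompt in probes:
            reply = model(prompt).lower()
            if any(marker in reply for marker in REFUSAL_MARKERS):
                refusals += 1
        return refusals / len(probes)

    # Stub standing in for a real model API call.
    def stub_model(prompt: str) -> str:
        return "I can't help with that request."

    print(f"refusal rate: {refusal_rate(stub_model, PROBES):.2f}")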
Jun 19, 2025
Full time
Senior Consultant
Clarivate Analytics
We are looking for a Senior Consultant to join our Academic and Government Consulting team within the Research & Analytics business unit. With a focus on Europe, in this role you will work with colleagues to cultivate strong relationships with our clients and help universities, research institutes, and governments to develop and implement their strategic plans and approaches in research and education. Working in collaboration with Product, Sales, Marketing, the Institute for Scientific Information (ISI), and other Consulting practice areas, this role is responsible for growing market share in our consulting offerings. If you are a forward thinker and have a strong understanding of research tools and delivering innovative solutions to clients, we would love to speak with you.

About You - experience, education, skills, and accomplishments
• Master's degree in a science discipline.
• At least 5 years of experience in consulting in the context of government and academic clients.
• Experience working with government agencies or research funding agencies in evaluating or managing scientific portfolios.
• Familiarity with how science is performed, funded, and managed.
• Previous experience with data analysis, including the use of SQL, R, or Python.
• Previous project and team management experience within a professional services environment.

It would be great if you also had:
• An MBA or PhD in a science discipline.
• Previous experience in academia.
• Experience with data analysis including the use of SQL, R, or Python.
• Knowledge of science citation data such as Web of Science, and of bibliometric and scientometric methods (a toy example appears after this advert).
• Prior experience working with Clarivate research tools.

What will you be doing in this role?
The position offers challenging hypothesis-driven projects for professionals with a scientific background and leadership experience around data-driven science management. The Senior Consultant will have responsibilities that include, but are not limited to:
• Driving business development/new business by leveraging your existing network as well as Clarivate's network, engaging with C-suite profiles at academic, government and corporate organisations.
• Consulting with clients to define their needs and ensuring client satisfaction through delivery of high-quality solutions completed within the agreed-upon timeframes.
• Identifying appropriate methodologies to answer client questions, and interpreting results.
• Leading the design and implementation of projects as a senior member of a multidisciplinary team of analysts.
• Working independently and as part of a team to design, conduct, and manage innovative quantitative analyses for a wide range of business cases, including program evaluations, science management and research assessment.
• Developing and adhering to project plans, schedules, and milestones, while adhering to standard processes and methodologies.
• Collaborating with a diverse group of subject matter experts, executives, and managers.
• Driving projects to completion within a designated budget and timeframe.
• Spending about 40% of your time on business development, with an annual target, and 60% on project delivery (e.g. client engagement, quality assurance, final reporting, and collaborating with project team members).
• Communicating complex research findings and innovative methodologies clearly and effectively through written research reports, data visualizations, client or conference presentations and peer-reviewed journal publications.
• Serving as a point of contact to answer client questions and resolve project issues.

About the Team
Clarivate's Academic and Government Consulting group is a diverse group of practitioners working across the globe and delivering consulting revenue globally. The group is made up of three practice areas: Research Analytics, Reputation, and Evaluation & Assessment. Across these practice areas there is a wide range of skill sets (e.g. data scientists, bibliometricians, visualisation specialists, and analysts), along with a variety of capabilities and services we provide to clients and different commercial and go-to-market models.

Hours of Work
This is a full-time, permanent position based in Spain or the UK and will require hybrid working in either our Barcelona or London offices (2 days per week in office, rest of the week remote). This position requires weekday (Monday - Friday) attendance with some scheduling flexibility available around core working hours. This position may require up to 30% travel.

At Clarivate, we are committed to providing equal employment opportunities for all persons with respect to hiring, compensation, promotion, training, and other terms, conditions, and privileges of employment. We comply with applicable laws and regulations governing non-discrimination in all locations.
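As the toy bibliometrics example referenced above: the snippet below computes an h-index, one of the simplest scientometric indicators, from a list of citation counts. The counts are invented for illustration; real analyses would draw on citation data such as Web of Science.

    def h_index(citations: list[int]) -> int:
        """Largest h such that h papers each have at least h citations."""
        ranked = sorted(citations, reverse=True)
        h = 0
        for rank, cites in enumerate(ranked, start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    print(h_index([25, 8, 5, 3, 3, 1]))  # prints 3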
Jun 16, 2025
Full time
Research Scientist - Science of Evaluations
AI Safety Institute
The Science of Evaluations Team
AISI's Science of Evaluations team will conduct applied and foundational research focused on two areas at the core of our mission: (i) measuring existing frontier AI system capabilities and (ii) predicting the capabilities of a system before running an evaluation.

Measurement of Capabilities: The goal is to develop and apply rigorous scientific techniques for the measurement of frontier AI system capabilities, so they are accurate, robust, and useful in decision making. This is a nascent area of research which supports one of AISI's core products: conducting tests of frontier AI systems and feeding back results, insights, and recommendations to model developers and policy makers. The team will be an independent voice on the quality of our testing reports and the limitations of our evaluations. You will collaborate closely with researchers and engineers from the workstreams who develop and run our evaluations, getting into the details of their key strengths and weaknesses, proposing improvements, and developing techniques to get the most out of our results. The key challenge is increasing the confidence in our claims about system capabilities, based on solid evidence and analysis. Directions we are exploring include:
• Running internal red teaming of testing exercises and adversarial collaborations with the evaluations teams, and developing "sanity checks" to ensure the claims made in our reports are as strong as possible. Example checks could include: performance as a function of context length, auditing areas with surprising model performance, checking for soft-refusal performance issues, and efficient comparisons of system performance between pre-deployment and post-deployment testing.
• Running in-depth analyses of evaluation results to understand successes and failures, and using these insights to create best practices for testing exercises.
• Developing our approach to uncertainty quantification and significance testing, increasing statistical power (given time and token constraints).
• Developing methods for inferring model capabilities across given domains from task or benchmark success rates, and assigning confidence levels to claims about capabilities (a small statistical sketch follows this section).

Predictive Evaluations: The goal is to develop approaches to estimate the capabilities of frontier AI systems on tasks or benchmarks before they are run. Ideally, we would be able to do this at some point early in the training process of a new model, using information about the architecture, dataset, or training compute. This research aims to provide us with advance warning of models reaching a particular level of capability, where additional safety mitigations may need to be put in place. This work is complementary to both safety cases (an AISI foundational research effort) and AISI's general evaluations work. This topic is currently an area of active research, and we believe it is poised to develop rapidly. We are particularly interested in developing predictive evaluations for complex, long-horizon agent tasks, since we believe this will be the most important type of evaluation as AI capabilities advance. You will help develop this field of research, both by direct technical work and via collaborations with external experts, partner organizations, and policy makers.

Across both focus areas, there will be significant scope to contribute to the overall vision and strategy of the science of evaluations team as an early hire.
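To make the uncertainty-quantification direction concrete, here is a minimal sketch in plain Python of attaching a confidence interval to a benchmark success rate. The Wilson score interval used below is one standard choice, and the task counts are hypothetical.

    import math

    def wilson_interval(successes: int, trials: int, z: float = 1.96) -> tuple[float, float]:
        """Wilson score interval for a binomial success rate (z = 1.96 for 95%)."""
        p_hat = successes / trials
        denom = 1 + z**2 / trials
        centre = (p_hat + z**2 / (2 * trials)) / denom
        half = z * math.sqrt(p_hat * (1 - p_hat) / trials + z**2 / (4 * trials**2)) / denom
        return centre - half, centre + half

    # Hypothetical agent evaluation: 37 of 50 tasks solved.
    lo, hi = wilson_interval(37, 50)
    print(f"pass rate 0.74, 95% CI [{lo:.2f}, {hi:.2f}]")

With only 50 tasks the interval spans roughly 0.60 to 0.84, which illustrates why squeezing more statistical power out of a limited task and token budget matters for confident capability claims.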
You'll receive coaching from your manager and mentorship from the research directors at AISI (including Geoffrey Irving and Yarin Gal), and work closely with talented Policy/Strategy leads, Research Engineers and Research Scientists.

Responsibilities
This role offers the opportunity to progress deep technical work at the frontier of AI safety and governance. Your work will include:
• Running internal red teaming of testing exercises and adversarial collaborations with the evaluations teams, and developing "sanity checks" to ensure the claims made in our reports are as strong as possible.
• Conducting in-depth analysis of evaluations methodology and results, diagnosing possible sources of uncertainty or bias, to improve our confidence in estimates of AI system capabilities.
• Improving the statistical analysis of evaluation results (e.g. model selection, hypothesis testing, significance testing, uncertainty quantification).
• Developing and implementing internal best practices and protocols for evaluations and testing exercises.
• Staying well informed of the details, strengths and weaknesses of evaluations across domains in AISI, and of the state of the art in frontier AI evaluations research more broadly.
• Conducting research on predictive evaluations, applying the latest techniques from the published literature to AISI's internal evaluations, as well as conducting novel research to improve these techniques.
• Writing and editing scientific reports and other materials aimed at diverse audiences, focusing on synthesizing empirical results and recommendations to key decision-makers, ensuring high standards in clarity, precision, and style.

Person Specification
To set you up for success, we are looking for some of the following skills, experience and attitudes, but we are flexible in shaping the role to your background and expertise:
• Experience working within a world-leading team in machine learning or a related field (e.g. multiple first-author publications at top-tier conferences).
• Strong track record of academic excellence (e.g. a PhD in a technical field and/or spotlight papers at top-tier conferences).
• Comprehensive understanding of large language models (e.g. GPT-4), including both a broad understanding of the literature and hands-on experience with designing and running evaluations, scaling laws, fine-tuning, scaffolding, and prompting.
• Broad experience in empirical research methodologies and statistical analysis, potentially in fields outside of machine learning (T-shaped: some deep knowledge and lots of shallow knowledge in, e.g., experimental design, A/B testing, Bayesian inference, model selection, hypothesis testing, significance testing).
• Deep care for methodological and statistical rigor, balanced with pragmatism, and willingness to get into the weeds.
• Experience with data visualization and presentation.
• Proven track record of excellent scientific writing and communication, with the ability to understand and communicate complex technical concepts for non-technical stakeholders and to synthesize scientific results into compelling narratives.
• Motivation to conduct technical research with an emphasis on direct policy impact rather than exploring novel ideas.
• A sense of mission, urgency, and responsibility for success, demonstrating problem-solving abilities and preparedness to acquire any missing knowledge necessary to get the job done.
• Ability to work autonomously and in a self-directed way with high agency, thriving in a constantly changing environment and a steadily growing team.
• Bringing your own voice and experience, together with an eagerness to support your colleagues and a willingness to do whatever is necessary for the team's success.

Salary & Benefits
We are hiring individuals at all ranges of seniority and experience within the research unit, and this advert allows you to apply for any of the roles within this range. We will discuss and calibrate with you as part of the process. The full range of salaries available is as follows:
• L4: £85,000 - £95,000
• L5: £105,000 - £115,000
• L6: £125,000 - £135,000
• L7: £145,000

The Department for Science, Innovation and Technology offers a competitive mix of benefits including:
• A culture of flexible working, such as job sharing, homeworking and compressed hours.
• Automatic enrolment into the Civil Service Pension Scheme, with an average employer contribution of 27%.
• A minimum of 25 days of paid annual leave, increasing by 1 day per year up to a maximum of 30.
• An extensive range of learning and professional development opportunities, which all staff are actively encouraged to pursue.
• Access to a range of retail, travel and lifestyle employee discounts.
The Department operates a discretionary hybrid working policy, which provides for a combination of working hours from your place of work and from your home in the UK. The current expectation for staff is to attend the office or non-home-based location for 40-60% of the time over the accounting period.

Selection Process
In accordance with the Civil Service Commission rules, the following list contains all selection criteria for the interview process. The interview process may vary from candidate to candidate; however, you should expect a typical process to include some technical proficiency tests, discussions with a cross-section of our team at AISI (including non-technical staff), and conversations with your workstream lead. The process will culminate in a conversation with members of the senior team here at AISI. Candidates should expect to go through some or all of the following stages once an application has been submitted:
• Initial interview
• Technical take-home test
• Second interview and review of take-home test
• Third interview
• Final interview with members of the senior team

Required Experience
We select based on skills and experience regarding the following areas (click apply for full job details).
Jun 10, 2025
Full time
Lead Research Scientist
Faculty
About Faculty
At Faculty, we transform organisational performance through safe, impactful and human-centric AI. With a decade of experience, we provide over 300 global customers with software, bespoke AI consultancy, and Fellows from our award-winning Fellowship programme. Our expert team brings together leaders from across government, academia and global tech giants to solve the biggest challenges in applied AI. Should you join us, you'll have the chance to work with, and learn from, some of the brilliant minds who are bringing Frontier AI to the frontlines of the world.

About the role
As a Lead Research Scientist at Faculty, you will lead scientific research, and other researchers, in AI safety, progressing scientific understanding. You will contribute both to external publications and to Faculty's commercial ambition to build safe AI systems. This is a great opportunity to join a small, high-agency team of machine learning researchers and practitioners applying data science and machine learning to business problems in the real world.

What you'll be doing
Your role will evolve alongside business needs, but you can expect your key responsibilities to include:
• Research Leadership: Lead the AI safety team's research agenda, setting priorities and ensuring alignment with Faculty's long-term goals. Conduct and oversee the development of cutting-edge AI safety research, with a focus on large language models and other safety-critical AI systems. Publish high-impact research in leading conferences and journals (e.g. NeurIPS, ACL, ICML, ICLR, AAAI). Support Faculty's positioning as a leader in AI safety through thought leadership and stakeholder engagement.
• Research Agenda Development: Shape our research agenda by identifying impactful research opportunities and balancing scientific and practical priorities. Interface with the wider business to ensure alignment between the R&D team's research efforts and the company's long-term goals, with a specific focus on AI safety and commercial projects in the space.
• Team Management and Mentorship: Build and lead a growing team of researchers, fostering a collaborative and innovative culture across a wide range of AI-safety-relevant research topics. Provide mentorship and technical guidance to researchers across diverse AI safety topics.
• Technical Contributions: Lead hands-on contributions to technical research. Collaborate on delivery of evaluations and red-teaming projects in high-risk domains, such as CBRN and cybersecurity, with a focus on government and commercial partners.

Who we are looking for
• A proven track record of high-impact AI research, evidenced by top-tier academic publications (ideally in top machine learning or NLP conferences: ACL, NeurIPS, ICML, ICLR, AAAI) or equivalent experience (e.g. within model providers' labs).
• Deep domain knowledge in language models and AI safety, with the ability to contribute well-informed views about the differential value, and tractability, of different parts of the traditional AI safety research agenda, or other areas of machine learning (e.g. explainability).
• Practical experience of machine learning, with a focus on areas such as robustness, explainability, or uncertainty estimation (see the sketch after this advert).
• Advanced programming and mathematical skills with Python, and experience with the standard Python data science stack (NumPy, pandas, scikit-learn, etc.).
• The ability to conduct and oversee complex technical research projects.
A passion for leading and developing technical teams, adopting a caring attitude towards the personal and professional development of others. Excellent verbal and written communication skills. The following would be bonuses, but are by no means required: commercial experience applying AI safety principles in practical or high-stakes contexts; a background in red-teaming, evaluations, or safety testing for government or industry applications. We welcome applicants who have academic research experience in a STEM or related subject. A PhD is great, but certainly not necessary. What we can offer you: The Faculty team is diverse and distinctive, and we all come from different personal, professional and organisational backgrounds. We all have one thing in common: we are driven by a deep intellectual curiosity that powers us forward each day. Faculty is the professional challenge of a lifetime. You'll be surrounded by an impressive group of brilliant minds working to achieve our collective goals. Our consultants, product developers, business development specialists, operations professionals and more all bring something unique to Faculty, and you'll learn something new from everyone you meet.
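For illustration only: below is a minimal sketch of what an automated red-teaming harness can look like at its simplest. This is not Faculty's methodology; `target_model` and `is_unsafe` are hypothetical stand-ins for a real model endpoint and a real safety classifier.

```python
# Illustrative only: a minimal automated red-teaming harness.
# `target_model` and `is_unsafe` are hypothetical stand-ins.
from dataclasses import dataclass


@dataclass
class RedTeamResult:
    prompt: str
    response: str
    flagged: bool


def target_model(prompt: str) -> str:
    # Stand-in for a call to the model under test.
    return f"[model response to: {prompt}]"


def is_unsafe(response: str) -> bool:
    # Stand-in for a harm classifier; here, a trivial keyword check.
    return any(term in response.lower() for term in ("exploit", "synthesis route"))


def run_red_team(attack_prompts: list[str]) -> list[RedTeamResult]:
    # Run every attack prompt against the target and record flagged outputs.
    results = []
    for prompt in attack_prompts:
        response = target_model(prompt)
        results.append(RedTeamResult(prompt, response, is_unsafe(response)))
    return results


for result in run_red_team(["benign question", "jailbreak attempt"]):
    print(f"flagged={result.flagged}: {result.prompt!r}")
```

In a real project the classifier and attack-prompt generation would themselves typically be model-driven; the loop structure above is the part that generalises.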
Jun 10, 2025
Full time
Research Scientist/Research Engineer - Societal Impacts
AI Safety Institute
Role Description The AI Safety Institute research unit is looking for exceptionally motivated and talented Research Scientists and Research Engineers to work in the Societal Impacts team. Societal Impacts Societal Impacts is a multidisciplinary team that studies how advanced AI models can impact people and society. Core research topics include the use of AI for assisting with criminal activities, undermining trust in information, jeopardising psychological wellbeing, or for malicious social engineering. We are interested in both immediate and medium-term risks. In this role, you'll join a strongly collaborative technical research team led by the Societal Impacts Research Director, Professor Christopher Summerfield. You will receive mentorship, training, and opportunities for development. You'll also regularly interact with our highly talented and experienced staff across the Institute (including alumni from Anthropic, DeepMind, OpenAI and ML professors from Oxford and Cambridge), as well as other partners across government. In addition to Junior roles, Senior, Staff and Principal RE positions are available for candidates with the required seniority and experience. Person Specification Successful candidates will work with other researchers to design and run studies that answer important questions about the effect AI will have on society. For example, can AI effectively change people's political and social views? Research Scientists/Engineers have scope to use a range of research methodologies and to drive the strategy of the team. This is a multidisciplinary team and we look for people with a diversity of backgrounds. We are especially excited about candidates with experience of research in one or more of these areas: computational social science; machine learning (research engineer/research scientist); data science, especially natural language processing; advanced statistical modelling and experimental design. Required Skills and Experience We select based on skills and experience in the following areas: writing production-quality code; writing code efficiently, especially using Python; a demonstrable interest in the societal impacts of AI; experimental design; demonstrable experience running research experiments involving AI models and/or human participants; strong quantitative skills; data analytics; data science methods; frontier model architecture knowledge; frontier model training knowledge; model evaluations knowledge; AI safety research knowledge; research problem selection; research science; verbal communication; teamwork; interpersonal skills. Desired Skills and Experience Written communication; published work related to cognitive, social, or political science; a specialisation in a particular field of social or political science, economics, cognitive science, criminology, security studies, AI safety, or another relevant field; front-end software engineering skills to build UIs for studies with human participants. Salary & Benefits We are hiring individuals at all ranges of seniority and experience within the research unit, and this advert allows you to apply for any of the roles within this range. We will discuss and calibrate with you as part of the process. The full range of salaries available is as follows: L3: £65,000 - £75,000; L4: £85,000 - £95,000; L5: £105,000 - £115,000; L6: £125,000 - £135,000; L7: £145,000. There are a range of pension options available, which can be found through the Civil Service website.
Selection Process In accordance with the Civil Service Commission rules, the following list contains all selection criteria for the interview process.
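As a purely illustrative example of the kind of study the team runs (not AISI's actual design), the sketch below analyses a two-arm experiment asking whether AI-generated arguments shift attitudes more than a control text; the data are synthetic.

```python
# Synthetic, illustrative analysis of a two-arm persuasion experiment:
# does reading AI-generated arguments shift attitudes more than a
# control text? The data below are simulated, not real study results.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 200  # participants per arm, randomly assigned

# Pre-to-post attitude shift on a 0-100 scale for each arm.
control_shift = rng.normal(loc=0.5, scale=5.0, size=n)    # neutral text
treatment_shift = rng.normal(loc=2.0, scale=5.0, size=n)  # AI-written arguments

t_stat, p_value = stats.ttest_ind(treatment_shift, control_shift)
effect = treatment_shift.mean() - control_shift.mean()
print(f"difference in mean shift: {effect:.2f} points, p = {p_value:.4f}")
```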
Jun 04, 2025
Full time
Senior GenAI Engineer
Xcede Recruitment Solutions
AI Engineer Up to £95k + Bonus Location: London - 3 days a week in office My client is a leading financial services company that is growing its GenAI team. They are looking for an AI Engineer who will be an individual technical contributor helping to build their NLP projects focused on large language models. This role covers the full life cycle, from proof of concept through fine-tuning and deployment. Responsibilities: Identify where GenAI can be used to support the business and deliver measurable advantages. Collaborate with the Data Science and AI team to implement scalable AI applications. Improve and optimize the infrastructure by enhancing MLOps capabilities, CI/CD pipelines, user interfaces, and AI Python libraries. Develop effective prompts for AI models while fine-tuning them. Conduct thorough evaluations of third-party GenAI and LLM technologies. Take ownership of projects from the initial research and development stages all the way through production deployment. Write production-ready code. What You Need to Succeed: Minimum of 3 years working as a Data Scientist or Machine Learning Engineer on machine learning projects. At least 2 years' experience using LLMs. Strong background in Python. Exposure to MLOps, including CI/CD, Docker, and Kubernetes. Familiarity with software best practices. Experience working with non-technical stakeholders. Confident using AWS. Bonus if you have used Kafka, Databricks, and RAG. Unfortunately, sponsorship is not provided for this role. If you are interested, please apply here or reach out to .
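Since the role names RAG among its bonus skills, here is a minimal, purely illustrative retrieval-augmented generation sketch; `embed` and `generate` are hypothetical stand-ins for real embedding and completion endpoints, not the client's stack.

```python
# Illustrative RAG sketch. `embed` and `generate` are hypothetical stubs.
import numpy as np


def embed(text: str) -> np.ndarray:
    # Stand-in embedder: a pseudo-random unit vector derived from the text
    # (stable within one process). A real system would call an embedding model.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=128)
    return v / np.linalg.norm(v)


def generate(prompt: str) -> str:
    # Stand-in for an LLM completion call.
    return f"[answer grounded in: {prompt[:60]}...]"


def answer(question: str, documents: list[str], k: int = 2) -> str:
    # Retrieve the k most similar documents, then prompt the model with them.
    doc_vecs = np.stack([embed(d) for d in documents])
    scores = doc_vecs @ embed(question)  # cosine similarity of unit vectors
    top = [documents[i] for i in np.argsort(scores)[-k:]]
    return generate("Context:\n" + "\n".join(top) + f"\n\nQuestion: {question}")


print(answer("What drives churn?", ["Churn report...", "Pricing FAQ...", "Onboarding guide..."]))
```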
Feb 17, 2025
Full time
Solution Architect
Centrica Windsor, Berkshire
About your role: We're looking for a Software Architect to lead the design and implementation of high-level software architectures, collaborating with cross-functional teams (including machine learning and data scientists and external product teams) to deliver innovative solutions aligned with business objectives. This role exists to ensure the development of scalable, efficient, and integrated software systems while leading the team to uphold best practices and achieve project goals. We're seeking an experienced and adaptable Software Architect with strong leadership abilities and expertise in collaborative, cloud-based software development, ideally in a corporate research environment. The team is currently based primarily in Antwerp, Belgium, but we are building a Windsor, Berkshire-centred team to match. This is a hybrid working position, based in Windsor, Berkshire (2 days on-site), and it will be useful to be able to travel between the locations. Key responsibilities will include: Solution Architecture and Design Design and oversee high-level software architectures aligned with business objectives. Apply data engineering principles to design efficient data pipelines and storage. Integrate solutions seamlessly with machine learning models and data science workflows. Technical Leadership and Collaboration Collaborate closely with research engineers, data scientists, and software and DevOps engineers to ensure the quality of solutions. Promote best practices within the team, including rigorous testing, code review, continuous integration/continuous deployment (CI/CD) techniques, and well-maintained documentation. Introduce innovative solutions using emerging technologies. Team Leadership and Development Lead and mentor a team of developers, fostering excellence. Enhance team skills in coding practices and technical competencies. Conduct performance evaluations and support career growth. Here's what we're looking for: Professional experience A bachelor's degree in Computer Science, Software Engineering, or a related field is required; a master's degree or relevant certifications (such as AWS Certified Solutions Architect) are highly desirable. Professional experience in software development, including significant experience in software architecture and team leadership. Proven ability to design high-level software architectures aligned with business goals. Technical knowledge Understanding of data architecture concepts and best practices to support our machine learning and data science activities. Experience in architecting and managing scalable cloud-native solutions, particularly on AWS. Advanced knowledge of Python programming and relevant frameworks. Bonus: working knowledge of other languages, especially JVM-based languages, Go, or Rust. Bonus: familiarity with some of our other key applications, e.g. web or mobile front-end design, data persistence, IoT devices. Leadership and Software Management Experience with testing, code review practices, code deployment, and infrastructure management. Proven ability to lead and mentor development teams effectively. Experience conducting performance evaluations and supporting career growth. Skilled in fostering a collaborative and high-performance team environment. Communication and Collaboration Excellent verbal and written communication skills. Ability to collaborate closely with cross-functional teams and stakeholders. Skilled in conveying complex technical concepts to non-technical audiences.
Feb 17, 2025
Full time
Senior Research Scientist
Faculty
About Faculty At Faculty, we transform organisational performance through safe, impactful and human-centric AI. With a decade of experience, we provide over 300 global customers with software, bespoke AI consultancy, and Fellows from our award-winning Fellowship programme. Our expert team brings together leaders from across government, academia and global tech giants to solve the biggest challenges in applied AI. Should you join us, you'll have the chance to work with, and learn from, some of the brilliant minds who are bringing Frontier AI to the frontlines of the world. About the role As a Senior Research Scientist at Faculty, you will contribute to Faculty's success by performing novel scientific research in AI safety that progresses scientific understanding through external publications and supports Faculty's commercial ambition to build safe AI systems. This is a great opportunity to join a small, high-agency team of machine learning researchers and practitioners applying data science and machine learning to business problems in the real world. What you'll be doing Your role will evolve alongside business needs, but you can expect your key responsibilities to include: Research Leadership: Own and drive forward themes and areas of interest, demonstrated by high-impact AI research. Contribute to the wider vision for Faculty's research effort in AI safety through team contributions to the research agenda. Support Faculty's positioning as a leader in AI safety through thought leadership and stakeholder engagement. Research Agenda Development: Shape our research agenda by identifying impactful research opportunities and balancing scientific and practical priorities. Interface with the wider business to ensure alignment between the R&D team's research efforts and the company's long-term goals, with a specific focus on AI safety and commercial projects in the space. Team Management and Mentorship: Guide and mentor junior researchers in the team, fostering a collaborative and innovative culture across a wide range of AI safety-relevant research topics. Technical Contributions: Lead technical research within the AI safety space; some examples of AI safety research we've worked on include developing novel attacks against models, fine-tuning models for additional safety (e.g. knowledge unlearning), increasing the self-consistency of language models, and uncertainty estimation. Support the delivery of evaluations and red-teaming projects in high-risk domains, such as CBRN and cybersecurity, with a focus on government and commercial partners. Who we are looking for A track record of high-impact AI research, evidenced by top-tier academic publications (ideally in top machine learning or NLP conferences such as ACL, NeurIPS, ICML, ICLR, or AAAI) or equivalent experience (e.g. within model providers' labs). Deep domain knowledge in language models and AI safety, with the ability to contribute well-informed views about the differential value, and tractability, of different parts of the traditional AI safety research agenda, or other areas of machine learning (e.g. explainability). Practical experience of machine learning, with a focus on areas such as robustness, explainability, or uncertainty estimation. Advanced programming and mathematical skills with Python and experience with the standard Python data science stack (NumPy, pandas, scikit-learn, etc.). The ability to conduct and oversee complex technical research projects.
A passion for leading and developing technical teams, adopting a caring attitude towards the personal and professional development of others. Excellent verbal and written communication skills. The following would be bonuses, but are by no means required: commercial experience applying AI safety principles in practical or high-stakes contexts; a background in red-teaming, evaluations, or safety testing for government or industry applications. We welcome applicants who have academic research experience in a STEM or related subject. A PhD is great, but certainly not necessary. What we can offer you: The Faculty team is diverse and distinctive, and we all come from different personal, professional and organisational backgrounds. We all have one thing in common: we are driven by a deep intellectual curiosity that powers us forward each day. Faculty is the professional challenge of a lifetime. You'll be surrounded by an impressive group of brilliant minds working to achieve our collective goals. Our consultants, product developers, business development specialists, operations professionals and more all bring something unique to Faculty, and you'll learn something new from everyone you meet.
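One of the research directions named above, self-consistency as an uncertainty signal, can be sketched very simply: sample the model several times and treat answer agreement as a confidence proxy. The sketch below is illustrative only; `sample_model` is a hypothetical stub for a stochastic (temperature > 0) LLM call.

```python
# Illustrative sketch of self-consistency as an uncertainty proxy.
# `sample_model` is a hypothetical stand-in for a stochastic model call.
import random
from collections import Counter


def sample_model(question: str) -> str:
    # Stand-in: a real implementation would sample an LLM with temperature > 0.
    return random.choice(["Paris", "Paris", "Paris", "Lyon"])


def self_consistency(question: str, n_samples: int = 20) -> tuple[str, float]:
    # Majority-vote answer plus its agreement rate across samples.
    counts = Counter(sample_model(question) for _ in range(n_samples))
    best, count = counts.most_common(1)[0]
    return best, count / n_samples


answer, agreement = self_consistency("What is the capital of France?")
print(f"{answer} (agreement: {agreement:.0%})")
```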
Feb 13, 2025
Full time
Research Scientist, Frontier Red Team (Autonomy)
Menlo Ventures
About Anthropic Anthropic's mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems. We are looking for Research Scientists to develop and productionize advanced autonomy evaluations on our Frontier Red Team. Our goal is to develop and implement a gold standard of advanced autonomy evals to determine the AI Safety Level (ASL) of our models. This will have major implications for the way we train, deploy, and secure our models, as detailed in our Responsible Scaling Policy (RSP). We believe that developing autonomy evals is one of the best ways to study increasingly capable and agentic models. If you've thought particularly hard about how models might be agentic and about the associated risks, and you've built an eval or experiment around it, we'd like to meet you. Please note: We will be prioritizing candidates who can start ASAP and can be based in either our San Francisco or London office. We're still iterating on the structure of our team. It is possible that this role might end up being the people manager of a few other individual contributors (ICs); if you would be interested in people management, you may express that interest in the application. Responsibilities: Lead the end-to-end development of autonomy evals and research. This starts with risk and capability modeling, and includes designing, implementing, and regularly running these evals. Quickly iterate on experiments to evaluate autonomous capabilities and forecast future capabilities. Provide technical leadership to Research Engineers to scope and build scalable, secure infrastructure to quickly run large-scale experiments. Communicate the outcomes of the evaluations to relevant Anthropic teams, as well as to policy stakeholders and research collaborators, where relevant. Collaborate with other projects on the Frontier Red Team, Alignment, and beyond to improve infrastructure and design safety techniques for autonomous capabilities. You may be a good fit if you: Have an ML background and experience leading experimental research on LLMs/multimodal models and/or agents Have strong Python-based engineering skills Are driven to find solutions to ambiguously scoped problems Design and run experiments and iterate quickly to solve machine learning problems Thrive in a collaborative environment (we love pair programming) Have experience training, working with, and prompting models The expected salary range for this position is: Annual Salary: €225,000 - €270,000 Logistics Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices. Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this. We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed.
Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team. How we're different We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact - advancing our long-term goals of steerable, trustworthy AI - rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills. The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences. Come work with us! Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.
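For readers unfamiliar with the shape of an autonomy eval, here is a purely illustrative skeleton: an agent is given a task, tools, and a step budget, and the eval scores whether it reaches the goal state. The policy and environment below are hypothetical stand-ins, not Anthropic's infrastructure.

```python
# Illustrative skeleton of an autonomy eval: agent + tools + step budget,
# scored on whether the goal state is reached. Everything here is a stub.
def agent_step(observation: str) -> str:
    # Stand-in policy: a real eval would query a frontier model here.
    return "list_files" if observation == "start" else "read_flag"


def run_tool(action: str, state: dict) -> str:
    # Minimal sandboxed "environment" exposing two tools.
    if action == "list_files":
        return "files: flag.txt"
    if action == "read_flag":
        state["solved"] = True
        return "flag{example}"
    return "unknown action"


def run_eval(max_steps: int = 5) -> bool:
    # Transcript loop: observe, act, check for success, respect the budget.
    state = {"solved": False}
    observation = "start"
    for _ in range(max_steps):
        observation = run_tool(agent_step(observation), state)
        if state["solved"]:
            return True
    return False


print("task solved:", run_eval())
```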
Feb 13, 2025
Full time
Research Scientist, Gemini Safety / AGI Safety & Alignment
Google DeepMind
We are hiring for this role in London, Zurich, New York, Mountain View or San Francisco. Please clarify in the application questions which location(s) work best for you. At Google DeepMind, we value diversity of experience, knowledge, backgrounds and perspectives and harness these qualities to create extraordinary impact. We are committed to equal employment opportunity regardless of sex, race, religion or belief, ethnic or national origin, disability, age, citizenship, marital, domestic or civil partnership status, sexual orientation, gender identity, pregnancy, or related condition (including breastfeeding) or any other basis as protected by applicable law. If you have a disability or additional need that requires accommodation, please do not hesitate to let us know. Snapshot Our team is responsible for enabling AI systems to reliably work as intended, including identifying potential risks from current and future AI systems, and conducting technical research to mitigate them. As a Research Scientist, you will design, implement, and empirically validate approaches to alignment and risk mitigation, and integrate successful approaches into our best AI systems. About Us Conducting research into any transformative technology comes with the responsibility to build mechanisms for safe and reliable development and deployment at every step. Technical safety research at Google DeepMind investigates questions related to evaluations, reward learning, fairness, interpretability, robustness, and generalisation in machine learning systems. Proactive research in these areas is essential to the fulfilment of the long-term goal of Google DeepMind: to build safe and socially beneficial AI systems. Research Scientists work at the forefront of technical approaches to designing systems that reliably function as intended while discovering and mitigating risks, in close collaboration with other AI research groups within and outside of Google DeepMind. Our culture We're a dedicated scientific community, committed to 'solving intelligence' and ensuring our technology is used for widespread public benefit. We've built a supportive and inclusive environment where collaboration is encouraged and learning is shared freely. We don't set limits based on what others think is possible or impossible. We drive ourselves and inspire each other to push boundaries and achieve ambitious goals. We constantly iterate on our workplace experience with the goal of ensuring it encourages a balanced life. From excellent office facilities through to extensive manager support, we strive to support our people and their needs as effectively as possible. Roles We are seeking research scientists for our Gemini Safety and AGI Safety & Alignment (ASAT) teams. Gemini Safety Gemini Safety is seeking research scientists to contribute to the following areas: Pretraining In this role you will investigate new techniques to improve the safety behavior of Gemini via pretraining interventions. You will conduct empirical studies on model behavior, analyze model performance across different scales, and experiment with synthetic datasets, data weighting, and related techniques. You should enjoy working with very large-scale datasets and have an empirical mindset. Text output (core Gemini, reasoning models, Search AI mode, etc.) This role is focused on post-training safety.
You will be part of a very fast-paced, intense effort at the heart of Gemini to improve safety and helpfulness for the core model, and to help adapt the model to specific use cases such as reasoning or search. Red teaming and adversarial resilience In this role, you will build and apply automated red teaming via our most capable models, finding losses and vulnerabilities in our GenAI products, including Gemini itself, reasoning models, image and video generation, and whatever else we are building. You may also work to improve resilience to jailbreaks and adversarial prompts across models and modalities, driving progress on a fundamentally unsolved problem with serious implications for future safety. Image and video generation This role is about safety for image and video generation, including Imagen, Veo, and Gemini. You will design evaluations for safety and fairness, improve the safety behavior of the relevant models working closely with the core modeling teams, and design mitigations outside the model (e.g. external classifiers). AGI Safety & Alignment (ASAT) Our AGI Safety & Alignment team is seeking research scientists to contribute to the following areas. Applied Interpretability The focus of this role is to put insights from model internals research into practice, on both safety in Gemini post-training and dangerous capability evaluations in support of our Frontier Safety Framework. Key responsibilities: Rapid research ideation, iteration and production implementation of interpretability applications to address promising use cases. Adapting techniques emerging from collaboration with mechanistic interpretability researchers, as well as using other approaches such as probing and training data attribution. Inventing novel methods to address novel frontier AI advances. AGI Safety Research In this role you will advance AGI safety & alignment research within one of our priority areas. Candidates should have expertise in the area they apply to. We are also open to candidates who could lead a new research area with clear impact on AGI safety & alignment. Areas of interest include, but are not limited to: Dangerous Capability Evaluations: Designing evaluations for dangerous capabilities for use in the Frontier Safety Framework, particularly for automation of ML R&D. Safety Cases: Producing conceptual arguments backed by empirical evidence that a specific AI system is safe. Alignable Systems Design: Prototyping AI systems that could plausibly support safety cases. Externalized Reasoning: Understanding the strengths and limitations of monitoring the "out loud" chain of thought produced by modern LLMs. Amplified Oversight: Supervising systems that may outperform humans. Interpretability: Understanding the internal representations and algorithms in trained LLMs, and using this knowledge to improve safety. Robustness: Expanding the distribution on which LLMs are trained to reduce out-of-distribution failures. Monitoring: Detecting dangerous outputs and responding to them appropriately. Control Evaluations: Designing and running red-team evaluations that conservatively estimate risk from AI systems under the assumption that they are misaligned. Alignment Stress Testing: Identifying assumptions made by particular alignment plans, and red-teaming them to see whether they hold. About you You have extensive research experience with deep learning and/or foundation models (for example, a PhD in machine learning).
You are adept at generating ideas and designing experiments, and at implementing these in Python with real AI systems. You are keen to address risks from foundation models, and have thought about how to do so. You plan for your research to impact production systems on a timescale between "immediately" and "a few years". You are excited to work with strong contributors to make progress towards a shared ambitious goal. With strong, clear communication skills, you are confident engaging technical stakeholders to share research insights tailored to their background. In addition, any of the following would be an advantage: a PhD in Computer Science or a Machine Learning-related field; a track record of publications at venues such as NeurIPS, ICLR, ICML, RL/DL, EMNLP, AAAI and UAI; experience in areas such as Safety, Fairness and Alignment; engineering experience with LLM training and inference; experience collaborating on or leading an applied research project. What we offer At Google DeepMind, we want employees and their families to live happier and healthier lives, both in and out of work, and our benefits reflect that. Some select benefits we offer: enhanced maternity, paternity, adoption, and shared parental leave; private medical and dental insurance for yourself and any dependents; and flexible working options. We strive to continually improve our working environment, and provide you with excellent facilities such as healthy food, an on-site gym, faith rooms, terraces, etc. We are also open to relocating candidates to Mountain View and offer a bespoke service and immigration support to make it as easy as possible (depending on eligibility). The US base salary range for this full-time position is between $136,000 - $245,000 + bonus + equity + benefits. Your recruiter can share more about the specific salary range for your targeted location during the hiring process. Application deadline: 12pm PST, Friday 28th February 2025
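As an illustration of one applied-interpretability technique mentioned above, the sketch below trains a linear probe to test whether a concept is linearly decodable from hidden activations. The activations here are synthetic stand-ins for real model internals; this is not Google DeepMind's tooling.

```python
# Illustrative linear-probe sketch on synthetic "activations". Real work
# would extract activations from an actual model's residual stream.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d_model, n = 64, 1000

# Synthetic activations: a concept direction is partially encoded.
concept = rng.normal(size=d_model)
labels = rng.integers(0, 2, size=n)
acts = rng.normal(size=(n, d_model)) + np.outer(labels - 0.5, concept)

# Train the probe on the first 800 examples, evaluate on the held-out rest.
probe = LogisticRegression(max_iter=1000).fit(acts[:800], labels[:800])
print(f"probe accuracy: {probe.score(acts[800:], labels[800:]):.2f}")
```

High held-out accuracy suggests the concept is linearly represented; near-chance accuracy suggests it is not (at least not linearly).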
Feb 13, 2025
Full time
We are hiring for this role in London, Zurich, New York, Mountain View or San Francisco. Please clarify in the application questions which location(s) work best for you. At Google DeepMind, we value diversity of experience, knowledge, backgrounds and perspectives and harness these qualities to create extraordinary impact. We are committed to equal employment opportunity regardless of sex, race, religion or belief, ethnic or national origin, disability, age, citizenship, marital, domestic or civil partnership status, sexual orientation, gender identity, pregnancy, or related condition (including breastfeeding) or any other basis as protected by applicable law. If you have a disability or additional need that requires accommodation, please do not hesitate to let us know. Snapshot Our team is responsible for enabling AI systems to reliably work as intended, including identifying potential risks from current and future AI systems, and conducting technical research to mitigate them. As a Research Scientist, you will design, implement, and empirically validate approaches to alignment and risk mitigation, and integrate successful approaches into our best AI systems. About Us Conducting research into any transformative technology comes with responsibility to build mechanisms for safe and reliable development and deployment at every step. Technical safety research at Google DeepMind investigates questions related to evaluations, reward learning, fairness, interpretability, robustness, and generalisation in machine learning systems. Proactive research in these areas is essential to the fulfilment of the long-term goal of Google DeepMind: to build safe and socially beneficial AI systems. Research Scientists work on the forefront of technical approaches to designing systems that reliably function as intended while discovering and mitigating risks, in close collaboration with other AI research groups within and outside of Google DeepMind. Our culture We're a dedicated scientific community, committed to 'solving intelligence' and ensuring our technology is used for widespread public benefit. We've built a supportive and inclusive environment where collaboration is encouraged and learning is shared freely. We don't set limits based on what others think is possible or impossible. We drive ourselves and inspire each other to push boundaries and achieve ambitious goals. We constantly iterate on our workplace experience with the goal of ensuring it encourages a balanced life. From excellent office facilities through to extensive manager support, we strive to support our people and their needs as effectively as possible. Roles We are seeking research scientists for our Gemini Safety and AGI Safety & Alignment (ASAT) teams. Gemini Safety Gemini Safety is seeking research scientists to contribute to the following areas: Pretraining In this role you will investigate new techniques to improve the safety behavior of Gemini via pretraining interventions. You will conduct empirical studies on model behavior, analyze model performance across different scales, experiment with synthetic datasets, data weighting, and related techniques. You should enjoy working with very large scale datasets and have an empirical mindset. Text output (core Gemini, reasoning models, Search AI mode, etc) This role is focused on post training safety. 
You will be part of a very fast paced, intense effort at the heart of Gemini to improve safety and helpfulness for the core model, and help adapt the model to specific use cases such as reasoning or search. Red teaming and adversarial resilience In this role, you will build and apply automated red teaming via our most capable models, find losses and vulnerabilities in our Gen AI products, including Gemini itself, reasoning models, image and video generation, and whatever else we are building. You may also work to improve resilience to jailbreaks and adversarial prompts across models and modalities, driving progress on a fundamentally unsolved problem with serious implications for future safety. Image and video generation This role is about safety for image and video generation, including Imagen, Veo, and Gemini. You will design evaluations for safety and fairness, improve the safety behavior of the relevant models working closely with the core modeling teams, and design mitigations outside the model (e.g. external classifiers). AGI Safety & Alignment (ASAT) Our AGI Safety & Alignment team is seeking research scientists to contribute to the following areas. Applied Interpretability The focus of this role is to put insights from model internals research into practice on both safety in Gemini post-training and dangerous capability evaluations in support of our Frontier Safety Framework . Key responsibilities: Rapid research ideation, iteration and production implementation of interpretability applications to address promising use cases. Adapting techniques emerging from collaboration with mechanistic interpretability researchers, as well as using other approaches such as probing and training data attribution. Inventing novel methods to address novel frontier AI advances. AGI Safety Research In this role you will advance AGI safety & alignment research within one of our priority areas. Candidates should have expertise in the area they apply to. We are also open to candidates who could lead a new research area with clear impact on AGI safety & alignment. Areas of interest include, but are not limited to: Dangerous Capability Evaluations: Designing evaluations for dangerous capabilities for use in the Frontier Safety Framework , particularly for automation of ML R&D Safety cases: Producing conceptual arguments backed by empirical evidence that a specific AI system is safe Alignable Systems Design: Prototyping AI systems that could plausibly support safety cases Externalized Reasoning: Understanding the strengths and limitations of monitoring the "out loud" chain of thought produced by modern LLMs Amplified Oversight: Supervising systems that may outperform humans Interpretability: Understanding the internal representations and algorithms in trained LLMs, and using this knowledge to improve safety Robustness: Expanding the distribution on which LLMs are trained to reduce out-of-distribution failures Monitoring: Detecting dangerous outputs and responding to them appropriately Control Evaluations: Designing and running red team evaluations that conservatively estimate risk from AI systems under the assumption that they are misaligned. Alignment Stress Testing: Identifying assumptions made by particular alignment plans, and red teaming them to see whether they hold About you You have extensive research experience with deep learning and/or foundation models (for example, a PhD in machine learning). 
You are adept at generating ideas, designing experiments, and implementing these in Python with real AI systems. You are keen to address risks from foundation models, and have thought about how to do so. You plan for your research to impact production systems on a timescale between "immediately" and "a few years". You are excited to work with strong contributors to make progress towards a shared ambitious goal. With strong, clear communication skills, you are confident engaging technical stakeholders to share research insights tailored to their background. In addition, any of the following would be an advantage: PhD in Computer Science or a Machine Learning-related field. A track record of publications at venues such as NeurIPS, ICLR, ICML, RL/DL, EMNLP, AAAI and UAI. Experience in areas such as Safety, Fairness and Alignment. Engineering experience with LLM training and inference. Experience collaborating on or leading an applied research project. What we offer At Google DeepMind, we want employees and their families to live happier and healthier lives, both in and out of work, and our benefits reflect that. Some select benefits we offer: enhanced maternity, paternity, adoption, and shared parental leave; private medical and dental insurance for yourself and any dependents; and flexible working options. We strive to continually improve our working environment, and provide you with excellent facilities such as healthy food, an on-site gym, faith rooms, terraces, etc. We are also open to relocating candidates to Mountain View and offer a bespoke service and immigration support to make it as easy as possible (depending on eligibility). The US base salary range for this full-time position is between $136,000 and $245,000 + bonus + equity + benefits. Your recruiter can share more about the specific salary range for your targeted location during the hiring process. Application deadline: 12pm PST Friday 28th February 2025
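For context on the automated red-teaming loop mentioned above, the pattern is conceptually simple: an attacker model generates candidate adversarial prompts for a goal, the target system responds, and a scoring model flags harmful outcomes. The sketch below is a minimal illustration of that loop; the three callables are hypothetical placeholders, not Google DeepMind infrastructure.

```python
# A minimal sketch of an automated red-teaming loop, assuming an attacker
# model that proposes adversarial prompts and a scorer that rates the
# target's responses. All three callables are hypothetical placeholders.
from typing import Callable, List, Tuple

def red_team(
    attacker: Callable[[str], str],           # adversarial prompt for a goal
    target: Callable[[str], str],             # the system under test
    harm_score: Callable[[str, str], float],  # scores (prompt, response) in [0, 1]
    goals: List[str],
    threshold: float = 0.5,
) -> List[Tuple[str, str, float]]:
    """Return (prompt, response, score) triples that exceed the harm threshold."""
    findings = []
    for goal in goals:
        prompt = attacker(goal)
        response = target(prompt)
        score = harm_score(prompt, response)
        if score >= threshold:
            findings.append((prompt, response, score))
    return findings
```

In practice such loops iterate, feeding successful attacks back into the attacker's context; the interesting research is in making the attacker diverse and the scorer reliable.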
Research Scientist/Research Engineer - Societal Impacts
AI Safety Institute
Role Description The AI Safety Institute research unit is looking for exceptionally motivated and talented Research Scientists and Research Engineers to work in the Societal Impacts team. Societal Impacts Societal Impacts is a multidisciplinary team that studies how advanced AI models can impact people and society. Core research topics include the use of AI for assisting with criminal activities, undermining trust in information, jeopardising psychological wellbeing, or for malicious social engineering. We are interested in both immediate and medium-term risks. In this role, you'll join a strongly collaborative technical research team led by the Societal Impacts Research Director, Professor Christopher Summerfield. You will receive mentorship, training, and opportunities for development. You'll also regularly interact with our highly talented and experienced staff across the Institute (including alumni from Anthropic, DeepMind, OpenAI and ML professors from Oxford and Cambridge), as well as other partners across government. In addition to Junior roles, Senior, Staff and Principal RE positions are available for candidates with the required seniority and experience. Person Specification Successful candidates will work with other researchers to design and run studies that answer important questions about the effect AI will have on society. For example, can AI effectively change people's political and social views? (A toy sketch of this kind of analysis appears at the end of this advert.) Research Scientists/Engineers have scope to use a range of research methodologies and drive the strategy of the team. This is a multidisciplinary team and we look for people with a diversity of backgrounds. We are especially excited about candidates with experience of research in one or more of these areas:
Computational social science
Machine learning (research engineer / research scientist)
Data Science, especially including Natural Language Processing
Advanced statistical modelling and experimental design
Required Skills and Experience We select based on skills and experience regarding the following areas:
Writing production-quality code
Writing code efficiently, especially using Python
Demonstrable interest in the societal impacts of AI
Experimental design
Demonstrable experience running research experiments involving AI models and/or human participants
Strong quantitative skills
Data analytics
Data science methods
Frontier model architecture knowledge
Frontier model training knowledge
Model evaluations knowledge
AI safety research knowledge
Research problem selection
Research science
Verbal communication
Teamwork
Interpersonal skills
Desired Skills and Experience
Written communication
Published work related to cognitive, social or political science
A specialization in a particular field of social or political science, economics, cognitive science, criminology, security studies, AI safety, or another relevant field
Front-end software engineering skills to build UI for studies with human participants
Salary & Benefits We are hiring individuals at all ranges of seniority and experience within the research unit, and this advert allows you to apply for any of the roles within this range. We will discuss and calibrate with you as part of the process. The full range of salaries available is as follows:
L3: £65,000 - £75,000
L4: £85,000 - £95,000
L5: £105,000 - £115,000
L6: £125,000 - £135,000
L7: £145,000
There are a range of pension options available which can be found through the Civil Service website.
Selection Process In accordance with the Civil Service Commission rules, the following list contains all selection criteria for the interview process.
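To make the persuasion question above concrete, a minimal between-subjects analysis might compare attitude shifts between participants who discussed a topic with an AI model and participants who read a static text. The sketch below uses simulated placeholder data and standard Python tooling; it is illustrative only, not AISI's methodology.

```python
# A minimal sketch of a between-subjects persuasion analysis: comparing
# attitude shifts after an AI dialogue (treatment) vs a static summary
# (control). The data are simulated placeholders, not real study results.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated post-minus-pre agreement shifts on a 0-100 scale (hypothetical).
n = 200
treatment_shift = rng.normal(loc=4.0, scale=12.0, size=n)  # AI dialogue
control_shift = rng.normal(loc=1.0, scale=12.0, size=n)    # static text

# Welch's t-test: does AI dialogue shift attitudes more than static text?
t_stat, p_value = stats.ttest_ind(treatment_shift, control_shift, equal_var=False)

# Cohen's d as a simple effect-size summary.
pooled_sd = np.sqrt((treatment_shift.var(ddof=1) + control_shift.var(ddof=1)) / 2)
cohens_d = (treatment_shift.mean() - control_shift.mean()) / pooled_sd

print(f"t = {t_stat:.2f}, p = {p_value:.4f}, d = {cohens_d:.2f}")
```

Real studies in this space would of course also need pre-registration, careful sampling, and ethical review.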
Feb 12, 2025
Full time
Lead Research Scientist - Control Team
AI Safety Institute
Control Team Our team's focus is on ensuring that even if frontier AI systems are misaligned, they can be effectively controlled. To achieve this, we are attempting to advance the state of conceptual research into control protocols and corresponding safety cases (a schematic sketch of one simple control protocol appears at the end of this advert). Additionally, we will conduct realistic empirical research on mock frontier AI development infrastructure, helping to identify flaws in theoretical approaches and refine them accordingly. Role Summary As the lead research scientist on Control, you'll lead the conceptual and theoretical research efforts on control. Your team will initially include 3-4 research scientists, including researchers with existing experience in the control agenda and/or experience at frontier labs. Your responsibilities will encompass setting the research direction & agenda, ambitiously advancing the state of control research, and managing and developing an exceptional team. The ultimate goal is to make substantial improvements in the robustness of control protocols across major labs, particularly as we progress towards AGI. The role will involve close collaboration with our research directors, including Geoffrey Irving and Yarin Gal, and work hand-in-hand with the Control empirics team. The empirics team will support your efforts by building realistic control settings that closely mimic the infrastructure and codebases used for frontier AI development, and by helping to develop empirical experiments. Research partnerships and collaborations with many of the leading frontier AI labs will also be a significant part of your role. From a compute perspective, you will have excellent access to resources from both our research platform team and the UK's Isambard supercomputer (5,000 H100s). Person Specification You may be a good fit if you have some of the following skills, experience and attitudes. Please note that you don't need to meet all of these criteria, and if you're unsure, we encourage you to apply. Experience leading a research team or group that has delivered exceptional research in deep learning or a related field. Comprehensive understanding of frontier AI development, including key processes involved in research, data collection & generation, pre-training, post-training and safety assessment. Proven track record of academic excellence, demonstrated by novel research contributions and spotlight papers at top-tier conferences (e.g., NeurIPS, ICML, ICLR). Exceptional written and verbal communication skills, with the ability to convey complex ideas clearly and effectively to diverse audiences. Extensive experience in collaborating with multi-disciplinary teams, including researchers and engineers, and leading high-impact projects. A strong desire to improve the global state of AI safety. While existing experience working on control is desired, it is not a requirement for this role. Salary & Benefits We are hiring individuals at the more senior range of the following scale (L5-L7). Your dedicated talent partner will work with you as you move through our assessment process to explain our internal benchmarking process. The full range of salaries is available below; salaries comprise a base salary and a technical allowance, plus additional benefits as detailed on this page.
Level 3 - Total Package £65,000 - £75,000, inclusive of a base salary of £35,720 plus an additional technical talent allowance of between £29,280 and £39,280
Level 4 - Total Package £85,000 - £95,000, inclusive of a base salary of £42,495 plus an additional technical talent allowance of between £42,505 and £52,505
Level 5 - Total Package £105,000 - £115,000, inclusive of a base salary of £55,805 plus an additional technical talent allowance of between £49,195 and £59,195
Level 6 - Total Package £125,000 - £135,000, inclusive of a base salary of £68,770 plus an additional technical talent allowance of between £56,230 and £66,230
Level 7 - Total Package £145,000, inclusive of a base salary of £68,770 plus an additional technical talent allowance of £76,230
There are a range of pension options available which can be found through the Civil Service website. Selection Process In accordance with the Civil Service Commission rules, the following list contains all selection criteria for the interview process. Required Experience We select based on skills and experience regarding the following areas:
Research problem selection
Research science
Writing code efficiently
Python
Frontier model architecture knowledge
Frontier model training knowledge
Model evaluations knowledge
AI safety research knowledge
Written communication
Verbal communication
Teamwork
Interpersonal skills
Tackle challenging problems
Learn through coaching
Desired Experience We additionally may factor in experience with any of the areas that our work-streams specialise in:
Autonomous systems
Cyber security
Chemistry or Biology
Safeguards
Safety Cases
Societal Impacts
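For readers new to the control agenda, "trusted monitoring" is one simple control protocol from the public literature: a weaker but trusted model reviews each action proposed by the untrusted frontier model, and suspicious actions are blocked and escalated for human audit. The sketch below is schematic, with hypothetical placeholder functions; it is not AISI's implementation.

```python
# A schematic sketch of "trusted monitoring", one simple control protocol
# from the public AI-control literature: an untrusted (more capable) model
# proposes actions, a trusted (weaker) model scores each proposal for
# suspicion, and proposals above a threshold are escalated for human audit
# instead of being executed. The model-calling functions are hypothetical.
from dataclasses import dataclass

AUDIT_THRESHOLD = 0.8  # tuned so a limited audit budget targets the riskiest actions

@dataclass
class Decision:
    action: str
    suspicion: float
    executed: bool

def untrusted_propose(task: str) -> str:
    """Placeholder for a call to the untrusted frontier model."""
    raise NotImplementedError

def trusted_suspicion(task: str, action: str) -> float:
    """Placeholder for a trusted model scoring how suspicious an action is (0-1)."""
    raise NotImplementedError

def run_with_trusted_monitoring(task: str) -> Decision:
    action = untrusted_propose(task)
    score = trusted_suspicion(task, action)
    if score >= AUDIT_THRESHOLD:
        # Escalate: block execution and send to a human auditor.
        return Decision(action=action, suspicion=score, executed=False)
    return Decision(action=action, suspicion=score, executed=True)
```

The research questions live in the details: how to set the audit threshold under a limited audit budget, and how well the protocol holds up against a deliberately subversive untrusted model.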
Feb 11, 2025
Full time
Lead Research Scientist
Faculty
About Faculty At Faculty, we transform organisational performance through safe, impactful and human-centric AI. With a decade of experience, we provide over 300 global customers with software, bespoke AI consultancy, and Fellows from our award-winning Fellowship programme. Our expert team brings together leaders from across government, academia and global tech giants to solve the biggest challenges in applied AI. Should you join us, you'll have the chance to work with, and learn from, some of the brilliant minds who are bringing Frontier AI to the frontlines of the world. About the role As a Lead Research Scientist at Faculty, you will lead scientific research, and other researchers, in areas of AI safety that progress scientific understanding. You will contribute to both external publications and Faculty's commercial ambition to build safe AI systems. This is a great opportunity to join a small, high-agency team of machine learning researchers and practitioners applying data science and machine learning to business problems in the real world. What you'll be doing Your role will evolve alongside business needs, but you can expect your key responsibilities to include: Research Leadership: Lead the AI safety team's research agenda, setting priorities and ensuring alignment with Faculty's long-term goals. Conduct and oversee the development of cutting-edge AI safety research, with a focus on large language models and other safety-critical AI systems. Publish high-impact research in leading conferences and journals (e.g., NeurIPS, ACL, ICML, ICLR, AAAI). Support Faculty's positioning as a leader in AI safety through thought leadership and stakeholder engagement. Research Agenda Development: Shape our research agenda by identifying impactful research opportunities and balancing scientific and practical priorities. Interface with the wider business to ensure alignment between the R&D team's research efforts and the company's long-term goals, with a specific focus on AI safety and commercial projects in the space. Team Management and Mentorship: Build and lead a growing team of researchers, fostering a collaborative and innovative culture across a wide range of AI Safety-relevant research topics. Provide mentorship and technical guidance to researchers across diverse AI safety topics. Technical Contributions: Lead hands-on contributions to technical research. Collaborate on delivery of evaluations and red-teaming projects in high-risk domains, such as CBRN and cybersecurity, with a focus on government and commercial partners. Who we are looking for A proven track record of high-impact AI research, evidenced by top-tier academic publications (ideally in top machine learning or NLP conferences such as ACL, NeurIPS, ICML, ICLR or AAAI) or equivalent experience (e.g. within model providers' labs). Deep domain knowledge in language models and AI safety, with the ability to contribute well-informed views about the differential value, and tractability, of different parts of the traditional AI safety research agenda, or other areas of machine learning (e.g. explainability). Practical experience of machine learning, with a focus on areas such as robustness, explainability, or uncertainty estimation (an illustrative sketch of the latter appears at the end of this advert). Advanced programming and mathematical skills with Python, and experience with the standard Python data science stack (NumPy, pandas, scikit-learn, etc.). The ability to conduct and oversee complex technical research projects.
A passion for leading and developing technical teams; adopting a caring attitude towards the personal and professional development of others. Excellent verbal and written communication skills. The following would be a bonus, but are by no means required: Commercial experience applying AI safety principles in practical or high-stakes contexts. Background in red-teaming, evaluations, or safety testing for government or industry applications. We welcome applicants who have academic research experience in a STEM or related subject. A PhD is great, but certainly not necessary. What we can offer you: The Faculty team is diverse and distinctive, and we all come from different personal, professional and organisational backgrounds. We all have one thing in common: we are driven by a deep intellectual curiosity that powers us forward each day. Faculty is the professional challenge of a lifetime. You'll be surrounded by an impressive group of brilliant minds working to achieve our collective goals. Our consultants, product developers, business development specialists, operations professionals and more all bring something unique to Faculty, and you'll learn something new from everyone you meet.
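As a small, concrete illustration of the uncertainty-estimation work mentioned in the person specification above, the sketch below trains a bootstrap ensemble using the standard Python data science stack and treats disagreement between members as a rough uncertainty signal. It is a minimal sketch under simple assumptions, not Faculty's methodology.

```python
# A minimal sketch of ensemble-based uncertainty estimation: train several
# models on bootstrap resamples and treat disagreement between their
# predictions as a simple (epistemic) uncertainty signal.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# Train an ensemble on bootstrap resamples of the training data.
members = []
for _ in range(10):
    idx = rng.integers(0, len(X), size=len(X))
    members.append(LogisticRegression(max_iter=1000).fit(X[idx], y[idx]))

# Per-example predicted probabilities from each ensemble member.
probs = np.stack([m.predict_proba(X)[:, 1] for m in members])

mean_prob = probs.mean(axis=0)   # the ensemble's prediction
epistemic = probs.std(axis=0)    # disagreement = rough uncertainty signal

print("most uncertain examples:", np.argsort(-epistemic)[:5])
```

Ensembles are one of several standard baselines here; MC dropout and conformal prediction are common alternatives with different cost and coverage trade-offs.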
Feb 11, 2025
Full time
Research Scientist - Science of Evaluations
AI Safety Institute
The Science of Evaluations Team AISI's Science of Evaluations team will conduct applied and foundational research focused on two areas at the core of our mission: (i) measuring existing frontier AI system capabilities and (ii) predicting the capabilities of a system before running an evaluation. Measurement of Capabilities: The goal is to develop and apply rigorous scientific techniques for the measurement of frontier AI system capabilities, so that they are accurate, robust, and useful in decision-making. This is a nascent area of research which supports one of AISI's core products: conducting tests of frontier AI systems and feeding back results, insights, and recommendations to model developers and policy makers. The team will be an independent voice on the quality of our testing reports and the limitations of our evaluations. You will collaborate closely with researchers and engineers from the workstreams who develop and run our evaluations, getting into the details of their key strengths and weaknesses, proposing improvements, and developing techniques to get the most out of our results. The key challenge is increasing the confidence in our claims about system capabilities, based on solid evidence and analysis. Directions we are exploring include: Running internal red teaming of testing exercises and adversarial collaborations with the evaluations teams, and developing "sanity checks" to ensure the claims made in our reports are as strong as possible. Example checks could include: performance as a function of context length, auditing areas with surprising model performance, checking for soft refusal performance issues, and efficient comparisons of system performance between pre-deployment and post-deployment testing. Running in-depth analyses of evaluations results to understand successes and failures and using these insights to create best practices for testing exercises. Developing our approach to uncertainty quantification and significance testing, increasing statistical power (given time and token constraints). Developing methods for inferring model capabilities across given domains from task or benchmark success rates, and assigning confidence levels to claims about capabilities. (A minimal sketch of two such statistical building blocks follows below.) Predictive Evaluations: The goal is to develop approaches to estimate the capabilities of frontier AI systems on tasks or benchmarks, before they are run. Ideally, we would be able to do this at some point early in the training process of a new model, using information about the architecture, dataset, or training compute (a toy curve-fitting sketch appears at the end of this advert). This research aims to provide us with advance warning of models reaching a particular level of capability, where additional safety mitigations may need to be put in place. This work is complementary to both safety cases (an AISI foundational research effort) and AISI's general evaluations work. This topic is currently an area of active research, and we believe it is poised to develop rapidly. We are particularly interested in developing predictive evaluations for complex, long-horizon agent tasks, since we believe this will be the most important type of evaluation as AI capabilities advance. You will help develop this field of research, both by direct technical work and via collaborations with external experts, partner organizations, and policy makers. Across both focus areas, there will be significant scope to contribute to the overall vision and strategy of the Science of Evaluations team as an early hire.
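As promised above, here is a minimal sketch of two statistical building blocks relevant to these directions: a Wilson score interval for a benchmark pass rate, and a Fisher exact test comparing pre- and post-deployment success counts. The counts are hypothetical placeholders, and real analyses would need to account for correlated tasks, token budgets, and multiple comparisons.

```python
# Two simple building blocks for capability measurement: a confidence
# interval on a pass rate, and a pre- vs post-deployment comparison.
# All counts below are hypothetical placeholders.
import math
from scipy import stats

def wilson_interval(successes: int, trials: int, z: float = 1.96):
    """Wilson score interval for a binomial proportion (95% CI by default)."""
    p = successes / trials
    denom = 1 + z**2 / trials
    centre = (p + z**2 / (2 * trials)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2))
    return centre - half, centre + half

# Hypothetical evaluation result: the model solved 37 of 50 tasks in a domain.
lo, hi = wilson_interval(37, 50)
print(f"pass rate {37/50:.0%}, 95% CI ({lo:.2f}, {hi:.2f})")

# Did the success rate change between pre- and post-deployment testing by
# more than sampling noise would explain? Fisher's exact test on the 2x2
# table of successes and failures is one conservative check.
table = [[37, 13],   # pre-deployment: successes, failures (hypothetical)
         [42, 8]]    # post-deployment
odds_ratio, p_value = stats.fisher_exact(table)
print(f"Fisher exact p = {p_value:.3f}")
```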
You'll receive coaching from your manager and mentorship from the research directors at AISI (including Geoffrey Irving and Yarin Gal), and work closely with talented Policy/Strategy leads, Research Engineers, and Research Scientists. Responsibilities This role offers the opportunity to progress deep technical work at the frontier of AI safety and governance. Your work will include: Running internal red teaming of testing exercises and adversarial collaborations with the evaluations teams, and developing "sanity checks" to ensure the claims made in our reports are as strong as possible. Conducting in-depth analysis of evaluations methodology and results, diagnosing possible sources of uncertainty or bias, to improve our confidence in estimates of AI system capabilities. Improving the statistical analysis of evaluations results (e.g. model selection, hypothesis testing, significance testing, uncertainty quantification). Developing and implementing internal best practices and protocols for evaluations and testing exercises. Staying well informed of the details, strengths and weaknesses of evaluations across domains in AISI, and of the state of the art in frontier AI evaluations research more broadly. Conducting research on predictive evaluations using the latest techniques from the published literature on AISI's internal evaluations, as well as conducting novel research to improve these techniques. Writing and editing scientific reports and other materials aimed at diverse audiences, focusing on synthesizing empirical results and recommendations to key decision-makers, ensuring high standards in clarity, precision, and style. Person Specification To set you up for success, we are looking for some of the following skills, experience and attitudes, but we are flexible in shaping the role to your background and expertise. Experience working within a world-leading team in machine learning or a related field (e.g. multiple first-author publications at top-tier conferences). Strong track record of academic excellence (e.g. PhD in a technical field and/or spotlight papers at top-tier conferences). Comprehensive understanding of large language models (e.g. GPT-4), including a broad understanding of the literature and hands-on experience with designing and running evaluations, scaling laws, fine-tuning, scaffolding, and prompting. Broad experience in empirical research methodologies, potentially in fields outside of machine learning, and statistical analysis (T-shaped: some deep knowledge and lots of shallow knowledge in, e.g., experimental design, A/B testing, Bayesian inference, model selection, hypothesis testing, significance testing). Deep care for methodological and statistical rigor, balanced with pragmatism, and willingness to get into the weeds. Experience with data visualization and presentation. Proven track record of excellent scientific writing and communication, with the ability to understand and communicate complex technical concepts for non-technical stakeholders and synthesize scientific results into compelling narratives. Motivation to conduct technical research with an emphasis on direct policy impact rather than exploring novel ideas. A sense of mission, urgency, and responsibility for success, demonstrating problem-solving abilities and preparedness to acquire any missing knowledge necessary to get the job done. Ability to work autonomously and in a self-directed way with high agency, thriving in a constantly changing environment and a steadily growing team.
A willingness to bring your own voice and experience, an eagerness to support your colleagues, and a readiness to do whatever is necessary for the team's success. Salary & Benefits We are hiring individuals at all ranges of seniority and experience within the research unit, and this advert allows you to apply for any of the roles within this range. We will discuss and calibrate with you as part of the process. The full range of salaries available is as follows:
L4: £85,000 - £95,000
L5: £105,000 - £115,000
L6: £125,000 - £135,000
L7: £145,000
The Department for Science, Innovation and Technology offers a competitive mix of benefits including: A culture of flexible working, such as job sharing, homeworking and compressed hours. Automatic enrolment into the Civil Service Pension Scheme, with an average employer contribution of 27%. A minimum of 25 days of paid annual leave, increasing by 1 day per year up to a maximum of 30. An extensive range of learning & professional development opportunities, which all staff are actively encouraged to pursue. Access to a range of retail, travel and lifestyle employee discounts. The Department operates a discretionary hybrid working policy, which provides for a combination of working hours from your place of work and from your home in the UK. The current expectation for staff is to attend the office or non-home-based location for 40-60% of the time over the accounting period. Selection Process In accordance with the Civil Service Commission rules, the following list contains all selection criteria for the interview process. The interview process may vary from candidate to candidate; however, you should expect a typical process to include technical proficiency tests, discussions with a cross-section of our team at AISI (including non-technical staff), and conversations with your workstream lead. The process will culminate in a conversation with members of the senior team here at AISI. Candidates should expect to go through some or all of the following stages once an application has been submitted:
Initial interview
Technical take-home test
Second interview and review of take-home test
Third interview
Final interview with members of the senior team
Required Experience We select based on skills and experience regarding the following areas: (click apply for full job details)
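And here is the toy curve-fitting sketch promised in the predictive-evaluations description above: fit a simple two-parameter sigmoid of benchmark success rate against log training compute for earlier models, then extrapolate to a planned run. The data points are made-up placeholders, and real predictive evaluations would need far more careful functional forms and uncertainty estimates.

```python
# A toy sketch of the predictive-evaluations idea: fit a sigmoid of
# benchmark success rate against log training compute for earlier models,
# then extrapolate to a planned run. All data points are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(log_compute, midpoint, slope):
    return 1.0 / (1.0 + np.exp(-slope * (log_compute - midpoint)))

# Hypothetical (log10 FLOP, success rate) observations from earlier models.
log_c = np.array([22.0, 23.0, 24.0, 25.0])
success = np.array([0.05, 0.12, 0.35, 0.70])

params, _ = curve_fit(sigmoid, log_c, success, p0=[24.5, 1.0])

# Predicted capability of a planned 1e26-FLOP training run.
predicted = sigmoid(26.0, *params)
print(f"predicted success rate at 1e26 FLOP: {predicted:.2f}")
```

Even this toy makes the core difficulty visible: the extrapolation is only as good as the assumption that the fitted functional form continues to hold beyond the observed compute range.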
Feb 11, 2025
Full time
Senior User Researcher
Appcastenterprise
With over 35 nationalities and a range of backgrounds represented in our Benevolent team, we aim to build an inclusive environment where our people can bring their authentic selves to work, be respected for who they are and the exceptional work they do. We welcome and actively encourage applications from all sections of society and are committed to offering equal employment opportunities regardless of sex, race, religion or belief, ethnic or national origin, marital, domestic or civil partnership status, sexual orientation, gender identity, parental status, disability, age, citizenship, or any other basis. We see our diversity as an asset as we tackle challenging problems that bridge the gap between drug discovery and technology. The Role Over the past two years, we have successfully established the user research practice at BenevolentAI, covering user research operations, training and upskilling initiatives, and evaluation programmes. As a Senior User Researcher, you will lead the evolution of this practice, taking responsibility for shaping it and influencing how user research is done across the company. In this role, you will be involved in all aspects of product development, from early-stage concept generation and usability evaluation, to informing product strategy at large and measuring the impact our products have on drug discovery. You will work closely with drug discovery scientists - our internal users - and with members of our cross-functional product development teams, who will rely on your insight and support to understand and analyse user motivations, needs, and behaviours. You will uncover and help us understand non-trivial user needs focused on exploring and understanding vast quantities of mined and inferred biomedical data, and using it to make scientific and business decisions. Primary Responsibilities Develop research methodologies that generate strategic and tactical insights, align with the product roadmap, and integrate well into product development practices. Propose, plan, and carry out user research activities by combining relevant methods and approaches (e.g. semi-structured interviews, usability analysis, contextual research, and more). Document research activities and outcomes, generate insights, disseminate research findings across teams, and partner with product, design, and engineering to leverage learnings. Champion a user-centred, research-driven culture across the company, promoting best practices and supporting colleagues in their own research activities. Lead the evolution of user research theory and practice across the organisation. Balance opportunities to lead research initiatives with opportunities to empower others to carry out their own user research. Collaborate with UX designers and support relevant design work where appropriate. We are looking for someone with: An educational background that incorporates research methods, such as a degree in Anthropology, Psychology, Sociology, Human Factors, HCI/Computer Science or other related fields, or equivalent practical experience. 5+ years of professional experience conducting research for product design and development purposes, using both formative and summative evaluations. Demonstrated expertise and experience in qualitative and quantitative research methods. Experience in implementing and/or developing user research operations and frameworks in organisations.
Excellent and versatile communication skills catering for various needs (including interviewing co-workers, collaborating with a team, storytelling, negotiation, oral presentation, and clear writing). Initiative to lead the evolution and strengthening of the user research practice and operations in the company, improving our ways of working hands-on and through soft influence. Enjoys working with science and engineering teams in a cross-functional setting, and embraces agile development and lean design practices. Ability to work independently but towards common goals, under time pressure, and with shifting priorities and requirements. Ability to write clearly (e.g. UX microcopy, user guides). Nice to haves Experience in a relevant sector (life science, clinical studies, healthcare, etc.) About us BenevolentAI unites AI with human expertise to discover new and more effective medicines. Our unique computational R&D platform spans every step of the drug discovery process, powering an in-house pipeline of over 25 drug programmes. We advance our mission to reinvent drug discovery by harnessing the power of a diverse team, rich with different backgrounds, experiences, opinions and personalities. In our offices in London and New York and our research facility in Cambridge (UK), we work in highly collaborative, multidisciplinary teams, harnessing skills across biology, chemistry, engineering, AI, machine learning, informatics, precision medicine and drug discovery. We share a passion for being part of a mission that matters, and we are always looking for curious and collaborative people who share our values and want to be part of our journey. If that sounds like a fit for you, hit the apply button and join us. Want to do a little more research before you apply? Head over to our Glassdoor page to learn about our benefits and culture, and to find out what our team think about life at Benevolent. You can also find out more about us on LinkedIn and Twitter.
Dec 04, 2021
Full time
Senior User Researcher
Appcastenterprise
Dec 04, 2021
Full time
Senior User Researcher
Appcastenterprise
Dec 04, 2021
Full time
Senior User Researcher
Appcastenterprise Sutton, Surrey
Dec 04, 2021
Full time
Senior User Researcher
Appcastenterprise Barnet, Hertfordshire
Dec 04, 2021
Full time