Sorry, that job is no longer available. Here are some results that may be similar to the job you were looking for.

3 jobs found

Current search: novel cycles engineer
Senior Research Scientist - AI Safety
Faculty
Why Faculty?
We established Faculty in 2014 because we thought that AI would be the most important technology of our time. Since then, we've worked with over 350 global customers to transform their performance through human-centric AI. You can read about our real-world impact here. We don't chase hype cycles. We innovate, build and deploy responsible AI that moves the needle - and we know a thing or two about doing it well. We bring an unparalleled depth of technical, product and delivery expertise to our clients, who span government, finance, retail, energy, life sciences and defence. Our business, and reputation, is growing fast, and we're always on the lookout for individuals who share our intellectual curiosity and desire to build a positive legacy through technology. AI is an epoch-defining technology; join a company where you'll be empowered to envision its most powerful applications, and to make them happen.

About the Team
Faculty's Research team conducts critical red teaming and builds evaluations for misuse capabilities in sensitive areas, such as CBRN, cybersecurity and international security, for several leading frontier model developers and national safety institutes; notably, our work has been featured in OpenAI's system card for o1. Our commitment also extends to fundamental technical research on mitigation strategies, with our findings published at peer-reviewed conferences and delivered to national security institutes. Complementing this, we design evaluations for model developers across broader safety-relevant fields, including the societal impacts of increasingly capable frontier models, showcasing our expertise across the safety landscape.

About the role
We are seeking a Senior Research Scientist to join our high-impact R&D team. You will lead novel research that advances scientific understanding and fuels our ambition to build safe AI systems. This is a crucial opportunity to join a small, high-agency team conducting vital red-teaming and evaluation work for frontier models in sensitive areas like cybersecurity and national security. You'll shape the future of safe AI deployment in the real world.

What you'll be doing:
• Owning and driving forward high-impact research themes in AI safety.
• Contributing to the wider vision and development of Faculty's AI safety research agenda.
• Supporting Faculty's positioning as a leader in AI safety through thought leadership and stakeholder engagement.
• Shaping our research agenda by identifying impactful opportunities and balancing scientific and practical priorities.
• Leading technical research within the AI safety space, from concept to publication.
• Supporting the delivery of evaluation and red-team projects in high-risk domains, such as CBRN and cybersecurity, with government and commercial partners.

Who we're looking for:
• You have a track record of high-impact AI research, evidenced by top-tier academic publications or equivalent experience.
• You bring proven experience in, or a clear passion for, applied AI safety, perhaps from labs, academia, or evaluation and red-team roles.
• You possess deep domain knowledge of language models and generative AI model architectures, including fine-tuning techniques beyond API-level implementation.
• You have practical machine learning experience, with a focus on areas such as robustness, explainability, or uncertainty estimation.
• You are proficient with deep learning frameworks (PyTorch, TensorFlow, or similar) and familiar with the HuggingFace ecosystem or equivalent ML tooling.
• You have demonstrable Python engineering experience, used to build and support robust research projects.
• You can conduct and oversee complex technical research projects, and you possess excellent verbal and written communication skills.

Our Recruitment Ethos
We aim to grow the best team - not the most similar one. We know that diversity of individuals fosters diversity of thought, and that strengthens our principle of seeking truth. And we know from experience that diverse teams deliver better work, relevant to the world in which we live. We're united by a deep intellectual curiosity and a desire to use our abilities for measurable positive impact. We strongly encourage applications from people of all backgrounds, ethnicities, genders, religions and sexual orientations.

Some of our standout benefits:
• Unlimited annual leave policy
• Private healthcare and dental
• Enhanced parental leave
• Family-friendly flexibility & flexible working
• Sanctus coaching
• Hybrid working (2 days in our Old Street office, London)

If you don't feel you meet all the requirements but are excited by the role and know you bring some key strengths, please do apply or reach out to our Talent Acquisition team for a confidential chat. Please know we are open to conversations about part-time roles or condensed hours.
Feb 07, 2026
Full time
Technical Product Manager
Unlikely Artificial Intelligence Limited.
At UnlikelyAI, we are building the future of AI: one that is reliable, accurate and transparent. Our neurosymbolic technology harnesses the power of LLMs and generative AI and combines it with classical symbolic technology to produce hallucination-resistant artificial intelligence for high-trust applications.

The Role
We're hiring a Technical Product Manager to lead our platform strategy and execution. This is a hands-on role that requires deep technical expertise, systematic thinking, and proven experience working with complex enterprise customers. You'll work at the intersection of cutting-edge AI technology and real-world enterprise needs, translating ambiguous customer requirements into clear product direction while maintaining a strategic view across multiple customer segments. This isn't about writing engineering specs - it's about leveraging technical depth to set realistic requirements, inform roadmaps, and create clarity from complexity.

What You'll Do
You'll own the developer platform product from strategy through execution:
• Lead customer discovery with enterprise clients, extracting accurate functional requirements from ambiguous needs and translating them into actionable product direction.
• Define a product vision and roadmap that balances individual customer requests with broader platform scalability and long-term strategy.
• Apply deep technical understanding of AI and platform development to set realistic requirements (e.g., understanding ML accuracy limitations to avoid specifying unachievable targets).
• Manage complex stakeholder dynamics across engineering, sales, research, and leadership - including constructively challenging decisions when warranted.
• Create rigorous product documentation (PRDs, specs, strategy documents) that synthesises research and stakeholder input with analytical clarity.
• Partner closely with engineering teams, understanding the boundary between requirements and implementation without over-specifying solutions.
• Navigate enterprise sales cycles and multi-stakeholder complexity while maintaining product vision against competing pressures.

What We're Looking For
Essential:
• Technical degree (computer science, engineering, or related) with prior experience in a technical role
• 4+ years in technical product management roles in enterprise software/SaaS
• Proven track record working with large, complex enterprise customers at scale
• Deep understanding of AI and platform development, with genuine passion for the space
• Exceptional analytical and systematic thinking - the ability to synthesise massive volumes of complex information
• Strong communication skills and the confidence to challenge senior leadership constructively
• Evidence of rigorous analytical work through high-quality documentation (we'll ask for samples)

You'll thrive in this role if you:
• Use technical knowledge to inform requirements, not dictate engineering solutions
• Can lead complex customer conversations and extract signal from noise
• Produce clear, well-researched specs that demonstrate thorough stakeholder synthesis
• Balance accommodating customer needs with maintaining the broader product vision
• Are comfortable with ambiguity and turn it into actionable clarity
• Navigate between tactical delivery and the strategic "true north" - you know where we're ultimately headed while executing what's needed now
• Have a strong personality without being defensive - independent thinking is essential

What's in it for You
• Shape a novel neurosymbolic AI platform, working with world-class enterprise customers
• Work in a fast-moving, technically sophisticated environment where clarity of thought is valued
• Influence both product direction and the evolution of our product function
• Join a collaborative team tackling genuinely hard problems in AI

Working at UnlikelyAI
We offer a range of benefits designed to support our team's wellbeing and work-life balance:
• We have a hybrid working arrangement, flexibly balancing working from home and office-based working; 3 days in the office is encouraged. Our office is located in Bloomsbury, approximately a five-minute walk from both Tottenham Court Road and Holborn stations.
• We provide free team lunches every Tuesday, Wednesday and Thursday.
• We schedule a variety of optional social and extracurricular activities.
• We have an annual offsite, usually to an international destination where we can work and socialise in the sun.

Equal Opportunities
We are committed to having a truly diverse team where everyone is encouraged to be their authentic selves. We do not discriminate based on gender, race, religion, sexual orientation, national origin, political affiliation, disability, age, medical history, parental status or genetic information.
Feb 05, 2026
Full time
Principal Research Scientist - AI Safety
Faculty
Why Faculty?
We established Faculty in 2014 because we thought that AI would be the most important technology of our time. Since then, we've worked with over 350 global customers to transform their performance through human-centric AI. You can read about our real-world impact here. We don't chase hype cycles. We innovate, build and deploy responsible AI that moves the needle - and we know a thing or two about doing it well. We bring an unparalleled depth of technical, product and delivery expertise to our clients, who span government, finance, retail, energy, life sciences and defence. Our business, and reputation, is growing fast, and we're always on the lookout for individuals who share our intellectual curiosity and desire to build a positive legacy through technology. AI is an epoch-defining technology; join a company where you'll be empowered to envision its most powerful applications, and to make them happen.

About the team
Faculty conducts critical red teaming and builds evaluations for misuse capabilities in sensitive areas, such as CBRN, cybersecurity and international security, for several leading frontier model developers and national safety institutes; notably, our work has been featured in OpenAI's system card for o1. Our commitment also extends to fundamental technical research on mitigation strategies, with our findings published at peer-reviewed conferences and delivered to national security institutes. Complementing this, we design evaluations for model developers across broader safety-relevant fields, including the societal impacts of increasingly capable frontier models, showcasing our expertise across the safety landscape.

About the role
The Principal Research Scientist for AI Safety will be the driving force behind Faculty's small, high-agency research team, shaping the future of safe AI systems. You will lead the scientific research agenda for AI safety, focusing on large language models and other critical systems. This role involves leading researchers, driving external publications, and ensuring alignment with Faculty's commercial ambition to build trustworthy AI, giving you the opportunity to make a high-impact contribution in a rapidly evolving, critical field.

What you'll be doing:
• Leading the AI safety team's ambitious research agenda, setting priorities aligned with long-term company goals.
• Conducting and overseeing cutting-edge AI safety research, specifically for large language models and safety-critical AI systems.
• Publishing high-impact research findings in leading academic conferences and journals.
• Shaping the research agenda by identifying impactful opportunities and balancing scientific and practical priorities.
• Helping to build and mentor a growing team of researchers, fostering an innovative and collaborative culture.
• Collaborating on the delivery of evaluation and red-teaming projects in high-risk domains like CBRN and cybersecurity.
• Positioning Faculty as a thought leader in AI safety through research and strategic stakeholder engagement.

Who we're looking for:
• You have a proven track record of high-impact AI research, demonstrated through top-tier academic publications or equivalent experience.
• You possess deep domain knowledge of language models and the evolving field of AI safety.
• You exhibit strong research judgment and extensive experience in AI safety, including generating and executing novel research directions.
• You can conduct and oversee complex technical research projects, with advanced programming skills (Python, standard data science stack) to review the team's work.
• You bring excellent verbal and written communication skills, and can share complex ideas with diverse audiences.
• You have a deep understanding of the AI safety research landscape and the ability to build connections to secure resources for impactful work.

Our Interview Process
• Talent Team screen (30 mins)
• Experience & Theory interview (45 mins)
• Research presentation and coding interview (75 mins)
• Leadership and Principles interview (60 mins)
• Final stage with our CEO (45 mins)

Our Recruitment Ethos
We aim to grow the best team - not the most similar one. We know that diversity of individuals fosters diversity of thought, and that strengthens our principle of seeking truth. And we know from experience that diverse teams deliver better work, relevant to the world in which we live. We're united by a deep intellectual curiosity and a desire to use our abilities for measurable positive impact. We strongly encourage applications from people of all backgrounds, ethnicities, genders, religions and sexual orientations.

Some of our standout benefits:
• Unlimited annual leave policy
• Private healthcare and dental
• Enhanced parental leave
• Family-friendly flexibility & flexible working
• Sanctus coaching
• Hybrid working (2 days in our Old Street office, London)

If you don't feel you meet all the requirements but are excited by the role and know you bring some key strengths, please do apply or reach out to our Talent Acquisition team for a confidential chat. Please know we are open to conversations about part-time roles or condensed hours.
Feb 05, 2026
Full time
