Social values, ethics and diversity in generative AI ‘value alignment’ (QUT Generative AI Lab PhD scholarship)

QUT
Published
24/06/2024
Location
Australia
Job Type
Other

Description

Application dates

Applications close 31 August 2024

What you'll receive

  • You'll receive a stipend scholarship of $33,637 per annum for a maximum duration of 3.5 years, which includes a possible extension of up to 6 months if approved during your candidature. This is the full-time, tax-exempt rate, indexed annually.
  • You will receive tuition fee coverage, either through a Research Training Program Fee-Offset place, or a QUT tuition fee sponsorship, for your research degree.
  • As the scholarship recipient, you will have the opportunity to work with a team of leading researchers, to undertake your own innovative research in and across the field.

Eligibility

You need to meet the entry requirements for a QUT Doctor of Philosophy, including any English language requirements.

We are seeking to recruit a student to take up the scholarship and begin full-time study in January 2025.

You will have:

  • a background in a relevant computer science, social science, or humanities discipline, such as machine learning, digital sociology, communications and media studies, data science, linguistics, cognitive science, philosophy, or internet studies
  • recently completed a first-class honours degree, a research master's degree, or a coursework master's degree with a significant research component, from a recognised institution and in a cognate discipline
  • a strong interest in undertaking a three-year research project on the values, processes, and/or technical systems used to achieve 'value alignment' in GenAI systems
  • demonstrated excellent capacity and potential for research.

Applicants need not have a strong STEM background to apply. An openness to learning relevant methodologies and contributing to a vibrant cross-disciplinary team is more important than specific technical skills. We particularly encourage women and candidates from under-represented groups to apply.

How to apply

Apply for this scholarship at the same time you apply for admission to a QUT Doctor of Philosophy.

  • The first step is to email Dr Aaron Snoswell as soon as possible, detailing your academic and research background, your motivation to research in this field, and your interest in this scholarship. Include your CV, your full academic transcript, and details of three referees (email and contact number).
  • Shortlisted applicants will be invited to an online interview with members of the supervisory team.
  • If supported to apply, you will be invited to submit an expression of interest (EOI) following the advice at how to apply for a research degree.
  • In your EOI, nominate Dr Aaron Snoswell as your proposed principal supervisor, and copy the link to this scholarship website into question 2 of the financial details section.

About the scholarship

The QUT Generative AI Lab (GenAI Lab) is a new, specialist research initiative focused on addressing the emerging social and cultural challenges and possibilities of generative AI. Staffed by a multidisciplinary team led by Distinguished Professor Jean Burgess, the lab aims to develop, disseminate and apply new sociotechnical research capabilities specific to generative AI, combining critical, technical, and externally-engaged approaches. The GenAI Lab is aligned with QUT’s Digital Media Research Centre (DMRC) and the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S).

This scholarship supports one of a cohort of three PhD students who will commence together in the lab’s first year of operation.

The successful candidate will undertake research on the values, processes, and technical systems used to achieve 'value alignment' in GenAI systems, using a combination of critical, computational, and qualitative methods. The proposed project could cover issues such as: representation, fairness, observability, and interpretability in the design and evaluation of AI reward and preference models; the apparent trade-off between helpfulness, honesty, and harmlessness in value alignment; or the institutional, geo-political, and algorithmic assumptions and arrangements that underlie and frame contemporary approaches to value alignment, such as Reinforcement Learning from Human Feedback (RLHF).

The candidate will be supervised by Dr Aaron Snoswell and Distinguished Professor Jean Burgess, and will have the opportunity to engage in a dynamic environment with members of the QUT School of Communication and the Digital Media Research Centre, as well as the national ADM+S Centre. The closing date for applications is 31 August 2024.