Applications Invited for the Winter 2025 FIG Fellowship

Organization: Future Impact Group (FIG)

Apply By: 19 Oct 2025

About the Organization

Most opportunities to work on the world’s most pressing problems are full-time, in-person roles in just a handful of cities, and it can be difficult to choose which problems to prioritise. We believe there are many talented would-be researchers with the desire to have a positive impact, so we’ve decided to make it easier for people to work on these problems part-time and remote-first.

Since starting in 2023, we’ve supported high-quality work on a wide range of topics, from digital minds to global health. Now, we’re focusing on three priority cause areas that we believe could have an outsized effect on the long-term future.

  • AI Policy
  • Philosophy for Safe AI
  • AI Sentience

About the Fellowship  

Our flagship program offers applicants the chance to work as research associates on specific projects, supervised by experienced leads. Associates dedicate 8+ hours per week to crucial topics in AI governance, technical AI safety, and digital sentience, gaining valuable research experience and building lasting professional networks.

FIG provides ongoing support, including co-working sessions, issue troubleshooting, and career guidance. The program features opening and closing events, networking opportunities, research sprints, and guest speakers from key cause areas.

Fellowship Projects:

  • AI Policy

AI Policy projects conduct robust, multidisciplinary research to inform governments’ responses to developments in AI.

In the next few months, we will work on:

Policy & Governance: projects in shaping rules, standards and institutions around AI on a national and international scale across private and public sectors.

Economy, Ethics & Society: projects on managing the effects of AI on economies, societies and power structures.

  • Philosophy for Safe AI

Philosophy for Safe AI projects use tools and concepts from academic philosophy to inform our approach to advanced AI.

In the next few months, we will work on:

Technical AI Safety: projects in LLM reward-seeking behaviour, definitions of cooperative artificial intelligence, and LLM interpretability.

Philosophical Fundamentals of AI Safety: projects in conceptual approaches to coexistence with advanced AI, and how AI agents make decisions under uncertainty.

  • AI Sentience

AI Sentience projects combine research and philosophy to investigate ethical theories, models of consciousness, and the welfare of artificial agents.

In the next few months, we will work on:

Governance of AI Sentience: projects in research ethics and best practices for AI welfare, constructing reliable welfare evaluations, and more.

Foundational AI Sentience Research: projects in models of consciousness, eliciting preferences from LLMs, individuating digital minds, and evaluating normative competence.

How to Apply

Applications close at midnight (Anywhere on Earth) on 19 October 2025.

You can submit an expression of interest to our part-time, remote-first, 12-week fellowship here.

For more information, please check the link.
