The role of data and automated (non-artificial intelligence [AI]) algorithmic targeting in adaptive social cash systems is gaining significance, yet few governments have leveraged AI technologies to reap their benefits. Hence, there is mounting pressure on social cash policymakers and practitioners to rapidly embrace the opportunities arising from AI applications, especially in times of crisis. While data and algorithmic targeting (non-AI and AI) are efficient in enrolling beneficiaries in emergency social cash systems, they may also pose serious challenges. Through a qualitative case study of an adaptive social cash programme in Pakistan, this research critically examines the data/algorithmic targeting process and unveils shortcomings in design, data, and algorithmic decision-making that lead to exclusionary outcomes. The study makes several contributions to the data and policy literature. First, drawing on these limitations, it offers a set of practical recommendations for greater enrolment, and hence inclusion, of beneficiaries. Second, it discusses novel opportunities that AI technologies may present in adaptive social cash systems, whilst carefully assessing the risks. Third, it proposes an organisational AI governance framework to guide the development of responsible and ethical AI practices. The study offers policy and practical implications for governments, social cash policymakers, and practitioners, providing insights into how changing targeting practices via AI technologies, under a governance framework, can direct ethical practices that positively impact beneficiaries, social cash organisations, and stakeholders.