Strengthening the hidden infrastructure of public health through AI
Across low-resource healthcare systems, fragmented datasets and weak information flows quietly undermine service delivery. Drawing on IGC-supported research from across Africa and South Asia, this post argues that treating AI as infrastructure and investing in operational data systems can help governments improve data quality, embed learning into decision-making, and strengthen the hidden machinery of public health.
In many large, low-resource government hospitals, intake, prioritisation, and bed assignment often rely on paper records and informal communication. Patients may be delayed, misrouted, or effectively lost within the system, even when the resources to treat them exist. Clinicians spend time reconstructing where patients are rather than caring for them. Whether a vaccine reaches the right population, a treatment course is completed, or a referral is made – all depend on data.
How can public health systems benefit from better coordination and reporting?
Surveillance and routine reporting are formally recognised as core public health interventions. Yet in many low- and middle-income countries, these systems remain under-resourced and weakly integrated into management and decision-making. Even modest improvements can translate into faster access to specialised care and better use of scarce resources.
AI tools offer enormous potential to support basic coordination: digitised intake records, simple prioritisation aids for non-clinical staff, real-time visibility into bed availability, and feedback loops that allow the system to learn where breakdowns occur. With the rapid expansion of AI-enabled tools, governments and healthcare providers now have the opportunity to multiply this impact by strengthening the hidden infrastructure of health systems – the information flows, operational management, and feedback loops that allow resources and intent to translate into reliable care.
AI as infrastructure: Moving from tools to institutional systems
Treating AI as infrastructure requires shifting the focus from individual tools to the institutional systems that shape how information flows and decisions are made. When designed to operate within political and administrative realities, this model strengthens the everyday machinery of the state by building reliable information flows, improving implementation fidelity, and creating fast feedback loops that allow systems to adapt over time – emphasising the durability rather than novelty of AI.
In health, this approach translates into a deliberately sequenced portfolio that prioritises system strengthening over standalone innovation. Rather than starting with high-risk predictive applications, the focus is on foundations and function: strengthening health data and interoperability; deploying operational analytics that bring delivery risks to the surface early; using low-risk AI to reduce administrative friction and make data more usable; and embedding a learning architecture that turns routine measurement into continuous improvement.
Strengthening health data foundations and interoperability
The highest-return investments are often in the least glamorous layer of the system. Hardening master facility lists, improving service registries, establishing unique identifiers, and making sure the correct items are counted against the right total make routine data more interpretable and actionable. AI-enabled validation rules, anomaly detection, and automated quality checks can help surface inconsistencies early, while publishing internal data-quality metrics can shift incentives toward improvement.
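To make this concrete, the automated quality checks described above can start very simply. The sketch below is illustrative only – the field names (`doses_given`, `cohort_size`) and thresholds are assumptions, not any specific ministry system. It shows a statistical outlier flag for a facility's monthly counts, and one validation rule checking that the items counted never exceed the total they are counted against:

```python
from statistics import mean, stdev

def flag_anomalies(series, threshold=3.0):
    """Flag monthly report values that deviate sharply from a facility's
    own history. `series` is a list of monthly counts for one indicator
    at one facility (illustrative structure). Returns the indices of
    months lying more than `threshold` standard deviations from the
    mean of the remaining months."""
    flagged = []
    for i, value in enumerate(series):
        rest = series[:i] + series[i + 1:]
        if len(rest) < 2:
            continue
        m, s = mean(rest), stdev(rest)
        if s > 0 and abs(value - m) / s > threshold:
            flagged.append(i)
    return flagged

def check_consistency(record):
    """A simple validation rule: counts must be non-negative and must
    not exceed the total they are counted against (e.g., doses given
    versus the registered cohort)."""
    errors = []
    if record["doses_given"] > record["cohort_size"]:
        errors.append("doses_given exceeds cohort_size")
    if record["doses_given"] < 0:
        errors.append("negative count")
    return errors
```

Rules like these catch digit-entry errors and implausible totals at the point of reporting, which is far cheaper than reconciling them months later during planning.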
Past work has illustrated the power of this approach. In Zambia, the International Growth Centre (IGC) supported the integration of multiple administrative datasets to construct a unified view of health service availability and need. This demonstrated how fragmented datasets obscure basic allocation problems, and how relatively low-tech data integration can materially improve planning for staffing and laboratory access. The lesson was not about sophisticated modelling, but about making existing data usable for planning and management. These investments rarely attract attention, but they multiply the impact of every downstream intervention.
Using operational analytics to identify health system failures early
Many service delivery failures in healthcare are predictable. Stockouts follow patterns in consumption, ordering, and lead times. Drop-offs in care often cluster at specific points in multi-visit pathways. Performance swings may reflect reporting artefacts rather than real change. AI-supported operational analytics – including early warning systems for medicine shortages, cohort tracking for immunisation or antenatal care, and tools that decompose performance changes into real effects versus data problems – can help make these risks visible early enough to act.
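Because stockouts follow patterns in consumption and lead times, a first-pass early warning can be a short calculation rather than a complex model. The sketch below is a minimal illustration under assumed inputs (stock on hand, average daily consumption, supplier lead time); real systems would draw these from logistics data:

```python
def stockout_risk(stock_on_hand, daily_consumption, lead_time_days, buffer_days=7):
    """Estimate whether a facility will run out of a medicine before a
    replacement order could arrive. Returns (days_of_stock, at_risk),
    where at_risk is True if projected stock runs out within the
    supplier lead time plus a safety buffer."""
    if daily_consumption <= 0:
        return float("inf"), False
    days_of_stock = stock_on_hand / daily_consumption
    at_risk = days_of_stock < lead_time_days + buffer_days
    return days_of_stock, at_risk
```

The value of even this crude rule is timing: it turns a stockout from a crisis discovered at the dispensing window into a reorder decision made a week or more in advance.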
Evidence from IGC-supported work reinforces this point: In Mozambique, the IGC partnered with the Ministry of Health to evaluate appointment scheduling in antenatal care clinics, collecting detailed administrative and time-use data to show that simple changes in patient flow reduced waiting times by over 100 minutes and increased completion of recommended visits. The project demonstrated that many delivery failures are managerial rather than clinical, and that routine operational data – when used well – can directly shift outcomes.
Reducing administrative burden through AI-enabled automation
Another high-impact application of AI involves reducing administrative friction. Automated coding and standardisation of records, structured extraction from unstructured supervision notes or referral forms, and summarisation and retrieval of guidelines or circulars can significantly reduce the time staff spend producing and searching for information. These tools are valuable because they make decision-relevant information easier to access within existing workflows. Crucially, they can be deployed with human oversight and clear accuracy benchmarks, keeping risks manageable while building trust. These low-risk, high-value applications are an ideal starting point for increasing the visibility and appetite for applied AI in government systems.
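As a toy illustration of what "structured extraction" means in practice, the sketch below pulls two fields out of a free-text referral note using simple patterns. Real deployments would use a language model with human review, and the field names and phrasing here are assumptions for demonstration only – the point is the shape of the output: unstructured text in, queryable fields out.

```python
import re

def extract_referral_fields(note):
    """Toy rule-based extraction of structured fields from a free-text
    referral note. Field names and patterns are illustrative."""
    fields = {}
    m = re.search(r"referred to ([A-Za-z ]+?)(?: for|\.|,|$)", note, re.IGNORECASE)
    if m:
        fields["destination"] = m.group(1).strip()
    m = re.search(r"\bfor\b ([a-z ]+?)(?:\.|,|$)", note, re.IGNORECASE)
    if m:
        fields["reason"] = m.group(1).strip()
    return fields
```

Once referral reasons and destinations are structured, managers can count them, spot overloaded facilities, and audit completeness – none of which is possible while the information sits in prose.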
IGC-supported research has also invested in improving how health events are measured, particularly where standard surveys are costly or inaccurate. In India, researchers developed and piloted the Post Health Event Survey (PHES), combining frequent phone screening with targeted follow-ups to capture rare but costly health shocks with less recall error and lower cost. This innovation showed that better data pipelines, not just larger samples, can substantially improve the evidence base for health insurance policy. Similarly, evaluations of employer-sponsored health insurance in Bangladesh combined administrative claims data with household surveys to show how insurance affected behaviour even when financial protection gains were modest.

Changing public health behaviour through embedded learning
Many important public health decisions – uptake, completion, wastage, targeting – are adaptive rather than technical. They require rapid feedback loops, not just retrospective evaluation. AI can support this by lowering the cost of learning: instrumenting pilots, tracking outcomes, and documenting decision histories that record what the system showed, what decision changed, and what outcome moved. Without this architecture, digitisation risks producing more data without better decisions.
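The "decision history" described above need not be elaborate: it is essentially an append-only record linking what the data showed, what decision changed, and what outcome moved. The sketch below assumes a minimal structure of this kind; the field names are illustrative, not a prescribed schema:

```python
from datetime import datetime, timezone

decision_log = []

def record_decision(signal, decision, outcome=None):
    """Append one entry to an append-only decision history: the signal
    the system surfaced, the decision it prompted, and (filled in
    later) the outcome that moved. Returns the entry for convenience."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "signal": signal,
        "decision": decision,
        "outcome": outcome,
    }
    decision_log.append(entry)
    return entry
```

Kept consistently, even a log this simple lets a ministry ask, months later, which signals actually changed decisions and which decisions actually changed outcomes – the core of a learning loop.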
Across the portfolio of IGC-supported health research, data matters most when it feeds into incentives and learning loops. In Zambia, work on recruiting and managing frontline health workers demonstrated how incentive design affects who enters the system and how they perform, with downstream effects on maternal and child health.
What does this mean for policymakers investing in AI for health?
For governments and development partners, framing AI as infrastructure opens a practical path from insight to action. First, AI interventions in the health sector should be deliberately biased toward system strengthening. Investments in data foundations, interoperability, and operational analytics may yield higher returns than isolated pilots.
Second, ambition should be sequenced. Low-risk, internal applications that raise productivity and visibility can build the foundations for more advanced analytics over time. Third, learning must be embedded from the start. Clear scaling criteria, stopping rules, and documentation of how evidence informs decisions are essential for moving from pilots to impact.
By embedding capability, backing it with reusable tools, and linking evidence to decision-making, AI offers public health partners a strategic and practical path to make the invisible work of public health more visible – and more effective.