Digital Public Infrastructure Through an Open Government Lens
Posted: 11 February, 2026 | Author: AfricLaw | Filed under: Hlengiwe Dube | Tags: access, accountability, AI registers, algorithmic bias, Algorithmic Impact Assessment (AIA) framework, automated social protection, data exchange platforms, Democratic Legitimacy, digital identity systems, Digital Public Infrastructure, digital service delivery, digital transformation, DPI, e-government, government reform, inclusion, oversight, public participation, Public Trust, transparency
Author: Hlengiwe Dube
Senior digital rights and policy expert
Abstract
Digital Public Infrastructure (DPI) is rapidly being deployed worldwide, yet its governance is a significant blind spot in open government reform. While governments focus on digital service delivery, the underlying systems that determine access, inclusion, and fairness often operate without transparency, accountability, or public participation. This article argues that DPI must be governed through open government principles to prevent systemic harms such as exclusion from essential services, algorithmic bias, and eroded public trust, and to realize its potential for public good. Using global cases, it shows how integrating transparency, oversight, and participatory design into DPI can turn digital infrastructure into a force for democratic accountability, rather than hidden control. Finally, the article calls for explicit inclusion of DPI governance in frameworks like the Open Government Partnership, ensuring that digital transformation encodes democratic values, not just technical efficiency, into the infrastructure of the state.

Introduction
Digital Public Infrastructure (DPI) has quickly become a central reference point in global discussions on digital transformation. Governments, development institutions, civil society organisations, and the private sector increasingly frame DPI as a pathway to delivering digital services at scale and accelerating national development goals. Countries such as Brazil, India, Singapore, Australia, and Thailand are frequently cited as examples of how shared digital systems, ranging from identity and payments to data exchange, can underpin inclusive and innovative digital economies. Yet as interest in DPI grows, so too does ambiguity about what the term actually encompasses. The concept is evolving rapidly, shaped by diverse national experiments, institutional arrangements, and political contexts. This fluidity raises an important question for countries embarking on digital transformation: what distinguishes DPI from earlier waves of e-government and digitisation, and how can it be designed and governed to deliver public value rather than simply scale technology?
These questions are particularly salient in the African context. As governments across the continent digitise core public functions, DPI is becoming central to how states deliver services, manage data, and exercise authority. From digital identity systems and data exchange platforms to automated social protection and electoral technologies, DPI increasingly shapes citizens’ everyday interactions with the state. As DPI expands, it raises important governance questions about how digital systems can deliver public value in ways that promote transparency and inclusion, rather than entrenching secrecy or abuse of power. Open government values such as transparency, accountability, participation, and inclusion provide a practical framework for managing these risks and grounding DPI in democratic governance.
Why Open Government Matters for DPI
Digital Public Infrastructure is often framed primarily as a technical or efficiency-driven reform, an opportunity to streamline services, reduce costs, and accelerate digitisation. However, since DPI sits at the heart of public administration, it is fundamentally a governance project. Choices about system design, data flows, interoperability, and access are not neutral. They directly affect rights, equality, and democratic accountability.
When governed through open government principles, DPI can move beyond efficiency to actively support democratic outcomes. Well-governed DPI can:
- Enhance transparency. Clear and accessible information about how digital systems are designed, operated, and used enables citizens, civil society, academics, journalists, and oversight bodies to scrutinise them. For example, India’s Aadhaar system publishes extensive documentation on its architecture and authentication processes, allowing independent research and public debate about privacy and security implications. Similarly, open Application Programming Interfaces (APIs) in Kenya’s Huduma Namba digital identity initiative could provide opportunities for external auditing if properly implemented. However, the system has faced significant criticism for a lack of transparency regarding data handling, procurement, and legal frameworks.
- Establish accountability mechanisms. Clearly defined lines of institutional responsibility, coupled with oversight and redress pathways, ensure that errors or harms can be addressed. In Brazil, the Cadastro Único social registry links social protection benefits to a single database, but also incorporates grievance mechanisms for those wrongly excluded, demonstrating how accountability structures can mitigate the human costs of errors in large-scale systems. In Nigeria, the BVN/SIM registration scheme, a DPI initiative that combines financial identity with telecoms identity, raises significant governance questions; its redress processes are often bureaucratic and ineffective.
- Enable meaningful participation. Open government frameworks encourage stakeholders, including civil society, affected communities, and technical experts, to actively shape the design, implementation, and governance of digital services. In Thailand, participatory consultations around the National e-ID project have engaged privacy advocates and civic groups, illustrating how early participation can influence policy choices and technical design. In Uganda, civil society has been engaged in some aspects of the social protection platform, but reports also cite challenges with exclusion due to digital access and verification hurdles. This is a good example of attempted participatory design, though outcomes are mixed.
- Strengthen safeguards against potential harms. DPI can inadvertently reinforce surveillance, algorithmic bias, or systemic exclusion if safeguards are weak. Open government values provide the principles to integrate protective measures into every stage of a system, including limiting unnecessary data collection and auditing algorithms used in public service allocation. South Africa’s Home Affairs National Identification System, for instance, has been critiqued for gaps in data protection and error remediation, highlighting the need for stronger preventive and oversight measures. The system is notorious for errors, such as ‘ghosts’ on the system and incorrect statuses, that lead to the denial of services. The call for safeguards and accountability is therefore a direct response to real, documented problems of exclusion.
- Build public trust. Trust is essential for citizens to adopt and rely on digital systems. If inclusivity, safety, equity, and accountability are adequately demonstrated, open government approaches can strengthen confidence in DPI. Estonia’s e-government services portal, with transparent reporting and user feedback mechanisms, shows how trust can be cultivated through consistent, accountable digital governance. Ghana’s e-Gov Portal, although a leading model, depends on sustained political will and digital literacy.
Essentially, open government values provide a practical framework to operationalise rights-based DPI governance. They move principles from paper into practice, ensuring that digital infrastructure serves the public interest rather than merely accelerating technological adoption. In contexts where DPI is expanding, these values should not be optional but central to ensuring digital systems work for people, rather than against them.
DPI and the Transparency Gap
One of the main challenges with DPI is opacity. Procurement contracts, technical architectures, data‑sharing agreements, and algorithmic systems that underpin DPI are often not proactively disclosed to the public. Instead, they are frequently hidden behind claims of national security, commercial confidentiality, or technical complexity. These transparency gaps limit public understanding and undermine democratic oversight, reducing the ability of citizens, civil society, journalists, and oversight bodies to scrutinise how these systems function and whose interests they serve. In many cases, DPI systems are developed and deployed with minimal proactive disclosure about who built them, how data flows between agencies, how decisions are made algorithmically, and how personal data is governed. Without such visibility, technical complexity becomes a shield that insulates public infrastructure from accountability, even when real‑world harms, like exclusion errors or privacy breaches, emerge.
For example, Uganda’s biometric voter registration and verification systems, used in recent elections, function as a form of electoral DPI underpinning political participation. While biometric technologies were introduced to improve electoral integrity, limited information has been publicly available about system procurement, vendor contracts, data retention policies, and auditability of the technology. Mexico’s experience with integrated social protection and digital registries also illustrates how opacity can emerge in public systems. Efforts to consolidate and digitalise beneficiary registries for cash transfer and social programmes highlighted challenges in transparency around eligibility criteria and data governance, as independent evaluation and clear documentation of how centralised registries are used remains limited. The absence of publicly accessible information on how data is used to determine eligibility, or how decisions are made when beneficiaries are excluded, underscores how algorithmic and data‑driven decision‑making can function as a “black box” without clear accountability or explanation to affected people. Similarly, critics of Kenya’s Huduma Namba digital ID rollout have pointed to a lack of transparency in data handling, procurement decisions, and legal frameworks, raising concerns about oversight and citizen rights. Bangladesh’s national digital identity system (Smart NID) is another example. While the system underpins voter registration, banking, SIM verification, and social service access, details about data-sharing agreements, third-party access, contractual terms, and cybersecurity safeguards are generally not proactively disclosed. Public information about who can access citizen data, how it is used, and what audit or oversight mechanisms exist is limited, highlighting a transparency gap in the governance of this critical digital infrastructure.
Open government approaches can help close this gap. Central to open government is proactive disclosure: making information about public systems available early, systematically, and in accessible forms, not only upon request. According to the OECD, proactive disclosure should include clear, complete, timely, reliable, and relevant public sector data and information that is easy to find and understand, enabling public use and reuse. Applying these open government principles to DPI means taking transparency beyond theory and integrating it into every stage of system development and operation:
- Releasing procurement documents, contracts, and vendor relationships so that oversight bodies and the public can evaluate how public funds are spent and whether vendors’ interests align with public objectives.
- Publishing technical documentation and architecture details that explain how systems work (for instance, protocols, APIs, data flows) in ways that are accessible to non‑technical stakeholders.
- Making data governance frameworks and impact assessments publicly available well before systems go live, so that privacy, security, and inclusion risks can be examined and debated.
- Disclosing audit results and performance reviews, including findings from independent auditors, civil society technical assessments, and judicial oversight.
Transparency is not about exposing sensitive personal data but about making public systems understandable and subject to scrutiny while protecting privacy. Clear, proactive information empowers citizens and watchdogs to question whether digital infrastructures are designed to serve public needs or to entrench privilege. Efforts like Ukraine’s Prozorro e-procurement platform, which publishes detailed tender and contract information, show how digital systems can open up traditionally opaque state functions. In the context of DPI, similar proactive disclosure would demystify systems, spark public debate, and enable meaningful oversight. In this sense, transparency is a precondition for accountability and democratic legitimacy, not an afterthought.
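To make proactive disclosure systematic rather than ad hoc, the items listed above can be published as structured, machine-readable records that watchdogs can aggregate and query. The sketch below illustrates the idea in Python; the schema, field names, and the example contract are all hypothetical and are not drawn from Prozorro or any real open-contracting standard.

```python
import json
from dataclasses import dataclass, field, asdict
from typing import List

# Hypothetical proactive-disclosure record for a DPI procurement.
# Field names are illustrative only, not a real disclosure schema.
@dataclass
class DisclosureRecord:
    system_name: str
    procuring_agency: str
    vendor: str
    contract_value: float
    currency: str
    data_categories: List[str] = field(default_factory=list)
    impact_assessment_url: str = ""
    audit_reports: List[str] = field(default_factory=list)

    def disclosure_gaps(self) -> List[str]:
        """Return missing disclosures as a list, so oversight bodies
        can see exactly what has not been published."""
        gaps = []
        if not self.impact_assessment_url:
            gaps.append("no published impact assessment")
        if not self.audit_reports:
            gaps.append("no independent audit reports")
        return gaps

record = DisclosureRecord(
    system_name="National Digital ID (example)",
    procuring_agency="Ministry of ICT (example)",
    vendor="ExampleVendor Ltd",
    contract_value=12_500_000.0,
    currency="USD",
    data_categories=["biometrics", "civil registry data"],
)

# Machine-readable publication: anyone can parse, compare, and archive it.
print(json.dumps(asdict(record), indent=2))
print(record.disclosure_gaps())
```

The point of the `disclosure_gaps` check is that absence of information becomes itself a visible, reportable fact, rather than something a journalist must discover by filing requests.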
Accountability and Redress in Digital Systems
As DPI becomes part of essential public services, failures are inevitable. Errors in databases, biased algorithms, or poorly designed systems can have immense consequences, such as excluding people from healthcare, social protection, or political participation. A classic example is the United Kingdom’s Post Office Horizon scandal, where a faulty accounting system used as core public infrastructure generated false financial shortfalls and was trusted over human testimony, leading to the wrongful prosecution of more than 900 sub-postmasters and devastating personal, economic, and legal harm. In India, linking subsidised food rations to biometric authentication through the Aadhaar ID system led to widespread exclusion of legitimate beneficiaries when fingerprint mismatches, network problems, errors in linking ration cards, or other technical issues prevented authentication, leaving many poor and marginalised people unable to access essential food supplies.
Similarly, in the United States, a healthcare risk-prediction algorithm in health data systems was shown to be racially biased because it relied on historical healthcare costs as a proxy for need, systematically underestimating the health needs of Black patients and diverting critical care resources away from those who needed them most. In Australia, the Robodebt scandal involved an automated debt assessment system that used flawed data-matching and averaging methods to generate welfare overpayment notices, wrongfully accusing hundreds of thousands of people of owing money and inflicting widespread financial distress, anxiety, and harm on vulnerable citizens. Brazil’s Cadastro Único social registry is another example where data quality and entry errors led to the unjustified exclusion or suspension of benefits for families who should have been eligible for social programmes, demonstrating the human costs of unaccountable and poorly managed digital systems.
In the political participation and electoral context, there are well‑documented examples of digital and electronic voting systems facing serious failures: security flaws, openings for manipulation, and procedural problems that have undermined confidence, required votes to be annulled or systems to be abandoned, and highlighted the challenges of ensuring secure, verifiable digital voting. For instance, Finland’s online voting pilot in 2008 faced significant flaws that led the Supreme Administrative Court to annul the results and call for a re‑vote using paper ballots. Estonia is one of the few countries to adopt nationwide online voting, but independent experts have critiqued the system for security and auditability problems, pointing to vulnerabilities to attack or tampering and to transparency shortcomings that stand in the way of full confidence in results. A live internet voting system used in New South Wales state elections was found to have severe security vulnerabilities, including flaws that could be exploited to manipulate votes, violate ballot privacy, and compromise integrity; these issues were not detected by election authorities before votes were cast. At least one parliamentary seat was decided by a smaller margin than the number of votes cast while the system was vulnerable.
Despite these risks, accountability mechanisms often lag behind technological deployment. Digital systems are rolled out rapidly, yet oversight structures, grievance redress processes, and independent audits are frequently underdeveloped or under-resourced. Without clear lines of responsibility, citizens often have little ability to challenge decisions, correct errors, or seek remedies, leaving the system effectively unaccountable by default.
Applying open government principles to DPI provides a pathway to bridge these accountability gaps. Key approaches include:
- Clearly defining responsibility. Every DPI system should have designated authorities accountable for decisions, errors, and policy implementation. This includes clarifying the roles of government agencies, vendors, and third-party partners in operating, maintaining, and updating systems.
- Empowering independent oversight bodies. Data protection authorities, auditors, ombuds institutions, or parliamentary committees should be equipped to monitor system performance, audit algorithms, and investigate complaints effectively. In India, the Supreme Court’s 2018 ruling on Aadhaar emphasised that individuals must have mechanisms to seek redress for misuse of personal data, highlighting how judicial oversight can reinforce accountability.
- Creating accessible redress mechanisms. Citizens should be able to correct errors in their records, contest wrongful exclusion, and report misuse of their data without undue burden. Nigeria’s Bank Verification Number (BVN) system, linked to SIM registration, has complaint hotlines and correction procedures to address issues such as mismatched identities or blocked accounts. India’s Centralized Public Grievance Redress and Monitoring System (CPGRAMS) provides a 24/7 online platform for citizens to lodge grievances with government authorities, track their status with a unique registration ID, and appeal unsatisfactory outcomes, demonstrating how institutionalized procedural channels can support accountability and remedy.
- Auditing algorithms and processes. Automated decision-making tools used in public service allocation must be independently evaluated for bias, fairness, and compliance with legal standards. The National Audit Office of Sweden reviewed several automated decision‑making systems used by government agencies to assess their effectiveness and efficiency and whether they jeopardized legal certainty, a concrete example of a public sector audit institution evaluating automated systems for legality and soundness. While not an audit of a single system, New York City’s algorithmic accountability law requires municipal agencies to audit high‑impact automated decision systems for fairness, equity, and legality, and to report the results, creating a formal structure for algorithmic oversight and independent review. The Netherlands Court of Audit conducts annual audits of government algorithms using a formal framework that assesses governance, data quality, privacy compliance, and IT controls, identifying risks such as GDPR non‑compliance and recommending checks for bias and accountability. In Brazil, audit initiatives have been used in social protection systems to reduce exclusion errors and prevent discrimination, illustrating how structured oversight can identify problems and prompt corrective action.
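One basic building block of the algorithm audits described above is a group-level disparity check: compute the rate at which an automated system approves people in each demographic group and flag large gaps. The sketch below is a minimal illustration in Python, using synthetic data and the "four-fifths" rule of thumb from employment-discrimination practice as the threshold; it is not the methodology of any audit office cited here.

```python
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(rates):
    """Ratio of the lowest to the highest group rate (1.0 = parity)."""
    return min(rates.values()) / max(rates.values())

# Synthetic decisions from a hypothetical eligibility system:
# group A is approved 80% of the time, group B only 50%.
decisions = ([("A", True)] * 80 + [("A", False)] * 20
             + [("B", True)] * 50 + [("B", False)] * 50)

rates = selection_rates(decisions)
ratio = disparate_impact(rates)
print(rates)        # {'A': 0.8, 'B': 0.5}
print(ratio)        # 0.625 -> below the 0.8 "four-fifths" threshold
print(ratio < 0.8)  # True -> flag the system for independent review
```

A real audit would go much further, examining proxies in the input data, error rates per group, and legal compliance, but even a check this simple makes disparities measurable and reportable rather than anecdotal.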
Integrating these mechanisms can move DPI beyond mere technological deployment toward responsible governance, where citizens can rely on digital systems without fear of exclusion or injustice. Accountability and redress are essential safeguards that transform DPI from a technical tool into a democratic instrument capable of serving the public interest.
Participation Beyond Consultation
Too often, public participation in digital reforms is characterised by late‑stage consultations or technical briefings that happen after core decisions about system architecture, standards, and implementation have already been made. Meaningful participation requires earlier and extensive engagement, particularly with the communities and civil society organisations most likely to be affected by DPI systems. Open government frameworks emphasise participation as an ongoing, iterative process, not a once‑off exercise confined to a comment period. For DPI, this means involving civil society, technologists, marginalised groups, and rights advocates throughout the lifecycle of digital systems, including problem definition, design, implementation, evaluation, and governance.
For example, in Sierra Leone’s Digital Economy Programme, workshops over 18 months facilitated by the Digital Impact Alliance (DIAL) brought together over 220 stakeholders, including around 25 civil society organisations, to help shape key national digital governance strategies and policy frameworks. This inclusive process allowed CSOs to bring rights‑based perspectives into policies on user consent, interoperability, and accountability long before systems were built or deployed.
Similarly, discussions from the East Africa Internet Governance Forum in 2025 spotlighted how civil society is often brought in too late, mainly to react to problems after digital identity and payments systems are already live, rather than co‑creating solutions from the outset. Participants noted that governments often invite CSOs into the conversation only when something goes wrong, such as concerns about exclusion, privacy, or cost, rather than during earlier design and strategy phases.
Beyond formal policy dialogues, other civic tech and engagement initiatives show what meaningful participation can look like. Platforms such as Uchaguzi in Kenya have mobilised citizens to monitor electoral processes collaboratively with civil society and other stakeholders, demonstrating how digital tools can enhance democratic participation when communities are empowered to contribute and monitor public systems.
Research also underscores how civil society’s role in DPI design is still underdeveloped in Africa. Although many organisations are aware of DPI’s implications for rights and access, they are often excluded from early design, governance, and oversight stages, reducing their role to reactive critique rather than co‑creation.
Therefore, meaningful participation is not just about adding voices but about redistributing power in the governance of digital systems. It involves creating spaces for sustained dialogue, building technical literacy so diverse groups can engage effectively, and establishing mechanisms that ensure community feedback genuinely influences decisions. Embedding participation at every stage makes DPI systems more responsive to real needs, better suited to local contexts, and more accountable to the people they are meant to serve.
Safeguards, Public Trust, and Democratic Legitimacy
Public trust is the foundation on which DPI succeeds or fails. DPI systems require people to share sensitive personal data, rely on automated decisions, and interact with digital interfaces to access essential services. Without strong safeguards, these systems can exacerbate mistrust, particularly in contexts where states have histories of surveillance, exclusion, or weak service delivery. Safeguards are institutional, legal, and participatory mechanisms that ensure DPI respects rights and operates in the public interest. They should not be viewed as merely technical fixes. Some of the core safeguards include data protection laws, purpose limitation, proportionality, security standards, human oversight, and meaningful consent. When these are weak or absent, DPI risks becoming a tool for control rather than empowerment.
Several African experiences illustrate how the absence of safeguards undermines trust. In South Africa, cybersecurity failures at municipal level, such as a major ransomware attack on KwaDukuza’s systems, disrupted access to essential services and exposed poor data governance practices, including a lack of clear data protection protocols and transparency with affected residents, further eroding trust in local digital public systems.
Kenya’s Huduma Namba digital ID initiative faced sustained legal challenges and public resistance due to unclear data protection arrangements, weak consent mechanisms, and uncertainty about how data would be shared across agencies. Civil society concerns were validated when courts halted aspects of the rollout, citing inadequate safeguards and the risk of rights violations. The controversy did lasting damage to public confidence, even as the state sought to reposition the system.
By contrast, countries that foreground safeguards early tend to enjoy greater legitimacy. Estonia’s digital government ecosystem is often cited not because it is flawless but because it embeds transparency and control into system design. Through tools like the data tracker, Estonian citizens can log into the national portal and see when, by which institution, and for what purpose their personal data were accessed, creating a verifiable record rather than blind faith in technology. Strong legal protections enforced by an independent Data Protection Inspectorate give individuals real oversight and avenues to challenge misuse of personal data. These transparency and accountability mechanisms, backed by secure infrastructure and clear laws, are essential in sustaining public trust in digital public services.
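The mechanism behind a tool like Estonia's data tracker is conceptually simple: every read of a citizen's record is logged with who accessed it, when, and why, and citizens can query the log for their own data. The Python sketch below illustrates that pattern; the identifiers, institutions, and log format are all hypothetical and do not describe the actual Estonian implementation.

```python
import datetime

# In-memory stand-in for an append-only access log. A real system would
# use tamper-evident storage; this sketch only shows the query pattern.
ACCESS_LOG = []

def log_access(citizen_id, institution, purpose):
    """Record every read of a citizen's data: who, why, and when."""
    ACCESS_LOG.append({
        "citizen_id": citizen_id,
        "institution": institution,
        "purpose": purpose,
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    })

def my_data_accesses(citizen_id):
    """What a citizen would see in the portal: every access to their data."""
    return [e for e in ACCESS_LOG if e["citizen_id"] == citizen_id]

# Hypothetical accesses by two institutions.
log_access("C-1001", "Health Board (example)", "prescription renewal")
log_access("C-1001", "Tax Authority (example)", "income verification")
log_access("C-2002", "Health Board (example)", "vaccination record")

for entry in my_data_accesses("C-1001"):
    print(entry["institution"], "-", entry["purpose"])
```

The design choice worth noting is that transparency is granted by default: the citizen does not have to file a request to learn who looked at their data, which inverts the usual burden of proof between state and individual.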
Safeguards are especially necessary for marginalised communities, who are often the most dependent on DPI-enabled services and the least able to navigate errors or abuse. In India, for instance, Aadhaar-linked welfare systems initially caused widespread exclusion when biometric authentication failed for elderly people, manual labourers, and those in rural areas. Subsequent policy changes, such as allowing alternative authentication methods and decoupling Aadhaar from certain services, highlight how safeguards can be retrofitted, but also how costly it is to introduce them too late.
Open government approaches strengthen safeguards by making them visible and contestable. Publishing privacy impact assessments, algorithmic audits, system evaluations, and incident reports enables public debate and course correction. Engaging civil society, technical experts, and affected communities in DPI design and oversight helps identify risks that governments or vendors may overlook, making participation itself a safeguard. The OECD highlights multiple countries and cities that have adopted algorithm registers or related transparency mechanisms as part of broader open government and algorithm accountability strategies. These registers make information about government AI systems public, allowing civil society, journalists, and experts to review, evaluate, and raise concerns. For example, the Government of Canada’s Algorithmic Impact Assessment (AIA) framework requires public sector bodies to assess and publish risk evaluations of automated decision systems through consultation with stakeholders, enabling scrutiny of how safeguards are integrated. Similarly, the AI registers in Amsterdam and Helsinki provide publicly accessible details about municipal algorithmic systems, giving citizens a clear view of which systems are used, what data they rely on, and the decisions they affect. By making systems and assessments transparent, these mechanisms support accountability, debate, and informed oversight, turning participation itself into a functional safeguard.
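An algorithm register of the kind described above is, at its core, a public inventory of automated systems with enough metadata for outsiders to ask informed questions. The toy sketch below shows what such an inventory and a typical watchdog query might look like in Python; the schema and entries are purely illustrative and are not the actual Amsterdam, Helsinki, or Canadian AIA formats.

```python
# A toy "algorithm register" in the spirit of the Amsterdam and Helsinki
# AI registers. All fields, systems, and contacts are hypothetical.
REGISTER = [
    {
        "system": "Benefit eligibility scoring (example)",
        "agency": "Social Welfare Department (example)",
        "purpose": "Prioritise applications for manual review",
        "data_sources": ["social registry", "income records"],
        "automated_decision": False,  # a human makes the final call
        "risk_level": "high",
        "contact": "oversight@example.gov",
    },
    {
        "system": "Service enquiry chatbot (example)",
        "agency": "Citizen Services (example)",
        "purpose": "Answer routine questions",
        "data_sources": ["FAQ corpus"],
        "automated_decision": False,
        "risk_level": "low",
        "contact": "oversight@example.gov",
    },
]

def high_risk_systems(register):
    """A first query a journalist or auditor might run: which systems
    are self-assessed as high risk?"""
    return [e["system"] for e in register if e["risk_level"] == "high"]

print(high_risk_systems(REGISTER))
```

Even this minimal structure changes the oversight dynamic: civil society can enumerate what exists, spot systems missing from the register, and direct scrutiny at the highest-risk entries first.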
Ultimately, trust cannot be engineered through public relations or technical sophistication alone. It is built through predictable mechanisms, enforceable rights, and demonstrated accountability. When safeguards are effective, DPI can promote inclusion, efficiency, and innovation; when they are weak, even well-intentioned systems face resistance, litigation, and long-term legitimacy deficits. Unless democratic safeguards are embedded across the entire DPI lifecycle, from design and procurement to deployment and oversight, digital public infrastructure risks reproducing existing power asymmetries instead of transforming them in service of the public good.
Building trust and embedding safeguards are not just aspirational; they require concrete, enforceable practices. The next step is to translate these principles into actionable governance measures that ensure DPI serves citizens equitably, transparently, and accountably.
The DPI Blind Spot in Open Government
Digital governance is now one of the fastest-growing areas of reform in the Open Government Partnership (OGP). Governments are committing to open data portals, digital service delivery, and e-government strategies at scale. However, there is a striking omission in many of these efforts: the Digital Public Infrastructure itself, the invisible foundational systems that make these services work effectively, securely, and for everyone. DPI is not the services or data themselves, but the underlying platforms that enable them, and this is the missing piece. This gap matters. When DPI is treated as a neutral technical backbone rather than a site of power, open government reforms risk stopping at the interface level, while the deeper infrastructures that shape identity, data flows, eligibility, and political participation remain opaque and unaccountable. In practice, this means transparency without visibility into decision-making systems, participation without influence over design choices, and accountability without meaningful redress.
Reframing DPI as an open government priority offers a way forward. OGP provides an existing platform where governments, civil society, and technical actors can jointly address the governance questions that DPI raises: who controls public data, how automated systems make decisions, what safeguards prevent exclusion and abuse, and how people can challenge harmful outcomes. Bringing DPI explicitly into OGP commitments would move digital reform beyond innovation narratives toward democratic accountability at the infrastructural level.
This is not about slowing digital transformation. It is about governing it deliberately, before systems harden and inequities become integrated by design. As governments and development partners continue to invest heavily in DPI, the important policy question is no longer whether digital infrastructure will shape governance, but whose values it will encode and whose interests it will serve.
Anchoring DPI in open government principles offers a practical, rights-based pathway to ensure that the digital state strengthens trust, inclusion, and democratic oversight, instead of quietly reshaping power beyond public scrutiny.
About the Author:
Hlengiwe Dube is a senior digital rights and policy expert with extensive experience advancing rights-respecting governance for emerging technologies in Africa. She has contributed to continental digital rights norms and led multi-stakeholder initiatives on data protection, digital inclusion, and AI governance. She provides research, policy guidance, and capacity-building on digital rights and governance. She holds an MPhil in Human Rights and Democratisation in Africa.
