The future of AI infrastructure isn’t being built in isolation. It’s being shaped in conversations between technologists and policymakers, in debates around openness and sovereignty, and in the quiet decisions that determine who controls the digital foundations of tomorrow. The AI Impact Summit 2026 Power Index: Leaders Driving the Next Infrastructure Era is not simply a list of influential names; it is a reflection of the people redefining how intelligence is developed, deployed, and governed at scale.
Power Index Profile:
Professor Amanda Brock, CEO, OpenUK and OpenHQ
Among the participants shaping this dialogue is Professor Amanda Brock, the dynamic CEO of OpenUK and OpenHQ, the UK’s leading organisation advocating for open technology, from software and hardware to data and collaborative standards. With more than 25 years of legal and technology experience, Brock has built OpenUK into a globally recognised voice in open innovation and digital policy, and holds multiple advisory and board roles spanning government, industry and international initiatives.
A seasoned international keynote speaker and editor of influential work on open source law and policy, she brings a rare blend of strategic vision and practical governance to the infrastructure conversation. Her voice underscores a central theme of the AI Impact Summit 2026: that the future of digital infrastructure will be shaped not only by technological capability but by inclusive governance, open collaboration, and the shared frameworks that enable trust and access at scale.
We had the opportunity to exclusively interview Prof. Amanda Brock to explore her views on AI infrastructure, openness, and the leadership required for the next digital era. Below is the full, unfiltered conversation.
Q1. In your own words, how do you define resilience in AI infrastructure for governments and large organisations and how is that different from reliability or simply uptime?
“Resilience is the pathway to success in AI. It enables nation states and enterprises to navigate the five layers of the AI stack, removing dependency on enterprises from other nations whilst taking the best innovation through collaboration. It allows respect for culture and language, enables security and trust, and ultimately will depend on a level of openness that empowers innovators who want to build and scale locally whilst enabling access for all.
It is resilience, not sovereignty, that will secure the future of middle nations in AI, and that’s not just about using words for words’ sake. It’s about understanding what is required and how it can work: where it is appropriate to rely on and work with others, and where a nation must manage its own decisions and infrastructure. Most of all, there must be transparency for the sake of trust, and access to enable empowerment.
It is open source that has enabled China to lead and influence, and collectively the middle powers can lead too. They need investment to achieve resilience, to remove dependency and to ensure that where they share and collaborate they do so on open source. That is critical, but the words ‘open source’ are not a magic spell. Success will require a real understanding of the ingredients needed in a healthy open source ecosystem, and the will to enable those ingredients with funding and support at a national as well as an enterprise level. Simply putting the funding and support in place without the understanding will lead to the wrong steps, and those missteps will hinder the ability to lead. China’s shift to an open source strategy over the last 10 years has been very exact, built with an understanding we don’t see at many government levels. That is a big part of its success.
For the middle nations, the collective power of intellect and financing, along with the ability to iteratively develop will be the key to building this resilience for each nation’s future.”
Q2. You’ve been a long-time advocate of open source as a foundational technology. From your summit observations, where is open source actually delivering resilience today, and where is it still aspirational?
“At the moment there’s a lot of lip service to open source in nations that are newly shifting their government strategy towards it. Post-Summit there has been criticism of a lack of deep understanding of AI and of risk too, but it was really apparent in conversations across the Summit that we are ready for an ontology. This will never define AI; we are at too early a stage, and there is so much evolution still to happen. It is needed to ensure that we have a common language in our discussions, and it needs to cover many things beyond open source, including the current AI Stack. When we look to conversations around openness, we need to apply openness to the underlying components in the Stack. Terms like digital public goods, open source software and open source licences already have defined meanings, and when we look at terms like civic tech and public good, rather than using them to fluff the conversation as we did at the Summit, we must use them with care.
The discussions are now approaching a stage of maturity where they are shifting to action and its impact, but we will not see the desired impact without this clarity: partly because we cannot measure what is unclear, but also because there will not be success without it. A definition that is just a legal definition will not enable the community engagement and contributor building that drive innovation. There is a broad landscape of ecosystem and infrastructure to shift towards open source, and the open source communities will be the only people equipped to support that understanding. Without that we will be in a situation of open washing and policy capture, which is relatively easy to do but will inhibit success.”
Q3. India’s AI Impact Summit emphasised inclusive access and local deployment. How does open source actually enable data sovereignty in AI, and where have you seen implementation fall short in global contexts?
“Open source enables sovereignty only when its principles are followed. That is more than a legal definition: it is about enabling collaboration, contribution, iterative development, access for all and access to the necessary tooling. Open source allows anyone to use its outputs, under a licence that permits use for any purpose within the law. That means innovators are empowered to innovate and develop iteratively. That is how DeepSeek came up with R1, building on past innovation and putting that innovation in the hands of the many, not the few. Its outputs can then be enshrined in openness for the duration, like the open protocol MCP, by being held in a neutral organisation on an open licence allowing use by all.
By taking it out of the hands of a few, open source gives anyone access, and this enables nation states and enterprises around the world to engage. It allows researchers and innovators to build. Importantly, it also allows inclusivity. Perhaps not a fashionable term in some parts of the world, but that access for all, through true open source licensing of models and open weights, is how we can deliver products for the benefit of all people, giving real access for all.
Openness means that AI can be locally trained as well as innovated on. We see this with DeepSeek R1, which was trained on Chinese data. Because good, clear instructions were opened alongside the models, Hugging Face was able to train on other data to create an open R1 within days. This is the kind of approach that can be taken by any nation state.
Software tools providing de facto governance for open source, held in foundations and commons, are normalising as the tools of AI development. Distributing them as open source enables de facto standards to be met and allows anyone to use them, making their AI outputs more likely to meet the standards society is coming to expect.
The piece that open source does not solve, and which was sometimes raised at the Summit as an open source issue, is access to infrastructure: compute, power and so on. This is a universal challenge that open source cannot fix, other than by driving towards leaner AI that uses less of both.”
Q4. Looking ahead 12–18 months, what is one concrete shift you expect to see in how open source is adopted for enterprise and public sector AI resilience?
“I expect that by Geneva in 2027 we will see a sharper focus on governance, on building open source that genuinely creates influence and power, with the funding and ecosystem to do so in place in nation states, and on establishing clearer collaboration between nation states around this. That shift in the environment for AI development is what will enable resilience.
If I am allowed a second shift, I would suggest a requirement for more openness from international providers. I think we are already seeing Nvidia go down that road.”
At The Helm
As our conversation with Professor Amanda Brock draws to a close, one theme resonates clearly: the next infrastructure era will be defined as much by values as by velocity. In a world captivated by model size and compute scale, Brock redirects attention to the foundations: governance, openness, interoperability, and long-term digital resilience.
Through her leadership at OpenUK and OpenHQ, she continues to advocate for an ecosystem where innovation is not confined to a few gatekeepers but shaped through collaborative frameworks that balance competitiveness with accountability. Her insights remind us that AI infrastructure is not just an engineering challenge; it is a societal design choice.
The AI Impact Summit 2026 Power Index is ultimately about the people steering these choices. And as this conversation illustrates, the leaders shaping the future of infrastructure are not only building systems; they are defining the principles those systems will stand on.
