Finland AI Snapshot: Governing Innovation in Practice
Finland has emerged as one of Europe’s most referenced examples of responsible artificial intelligence, not only for its technological capabilities, but for the governance structures that underpin them. With its national AI strategy launched as early as 2017, the country has taken a deliberate path: treating trust, transparency, and accountability not as constraints on innovation, but as its foundation.
From the development of public AI registers and coordinated national programmes, to city-level experimentation in places like Helsinki, Tampere, and Turku, Finland offers a model where deployment is closely tied to institutional capacity and public legitimacy.
At the center of this evolving landscape are leaders working to ensure that AI is not only advanced, but aligned with societal values. Suvianna Grecu, Founder of the AI for Change Foundation, is one of those voices, advocating for a shift from reactive adoption to intentional, mission-driven leadership, where AI is shaped not just by what is possible, but by what is responsible.
Through the AI for Change Foundation you work at the intersection of technology, policy, and business. How would you describe the role of the Finnish innovation ecosystem in shaping responsible AI today?
Finland’s contribution to responsible AI is often misread as purely technical. What makes it interesting from a governance perspective is the underlying assumption: that trust is infrastructure. Finland launched its national AI strategy in 2017, but what followed wasn’t just a race to deploy. It was a sustained effort to build the institutional conditions under which deployment could be legitimate. The Finnish Center for Artificial Intelligence (FCAI), the AuroraAI programme, the national AI register: these reflect a culture that treats public accountability as a prerequisite, not an afterthought.
Through the AI for Change Foundation, I work with organisations that are trying to make that same shift, from AI as a competitive tool to AI as a governed system. Finland is a useful reference point precisely because it shows that ambition and accountability are not in tension; they require each other.
Finland is often recognized for its strong digital infrastructure and innovation culture. What are some examples of Finnish cities currently experimenting with AI in practical ways?
Helsinki is probably the most instructive example in Europe, not because it is the most ambitious, but because of how deliberately it approaches experimentation. The city runs an Experimental Accelerator programme: rapid three-month sprints in which civil servants propose and test AI use cases with small budgets. Experiments have included AI-assisted support for jobseekers, shift planning in social and healthcare services, and machine learning tools for analysing pedestrian crossing deterioration from aerial images. Crucially, the code for one of the pedestrian crossing tools was published on GitHub for other cities to use, a small detail that signals a genuinely open approach to urban AI.
Helsinki also co-launched, alongside Amsterdam, what are considered the world’s first public AI registers: a transparent window into the AI systems the cities use, built on principles of responsibility, transparency, and security. More recently, Helsinki introduced an AI-powered chatbot to assist residents in navigating rental housing services, providing multilingual support around the clock.
Beyond Helsinki, Tampere and Turku have developed dedicated AI hubs through Business Finland’s AI Business programme, and Tampere runs its own City Lab for urban AI experimentation. These efforts are methodical, which is what makes them replicable.
Your initiatives bring together businesses, public institutions, and researchers. How important is this type of cross-sector collaboration when cities begin working with AI?
It is not just important; it is the variable that determines whether AI in cities actually serves citizens or serves procurement budgets. The failure mode I see repeatedly is when cities acquire AI systems built entirely outside their institutional context, with no shared ownership of the problem definition or the risk framework. The technology works in a narrow technical sense but fails to integrate into how decisions are actually made, how staff actually work, or how residents actually experience services.
Cross-sector collaboration changes the dynamic because it forces the conversation about purpose earlier. When researchers, public administrators, and business actors are at the table together from the beginning, the questions shift from “what can this system do?” to “what should it do, and who is accountable when it doesn’t?” That second set of questions is where governance lives. Finland’s FCAI model (connecting academic AI research with public-sector actors and industry) is a structural example of this. But structure alone is not sufficient. What matters is whether the collaboration includes genuine challenge, not just coordination.
Across Finland, what are some recent policies, public programmes, or city initiatives that are helping municipalities experiment with AI or data-driven services?
Several layers are worth noting. At the national level, in October 2024, Finland’s Ministry of Finance formed a cooperation group to unify generative AI pilot projects across ministries, share best practices, and build on earlier AI community work established in 2023. This kind of coordination infrastructure is what prevents AI experimentation from fragmenting into isolated silos.
Finland is also in the process of aligning national legislation with the EU AI Act, which entered into force in August 2024, and has drafted a new Act on the Supervision of Certain AI Systems expected to take effect in 2025. This matters for municipalities because it establishes clearer accountability frameworks for high-risk AI in public services.
On the research and talent side, the Finnish Ministry of Education and Culture committed €10 million per year from 2025 to 2028 for a national ELLIS Institute, an AI research hub within the European Laboratory for Learning and Intelligent Systems. It is a signal that Finland is investing in AI capacity as a long-term structural priority, not just a short-term response to market trends.
Looking ahead, what are the next priorities for the AI for Change Foundation, and how do you see cities or urban innovation actors participating in the initiatives you are developing?
Our focus is moving from awareness to architecture. There is no shortage of conversation about ethical AI or responsible deployment; what is missing are the structural mechanisms that make accountability real rather than rhetorical. That means human-in-the-loop requirements built into procurement standards, not just ethics guidelines. It means personal liability frameworks for AI harms, not just corporate responsibility statements. And it means building the capacity of institutions (cities included) to be intelligent buyers and governors of AI, not just passive recipients of it.
Cities are particularly interesting to me as actors right now because they are close enough to citizens to feel the consequences of bad AI deployment, but they often lack the legal frameworks or institutional capacity to push back on vendors or set meaningful conditions. Part of what the Foundation is building is exactly that: tools, frameworks, and cross-sector networks that give urban institutions, and the people inside them, more agency in how AI enters their operations.
I see urban innovation actors as essential partners in this, not as an audience. The most important conversations are not happening at Davos; they are happening in city procurement offices, in hospital IT departments, and in municipal councils debating what data they will and won’t share. That is where the governance gap is largest and where the work matters most.




