DFCE 2025: Regulation, Innovation, and Friction

Last week, I had the opportunity to attend DFCE 2025, organized by AITI. At the conference, I wore two hats from work: that of a data governance enabler and that of a data architect for our effort to migrate (and re-architect) a data platform to the cloud. A lot of interesting topics were covered, generally around data governance (specifically, the Personal Data Protection Order, PDPO) and, of course, AI, with optimism mixed with cautionary reminders.
At the heart of the conference, though, is an important message: that to facilitate economic growth through digital innovation, trust is the foundation that business organizations and governing bodies need to build to attract more consumers and extend their reach to broader regional markets. And it needs to be built at the outset—while we’re still early in our digital transformation phase—using policies as the tools, though admittedly at the cost of friction.
In this post, I explore the concept of regulatory friction—which I agree is a crucial part of the foundation for a digital economy—its impact on innovation, and how we can use metadata management tools to streamline policy enforcement.
Regulatory Friction vs. the Speed of Innovation
On the drive back home after the event, I listened to a recent a16z podcast where Marc Andreessen lamented the EU’s heavy-handed regulatory approach—among other things:
Marc Andreessen: And so the startup process is a way to especially get smart young people to do ambitious things. So that’s great. It’s all fantastic. Against that is just this incredible drag by bad governments and bad policies. As you mentioned, Europe is persistently shooting itself, not just in the foot but in the other foot and in the ankle and in the knee and in the gut. And they’re just on this absolute frenzy to regulate and kill tech in Europe [and] the UK.
Eric Torenberg: And they’re proud of it.
Marc Andreessen: And they’re proud of it. Yes. The actual European line now is we quote—this is in the [Financial Times]—this is an actual quote from a European senior politician: “We know we cannot be the global leader in tech innovation, so therefore we will be the global leader in tech regulation.” And you can imagine being like a German or French tech founder and reading that just being like, “Oh God, … get to the U.S. embassy and apply for a visa as fast as possible.”
And going back just a year, in July 2024, Stratechery wrote an article aptly titled “The E.U. Goes Too Far”, where the author lamented “Europe’s data obsession”, which leads to annoying consent pop-ups, excessive personal data collection, and unnecessary friction:
… for the first time in a while, I was traveling as a tourist with my family, and thus visiting things like museums, making restaurant reservations, etc.; what stood out to me was just how much information all of these entities wanted: seemingly every entity required me to make an account, share my mailing address, often my passport information, etc., just to buy a ticket or secure a table. It felt bizarrely old-fashioned, as if services like OpenTable or Resy didn’t exist, or even niceties like “Sign In With Google”; what exactly is a museum or individual restaurant going to do with so much of my personal information - I just want to see a famous painting or grab a meal!
The friction faced by consumers, thanks to heavy-handed regulation, is the least of the problems. The more critical, second-order effect of regulation is how it either cultivates or stifles innovation and influences economic growth. Spending more time ensuring regulatory compliance means less time spent getting products to market, unless, of course, you spend more resources to make up for that time. It also means additional risks that would not otherwise exist, scaring off startups and investors. A recent report published by the European Commission highlights the lack of appetite among venture capital firms to invest in AI in the EU compared to the US and China:
The EU’s efforts in advanced technologies, such as artificial intelligence and cloud computing, are far from matching those of the US. The main instrument available to the EU, the European Innovation Council, had a budget of 256 million euros in 2024, while the US allocated more than 6 billion dollars for this purpose, including 4.1 billion from the Defense Advanced Research Projects Agency and 2 billion dollars from other related agencies.
The situation is repeated when looking at venture capital investment. In 2023, they invested about \$8 billion in venture capital in artificial intelligence in the EU, compared to \$68 billion in the U.S. and \$15 billion in China. The few companies that are creating generative AI models in Europe, such as Aleph Alpha and Mistral, need large investments to avoid losing the race to U.S. firms. However, European markets do not meet this need, pushing European firms to look outside for funding.
To get some idea of the cost of this friction from GDPR enforcement alone, you can head over to the GDPR Enforcement Tracker website1. It’s no surprise, then, as the senior EU politician is well aware, that the EU lags behind the US and China in the AI race, in many respects because the continent is the “global leader in tech regulation” rather than because it lacks talent—which, if this trend continues, may eventually become true due to brain drain. From the same report:
Excessive regulation and administrative barriers in the EU are obstacles to technology companies deciding to settle or simply stay in Europe. In fact, if between 2008 and 2021 147 unicorns were founded in Europe, i.e. companies whose valuation exceeds 1 billion dollars, 40 moved their headquarters abroad, the bulk of them to the United States.
Now, I don’t mean to undermine AITI’s—and the Brunei Government’s—good intentions behind PDPO. To the conference’s credit, this was discussed at some length in a panel discussion titled Striking the Balance: Innovation, Regulation, and Building Public Trust. Here’s the description of the panel in the conference programme:
As technology reshapes economies and societies, policy must ensure innovation delivers public value while safeguarding trust.
How do we ensure that regulatory frameworks remain agile enough to respond to fast-evolving technologies like AI and IoT? Can Brunei position itself as a model for digital trust in the region by embedding governance into innovation from the outset?
Clearly, AITI understands the need to strike a balance, and will hopefully be careful to avoid the missteps of the EU, where the economy is stagnating. And given Brunei’s current economic climate, the last thing we want is to repeat the same mistake.
So if there’s one key observation I can take away from the conference, it is that AITI—and by extension, the Minister of Transport and Info-communications and the Government of Brunei Darussalam—is aware of the general perplexity in the private sector about what PDPO asks of them, and the potentially enormous complexity and effort it might entail. Hence, the many presentations around AI and data governance throughout the conference, which I thought were well delivered and pertinent to the discussion around balancing innovation and regulation, were welcome.
Among the many presentations that stood out to me (still too many to discuss in one article) are those by DST, on how it transformed itself into a more digital company using AWS and embraced economies of speed, and by Drew & Napier—a Singaporean law firm that contributed to the PDPO—on principles of data ethics, IoT security, and a brief overview of concepts (or “Accountability Tools”, as the DPO guide calls them) such as Data Protection by Design (DPbD), Data Inventory Mapping, and Data Protection Impact Assessment (DPIA). The former boasts innovations that leverage public cloud services, while the latter emphasizes data ethics, security, and the importance of regulation as a key enabler. Again, the key question is: can we strike a good balance between innovation and regulation?
I think DST’s presentation is pertinent to the question of balance between innovation and regulation, at least to the local Bruneian audience. First and foremost, DST is familiar to everyone in Brunei—everyone who owned a mobile phone pre-UNN was most likely a DST customer—and those who stuck with DST may have observed how much it has embraced digital transformation over the last five years, perhaps much more than its competitors have, at least amongst the three salescos. After all, they have an “army of developers”, as one panel speaker put it. Secondly, and more critically, they were able to make this transformation to the cloud in spite of regulatory scrutiny from AITI, while PDPO was still being drafted as early as 2021. In fact, DST’s response to AITI’s public consultation on PDPO highlighted that the PDPO draft “should not hinder any innovation development.” I don’t see this sense of concern about PDPO potentially slowing down innovation in either Progressif’s response or Imagine’s response.

DST may own the bragging rights at the conference (and deservedly so), but the main beneficiary of the presentation here is AITI, which wants to see increasing confidence in the private sector to comply with PDPO. I’d go further and suggest that it is no coincidence that DST made such a timely presentation about its transformation into a digital organization using public cloud technology—especially as a company that holds a huge portion of Bruneians’ personal data—at the same conference where AITI announced that it is publishing its guide for appointing Data Protection Officers (DPOs). (PDPO requires an organization to assign at least one DPO to be responsible for ensuring that the organization complies with the law.)
From this perspective, at least, the conference fosters awareness, trust, and confidence among local companies in the practical feasibility of complying with PDPO—especially for those that want to innovate more by leveraging the cloud. Of course, all this is in the hope that the buildup of trust and confidence doesn’t stop there; when done right, consumers and investors should feel it too.
Reducing Friction with More Software
To be sure, friction isn’t inherently bad—we need some friction as a society to be stable. During COVID, we had to take an ART test every few days before we could enter the office; it was pretty tedious and uncomfortable, but at least everyone felt a bit safer. But what if the ART test took seconds instead of minutes, and was less intrusive to carry out, yet still offered the same (or better) accuracy? Everyone would still feel safer, but with less friction!
To that point, what wasn’t explicitly discussed at the conference is how the costs of the friction introduced by regulations like PDPO can be reduced significantly with scalable technological implementations of their requirements. Within the last decade, the rising costs and complexities incurred by regulations like GDPR in the EU and CCPA and HIPAA in the US have increased the demand for technology that streamlines compliance tasks and processes.
Take, for example, the Data Inventory Mapping exercise, highlighted as essential in the presentations by both AITI and Drew & Napier. “Producing a Data Inventory Map” is listed as an “Accountability Tool” in the DPO guideline, which describes the exercise in section 4.1.2 as follows:
(a) A Data Inventory Map is a type of data mapping which identifies the data assets and the flows of data relating to a system or process.
(b) It maps out the types of personal data, the purposes of its collection, the access controls, the methods of transfer, the types of storage and the disposal method.
(c) It captures the processes by mapping the flow of data including the assigned staff involved in the management of personal data ranging from internal departments or external third-party organisations.
(d) Organisations should review and update its inventory map periodically along with any developments relating to the system or processes.
(e) Generally, the data inventory map can be implemented by project managers involved in the development of the system or process, with the assistance of the DPO
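To make this concrete, here is a minimal sketch of what one entry in a data inventory map could look like if it were captured as structured data rather than a spreadsheet row. The field names are my own illustration of items (b) and (c) above; the guideline does not prescribe any particular schema.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DataInventoryEntry:
    """One record of a hypothetical data inventory map, covering items (b) and (c)."""
    dataset: str                        # e.g. "crm.dbo.customers"
    personal_data_types: list[str]      # what personal data is held
    purpose: str                        # why it was collected
    access_controls: list[str]          # roles or groups with access
    transfer_methods: list[str]         # how it moves between systems/parties
    storage: str                        # where and how it is stored
    disposal_method: str                # how it is destroyed
    owner: str | None = None            # assigned staff or department
    third_parties: list[str] = field(default_factory=list)
    last_reviewed: date | None = None   # supports the periodic review in item (d)

# An illustrative entry (values are made up):
entry = DataInventoryEntry(
    dataset="crm.dbo.customers",
    personal_data_types=["name", "IC number", "mobile number"],
    purpose="billing and customer support",
    access_controls=["crm_admins", "support_tier_2"],
    transfer_methods=["nightly SFTP extract to billing vendor"],
    storage="SQL Server (on-premises), encrypted at rest",
    disposal_method="purged 7 years after account closure",
    owner="Customer Operations",
    last_reviewed=date(2025, 9, 1),
)
```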
If this exercise sounds resource-intensive, especially as organizations and their data grow, that is because it is indeed a taxing exercise. Data catalogs, or metadata management tools, such as DataHub, Amazon DataZone on AWS, Microsoft Purview on Azure, and Unity Catalog on Databricks—among many others—exist to make it less taxing by automating and streamlining metadata curation workflows. All share similar features for making data observable, thereby fostering the accountability and trustworthiness of the data.
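As a rough illustration of that automation, the snippet below shows what pulling table and column metadata out of a SQL Server database into DataHub can look like with its Python ingestion framework. Treat it as a sketch: the connection details are placeholders, and the exact set of recipe options varies across DataHub versions.

```python
# Sketch of programmatic metadata ingestion with DataHub (acryl-datahub).
# All connection details below are placeholders for illustration only.
import os

from datahub.ingestion.run.pipeline import Pipeline

pipeline = Pipeline.create(
    {
        "source": {
            "type": "mssql",  # SQL Server connector; the "oracle" source works similarly
            "config": {
                "host_port": "crm-db.internal:1433",
                "database": "crm",
                "username": "datahub_reader",
                "password": os.environ["MSSQL_PASSWORD"],
                "include_views": True,
            },
        },
        "sink": {
            "type": "datahub-rest",
            "config": {"server": "http://datahub-gms:8080"},
        },
    }
)
pipeline.run()
pipeline.raise_from_status()  # surface any ingestion errors
```

Scheduled to run periodically (via cron, Airflow, or similar), this keeps the catalog in step with schema changes without anyone re-keying a spreadsheet.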
For example, DataHub supports the ingestion of metadata from Oracle and SQL Server databases (among many others), so that it can (1) parse the SQL scripts in those databases to infer the schemas of tables and views, (2) infer the lineage between tables and views, even across data systems2, (3) use AI to automatically classify whether a table or view contains PII (to be vetted manually), and, if so, (4) automatically test for policy compliance, answering questions such as “Does this dataset have an owner?” or “Has this dataset been stored beyond its retention period?” Since parts (1) and (2) of this flow happen automatically, the DPO’s responsibility for carrying out task (d) of Data Inventory Mapping becomes operationally trivial.
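The policy tests in step (4) need not be exotic either. The sketch below is deliberately not DataHub’s API; it is a hypothetical, standalone check over an exported inventory, just to show the shape of the two questions above as executable rules.

```python
from datetime import date, timedelta

# Hypothetical inventory records exported from a data catalog.
# Field names are illustrative, not any real export format.
inventory = [
    {"dataset": "crm.dbo.customers", "owner": "Customer Ops",
     "contains_pii": True, "created": date(2017, 3, 1), "retention_days": 2555},
    {"dataset": "staging.tmp_leads", "owner": None,
     "contains_pii": True, "created": date(2024, 1, 15), "retention_days": 365},
]

def check_compliance(record: dict, today: date) -> list[str]:
    """Return the policy violations found for one inventory record."""
    violations = []
    if record["contains_pii"] and not record["owner"]:
        violations.append("PII dataset has no assigned owner")
    if today - record["created"] > timedelta(days=record["retention_days"]):
        violations.append("dataset kept beyond its retention period")
    return violations

for rec in inventory:
    for violation in check_compliance(rec, date.today()):
        print(f"{rec['dataset']}: {violation}")
```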
This development in data governance software isn’t surprising. What DPOs are responsible for is inherently a software problem: ensuring that the data in their organization’s software systems is secure (with DPbD), observable (with Data Mapping), and measurable (with DPIA). The data in question just happens to be personally identifiable.
If we observe the general trend of how software is eating the world, the pattern is that, most of the time, the answer to a software problem is to add more software on top! This is true for many aspects of IT: from upfront IT infrastructure procurement to using X as a service on demand; from manual testing and deployment to CI/CD; and from centralized, intermediary-dependent payment rails to decentralized value-settlement networks without intermediaries. So it is not surprising that data governance follows a similar trajectory.
Organizations with more sophisticated data engineering teams will find the exercise above much less tedious and time-consuming, as they can leverage the emerging capabilities of data systems and their integrations with data catalogs and metadata management tools. Those without such a team are doomed to resort to Excel (or worse, Microsoft Word!) as their data inventory map repository. Of course, this is a moot point for organizations whose data grows very slowly or is not large enough to necessitate such a scalable approach.
More Than Just Good Intentions
Good intentions don’t work. Mechanisms do.
– Jeff Bezos
The conference did well to underscore the importance of building trust as a strategy to pave the way for economic growth through digital transformation. Pertinent to that strategy is the challenge of balancing regulation and innovation, and being cautious of the potential second-order complexities, such as those faced in the EU. As usual, the market finds ways to even things out, and data catalogs and metadata management tools have emerged from the need to reduce regulatory friction. Looking ahead, the next conference in 2026 should aim to close this gap with more examples showcasing how regulatory compliance is achieved in the real world at scale, not just with good intentions in mind, but also supplemented with effective mechanisms that make it generally feasible.
1. This raises the question: will PDPO be as transparent, with a publicly accessible enforcement tracker?
2. In my test, I managed to extract data lineage information for data in a SQL Server database that sources its data from an upstream Oracle database, though it required some effort to write a custom transformer.