Unique Contributions

AI diplomacy: the difficult balancing act between innovation and regulation

RELX Season 4 Episode 2

YS Chi speaks with Philippa Scarlett, RELX global head of government affairs. As a child of diplomats, Pippa has a unique background which gives her cross-cultural competency in a role that requires global thinking. She shares her insights on how different governments think about AI and the opportunities and challenges they face when leveraging AI and generative AI for their own use. What are the different governmental approaches to AI regulation? How best to approach regulation when facing such a pace of technological change? And in the absence of regulation, what roles can companies play in advancing innovation and the responsible development and use of AI?

Pippa, welcome.

Philippa Scarlett:

Thank you, YS.

YS Chi:

Yeah, before we dive in, can I ask you to provide us a bit of a personal introduction of yourself, because you have one of the most unique backgrounds and your journey up to this point has been fascinating.

Philippa Scarlett:

Well, we share a unique background YS. I am a child of diplomats, as are you. Mine in the United States government. So, I grew up all over the world. My parents joined the US Foreign Service when I was about two years old, and as a result, I've spent about half my life in the United States and the other half abroad, including Cameroon, Brazil, the Philippines, Yugoslavia… having to leave Yugoslavia when the war in Bosnia started - care of Uncle Sam, bringing us out. And then my parents went on to serve in Ireland. So, I've grown up all over and, as you shared, my role is a global one, so I feel I can bring to bear that experience and cross-cultural competency in a role that requires all members of the team to think globally.

YS Chi:

And now you are based in…

Philippa Scarlett:

Atlanta, Georgia.

YS Chi:

That's it, yeah. But you're still traveling quite a bit to go work with your teams across the globe.

Philippa Scarlett:

Yes.

YS Chi:

That’s great. When one looks at your background, it's quite interesting to see the jumping back and forth between public service and private sector. What triggers these switches?

Philippa Scarlett:

Yes, I've had kind of a non-linear career, perhaps. I think because of my parents’ public service, I've always had an interest in serving and making an impact at scale, beyond just my immediate community - and in being fortunate enough to be part of an organisation, with leadership that I admire, to do that. That has been the animating principle, I would say, in my different career moves. Yes, I've had the opportunity to serve in government, both as a career civil servant at the US Department of Justice, and then later as a political appointee, and also in the White House. But also in the private sector. Part of the reason to go to the private sector is, I think, if you want to effectuate change at scale, and you have an organisation that has principles of integrity and ethics and serving their, in this case, customers, you could actually make quite a difference.

YS Chi:

Probably quicker…

Philippa Scarlett:

Yeah, and a lot quicker. And that's an important point. Democracy takes a long time to do things appropriately. But the private sector, the largest employers, they want to make a difference. They can do so as a matter of internal policy and in what products and services they seek to develop.

YS Chi:

Well, that rich background is really wonderful for tackling the issue on hand, right? AI, both as a practising lawyer, but also having been part of a leading consumer tech company, and now the leading information tech company. As we said, the public sector is not always the most innovative instigator of emerging technologies, perhaps with the exception of one thing, which is the internet.

Philippa Scarlett:

Right.

YS Chi:

That was absolutely invented inside the Department of Defense. But outside, aside from that, government is really not the instigator of new technology. How has this been different with AI, since it's really grown up outside, and government is trying to figure out where it should stand?

Philippa Scarlett:

Yes, big distinction with the internet, as you mentioned. All of the innovation is happening, or the majority of the innovation is happening, within the private sector. There may be some exceptions in China that we could speak to but, looking at the West, all the major large language model developers and AI developers and deployers are private sector. So that's a really exciting thing, because there are no formal rules on how to leverage this technology and its use cases. And that's why it's really important that businesses think about it, not waiting for governments to decide, because it will take some time. But also to figure out what's the right thing, how best to use this technology and innovate and improve our society. So yes, I think the locus has been in the private sector and the governments, mainly in the West, are really playing catch up. They don't really understand it. They can't necessarily always attract the same talent within government as the private sector. So, there are some challenges ahead.

YS Chi:

So, one of the interesting questions is, how will government use Gen AI capabilities for its own use, not just as a regulator, but for the vast amount of services it provides to citizens…

Philippa Scarlett:

And data that it has…

YS Chi:

And data that it sits on already.

Philippa Scarlett:

So, this is a really tricky area. There's no clear answer yet, but clearly the implications for military or national security are quite different than the provision of government services, like benefits. One use case that you could imagine governments would be interested in harnessing is understanding weather patterns, and how, using these models, to protect against major storms and the like. You could see the very easy applications beyond the others that are more obvious in terms of operational efficiency. But there in the government, especially in democracies, the need for human oversight is even more pronounced. The government is answerable to the people, and the services are for the people. Therefore, machines shouldn't be making decisions, ultimately.

YS Chi:

Especially if that service is unique, without private sector competition.

Philippa Scarlett:

Correct. So, pensions and health benefits or emergency care. I think the stakes are even higher in government use of generative AI. But governments are grappling with how they themselves, as users of this new technology, should operate - what should the framework be?

YS Chi:

And in some countries, governments will try to do it themselves. Other places will see governments looking for private sector solutions for them as well.

Philippa Scarlett:

Right, and there will be partnerships too. There can be places where the private sector can help the government in its own work.

YS Chi:

Like education for example.

Philippa Scarlett:

In education for sure.

YS Chi:

And health care.

Philippa Scarlett:

Health care. I think that's another frontier.

YS Chi:

Different governments are approaching this, obviously, differently. You mentioned China being one of them. Europe is approaching it slightly differently from the US as well. Certain ownership issues as well. Can you describe for our audience, what are some of the most visible differences between these different governments that are trying to get some handle on this?

Philippa Scarlett:

The European Union with the ‘EU AI Act’ was the first out.

YS Chi:

Absolutely jumped out of the gate.

Philippa Scarlett:

As the largest jurisdiction or economy.

YS Chi:

As they did with GDPR.

Philippa Scarlett:

As they did with GDPR privacy. That is significant. Some say they're doing that because most of the major companies that are developing this technology aren't there, so it's easier to regulate. But nonetheless, they've done it, and it's quite an achievement. Now there's the process of bringing that into force, which is underway in Brussels. They've taken a very kind of hands-on and consultative, I would imagine, approach. But the flip side of that, that you will hear on this side of the Atlantic in the US, is… well, the reason why there are no AI companies in Europe, quote, unquote, is because of so much regulation. How to find that balance, to build some frameworks and structure without it being wild or totally regulated.

YS Chi:

Right. And you see that actually, within even the EU, right?

Philippa Scarlett:

Yes.

YS Chi:

You have more of the ‘let's figure out innovation’ mindset, à la France, and then the opposite, and I'm not going to name the country.

Philippa Scarlett:

Yes, for sure. I think the UK is also trying to figure out, post-Brexit, what role it could play, perhaps as an intermediary between the European model of more aggressive, shall we say, efforts at regulating versus the US which, to come back to your first question, has obviously seen very little regulation, and you would say largely by design. Colorado is an important exception - the first US state to seek to regulate artificial intelligence, which happened this year.

YS Chi:

When I visit different countries, in particular different language zones - since we're talking about large language models - I see an effort by different governments to make sure they don't fall behind by creating large language models in their own language and their own content. Do you see any part of the world that is particularly noteworthy right now in making that investment?

Philippa Scarlett:

Outside of China?

YS Chi:

Right, outside of China, and outside of the EU and US.

Philippa Scarlett:

You would know better YS, what would you say?

YS Chi:

I’m impressed with the model that is being built in the Arabic language, for example, Arabic content. I'm sure there is one in the Russian language - I'm certain of that, although I don't have access to it. I'm seeing some of the Portuguese language effort in Brazil as well. But in each country, I do see them not wanting to make this English-language or Romance-language driven and dominated. I do see that. This, then, is actually an opportunity for us. We are obviously a strong content company, but also a very strong analytics company. Tell us some of the things that you see as an opportunity for us, when we get in front of these regulators, to say, “Hey, don't go too far here. Don't go too far there.”

Philippa Scarlett:

100 percent. RELX, as a company, has been using algorithms or early versions of AI for about a decade. This is not brand new for us. And we have undertaken for ourselves to develop organising principles about the responsible development and deployment of AI, which we published a couple of years ago, ahead of regulation. I think those are important kinds of framework structures that we've built, hand in hand with our innovation. That's exciting. The other piece is where… our customers are in law and in science and in medicine, for example, and so our generative AI products are for customers that recognise the importance of veracity, of trustworthiness in the data and therefore the outputs of the generative AI. I think that uniquely positions our company. In the law, you need reliable information. That's how businesses and people's lives can be decided, likewise in health. All of the efforts that we've undertaken to curate the underlying information over many, many years, I think, put the value of our products and services at an even higher level, and they're enterprise, not consumer facing only.

YS Chi:

That's right, we are pretty much a B2B company, right?

Philippa Scarlett:

Yeah, in those areas.

YS Chi:

Yeah, we are. I think you give me an opportunity to jump to the next question that I really wanted to ask, and that is… we've set out principles of responsible use of technology, responsible use of AI and data analytics, for a long time,

Philippa Scarlett:

Yeah.

YS Chi:

How has that affected our ability to deal with these upcoming regulations, upcoming scepticisms, upcoming conflicts between the different players in this field?

Philippa Scarlett:

I think it gives us credibility because we're not just waking up to what the risks are, what our North Star is in this technology - recognising, of course, it will continue to evolve. We don't have a crystal ball for all of the future use cases, but we have undertaken and publicised that we think hard about this. Not because there's a regulation, but because that's the responsible thing to do for our customers and, ultimately, the general public. I think it really enhances our credibility that we undertook this, not because of an external expectation, but out of our own drive, and to serve our customers.

YS Chi:

Can you give us some examples of really exemplary, responsible use of AI, among the products we have now already launched?

Philippa Scarlett:

Oh gosh. Well, in the legal area, I would say, we've emphasised privacy in our search queries because obviously a lawyer is doing research to represent his or her client, and those queries may tip someone off on the strategy or particular aspects of a client. We have certain principles there to protect against that, or to maintain privacy interests. In the health space, with ClinicalKey AI, another example would be, well, transparency. It's a big thing.

YS Chi:

Yeah, it is.

Philippa Scarlett:

In the consumer facing generative AI products, for example, there are some products that seek to do this, but generally you don't know what is the basis of the summary when you query a question.

YS Chi:

It is a black box.

Philippa Scarlett:

It's a black box,

YS Chi:

Ours is not.

Philippa Scarlett:

Ours is not, and not only is it not a black box, but we empower the clinician who's using ours to find the most relevant information. It's transparent, and we help surface research or other citations that might be helpful in that clinician’s work.

YS Chi:

So, government sits now and watches everything unfold so quickly around it, and billions and billions of dollars are being invested by competing tech companies trying to be the ‘winner-take-all’ winner. What are the things that governments need to think about right now as they think of, quote, unquote, regulating it?

Philippa Scarlett:

I think the problem with regulation is that it's a moment in time, but it can't anticipate all the things that will happen in five or 10 years, and with the pace of technological change, there's a real risk to a really rigid approach. I think the strength will be in principles - a principle-based framework, with follow-up - that will enable the technology to grow, but in a way that is not harmful to the general public. It makes sense that one starts in the security area, especially where the stakes are particularly high. I think the best regulation will be one in dialogue with the developers and implementers of AI.

YS Chi:

It's a difficult balancing act.

Philippa Scarlett:

It is.

YS Chi:

You don't want to stop them from making great progress, but on the other hand, we don't want to sweep under the carpet some of the impacts that could have on humanity.

Philippa Scarlett:

That's right.

YS Chi:

One of the capping elements of our responsible use of AI principles is human oversight.

Philippa Scarlett:

Yes.

YS Chi:

How do we explain that? How do we explain the concept of human oversight, and why is it so important not to forget that piece?

Philippa Scarlett:

We serve people. We serve customers that have direct impact on human life.

YS Chi:

Yes.

Philippa Scarlett:

and property…

YS Chi:

You mean, like doctors and nurses…

Philippa Scarlett:

Doctors and nurses and lawyers.

YS Chi:

Law enforcement.

Philippa Scarlett:

Law enforcement, financial institutions. A business can thrive or not. In that case, it's really important. Data analytics can help uncover insights, but it shouldn't be the decider of the decision based on those insights. That's where the human is important. In democratic societies, human beings are the ones who are accountable to the voter, and machines are not. If we're creating these machines, there needs to be human oversight, particularly if you think about our use cases. Human oversight is critical.

YS Chi:

I don't want to create a crystal ball situation here, but if we move this time frame out a few years, how differently do you see government's role being in this rapidly changing world of AI development? Whether that's Gen AI or predictive AI, or whatever the new AI variations will be.

Philippa Scarlett:

I think there is momentum now for governments to be more involved. They've recognised both the promise and some of the deep risks. I would anticipate we will see a significant uptick in government interest and engagement on generative AI in particular. But just as soon as they do that, there'll be another technology…

YS Chi:

Of course.

Philippa Scarlett:

Right around the corner is quantum computing, and what that will mean for encryption and all kinds of other things. It's always one step behind, at least, but I would anticipate in 2025 there'll be a lot more government attention in all major markets, certainly in the West, focused on AI.

YS Chi:

Do you think this is something that will necessitate a global alliance?

Philippa Scarlett:

Yes, I think this is one of the examples where the governments and private sectors of countries that are already in alliance will be critical. We see this in the UK and its relationship with the US - a long-standing business and political and intelligence and military relationship. But I would anticipate, for example, that alliance to think about regulation in a way that aligns also with the political values of those countries.

YS Chi:

And what can we advise smaller countries who may not have the same kind of resources and advancement in technology to do? As far as governments are concerned.

Philippa Scarlett:

Well, I think this is an important space for the Global South. We know Brazil is the host of the G20 this year and also of COP next year.

YS Chi:

That’s right.

Philippa Scarlett:

And India, also an important major economy and player here. I think when we say, quote, ‘smaller countries’… I mean, India is not a smaller country.

YS Chi:

Right, nobody can say that.

Philippa Scarlett:

But in terms of where the major companies reside, I would imagine those governments will have more to say. Brazil is looking at an AI bill as we speak - obviously informed by what may be happening in the US or in Europe, but I would anticipate they will have a bigger voice. There are a lot of people.

YS Chi:

Any wish list from government on this issue over the next 12 months?

Philippa Scarlett:

I think there is a recognition that this takes special subject matter expertise that, for some of the reasons we discussed earlier, doesn't reside in government currently. So, you will see governments trying to bring that know-how in, because if they don't understand it, then what's the best way to regulate it? How is it even possible to come up with ideas in a kind of vacuum? But obviously they would like it to be effective, and so that requires some knowledge of how this technology actually works.

YS Chi:

It will indeed be interesting to see how government manages to leave the drivers of this progress as independent as possible, while certainly taking certain steps to avoid recklessness.

Philippa Scarlett:

Right.

YS Chi:

And that balancing act is just not a science. Are there any sine qua non, non-compromisable principles that you would like to leave the audience with, that we believe we need to advocate? All of us, all 35,000 of us, on this issue of generative AI?

Philippa Scarlett:

Transparency and human oversight. Those are, I think, probably the biggest keys. What are we talking about? What went into this box? And then what will happen with the outputs? Humans need to be involved. They're the ones accountable in our systems of government.

YS Chi:

Transparency and human oversight. You've heard it from Pippa.

Philippa Scarlett:

Yes.

YS Chi:

Thank you so much for joining us today, Pippa. This has been very informative, and I look forward to seeing how governments handle their participation in this journey with the private sector.

Philippa Scarlett:

Thanks so much YS.