Unique Contributions
In each episode of Unique Contributions, we bring you closer to some of the most interesting people from around our business working on industry-shaping issues that matter. We explore how they, and we collectively as a business, create a positive impact on society through our knowledge, resources and skills. This is what we call our “unique contributions”. In this series, we explore the issue of trust: how can we build trust in data and technology and help create a world that works for everyone? Join our host YS Chi, director of corporate affairs at RELX and chairman of Elsevier, as he dives deep into conversations with some of his friends and colleagues. Thank you to our listeners for tuning in. You can check back here for a new episode every other week. This podcast is brought to you by RELX, a global provider of information-based analytics and decision tools for professional and business customers, enabling them to make better decisions, get better results and be more productive. Our purpose is to benefit society by developing products that help researchers advance scientific knowledge; doctors and nurses improve the lives of patients; lawyers promote the rule of law and achieve justice and fair results for their clients; businesses and governments prevent fraud; consumers access financial services and get fair prices on insurance; and customers learn about markets and complete transactions. Our purpose guides our actions beyond the products that we develop. It defines us as a company. Every day across RELX, our employees are inspired to undertake initiatives that make unique contributions to society and the communities in which we operate.
Responsible AI and Ethics - what it means in practice, with Kirk Borne, AI Influencer
In this second episode, YS Chi explores the subject of Artificial Intelligence and Ethics, also sometimes called Responsible AI. Like any technology gaining prominence, there is both substance and fuzziness to be found in discussions around AI. There are also many grey areas that are sometimes presented as black and white. Vijay Raghavan, chief technology officer of LexisNexis Risk Solutions, explains how we look at Responsible AI and offers a helpful framework that differentiates real-world bias from data bias or algorithmic bias.
Also in this episode, we hear from our first external podcast guest, Kirk Borne, chief science officer at DataPrime and one of the world's top artificial intelligence influencers. Kirk talks about how AI has evolved and the importance of transparency and human oversight.
The Unique Contributions podcast is brought to you by RELX. Find out more about us by visiting RELX.com.
Kirk Borne: I saw it in business and sports. You know, even the recommender engines on ecommerce stores were already using machine learning to recommend products to people. And all of that just fascinated me. As a scientist who loves working with data to make discoveries, it was like being a kid in a candy store.
YS Chi: Hello, and welcome to series three of Unique Contributions, a RELX podcast where we bring you closer to some of the most interesting people from around our businesses. I'm YS Chi, and together with my guests, I'll be exploring some of the biggest issues that matter to society, and how we're working to make a difference. We have some exciting new guests lined up for this new series, so I'm very excited to dive in. Today, I'm exploring the topic of AI in business and asking the questions: how should we think about responsible AI, and what are we doing about it? Later in this episode, I'll be getting the thoughts of AI expert and influencer Kirk Borne, but my first guest this week is Vijay Raghavan, who is the EVP and Chief Technology Officer of LexisNexis Risk Solutions here at RELX. Vijay joins us from our office in Georgia to explore the perennial question of data analytics and ethics in AI. So Vijay, welcome back.
Vijay Raghavan: Thank you for inviting me, YS. It's a pleasure to be on your podcast.
YS Chi: So I'm going to start off with basics for our listeners who may not be as familiar with the technology lingo. There are a lot of buzzwords, right: AI, machine learning, deep learning, and so on. This huge web of terminology can be confusing at times. Can you start by defining each of these terms and explain how they are connected to one another?
Vijay Raghavan: Yes, you certainly do have terms that tend to overlap, so maybe I can define them using a few examples. Let's start with artificial intelligence. That's actually an umbrella term that loosely refers to the representation of human intelligence in machines, and there are many ways of representing AI. For example, think back to the days of Deep Blue in the late 90s. If you recall, this was the IBM computer that beat Garry Kasparov, the reigning chess champion. That was cutting-edge AI at the time, and the way it worked was that it had a library of opening chess moves and then went through a tree of rules and probabilities, up to a certain depth, to calculate the best next chess move. Deep Blue used what was called symbolic AI, or good old-fashioned AI. Today, we think of that as just one type of AI. Machine learning is another type of AI, and as such, it falls under this AI umbrella I just talked about. It's actually a set of AI techniques, and the essence of machine learning is that you train an algorithm with lots of historical data, which is where the term big data comes from. The algorithm effectively learns how to make predictions based on these large volumes of historical data, or big data. The idea is that the algorithm gets progressively better over time, because it gets trained on more and more historical data. A good example is the recommendation engine within Elsevier's ScienceDirect, because it recommends articles related to what the subscriber is reading, based on how it has seen other, similar subscribers picking and choosing similar articles. There's actually a great example of machine learning I was reading about a few days ago. YS, are you familiar with Peter Jackson, who made the Lord of the Rings movies?

YS Chi: Absolutely.

Vijay Raghavan: Right. He recently made a docuseries about the Beatles called Get Back, and he made it out of 50-year-old tapes. The audio was recorded in mono back in 1969, during the Let It Be album sessions, so the audio was garbled and the conversations were submerged; you couldn't really hear what the Beatles were saying. So Peter Jackson hired some machine learning experts to restore the audio, by training the software to recognise Paul McCartney's voice and John Lennon's voice using other available recordings, and surfacing the audio. I thought it was a fascinating use of machine learning, obviously not in the context of what we do, but a good example nonetheless. Moving on to deep learning: just as machine learning is a subset of AI as a whole, deep learning is a further subset of machine learning. It's an evolution of machine learning that tries to mimic the human brain using a concept called neural networks, similar to how the brain, if you will, has neural networks. What that really means is that a deep learning algorithm requires less training from an AI practitioner than a machine learning algorithm does. A good example, again going back to games, is the AlphaGo computer, which learned how to play the game Go. It's a great example of deep learning at work, because it improved by itself, learning from each game it played against increasingly sophisticated players, as opposed to humans training it after each game. It ultimately got better than the best human Go player.
And so to your point, YS, even within deep learning there are specific techniques. There are specific neural networks, like convolutional neural networks, or CNNs as they call them, which are good for processing images and visual pattern recognition. Then you have RNNs, or recurrent neural networks, which might be better suited for something like language translation on the fly, like Google Translate. Really, all these techniques have become more powerful and more sophisticated because computers have become more powerful and data has become more readily available, which has led to a symbiotic situation where AI techniques have proliferated as people take advantage of the hardware and the availability of data.
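To make the recommendation-engine idea concrete, here is a minimal sketch of user-based collaborative filtering, one common way such engines are built. It is illustrative only: the data is made up, and this is not how ScienceDirect's recommender actually works.

```python
# A minimal sketch of user-based collaborative filtering, the kind of
# logic behind "readers like you also read..." recommendations.
# Hypothetical data; not ScienceDirect's actual system.
import numpy as np

# Rows = subscribers, columns = articles; 1 = the subscriber read it.
interactions = np.array([
    [1, 1, 0, 0, 1],
    [1, 0, 1, 0, 1],
    [0, 1, 1, 1, 0],
    [1, 1, 0, 1, 1],
], dtype=float)

def cosine_sim(a, b):
    """Cosine similarity between two reading-history vectors."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return (a @ b) / denom if denom else 0.0

def recommend(user, k=2):
    """Score unread articles by how similar readers behaved."""
    sims = np.array([cosine_sim(interactions[user], other) if i != user else 0.0
                     for i, other in enumerate(interactions)])
    scores = sims @ interactions               # similarity-weighted article counts
    scores[interactions[user] > 0] = -np.inf   # skip articles already read
    return np.argsort(scores)[::-1][:k]

print(recommend(0))  # top article indices for subscriber 0
```

Training on more interaction data refines the similarity estimates, which is the "gets progressively better over time" behaviour Vijay describes.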
YS Chi: That makes a lot of sense. Now that we have a better idea of what all these terms mean and how they relate to one another, let's explore some of the big questions we're facing today, especially within our industry, and how our work at RELX is connected to this. In any conversation about AI, you cannot avoid the issue of bias in the data or the algorithm. So how big an issue is bias in data? Is removing it a realistic goal for technology companies like RELX, or are we approaching this challenge with the wrong mindset?
Vijay Raghavan: It's a great question. The term bias itself is interesting. It has a pejorative connotation, but it isn't necessarily always a bad thing. Obviously, if the source data is collected in a skewed fashion, or the algorithm itself is flawed, or the AI practitioner is biased, all of which is possible, all kinds of bad bias can creep in. But set those things aside for just a minute, and I'll come back to them. The point of an AI algorithm is to programmatically identify bias within the data, and in this case, when I say bias, I'm referring to the patterns within the data that need to be surfaced, like a clustering algorithm that separates the data into what it thinks of as logical clusters. If there is no bias in the data at all, the algorithm is not going to be able to create clusters or find patterns that allow you to make meaningful decisions. So the goal isn't to remove all bias. First of all, to your point, that is often impossible, but it's also not always desirable. The goal, of course, should be to prevent bad bias from creeping in. For example, relevant to our industry: if you're building a credit scoring model using machine learning, are we training that model with data that is broadly representative of the population we want to implement the model for? Or are we training it using just data affiliated with a few affluent zip codes, or just from some low-income zip codes, because those are the only places where we happened to be able to collect the data? That's the kind of thing that can introduce bad bias, and we do need people to recognise and prevent that kind of thing.
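As a rough illustration of "patterns in the data" being the useful kind of bias, here is a minimal clustering sketch on synthetic data, assuming scikit-learn is available. If the two groups below were not separated, KMeans would have nothing meaningful to find.

```python
# A minimal sketch of surfacing structure in data with a clustering
# algorithm, per Vijay's example. The data is synthetic and illustrative.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)

# Two synthetic populations with different feature distributions; the
# separation between them is the "good bias" (signal) to be surfaced.
group_a = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(100, 2))
group_b = rng.normal(loc=[3.0, 3.0], scale=0.5, size=(100, 2))
data = np.vstack([group_a, group_b])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data)
print(kmeans.cluster_centers_)  # roughly [0, 0] and [3, 3]
# With perfectly uniform (pattern-free) data, the centres would carry
# no meaning, and decisions based on them would be arbitrary.
```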
YS Chi: Right. So that's where human intervention is absolutely necessary?
Vijay Raghavan: Exactly right. When we talk about models, especially models that impact society, that impact consumers, their livelihoods, their wallets, we don't just, you know, use a deep learning model and cast it out into the wind and say, here's a model that's gone to production. There's always a human element in terms of understanding the attributes that go into the model, the data sets that go into the model. There is a review after the fact to make sure no bad bias has crept in. So there's a formal process to make sure that we're doing it the right way, the ethical way.
YS Chi: Right, so that's one type of bias, about the composition of the data itself. The other question is whether that composition, or the data, encroaches on data privacy and digital security. The COVID-19 pandemic has really shown us that we need rigorous data security and privacy regulations in place to protect ourselves, whether it's ecommerce fraud attacks or large-scale ransomware attacks on corporations. We've seen just how vulnerable information in datasets can be. What are your predictions for data-related regulations in the years to come?
Vijay Raghavan: Up until now, regulators have frankly not been AI-savvy enough to understand how to regulate AI. That's problematic, because it can lead to a patchwork of contradictory laws that vary from state to state. It reminds me of when Mark Zuckerberg went to the Senate a couple of years ago, and one of the senators asked him, "How do you make money if Facebook is free?", and he had to explain, "Senator, we sell ads". So there's a knowledge gap in the legislative branch when it comes to social media monetisation, let alone something like AI. All that said, I do expect regulations around AI to increase, and that's okay, as long as they increase in a manner that's consistent across jurisdictions. Take the California Consumer Privacy Act, the CCPA, which a lot of people have heard of. It's not actually AI regulation, it's more privacy and security regulation, but it is a good example of one state setting the tone around privacy and security regulations in the US, and of other states modelling their regulations on the CCPA as opposed to each of them rolling their own, because that would be completely chaotic. That's how I'd like to see regulations head. What I see from early previews is that lawmakers want providers of AI-based solutions to conform to certain tenets, like using complete and representative datasets to design AI models, or testing their AI algorithms for discriminatory outcomes. Regulators will want us to do what we say and say what we do, similar to the regulations that exist around unfair, deceptive and abusive practices. And they're going to want AI to be transparent, explainable and auditable, meaning available for independent review, all of which is fine. But the caveat is that the regulations will need to be context-dependent, by which I mean there should be more regulatory oversight around the explainability and transparency of AI as it relates to things like consumer credit models, because those directly impact people's wallets or lifestyle or livelihood, as opposed to the same AI algorithm being used to flag, say, information security threats, which is arguably a more benign scenario. For what it's worth, in the RELX AI survey that we did, YS, which you're familiar with, we found that only 62% of US business executives are confident that they can comply with AI regulations without significant additional investment. And this goes hand in hand with the waning desire on the part of US executives to see increased regulation around AI, for obvious reasons.
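As a sketch of what "testing AI algorithms for discriminatory outcomes" can look like in practice, here is one widely used check, the disparate impact ratio, computed on made-up decisions. The 0.8 threshold is the US "four-fifths" rule of thumb, a screening heuristic rather than a universal legal standard.

```python
# A sketch of one common discriminatory-outcome check: compare a model's
# approval rates across groups. Decisions and groups here are made up.
import numpy as np

approved = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])  # 1 = model approved
group = np.array(list("AAAAABBBBB"))                 # group membership

rate_a = approved[group == "A"].mean()
rate_b = approved[group == "B"].mean()
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)

print(f"approval rates: A={rate_a:.2f}, B={rate_b:.2f}, ratio={ratio:.2f}")
# Under the "four-fifths" rule of thumb, a ratio below 0.8 flags the
# model for closer review; it is evidence to investigate, not proof of bias.
```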
YS Chi: You know, I think that clearly shows the other side of the coin, which is that collecting data has been the engine of innovation for us for the last couple of decades at least, and ever more visibly, and it is somewhat debatable how much consumers actually trust it. They're, I guess, becoming more aware of this issue now. How do we establish trust with our customers and secure their privacy, but still deliver things that are actually beneficial to them?
Vijay Raghavan: Very good. A big part of this, in my opinion, YS, is messaging to consumers and to our customers and regulators about what it is that we do, and how we do it, in a very transparent way. As we've seen with some of the negative news around Facebook or Clearview AI, a lack of transparency about what you do and how you do it makes you very unpopular, and can be seen as untrustworthy. For a company like RELX, we would not be in business if our customers didn't trust us to do the right thing for them and for consumers. We use our data for good every day, whether it's helping lawyers win cases, doctors make the right diagnoses, or underbanked consumers get loans and mortgages, and so on. But even beyond that, any US consumer can come to our website at LexisNexis Risk Solutions and ask for a full file disclosure report that tells them exactly what data we have on them, and we give them that information for free. That builds trust. If they find something incorrect in the data we have on them, they can tell us, and we are then obliged to correct it after we verify it. That's an example of how you establish trust. Here's a different way of answering that question, YS. If you think back, let's say 100 or 120 years ago, during the early days of credit, the way a credit bureau might have established your creditworthiness was to send someone to your neighbourhood, or to mine, and ask my neighbours, what kind of a guy is Vijay? If my neighbours liked me and were not biased against me for some reason, they would probably say nice things about me, and I would be seen as a good credit risk. But if they thought I was in some way different from them, and were biased against me, I would be seen as a bad credit risk. So my point is, there was actually less transparency and more bias 100 or so years ago than there is now with our AI-based credit scoring models, and we need to be able to explain those kinds of things to consumers, to put things in the right context. When it comes to regulators, it obviously comes down to faithfully abiding by regulations and laws. There's an alphabet soup of regulations that a company like ours has to abide by. If you're affecting a consumer's livelihood, to my earlier point, we have to abide by the FCRA and the rules that come with it. If the scenario falls under law enforcement, that's more likely to be a non-FCRA situation, but we still have certain regulatory obligations around privacy and security. And because of all these things, our customers and suppliers and consumers trust us.
YS Chi: You know, listening to you, as much as technology has become complicated and whatnot, it does come down to very basic human nature, doesn't it: transparency, authenticity and sincerity in dealing with people. And whatever regulations we come up with, they need to go back to that concept of data for good.
Vijay Raghavan: You're absolutely right. YS, have you seen an old movie called Judgment at Nuremberg?
YS Chi: No, I have not.
Vijay Raghavan: I would recommend it. It's an old movie about the Nuremberg trials of the Nazis after World War Two. There's a pivotal scene in which the judge, played by Spencer Tracy, tells the attorney defending the Nazis on trial, "Counsellor, you are a very logical man, but to be logical is not to be right". That sentence has stayed with me. So watch the movie, because the point I'm making is that we were already abiding by certain principles around privacy and security and transparency even before some of these regulations were implemented, because we knew they were right, and not just logical. Long before we created models using machine learning or deep learning, we already had best practices around how to collect data, how to link data accurately, and how to create attributes and models in ways that minimise bad bias. We aren't waiting for regulations to be passed to keep us honest and ethical.
YS Chi: Yeah, and a lot of the negative feedback some companies receive is because they have not been forthright about what exactly it is they're doing, for fear that they were somehow giving away their secret sauce, when in fact they should have been upfront about it. So I'm glad that RELX is doing a good job on that front. And as always, Vijay, in 15 minutes I can learn more from you than in 15 weeks of class. I can't thank you enough for joining us today and for sharing your insights and wisdom on this very contemporary topic. Thank you so much.
Vijay Raghavan: It was my pleasure. Thank you for having me.
YS Chi: Like Vijay, my next guest is definitely an AI expert. Kirk Borne is the chief science officer at DataPrime, a B2B provider of data and AI services, where he is responsible for developing data teams at client companies. Kirk is also an internationally renowned influencer and thought leader. We are very honoured to speak with him today on our podcast as our very first non-RELX guest. So before we delve into AI ethics and responsibility, welcome, Kirk.
Kirk Borne: Thank you. It's great to be here today, and I look forward to this conversation.
YS Chi: Well, I have to ask: you originally trained as an astrophysicist, and here you are now in the world of AI. What's the overlap between these two? Are there any elements of your astrophysics training that are relevant to what you do today?
Kirk Borne: Well, for me, it's a completely continuous transition. It may seem odd to people to go from one of those to the other, but they're both very similar, in that they're both very focused on solving problems, discovering insights from data, and building models of complex things informed by data. It's computational, it's data-intensive, and it's a scientific process that relies on creativity and curiosity. So all the things that inspired me to become an astrophysicist, to become a scientist, none of that has changed; I'm still a scientist. In fact, I've had a variety of different jobs in my career: jobs with the Hubble Space Telescope, 20 years working at NASA, 12 years as a professor of astrophysics at a university, six years at a major international consulting firm, and now I'm working at a startup part time, and I actually started my own little business recently. And through all of these different jobs, I have only had one career, and that's being a scientist.
YS Chi: It is true that you have a fundamental foundation as a scientist. But you've also been applying that scientific intuition, talent and interest in so many different directions, as you said: as a professor, as a businessman, as an entrepreneur, as a researcher, and so on and so forth. What was it like to keep finding new applications?
Kirk Borne: Well, I think the very first indication that what I was doing had application beyond the sciences came about 20 years ago, after the 9/11 terrorist attacks in the United States, when I was contacted by the White House to brief the President on data mining techniques: basically, how do you discover patterns in data to build predictive models of things? I was doing this stuff in astronomy, and I hadn't realised that the things I was doing had this international importance. So I started looking around, and I discovered, well, businesses are doing this, medicine and healthcare are doing this, even sports; there's a whole movie about baseball and how people use statistics in baseball. So two things struck me. First, I was surprised that, at that point, I was just beginning to learn about machine learning and data science, yet the very little bit that I knew was considered expert level, because so few people were doing it. And second, what I was doing in astronomy had this massive application and benefit far beyond the sciences. I saw it in business and sports and entertainment and medicine, logistics, retail. Even the recommender engines on ecommerce stores were already using machine learning to recommend products to people. And all of that just fascinated me. As a scientist who loves working with data to make discoveries, it was like being a kid in a candy store. I just couldn't get enough of all the fun things I saw people doing, and I wanted to do more and more of it.
YS Chi: Yeah, I bet you were just so excited. And it's still going; there are so many problems we need to help solve with data.
Kirk Borne: That's true. One of the reasons I left NASA was that I realised we were going to need to train the next generation to do this. That was 18 years ago, when I made the decision to leave my lovely, wonderful work at the space agency and go to a university. It was always my dream to be a professor at a university, and I became a professor of astrophysics 18 years ago, but I never actually taught astrophysics; we laid out and started the world's first undergraduate data science degree programme. That was really my goal: to bring data science to the masses. It's not just for the sciences, it's for everyone.
YS Chi: Well, I'm so glad you did, because we still need millions more people in our world to solve these big, big problems. So why don't we do a little bit of training here. Vijay explained the differences between AI, machine learning and deep learning, concepts that can often become quite confusing. For the benefit of people who do not have a technical background in either data science or mathematics, can you please describe some concrete applications of AI, and the business sectors that AI is impacting the most already?
Kirk Borne: Well, the impacts are everywhere, of course, but the really big use cases we see are in finance, healthcare, insurance, government, logistics, manufacturing. Oh wait, I'm practically naming every industry there is. But let me start with a definition of terms; I guess that's the professor inside of me. I tell people that data science is a scientific process: it's the application of the scientific method to discovery from data. What we're doing is data science because we're doing discovery from data. We test a hypothesis, we observe something, we infer how it works, and that's called building a model. You try to build that model, and you tweak it, you change the parameters, you change the form of the model, to see how you can best improve it. That's a scientific process. The way we do that is with a set of mathematical algorithms called machine learning. Machine learning is simply mathematics, pattern discovery mathematics: finding trends, correlations, clusters, outliers, anomalies, associations. All of those are just mathematical techniques. And when we learn what the most meaningful patterns are, through the data science method and machine learning algorithms, we deploy those things, and the actionable thing we deploy is the artificial intelligence, the actual thing that does the work for us. It could be a recommender engine, or it could be a cancer diagnosis. For example, image understanding is one of the categories of AI: looking at images and understanding what's in them. Self-driving cars, autonomous vehicles, need to understand what's in front of the car and what's near the car, so image understanding is one of the big applications of AI. Another is language understanding. For example, you can talk to a chatbot, and I do voice search on my phone: when I want to search for something on a search engine, I just say the words, I don't type them. And it goes both ways; you can have a dialogue, and that's called a chatbot, or a conversational AI. We use these all the time without even realising it. But the most important one for me is not just language understanding or image understanding, it's context understanding: the other data that tells you what's going on in that environment. For example, during the COVID period, there was a tremendous change in the kinds of things people purchased. All the predictive models of what kinds of products people would buy at different times of the year, or even different days of the week or hours of the day, completely changed when everyone was working from home and we had this traumatic thing called the pandemic. So those models were all wrong. The models had to understand that there was a context; it wasn't just time of day. There was a context in which all these things were affecting the model.
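One simple, hedged illustration of catching the kind of context shift Kirk describes: compare a feature's distribution before and after a suspected regime change with a two-sample Kolmogorov-Smirnov test. The data below is synthetic, and real drift monitoring is considerably more involved than this.

```python
# A sketch of detecting "context shift" (data drift): compare a
# feature's distribution before and after an event such as the
# pandemic. Synthetic data; real drift monitoring is richer than this.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Hypothetical daily purchase volumes before and after the shift.
before = rng.normal(loc=100.0, scale=10.0, size=500)
after = rng.normal(loc=60.0, scale=25.0, size=500)

stat, p_value = ks_2samp(before, after)
if p_value < 0.01:
    print(f"drift detected (KS statistic {stat:.2f}): retrain or re-contextualise the model")
```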
YS Chi: That's right. And in fact, this is one of the things many people fear: that this tsunami of talent we need to develop and train is all technical, all mathematical. Whereas you are now talking about a contextual skill set, which can also be framed as domain expertise, isn't it?
Kirk Borne: Well, I'm so glad you said that, because that's one of my big passions, going all the way back to that story about leaving the space agency to train the next generation. I wasn't talking about training the next generation of astrophysicists, or the next generation of mathematicians; it's about recognising that every single person in the world needs to understand how their data are being used and how data can generate value. I taught very introductory courses as well as the advanced courses, and in an introductory course I always brought out my smartphone on the first day of class and said, you all have one of these, right? And they'd say, "Yeah", and I'd say, "Well, you know you're generating tonnes of data: what you're looking at, what you're searching for, what videos you're watching, what things you're reading. All of that is generating data for businesses, and they're making money. You're generating data; don't you want to be part of that revolution and have value in your own life, and not just create value for some other business?"
YS Chi: Yeah, I think this issue of everyone participating also requires some rule-setting, so that we use these new skills and capabilities with responsibility, right? So how do we ensure that AI does not produce unintended consequences, particularly around biases?
Kirk Borne: There are two ways to deal with this. One is just to remember that you always need to have the human in the loop. That is, you need to have someone with some domain expertise and some human compassion, or empathy if you want to call it that, who looks at this application of AI, looks at the algorithm, and sees whether it's equitable, whether it's just, whether it's doing the right kinds of things. But there's also a mathematical way of approaching this problem. I really like something I heard one company talk about at a conference. They were a financial services company that basically made loan decisions for individual people, and one of the things they did with their algorithm is what they called reverse engineering. When they built the credit scoring model, they removed all the factors that they should not be using in the model, for example gender, or maybe ethnicity, and other factors like that which we shouldn't be using to make decisions. So they removed those when they built the model to predict what the credit score, or credit risk, might be for a particular individual. Then, after they built the model, they reversed it. They said, okay, given that we say do or do not give a loan to this person or this set of persons, that credit risk is either high or low for this particular group, let's reverse engineer and see whether we can infer the gender or ethnicity, or whatever those intentionally removed factors were, without even knowing them, using just the output from the model: reverse engineer and see whether we can work back to inputs we should not be using. If they could infer some of those factors that should not be part of the decision making, they would realise that some kind of bias had leaked into their algorithm, and then they could address it. So they're taking a very mathematical, technical approach, which is a really good way to look at this, because we want some objectivity, not just subjectivity, in how we handle it. Because after all, that's what bias is, right? It's putting too much subjectivity into our models.
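Here is a minimal sketch of that reverse-engineering audit, assuming scikit-learn and entirely synthetic data: train a scoring model without the protected attribute, then see whether an "adversary" model can recover that attribute from the scores alone. The company Kirk mentions is unnamed, so this is only a plausible reconstruction of the idea.

```python
# A sketch of the "reverse engineering" bias audit Kirk describes:
# 1) train a credit model WITHOUT a protected attribute;
# 2) try to infer that attribute from the model's outputs.
# A recoverable attribute means a proxy has leaked in. Synthetic data only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 2000

protected = rng.integers(0, 2, size=n)          # group label, never a model input
income = rng.normal(50, 10, n) + 8 * protected  # permitted feature, correlated with group
debt = rng.normal(20, 5, n)
default = (debt - 0.3 * income + rng.normal(0, 5, n) > 5).astype(int)

# Step 1: credit model trained only on permitted features.
X = np.column_stack([income, debt])
credit_model = LogisticRegression().fit(X, default)
scores = credit_model.predict_proba(X)[:, 1].reshape(-1, 1)

# Step 2: adversary tries to predict the protected attribute from scores.
adversary = LogisticRegression().fit(scores, protected)
auc = roc_auc_score(protected, adversary.predict_proba(scores)[:, 1])
print(f"AUC for recovering the protected attribute: {auc:.2f}")
# An AUC well above 0.5 means the scores encode a proxy for the
# attribute (here via income), so the bias needs to be addressed.
```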
YS Chi: Right, knowingly or unknowingly. Precisely. So beyond implicit bias, it's become clear that data and AI can do some real harm when put in the wrong hands, and the words "responsible AI" are now being tossed around very often. It's a big discussion for governments, academics, business people and ethicists. Do you have any suggestions for a responsible AI framework, something broad enough to span all industries that make use of AI?
Kirk Borne: Actually, I have an idea about that. This could be a little detour, but I'll come right back to your question. Back when I was at the university, we did some educational research, basically into what kinds of things we could use to teach students data science better, and I had to sign a form and fill out applications for what's called informed consent in human subject research. This was very novel to me. It was not surprising at all to the education researchers, nor should it be surprising to medical researchers, because their work involves human beings, but all of my research in my career had dealt with distant stars and galaxies. I did not need the consent of those stars or those galaxies to do research on them. If we're doing research on people, though, we need this informed consent, and there are principles of human subject research: do no harm, informed consent, shared benefits, shared risk. And I realised AI is like that, because the implementation of AI across the world, across all of our industries, is really a grand experiment on humanity. We're actually doing a grand experiment, because it affects human beings. So we need to take these principles of human subject research into account: first, do no harm; informed consent, so that people get to have a choice; and then this concept of shared benefit and shared risk. That's quite an interesting one, because there will be risk, but there will also be benefits. We can't expect to do things that have zero risk. The point is that whatever the risks and the benefits, they should be shared equitably across all populations and users, not benefiting one population over another or harming one population over another.
YS Chi: So, following your insights there, is it possible that we allow some room so that those who experiment and find some damage are not unreasonably punished, and there are rules for correcting the action? Or do we need to set the rules so stringently upfront that they can actually inhibit people from experimenting, or worse yet, push them to bypass the rules and do it clandestinely?
Kirk Borne: This is an enormous challenge. I don't think there's a single podcast, or a hundred podcasts, that can resolve this question, but all the things you're saying are absolutely serious and true. I think one of the ways we can deal with this is to realise that even in medical clinical trials, where they're testing, for example, drugs and treatments, there are sometimes serious consequences. If you've ever heard advertising for drugs, they always list all the possible side effects. How did they learn about those side effects? They learned because, during the clinical trials, there were people who actually suffered them. So again, if we focus on this as an experiment, then if you're going to involve people in the experiment, they have to be informed about what's going on; they have to be able to say yes or no, I do or do not want to participate in this, even though there might be risk. Now, of course, as you said, there can be clandestine things, where people do these AI implementations without that oversight, without that informed consent, and unfortunately you need regulation to help with that. But again, people can sidestep regulation, and I'm not going to try to get into all the issues around enforcement. What we need, again, is balance. In some quarters of the world we've seen responsible AI documents that are hundreds of pages long, and that almost makes it impossible to do anything, so we need more balance, not to just stop things in their tracks. And I'll go all the way back to what I said at the beginning: as humans, we've always used algorithms; most of them were in our heads. It's not as if this is the first time we've ever used algorithms, but the scale is now so large that we need these regulations. We just don't need to make them so heavy that we stop humanity in its tracks.
YS Chi: You know, of the people who regulate, some are experts, but most are not, right? Particularly those who are elected officials handling myriad different topics. How do we ensure that those regulators or legislators are trained to properly understand this, rather than creating an overly protective process?
Kirk Borne: No, seriously, I think every person needs some kind of training on this. Not in-depth mathematical training necessarily, but certainly awareness training. Literacy, maybe that's a better word: there needs to be a sort of AI literacy. People don't necessarily need to be trained in programming, mathematics or model building, but they need to understand the terminology, the implications, and the implementations that are beneficial: seeing both the risk and the benefit, seeing the applications that have worked and the ones that haven't. That's part of building the literacy, being able to use the words in a sentence correctly. As I was saying earlier, the difference between machine learning, data science and AI is sometimes very blurred for people, so I like to make sure people understand the words we're using before we start describing more serious things like regulations and biases.
YS Chi: You know, when I try to explain a new concept to people who are curious, I tend to use examples, a real case. Give us one example you'd be delighted to share if you caught a congressman or senator in an elevator: an example of how AI was used responsibly, and how good an impact it has had for those concerned.
Kirk Borne: Well, if I had to do a one-minute elevator speech, I would pull out my smartphone, turn it on and stare at it, and it would automatically log me in using my face. This happens countless times a day for me and everyone else: it uses facial recognition to unlock my phone; my face is my password. Facial recognition is a big hot topic for regulators and for a lot of ethicists, and I understand that, but at the same time, facial recognition helps me every day by sparing me from typing in my passcode every time I want to use my phone. So I'd use that example right there: I use my face, and facial recognition AI software, to do something very efficient for me, which is to unlock my phone and get into my apps to do the things I need to do.
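For the technically curious, here is a toy sketch of the verification step behind face unlock: embed the live camera frame and compare it to an enrolled template. The embed() function below is a hypothetical stand-in for a real face-embedding model, and the threshold is arbitrary; production systems are far more sophisticated.

```python
# A toy sketch of face-unlock verification: embed the camera frame and
# compare it to the enrolled template by cosine similarity.
# embed() is a hypothetical stand-in for a real face-embedding network.
import numpy as np

def embed(image: np.ndarray) -> np.ndarray:
    """Hypothetical embedding: image -> unit vector (placeholder maths)."""
    v = image.flatten()[:128].astype(float)
    v -= v.mean()                      # not a real model, just a stand-in
    return v / np.linalg.norm(v)

def unlock(frame: np.ndarray, template: np.ndarray, threshold: float = 0.8) -> bool:
    """Unlock when the live embedding is close enough to the enrolled one."""
    return float(embed(frame) @ template) >= threshold  # cosine similarity

# Enrolment: the owner's embedding is computed once and stored securely.
owner_photo = np.random.default_rng(7).random((64, 64))
template = embed(owner_photo)

print(unlock(owner_photo, template))                                 # True: same face
print(unlock(np.random.default_rng(8).random((64, 64)), template))   # False: different face
```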
YS Chi: A very live example. In fact, there is a saying I learned: for every discovery of a scientific nature, there is the light, and then there is the shadow. The question is, how do we balance them so that the shadow does not overtake the light that comes from things like facial recognition? Kirk, I am so glad we were able to have this conversation. Thank you so much for spending the time with us today and giving us such a simple but insightful and direct view of AI and its future. Thank you so much.
Kirk Borne: You're welcome. It's my pleasure.
YS Chi: Thank you to our listeners for tuning in. Don't forget to hit subscribe on your podcast app to get new episodes as soon as they're released. And thank you for listening. Please stay well.