Unique Contributions

From extractive to generative and agentic AI: How RELX chief technology officers are applying AI to solve real-world customer challenges

In this final episode of season four, YS Chi sits down with the chief technology officers at LexisNexis Risk Solutions, Elsevier and LexisNexis Legal & Professional to explore the topic of technology at RELX.

RELX is not a tech company, but a big user of technology. Around 12,000 technologists, over half of whom are software engineers, work at RELX. Annually, the company spends $1.9bn on technology.

Vijay Raghavan, Jill Luber and Jeff Reihl share insights on how they are driving AI innovation, from extractive to generative and agentic technologies, to solve real-world challenges in law, healthcare and research.  They also share candid advice on staying adaptive, experimenting boldly and preparing for the future of work.

For insights on technology at RX, listen to Brian Brittain, chief technology officer at RX, in episode three.  

Watch the video version at https://youtu.be/IlfsybgDQlI

YS Chi:

Hello and welcome to the episode on technology at RELX. Around 12,000 technologists, over half of whom are software engineers, work at RELX. Annually the company spends $1.9 billion on technology. It is the combination of our rich data sets, technology infrastructure and knowledge of how to use next-generation innovation that allows us to create effective solutions for customers. Today, I'm speaking with the Chief Technology Officers of three of our businesses to understand how this all works. My guests are Vijay Raghavan, CTO of LexisNexis Risk Solutions, Jill Luber, CTO at Elsevier, and Jeff Reihl, CTO at LexisNexis Legal & Professional. Combined, they have over 60 years at RELX, and I am barely above that average. So as we start, as always, I'd like to ask you to each provide a very brief personal and professional background for the audience, please. So why don't we start with Vijay?

Vijay Raghavan:

Sure. Thank you, YS. Thank you for having me on. I've been with the company for 23 years now; I joined as part of the ChoicePoint organisation, which was acquired by RELX and LexisNexis back in 2008. I became the CTO back in 2012, so I've been in the role for 13 years now. Probably some kind of a law against that in some parts of the...

YS Chi:

None whatsoever. We can add another zero to it Vijay.

Jill Luber:

Yeah, hi. Thanks for inviting me. Jill Luber, I've been with RELX for 22 years. I started with Risk Solutions, and I was part of an acquisition as well. I was part of Seisint, which was a company that we acquired in 2003 out of South Florida. It's the company that was responsible for creating the HPCC Big Data platform. As part of that acquisition, I started as a data engineer back then, and was part of Risk for 18 years. I actually worked under Vijay for about 10 of those as his direct report, and then had the opportunity to join Elsevier as their CTO in 2022. So, a little more than three and a half years I've been in the role here.

YS Chi:

Great and Jeff, way out on the west.

Jeff Reihl:

Sure, I'm Jeff Reihl. I'm the CTO for Lexis, and I've been with the company for 18 years. I was actually hired by Mike Walsh to run US editorial. Even though I have a technical background, I was running the editorial organisation, but I quickly moved into the technology organisation, as we knew we needed to invest a lot in that area. I've been in this role for over a dozen years as the global CTO.

YS Chi:

All three of you have really interesting backgrounds, but unfortunately for this session we can't delve into them, because we have so much to cover on this ginormous topic called AI. It has become the focal point of every conversation in recent years, no matter where you are. Yet it's something that RELX has been focused on for well over a decade. We've been using extractive AI, more recently embraced generative AI and agentic AI, and are going on to other things which I can't all name, but which you will later on. I believe we now have over 13 commercially available generative AI products across the group. I'd like to start by talking about extractive AI. Vijay, can you explain how that works and why it matters to everything that we've been doing since, and the things that are ahead of us?

Vijay Raghavan:

Yeah, that's great context setting, YS. We have a long history, as you know, at RELX with big data and AI and machine learning. One of the things we like to say is that we were big data before big data was cool, and we say that without a trace of arrogance. But at the same time, we recognise that big data is not an end in and of itself. The point is to get to the small, actionable data, and that's central to how we leverage technology at RELX, especially in the context of extractive AI, which is what we started off with. The point was to start with large, large volumes of data, very diverse data sets, very different formats, from thousands of data sources, acquired at very high speeds and different kinds of cadences. We take this data, which is unstructured, structured, all over the place, and we extract the data points from the content, and we enrich the data to make it more analysable. And as a function of doing that, we employ machine learning and natural language processing and AI techniques to provide our customers the insights they need to do their jobs. Take just the process of linking, for example, linking all the data together. To give an example, we have 7.2 billion SSNs. We don't have that many people in the United States. That's because we have a duplication of records across multiple sources. So the point is to say these 387 records belong to Jill Luber, or these 512 records belong to YS, and we use machine learning even for that. Only if you get that right are you in a position to create models and scoring models and attributes that are accurate, and that are reflective of the kinds of insights you want to give our customers. So in the end, to answer your question about extractive AI: in all the different facets of solutions we offer customers, whether it's a university benchmarking its performance, or a doctor trying to decide the best way to treat a patient, or in Jeff's case, a litigator trying to determine whether to take a case to court, or in my business, an insurance underwriter trying to assess the likelihood of a claim or determine the insurance premium, all of that involves machine learning or some form of AI for us to bring the right kinds of solutions to our customers, to help them. In my case, to help our customers reduce the risk of doing business with their customers.
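
To make the record-linkage idea Vijay describes concrete, here is a minimal, purely illustrative sketch: a toy similarity rule joins raw records into identity clusters using union-find. The names, dates, threshold and decision rule are hypothetical; production systems of this kind use trained matching models over many more features.

```python
# Toy record linkage: decide which raw records refer to the same person.
# Illustrative only; real pipelines use learned match scores, not this rule.
from difflib import SequenceMatcher

def similar(a: str, b: str, threshold: float = 0.85) -> bool:
    """Crude string similarity stand-in for a trained match model."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

def link_records(records: list) -> dict:
    """Group records into identities; returns record index -> cluster root."""
    parent = list(range(len(records)))

    def find(i: int) -> int:
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i

    def union(i: int, j: int) -> None:
        parent[find(i)] = find(j)

    for i in range(len(records)):
        for j in range(i + 1, len(records)):
            if similar(records[i]["name"], records[j]["name"]) and \
               records[i]["dob"] == records[j]["dob"]:
                union(i, j)                  # toy decision rule: name + DOB

    return {i: find(i) for i in range(len(records))}

records = [                                  # hypothetical records
    {"name": "Jane Smith",    "dob": "1980-02-11"},
    {"name": "Jane A. Smith", "dob": "1980-02-11"},
    {"name": "John Doe",      "dob": "1975-07-30"},
]
print(link_records(records))                 # records 0 and 1 share a cluster
```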

YS Chi:

So you talk about how there are all these data sets all over the world, and you are finding relationships between them. Is that a simple way of describing what we have spent the past dozen or more years trying to perfect?

Vijay Raghavan:

Yeah. Later on I'm sure we'll be talking about responsible AI, right? But the very first step, even before we get to that point, is to go very broad and deep in collecting accurate datasets. We're very particular about the provenance of the data that we get. We're very particular about making sure we collect the right kinds of data. For example, just because the data is convenient to get in three counties in a state but not in the others, well, that's not good enough. You've got to get it from every single county to make sure the data is representative. Once we get the data, then, of course, what you say is absolutely true: we've got to make sure that the linkages between the data are accurate. We use these terms called precision and recall, which Jill is very familiar with, because she lived and breathed that world for the better part of 15 years, probably. But getting the data to be as complete as possible, linking it very completely and linking it very accurately, is our bread and butter.
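
For readers unfamiliar with the two terms, here is a minimal sketch of how precision and recall might be computed when evaluating linked record pairs against a reviewed truth set; the pair labels are hypothetical.

```python
# Precision and recall for evaluating record linkage, as a toy example.
# Precision: of the pairs the system linked, how many are truly the same person.
# Recall: of the truly matching pairs, how many the system found.
def precision_recall(predicted_pairs: set, true_pairs: set) -> tuple:
    true_positives = len(predicted_pairs & true_pairs)
    precision = true_positives / len(predicted_pairs) if predicted_pairs else 0.0
    recall = true_positives / len(true_pairs) if true_pairs else 0.0
    return precision, recall

predicted = {("rec-1", "rec-2"), ("rec-3", "rec-4")}   # pairs the linker joined
truth     = {("rec-1", "rec-2"), ("rec-5", "rec-6")}   # pairs a reviewer confirmed
print(precision_recall(predicted, truth))               # -> (0.5, 0.5)
```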

YS Chi:

I see. We're going to get back to that, because there's a concept later on that I want to talk about, called trust. Let me jump right now though to Jeff and ask: how does all this work in your domain? And Jill, after that, how does it work in your new domain? I ask this because all of this is useless unless there is an application. So please, Jeff.

Jeff Reihl:

Sure. I'll jump in. As Vijay mentioned, we've been working with AI technologies for many, many years, and a lot of that started out using simple NLP, natural language processing, and then more detailed machine learning and deep learning algorithms. As you mentioned, YS, we started out with extractive AI, which in our world is taking a case and identifying the attorneys, the law firms, the judge, the legal topics, and doing that in an automated way using these AI tools. As we went further along, the deep learning algorithms got even more sophisticated, and one thing a lot of people may not know is that we were actually using GPT-3 before ChatGPT came out. So, we had GPT-3 integrated into our product before the whole ChatGPT application came out, but when that came out, it was a game changer. We had to step back, because all of a sudden the generative capabilities, the ability to create a document or to summarise a document for the legal industry, were a game changer. What we discovered was that, prior to ChatGPT coming out, a lot of legal professionals were not what you would consider early adopters of technology. In fact, they were oftentimes laggards, but in this case, when they saw the capabilities of ChatGPT, all of that changed. We did a very quick pivot as an organisation to really shift our resources towards this opportunity, because it was a big one, and we quickly identified four use cases that made sense for our customers. We worked extremely closely with customers to get feedback on that, we reassigned our best data scientists from other teams to focus on this opportunity, and by October of 2023 we were ready to launch our first AI product, which was very, very well received. We made an early decision during that process to support multiple large language models. Early on we had relationships with Anthropic, with AWS Bedrock, and with Microsoft to host the OpenAI models. Since then, we've expanded to other models, including working with OpenAI directly and with the Mistral models, and we're also hosting directly through Anthropic. So we've been very adaptive: use the best technology, the best large language model to support the use case, at the best performance and the best price. We've had to set up an infrastructure that allows us to quickly adopt new technologies, because new models are coming out all the time, and to be able to test those in an automated way. But in the end, we have legal professionals, humans in the loop, to make sure that the quality is appropriate for our customers before we release any product. It's having a significant impact in the legal market because of the huge opportunity it presents.
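
As a rough illustration of the model-agnostic setup Jeff describes, here is a minimal sketch of a routing layer that picks a validated model per use case on cost. The provider names, prices, selection policy and stubbed calls are assumptions made for illustration, not LexisNexis's actual architecture.

```python
# A sketch of a model-agnostic routing layer: product code asks for a use case,
# and the router picks a validated model on cost. Everything here is made up.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ModelConfig:
    name: str
    provider: str              # e.g. "bedrock", "azure-openai", "anthropic"
    cost_per_1k_tokens: float  # hypothetical unit price
    supports: set              # use cases this model has been validated for
    call: Callable             # provider-specific invocation, injected here

class ModelRouter:
    def __init__(self, models):
        self.models = models

    def pick(self, use_case: str) -> ModelConfig:
        candidates = [m for m in self.models if use_case in m.supports]
        if not candidates:
            raise ValueError(f"no model validated for {use_case!r}")
        # simplest possible policy: cheapest validated model wins
        return min(candidates, key=lambda m: m.cost_per_1k_tokens)

    def complete(self, use_case: str, prompt: str) -> str:
        return self.pick(use_case).call(prompt)

# Stubbed providers so the sketch is self-contained; real code wraps vendor SDKs.
router = ModelRouter([
    ModelConfig("model-a", "bedrock", 0.8, {"summarise"},
                lambda p: f"[model-a] {p[:40]}..."),
    ModelConfig("model-b", "azure-openai", 0.5, {"summarise", "draft"},
                lambda p: f"[model-b] {p[:40]}..."),
])
print(router.complete("summarise", "Summarise this deposition transcript: ..."))
```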

YS Chi:

That speed element you were talking about is not something that LexisNexis Legal was reputed for in the past, so it's a nice thing to hear about. Let me jump to Jill, and then I'm going to come back to two myths that I have been carrying, and that I've been torturing Vijay with from time to time to challenge him. But Jill, go ahead and tell us about the Elsevier case.

Jill Luber:

Sure. Yep. Elsevier is a bit different from Risk and the LexisNexis Legal & Professional organisations, inasmuch as our content is not readily available. It's not public information. It is research that has been peer reviewed, and a lot of our research is only available through Elsevier's products. But we have found, especially with generative AI, that that's not enough. It's not enough to just have the content, this very unique, curated set of content. We need to do more for our users. We need to do more for our researchers. We need to do more for our clinicians. Coming right on the heels of LexisNexis, and in fact using some of the lessons they had already learned and some of the trails they were so, so gracious to blaze for us, we could take their learnings and create new offerings for our customers as well. The first two off the bat were ClinicalKey AI, which is in our healthcare division for clinicians. It has two purposes, one being that you can look into some of the literature and the research in the medical world and make more sense of it faster. But there's also point of care, and that point of care solution is one that is really revolutionary, because you can imagine the amount of literature that exists in the medical world, new methods, new medicines, new ways to diagnose, and for a clinician to really dive into all of that, they would have to do it offline. They'd have to do it after hours, after they've finished their patient load, and then go back to the patient with more information. This allows them to really dive into that at the point of care, so they can have a more interactive conversation with their patient, using the literature as the background. In that solution, accuracy is so important. Vijay talked about precision and recall, and we've done a lot of work to make sure our responses measure up. We talked a little bit about responsibility already, but there are a lot of different dimensions of responsible AI, and one that's super important in this space is to do no harm. We have a way to measure our answers, and we use doctors, we actually use clinicians, to help us judge whether or not our answers are concise, whether they are relevant, and whether they do harm. It's something that we stay on top of and are constantly monitoring in the ClinicalKey space. We also released, in our academic research space, ScienceDirect AI. ScienceDirect is a product we have that allows researchers to dive into research. And again, I talk about that content that we have that is peer reviewed. But oftentimes these research articles are very, very long. There's a lot of information in them, and to really get to what you're looking for can be cumbersome, so adding an AI layer on top of this content really helps researchers get to the point. If they really just want to know what kind of testing method you used, there's a way that the AI can quickly help them find that information. We use RAG, retrieval-augmented generation, which is the method by which we don't just ask generative AI questions and let it answer the questions for us. We use the generative AI, first, to search our content. And we talked a little bit about linking at the beginning; that still applies in generative AI. Think about taxonomies and some of the curation of content to say that, in a specific domain, these two words have a connection, they have a meaning. We use that kind of linking in the generative AI search.
So, if someone's searching for a topic like secondhand smoke, we've also connected it to the medical term for secondhand smoke. We make sure we're pulling back all the relevant articles. And so there's a linking bit there in that generative AI. We take the retrieved information and then we ask generative AI to answer the question in a natural language context, and that really reduces the hallucinations as well, and keeps that accuracy high. We always want to have a reference back to the source documentation. So anytime we answer a question, the user can dive deeper into the source of the literature, to continue to dive on their own and read the source. It's an important part of the solutions that we provide.
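
As a rough sketch of the retrieval-augmented generation flow Jill outlines, the fragment below expands a query with taxonomy synonyms, retrieves the best-matching passages, and builds a prompt that asks the model to answer only from those passages and cite them. The corpus, synonym table and stubbed generate() call are hypothetical placeholders, not Elsevier's implementation.

```python
# A minimal RAG sketch: expand the query with taxonomy synonyms, retrieve the
# best-matching passages, then prompt the model to answer only from those
# passages and cite them. Corpus, synonyms and generate() are stand-ins.
SYNONYMS = {"secondhand smoke": ["passive smoking", "environmental tobacco smoke"]}

CORPUS = {
    "doc-101": "Passive smoking is associated with increased respiratory risk ...",
    "doc-102": "A randomised trial of a new statin showed ...",
}

def expand(query: str) -> list:
    return [query] + SYNONYMS.get(query.lower(), [])

def retrieve(query: str, k: int = 3) -> list:
    """Toy retrieval: rank documents by overlap with any expanded query term."""
    terms = {w for q in expand(query) for w in q.lower().split()}
    scored = [(sum(w in text.lower() for w in terms), doc_id, text)
              for doc_id, text in CORPUS.items()]
    return [(doc_id, text)
            for score, doc_id, text in sorted(scored, reverse=True)[:k]
            if score > 0]

def build_prompt(query: str, passages: list) -> str:
    sources = "\n".join(f"[{i + 1}] ({doc_id}) {text}"
                        for i, (doc_id, text) in enumerate(passages))
    return (f"Answer the question using ONLY the sources below, citing them as [n].\n"
            f"Sources:\n{sources}\n\nQuestion: {query}\nAnswer:")

def generate(prompt: str) -> str:
    """Placeholder for the LLM call, kept as a stub so the sketch is self-contained."""
    return "(model answer grounded in the cited sources would appear here)"

print(generate(build_prompt("secondhand smoke", retrieve("secondhand smoke"))))
```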

YS Chi:

This is all an enormous amount of handling of datasets, right? And it is all taking an enormous amount of hardware power. The two myths that I've been challenging about GenAI since it came out are: one, I don't think it actually generates anything. It just regenerates things that are in the data sets, whether that's from the linking, or from the precision, or from the general recall. The second thing that I challenge is: can this be done without the hardware improvement that we have seen? I'd like to ask Vijay, what is the optimism you have about how much more handling of these data sets, quickly and cheaply, can occur because of the hardware side?

Vijay Raghavan:

I think there's a ton of room to grow. In fact, YS, I shared a slide a few weeks ago which showed the pace, the rate at which GPUs, the capability and the power of GPUs, have been escalating over the past 10 years in comparison to CPUs. And there's plenty of headroom just there. I can foresee GPUs not reaching the ceiling for, I don't know, 10 or 15 years, and then after that there's quantum computing on the horizon, which is going to further bring about a sea change in terms of hardware capabilities. I don't expect that to stop any time... I don't know, in my lifetime, I guess: the scale at which hardware can influence the speed at which we can process these data sets. And, again, going back to the first part of your question: yes, it's true, these things regenerate stuff in terms of a probabilistic kind of forecast of what the next word is, or what the next likely scenario is, or whatever it might be. But there's this debate that's been festering in academic circles right now in terms of, can it actually discover new things? Even the term 'discover' is fraught with interpretation. But it can certainly yield insights as the capability of the hardware increases. It can certainly yield insights that are probably much, much harder for humans to achieve.

YS Chi:

Yeah, so we can be very optimistic that we can get more and more sophisticated, more and more complex about the things that we're going to throw at the machine to compute for us. Is that okay? All right, so let me, at this point, jump to what is further ahead of this, not more of the same, but beyond GenAI: agentic AI. What do you expect? Any one of you can just give me some insights, please.

Jeff Reihl:

Sure, I'll jump in. You mentioned agentic technologies, which is absolutely something that we're really focused on, and is really the next frontier. That is one area. The second, I would say, is that the models themselves continue to improve and get better. And then the third area is that these models are also now reasoning; there are reasoning capabilities that are being used to help these models come up with even better answers. I think the combination of those three areas, plus the opportunity to do more fine-tuning and what's called distillation, making the models smaller so they can operate faster and be more tuned to the specific use cases you're working on, is really where we're focused within LexisNexis and spending our attention. We believe agentic technologies will help us as we look to improve applications that support specific workflows for our lawyers. An M&A lawyer will work differently than a litigator, so how can we use these agents to optimise the workflow to support what the lawyers are doing, specific to them, optimised and personalised for the work that they do? What's really unique about these agents is that they can work in an autonomous way. They can make decisions on their own based on the context that the user is working in, the data that's driving the workflow, and then their actual workflow steps and where they are within their workflow. I think that's going to change a lot about the way these solutions are built and supported. And so those are the areas of focus that we have within Lexis right now.
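
To give a flavour of the autonomous, context-driven behaviour Jeff describes, here is a bare-bones sketch of an agent loop: observe the workflow state, choose the next action, execute it, and repeat until the task is done. The rule-based planner and stub tools stand in for an LLM-driven planner and real workflow tools; none of this reflects LexisNexis's actual design.

```python
# A bare-bones agent loop: look at the workflow state, choose the next action,
# execute it, and repeat until the task is complete. Planner and tools are stubs.
from typing import Callable, Optional

Tool = Callable[[dict], dict]

def research(state: dict) -> dict:
    """Stub tool: pretend to gather relevant authorities for the matter."""
    return {**state, "authorities": ["case A", "case B"]}

def draft_memo(state: dict) -> dict:
    """Stub tool: pretend to draft a memo from what was found."""
    return {**state, "memo": f"Memo citing {', '.join(state['authorities'])}"}

def plan(state: dict) -> Optional[Tool]:
    """Decide the next step from context; a real agent would ask an LLM."""
    if "authorities" not in state:
        return research
    if "memo" not in state:
        return draft_memo
    return None  # goal reached

def run_agent(task: str) -> dict:
    state = {"task": task}
    while (tool := plan(state)) is not None:
        state = tool(state)      # act, observe the new state, and loop
    return state

print(run_agent("Assess whether to take the Smith matter to trial"))
```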

Jill Luber:

I'll piggyback on that with the agents, just to make it even more practical. You mentioned workflows and automation; take our nursing education space. We're seeing a decrease in the number of nurses who are graduating and becoming part of the workforce in the United States, and that decrease is actually not for lack of candidates wanting to go into programmes. It's a lack of educators. In many cases, the nursing educators are, in fact, nurses. These are individuals who split their time between nursing and teaching the next generation, and you can imagine that's hard to do. How do we make it easier for these nurses to come teach the next generation? We can create agents that, given a unit of study within the programme, can spin up the lesson plans, the quizzes, the homework, the tests, the assessments. You can then feed those assessments back into the agent to say, look, where did my students fall short? Let's go reiterate a point or two that they're missing, and really take the burden off of those, again, nurses who are teaching the next generation. Taking away some of the nuance of being a teacher, and just helping them get it done without so much burden, so that hopefully we can have more nurses who will sign up to do this, and therefore we can graduate more. But it is going to be that kind of agent technology that really understands what you're trying to accomplish, not just in the moment but over a few weeks, so you can continue having this dialogue with the agent: how are your students doing? Is there a concept that they need to really double down on?

YS Chi:

All of this is happening because we seem to focus so much on customers and customer needs. You are CTOs. Yes, you do talk to customers, but there's probably a good layer between you and those who are facing the customers and hearing their needs. How is this working inside RELX?

Vijay Raghavan:

I'll start. I actually think it works very well. I don't talk to customers on a day-to-day basis in terms of understanding the requirements or anything like that. But when there is a market need, especially for a new product, I typically get involved in terms of understanding what that market need is, and then trying to figure out what it translates into in terms of a new product offering, or how we go about approaching that market need with something that we didn't have before. The other aspect is, because each of us has multiple business units within our respective divisions, one of the jobs that we have, one of the hats that we wear, is to look across the common needs of those markets and try to synthesise what it is we have to bring to bear. That does require us getting fairly comfortable with what the customer's need is, and the nature of any organisation, as you know, YS, is that there's always more demand in terms of product needs than there are people to do it. So it's incumbent on us to try to figure out what are the things we should be working on, so we can push back and say, maybe that's not what you should be working on right now, maybe this is where we should be devoting our scarce resources.

YS Chi:

One thing that strikes me about you as a group is how much you work as a team. Obviously, you individually have different customer segments. You have different ways to try different technology and combine them, but you have created something that really shares a lot. One of you, please tell me about this CTO forum, and why does it work?

Vijay Raghavan:

Sure, Jeff or Jill, you want to take a crack at it?

Jeff Reihl:

Sure. I'll start. The four of us get together about every six weeks, and we cover a variety of different topics that are pertinent to all of our organisations. I would characterise it more as sharing best practices, leveraging the size and capabilities within RELX. Some of the topics we cover are things like security, because we all have to deal with security, and we share best practices and any issues that we're individually focused on. Procurement: how do we leverage our scale as RELX to get the best price when we're working with large companies like Microsoft or AWS? Each one of us is the executive sponsor for one of those large vendors. I am the executive sponsor for AWS, because we do a lot of AWS work within LexisNexis, and Vijay is the executive sponsor for Microsoft, because they're hosting within Microsoft. We will take point in certain areas, but we'll always have a dialogue around issues or opportunities to do things better across RELX.

YS Chi:

I mean, I remember... just one moment Jill. I remember not long ago, Vijay, there was a lot of tension between these tech people in different divisions, and yet you created this forum in hopes that you would become amazing friends. You've become great allies to each other. I'm so jealous.

Vijay Raghavan:

Yeah, I would say we do have a very strong working relationship, and part of it, I think the secret sauce, was to figure out where we should collaborate and where we shouldn't. Because if we tried to boil the ocean and get into each other's shops on every little thing, it simply wouldn't work. But we deliberately and very consciously chose the subset of things that we wanted to work on as a group, the things that are basically common across the divisions, right? Because the reality is that 95 percent, if not more, of our time is spent within our respective divisions. With the few meetings we have per year, what are the salient things that we really need to share across the group? Even there, we can't do justice to it just among the four CTOs, so we set up what we call communities of best practice. That way we have a community of best practice for generative AI, one for end-user computing or infrastructure, and we appoint people within our respective divisions to collaborate more broadly and deeply in these areas, so we don't have to get involved, because we can't get involved at that deeper level.

YS Chi:

And this requires, obviously...

Jeff Reihl:

One other area, YS, that we collaborate a lot on is talent...

YS Chi:

I was going to just reach that point and say, this cannot happen unless you have talent, and also talent that likes to work with each other. So, help me understand, for the audience, what do we do to attract, retain and train the talent, and have them be happy to work with each other and leverage each other? What is it that we do so well?

Jill Luber:

I'll jump in, because I've been here, you know, 22 years and across two divisions now. So at that moment when I could take the next big step, I chose to do it within RELX. I think a lot of it is our mission and what we do across all of our divisions; you can find the mission and why it's important to do what we do. Within Elsevier, we're advancing human progress, and in Risk, there's so much work around helping combat fraud and helping people, even in Brazil, when there was no credit bureau at all and everyone was paying ridiculous interest rates because there was no way to understand who is good credit and who is not. This is setting up that ability, and it's for the betterment of everyone. It's for the betterment of society. Even with the case law, understanding the law of the land and helping make sure that due process is part of society, and really setting standards for things like law and risk, and even in research. People get behind that. They understand why they come to work every day. They understand what they're contributing. And I think that's really important. We talk about those goals and that vision often. You'll hear it very often in our townhalls and in what our leadership says, and in what we've accomplished. Yes, we are a business, and we're a very large business, and we're very successful. But our success is because we are doing the right things for our customers, and they find that what we produce and what we provide is worth continuing to purchase and continuing to have a relationship with us. They're always hungry for what the next thing is that RELX is going to bring them, because they know that we have the customer in mind, and that it's something worth spending their budget on, right? So it's helping them get their jobs done. And I think our people can see that; they see the contributions they're making. As a result, there's a nice respect amongst all the colleagues too, that we're all doing this together. I think that's a lot of what keeps people at RELX. We do have long tenure. Certainly amongst the three of us in our divisions in tech, we all have very long tenure within a tech organisation, which is not usual. And again, I think it's just that people really enjoy who they work with, and what they're doing, and why.

Jeff Reihl:

Yeah, I think that, as you mentioned, YS, it's all about AI. So, if you're a technologist, we've got some very interesting problems that we're solving for our customers, as well as the technology that we're using. We're using state-of-the-art technologies and we're hosting in the cloud. As a technologist, there has never been a better time to be part of RELX.

Vijay Raghavan:

The other thing I would add to that, YS, is that we aren't dogmatic about where our talent comes from within technology. We do strongly encourage lateral movement within our divisions, and we try to do it across divisions as well. That's another thing that we share, as Jeff pointed out. It's certainly important that if someone has an inclination for data, they don't necessarily need a computer science background; it's so passé these days to ask what your degree was. It's really important to first look broadly at what a person's aptitude is, what they've accomplished and what they aspire to do, and move them across laterally, even before they get that next promotion.

YS Chi:

That is a perfect point from which I can launch the last segment of this, which is about looking forward for our people, not just the 12,000 people within your organisations, but the other 25,000 people who are not in the tech organisation. Can you each perhaps share some of your insights about what they should expect in their work, as it is affected by technology, and your advice on what they should be doing so that they remain very good at what they do, despite the tsunami of technology that is coming at us as non-technologists? Please.

Jill Luber:

Yeah, I'll start, just because I've recently been helping some of them... In Elsevier, we have a lot of editors of journals, a lot. And back to your point, they're not very technical, but they're very good at what they do, and they are sitting in language all day. All day they're reading and commenting and making sure that the content they produce is of high quality. The challenge that I give to that team is: if you find yourself doing something over and over and over and it's a repetitive task, please raise that, let us know. Let the technology team know. We may have a way to help. We've already talked about generative AI, we've talked about agents; even just internally, do we have a way to help our teams do what they need to do? Spend their cognitive load in the right places by getting rid of some of those repetitive tasks? It feels more natural for technologists to have that conversation, because they think, if I'm doing something repetitive, I should automate it. But it can apply to anyone, really, anyone in the company. If you're doing repetitive tasks, there's probably a way to automate them, and we can take some of that cognitive load away and free up some time to do more things. That's what I encourage that team to do: just take note of what it is you do often.

Vijay Raghavan:

Great point.

Jeff Reihl:

Yeah, and I think you want to experiment. We want to encourage experimentation. If you're in a sales organisation, what kind of tools can help you do your job better? Marketing, customer support: as Jill mentioned, it can affect any of our roles and help us improve. But I would also suggest that you be adaptive, because your role is going to change; this will have an impact on all of our roles. How can we leverage this technology? How can we learn more about this technology to see how it can improve the way that we work, make us more efficient, make us more effective, allow us to contribute even more to the company? So really have that kind of learning approach where, hey, I'm just going to try some of this stuff out and see what it is capable of doing.

Vijay Raghavan:

Yeah, great points. The one thing I will add is, I think we've all heard the cliché that it's not AI that's going to take your job, it's going to be somebody else who knows how to use AI that's going to take your job...

YS Chi:

This is why, Vijay, if I can ask...

Vijay Raghavan:

...to some extent, that's a little glib, because there are going to be jobs that, in the fullness of time, will be taken over by AI. It's just one of those things that, as it becomes more advanced, you can see that happening. But in the short term, back to Jeff's point about being adaptive and being a learning organisation and a learning individual, it's really incumbent on every one of us, whether you're a technologist or not, to learn what the capabilities of AI are. It's not that easy, actually, because, for example, I'm using ChatGPT Enterprise right now. Often I find myself fighting it because I think I know better, or I'm getting impatient, or whatever it might be. It starts with leadership, actually: all of us encouraging ourselves and then encouraging our teams to get conversant with these tools and be that person who uses AI to make ourselves more productive and augment our own capabilities.

YS Chi:

It's a wonderful thing, until you need an awful lot of sharing across the talent to make it actually work, and at a speed that we can tolerate. I think we are now at a point where we have to do more experimentation, have more open-mindedness to look at different ways to solve problems for our customers, and I hope that when we do, we share those experiences with others, so that we don't have to have it done 37,000 times. I want to thank you all for sharing that wisdom with us today. As always, Vijay, you're my tutor and I will always bother you with technology questions that I can't answer myself. Jill, keep solving those problems for us in this space of medicine and research, because I think that area just has so much to gain from what Jeff called the repetitiveness. And Jeff, please get out there and keep the pace going, because we love what LNLP has been doing the last two years in exploring partnerships and new ways of doing things so quickly. Once again, thank you so much for spending the time with us today, and I think that you guys will get a lot of follow-up questions.

Vijay Raghavan:

Great. My pleasure.

Jeff Reihl:

Thanks YS.

YS Chi:

Thank you.

Jill Luber:

Thank you.