Professional Interviews


Hear from Industry Professionals!

To gain a greater understanding of how AI is affecting the real world, Cohort XI has taken time to interview industry professionals on the adoption of AI, its implications for modern work, and more. Scroll down to see what these leaders have to say!

Table of Contents

Interviewee | Title | Jump to...
Christopher Smitherman II | McDonald's | Christopher Smitherman II Interview on Change Management (Video)
Tony Sumrall | Adjunct Instructor | Tony Sumrall Interview on the Future of Generative AI (Video)
David Gayda | Eli Lilly and Company | David Gayda Interview on Use Cases of AI in Academia and Industry (Video)
Kyle Lierer | Slalom | Kyle Lierer on AI and the Cloud (Video)
Bryce Williams | Eli Lilly and Company | Generative AI and Workforce Collaboration with Bryce Williams (Video)
Taylor White | OurGov | Taylor White on the Impact of AI (Video)
Siva Chittajallu | Roche | Professional Interview on Use Cases for AI & Machine Learning with Siva Chittajallu (Video)



Christopher Smitherman II Interview on Change Management


The video features Patrick Hanley and Alex Coulombe, two students from Miami University, interviewing Christopher Smitherman II on change management. Christopher Smitherman II is a 2018 graduate of Miami University, where he majored in mechanical engineering with a minor in paper science. Smitherman currently works for the McDonald's Corporation as a National Real Estate Manager, maintaining McDonald's brand standards and managing all property acquisitions for McDonald's in Michigan, Indiana, Illinois, Iowa, and Wisconsin. Since graduating, he has started his own digital media consulting business and has worked with clients such as Sports Illustrated to produce creative and engaging content, grow regular site and page viewership, and increase brand loyalty through strategic marketing and advertising.

Outside of work, Smitherman earned his MBA in Global Leadership and his MS in Organizational Leadership from Colorado Technical University. Smitherman enjoys learning new things and keeps busy; he believes life is short and is what you make it, and he wants to make the most of it. He enjoys working out, watching movies, and reading books in his spare time. A common saying at McDonald's is to always remain a student of the business. Smitherman strives to remain a student of life and constantly pushes himself to learn new things and take himself to the next level.

Alex and Patrick are both part of the Lilly Leadership Institute, working on a major project on Embracing AI. Change management has a significant impact on organizations, and their discussion with Christopher provides insights into communication, leadership, and future development strategies from a professional point of view.


[Music]

[Patrick] Hello, my name is Patrick Hanley and I'm a junior here at Miami University in the Lilly Leadership Institute. And we have Alex and Chris here. If you guys want to introduce yourself a little bit.

[Alex] Hi, I'm also a junior in the Lilly Leadership Institute at Miami, and we're joined today with Chris Smitherman, who is going to introduce himself and talk a little bit about what he does. And I'll ask him some questions and get his professional perspective.

[Chris] Yeah, no, thanks. Thanks for having me. Y'all uh. You know, I'm Chris Smitherman, the second, I am a graduate of Miami University, class of 2018. I came through the Lilly Leadership Institute. So cohort 5. So y'all saying cohort 11. I'm like, wow, time has flown. But I'm currently a national real estate manager at McDonald's, the McDonald's Corporation. So the restaurant business I currently handle all of their property acquisitions and new restaurant locations in five states in the Midwest, which include Michigan, Indiana, Illinois, Iowa, and Wisconsin.

[Alex] OK, awesome. And then could you talk just a little bit briefly on kind of your own personal experience with change management, What are some of the experiences that you've had kind of with that and either roles that you've had or anything?

[Chris] Yeah, sure thing. So to start off, I do hold a Master of Science in organizational leadership and change from Colorado Technical University. So I have an academic background and understanding of change management and how those processes go. At the same time, what I would say is, you know, change is life. So it's really just the study of change and how people handle it, and then also how to execute it effectively. So some instances of change that I've experienced in my professional life are, one, changing location from city to city. I'm originally from Cincinnati, Ohio, and I transitioned in my first role with McDonald's to Buffalo, New York. No family, no friends, nothing there, no base. They sent me there and I said, hey, cool, I'm gonna make it work. So I really had to change my understanding of the different geographic locations of the United States, and different cultures come with that. I'd really never been outside of a Midwestern culture like that; I actually had really no connection to the East Coast at all. So that was really new to me, and I learned that over three years. Now I've had the ability to transition back, or not back to the Midwest, but I live in Milwaukee, Wisconsin, which is similar but different at the same time. So those are the things from a location perspective. From a professional perspective, I've changed roles and functions. I began my career with McDonald's as an operations associate in the operations department, so that was focused on running restaurants; that's what operations means. My role was to work with franchise owners in the Western New York area, which is Buffalo plus Rochester plus what we would call the Southern Tier, so some smaller towns in that northwestern part of New York State.
And I would help support them and help them grow their businesses, improve operations, and hold them to McDonald's brand standards. After doing that for 2 1/2 years, I transitioned into the real estate development department; that's when I started doing real estate property acquisition. So that was a change of understanding a different discipline, different mindset, different way to look at the business. So that's how I've changed in that regard as well. Does that answer the question?

[Alex] Yeah, that's great. Thank you.

[Patrick] Yeah. So the next question here is: change often faces resistance. Based on your experience, what strategies do you recommend for getting buy-in and overcoming resistance to change initiatives?

[Chris] Yeah, that's a great question. So the biggest part of getting any change to be effective is that buy-in. A couple of things are effective. One is communication; I'd say that's really the biggest way you could do it. On top of my career changes, I've actually gone through an organizational restructure at my company. They did that recently, in April; that's the one I went through. I came in right after an earlier organizational restructure and was still facing the ramifications of that, so while I didn't go through that process, I've gone through this one and seen it. A lot of it comes down to communication, communicating through different mediums, whether it's email or in-person events. So if you have in-person workshops and trainings that are meant to describe and say, hey, this is what the change looks like, this is what's going to come, that's really important. The other thing you can really do to lower resistance is corral both formal and informal leaders within an organization to get behind the message. If you have somebody who is trusted, who is respected within that organization, the others will accept the change much more easily and quickly. And change is a slow process, you know. Another strategy is you cannot rush this; it's got to be iterative, it's got to be slow, you've got to start to see the breadcrumbs. And I would say with my academic training, I was able to really see my organization lay the foundations for the change to come, how they strategically did communications, different efforts, different things that they had going, and that led to that ultimate, bam, hey, this is the change, and then getting the buy-in in the process. It's always great to include the people who are undergoing the change in the process.
Get their feedback, get their communication, get their thoughts and ideas, because if they feel a part of it, they're more willing to change.

[Alex] That's perfect. Thank you so much for the insights. Another question here, on company culture: what kind of role does a company's culture play in change management, and what are some ways that you've seen a company change or adapt to fit change management in with its unique culture and ways of working?

[Chris] No, that's a great question. I would tell you culture is everything. Culture needs to be created by the people in the organization, not necessarily the leaders at the top. The leaders at the top set the tone of the organization, and they have the opportunity to shape the culture and set forth the vision, with the vision, mission, etcetera; you set that foundation for the culture. But the other people in the organization ultimately, I would say, have more control and build it. So from a change management perspective, it's a lot of the things I mentioned before. Take your time; don't try to rush it all in one moment, because the faster you go, it's like force. If you make all the change come at once, as a fist, you're going to have a much greater resistance come back to stop it. If you inch forward, you'll have inches of resistance back, but eventually people will come around, and you just inch forward, inch forward, inch forward.

[Alex] On to another question that hopefully tackles a bit of a different perspective. Looking at an organization, when you're implementing large change, what are some specific ways that you can really support your employees and help them through the process?

[Chris] Yeah, I would say listening sessions are a great way you can do it, not only with your immediate management but also with the higher-up leadership in the organization. I know my organization had a lot of coffee chats with leaders; they had all these coffee chat dates with different leaders of the company to say, hey, let's talk about what your concerns are, what you like that we're doing, what you want to be heard. That's a great way to get buy-in and have that conversation with your employees and get that feedback. Another way that I think is really, really effective in getting that support and helping your people through the process is to be as transparent as possible. One of the things that my organization did that I thought was nice was that they didn't say specifically what, but they said, we're going to change the way that we work, and here's why: because these are the needs of the business in the future, they're XYZ, so we need our structure to be ABC. That level of transparency is really important to provide; I'd say it's a support in itself. A lot of times when people are kept in the dark, they don't know what to think. And when you're doing this, especially if you're talking about changing an organization or a mindset, some people are going to struggle with that, but also some people may lose their employment, may lose their jobs in the process, and their functions may no longer be there. So you have to be very sensitive about how you handle it. I understand it's business, and I understand these things can happen.
But approaching it with some empathy and some sensitivity, involving people in the process, having listening sessions to say, let's hear your concerns, let's hear what we're talking about, and then addressing those things, those are great supports that you can provide to your people and organization.

[Alex] OK. That's great. Thank you. One last, almost summarizing question: looking to implement big organizational change management, like we are with generative AI, what are some of the big things that you need to hit on, that you can't miss, if you're trying to do something like this?

[Chris] Yeah. So one, you've got to have a strategic approach, and you have to understand not only where you want your organization to be in the next two years, but where you want it to be in the next 5 to 10 years. So the changes where you say, hey, we're gonna have a process, it's gonna take 6 to 12 months, we're going to do different things in the first stage, and then it's gonna take another 12 months to get us where we really wanna be, right? That should also be an iterative process to get you where you want to go in the next 10 years. So let's say that's a critical success factor. You've got to have people in leadership who have that mindset, who understand that goal and that vision for the organizational structure, and who can get you there. Because what will make you critically fail is if, again, you try to do too much at one time. You see it with organizations all the time, where they may do a mass layoff and say, we don't need these people, but they end up trying to go back and rehire and bring that talent in. The next critical success factor, breaking that down more, is identifying what are going to be the key roles and components driving your organization forward. Those should be the roles, whether they're new roles, improved roles, or refined roles, whatever that is. You've got to have that understanding to be successful. You have to have a good communication plan. You need to have a good training plan as well, because you need to make sure that your employees are tooled. This connects to the previous question, but you need to make sure employees are tooled to be able to step into that change.
So if they don't have the skills necessary, and we talked about generative AI, if they don't have the skills necessary to hone AI and to manage what AI is going to do, then the chances of success are low, or you're going to have a longer learning curve. So understand that's definitely a high-level critical success factor that you've got to have: the training component, the tooling component. And then I'd say the big one is just patience, patience and a willingness to change. These are road maps that these companies created, these big road maps, hey, we want to do what we want to do. Well, sometimes, like any of us, you get so focused on the next step. This is where long-term planning comes into play: you have to be thinking about the outcome, the long-term outcome. If you're finding that you're getting a lot of resistance in an area from your people, that something isn't quite working, you need to be able to pivot fast and change fast. I know an engineering professor, John Richter, to give him his credit, I remember he said, "fail fast, fail often," you know, at Miami University. I'd say you've got to be ready to fail fast, fail often. It's OK; you're going to try things, you're going to do things, but you've got to be able to fail. You have to show and lead that you can change just as much as you want others to change. So I'd say those are some big critical success factors.

[Alex] Fantastic. Those are some wonderful insights. Thank you very much for joining us today and for all of your awesome perspectives on change management, because I know it can certainly be a very complicated issue, especially when it comes to large organizations. So thank you for your expertise and for joining us here today.

[Chris] Hey, no problem. Thanks for having me.

[Patrick] Thank you.

[Alex] Of course.

[Music]


Back to Top



Tony Sumrall Interview on the Future of Generative AI


In this interview for the Lilly Leadership Institute’s Embracing AI project, Jessica Gentles and Sarah Freeman interview Tony Sumrall about the future of generative AI. In this conversation, they discuss the potential influence of AI on day-to-day life and decision-making, as well as strategies for organizations to leverage AI's potential and manage the associated risks. Tony also discusses how employees can prepare themselves to work with generative AI to become more productive, and how he anticipates the future of work will change and improve as generative AI enters our everyday lives.


[Music]

[Jessica] Hi, my name is Jessie Gentles. I am a third-year software engineering major at Miami University, Oxford, and I am a part of Cohort 11 of the Lilly Leadership Institute. Today, we will be speaking a little bit about knowledge workers and the ability to embrace AI as it advances in our society.

[Sarah] And I'm Sarah Freeman. I'm also a third-year engineering student, studying mechanical and manufacturing engineering at Miami University in Oxford, Ohio, and I am also a member of Cohort 11 in the Lilly Leadership Institute. So now we'll have Tony go ahead and introduce himself.

[Tony] Hi, I'm Tony Sumrall. Well, I've been in tech since I graduated from Systems Analysis, which was the precursor to computer science, and I've been out here in lovely California since 1980. I've been in the high-tech space ever since, working with large companies and small companies. I've had a couple of startups; I've got some patents under my belt. I've been doing tech for as long as I can remember.

[Jessica] Well, we'll just go ahead and get started with our question. So my first question is, as someone who currently works in technology, what are some of the fears about generative AI that you've heard from people around you?

[Tony] Well, so it depends on, it depends on the people, the person, and kind of how long they've been in the business, right? They've been in the business a long time, then they've seen more changes than you can imagine, and either they're just tired of adapting, or they're excited about a new change. Tired of adapting is, I mean, that's just fatigue, right? If you haven't been in the business a long time, then it seems like a lot of the people that I've talked with are not so much afraid that they're going to lose their jobs, but they're afraid that they're not going to be able to keep up, and they don't really know if, well, of course, they don't know if it's a good thing or a bad thing, and they don't really know how it's going to affect their lives. But not knowing how to keep up when you're already full tilt, you know, working 10-12 hours a day, five-six days a week, and then you've got to adapt to a new technique that is supposed to make life easier for you, you know, all of these things are supposed to make life easier. So trying to incorporate something that's supposed to make your life easier while still meeting all of your deadlines, that's pretty, that's pretty heavy.

[Jessica] Thank you, and how do you believe some of these fears can be addressed as workplaces move to implement more AI tools in the next few years?

[Tony] I think a big part of it can be addressed. For knowledge workers in particular, for people who are exposed to technology on a daily basis, demonstrating the goodness and the efficiencies and the optimizations that they can accrue helps a lot of people, right? If you're not a knowledge worker, then you've got to do something about the fear; there's always this fear of the unknown. So again, it's demonstrating and showing how it can help, maybe offering retraining programs for non-knowledge workers to help them get their feet wet, if you will. The things that are going on today with AI, just out here in the everyday world, are pretty amazing, and it's affecting all aspects of things. Amazon is using it to give better suggestions for what they want you to buy, you know. We all know about OpenAI and ChatGPT, and we know about Bing, and we know about Bard, but Amazon bought into another company, one that has been fairly quiet, that produces a product called Claude, which is a phenomenal, phenomenal AI. And of course, there's Meta at Facebook, who are developing their own AI. So I think as it becomes more a part of our everyday life, it should become easier for people to accept and embrace it.

[Jessica] All right, and my next question is which fields do you think generative AI will impact the most in the next five to ten years?

[Tony] I mean, I can't imagine. I can't imagine what it's going to do in the next year, right? I've been working with OpenAI's product since last November, when they first announced it, and the changes that I've seen just in the last 11 months are incredible. There are over a hundred new products that come out every week that utilize generative AI in some way, over a hundred, and this has been the case for months. So, actually, there's a couple of things I can say. One is, we're not going to recognize things in 3 years. It's not going to be anything at all like what it is now. This is akin to the invention of the airplane, you know, the Wright brothers, or the invention of the personal computer. The things that we have seen in the last few months, if they are any indication, and I expect it'll slow down, mean your industries by the time you graduate will be completely different. It's being used everywhere now: it's being used in medicine, it's being used in, obviously, computer science, it's being used in advertising, it's being used in search engine optimization. You can create websites by typing on your computer and saying what you want them to do. It just goes on and on and on. So I think the only thing that I can say is we won't recognize it. It will have a major effect on every industry.

[Sarah] All right, so you actually alluded to this a little bit. We're college students about to enter our careers; you've seen a lot of change and had a lot of experience in tech. What would you recommend to future knowledge workers in the next few years to prepare for the AI advancements that are coming to the workplace?

[Tony] Embrace it; embrace it now. Use it every day; use it in your daily life. I mean, use every one that you can get your hands on, become familiar with it, learn what a prompt is. You know, prompting is the way. Today, writing a good prompt can give you good results, and writing a bad prompt can give you bad results, right? That's going to change. The whole inference part, taking apart the stuff that we give it to get responses, is going to get a lot better. But nonetheless, understand it; the more you know, the better off you're going to be. And I don't care what your area of concentration is. If it's medicine, if it's the biosciences, if it's accounting, right? Becoming exposed to it, finding out as much as you can while you're in a learning environment, is going to prepare you better than anything else. And I mean, I use it every day. I use it literally every day. I most often use it to help me write code. Yeah, I use it to help me learn new programming languages. You know, I sat down with it a few months ago. I had a thing that I needed to get done on Windows 11, and I don't know PowerShell, which is a scripting language. So I sat down with ChatGPT, and I said, 'I need a PowerShell program to do this,' and it spit out a program for me. It didn't work right out of the box, but two hours later, it was running on my computer, and I had no idea how to do PowerShell, right? I mean, those are the things that I use it for. You can use it to pretty up an email. I use it every day; use it every chance you get.

[Sarah] All right, that's actually a really good transition to our next question, which you touched on already: how do you think generative AI will continue to evolve for everyday use? So not necessarily specific to, you know, coding or the workplace, but just for everyone?

[Tony] Well, so I'm a science fiction fan. I've been a science fiction fan since I was a kid. Everything that I have seen in all of my years has been predicted by science fiction. So how do I think it's going to be used in everyday life? I mean, you know, Google and Amazon tried to address some of this with the Google Home speakers and with the Amazon Echo devices and things like that. Well, Google is incorporating Bard into the Google Assistant, and now you'll be able to ask it things that have to do with the everyday world, right? This is going to happen more and more often. If you happen to get into a smart home, and I've got automation all over the place here, it will more effectively learn your patterns and respond to them, right? Your car, let's assume that we go EV and get away from internal combustion engines, will figure out when you mostly use it and how much you use it, and automatically power itself. It's going to affect everything from optimizing the power use of your refrigerator to optimizing the power use of, well, any device. I mean, the possibilities are endless. It won't necessarily reduce prices, even though it probably should, but hopefully it will give us some extra leisure time to pursue some of our other passions.

[Sarah] Do you have any insight in terms of, maybe, managerial structure? How is embracing AI going to have to trickle down from whoever is making the decision all the way down to the knowledge worker? Does that make sense?

[Tony] Yeah, it does, and I would hope that it doesn't trickle down. I would hope that the way you make this work is by making it work at all levels of the organization, right? If the people on the bottom, which is where I've been most of my life, feel like it's being handed down as an edict, they're going to resist it just on general principle in a lot of cases, right? If you can get people at every level of the organization who see the goodness, who understand what it can bring, who understand the optimizations and the efficiencies, and who can become, I wouldn't say cheerleaders, I wouldn't even say ambassadors, just local people who go, 'Yeah, that's a good thing. I can see how that would help me do this,' right? I've been through countless upgrades, and the ones that are effective, the ones that happen the smoothest, the quickest, and are most successful, are the ones where the grassroots people already believe in it. So, yes, embrace it.

[Jessica] So I just want to make sure that I got what you were saying there. You're saying that in order to get people to embrace AI, we need to make it kind of available to everyone, like in the workplace. Is that ...

[Tony] More than just available. I mean, one of the big things that was going around the industry a few years ago was gamification, right? Where, you know, it's kind of a toy, and you get to play with it, and you get points and stuff like that. That can help in some environments. But yeah, making it available, but also having good, accessible information that doesn't seem to be coming from an oracle, right? It seems to be coming from a coworker, somebody that you already know and trust, or, if you don't already know and trust them, somebody whom people you trust do know. Organic change is the easiest to handle, and it's also kind of the most difficult to accomplish. To make it organic, people have to be familiar with it at least in passing, and understand enough of it to believe that it will help them, and that it's not just some crazy idea that the CIO or the CTO or the CEO has gone off on, right? So, I hope that explains that.

[Jessica] Yes, it did, thank you.

[Music]


Back to Top



David Gayda Interview on Use Cases of AI in Academia and Industry


The video features Ryan Holthouse and Michelle Ebu, two students from Miami University, interviewing David Gayda. David is currently a Sr. Principal Software Engineer in Global Statistics at Eli Lilly and Company, driving modernization and innovation using machine learning applications. He is passionate about using technology and AI to create solutions focused on health. While at Miami, David studied Software Engineering and Music Performance; he was a member of Cohort 2 of the Leadership Institute.

In this video interview, Gayda talks about how AI is propelling advancements across various sectors, including healthcare, where it is fostering innovation and driving growth. It's not merely about classification or problem-solving; AI is a creator in its own right, continuously evolving and expanding its capabilities.

In the realm of artificial intelligence, while both sectors, industry and academia, are actively engaged in harnessing AI's potential, they differ in their domain focus. Academia, often constrained by limited funding, faces challenges in training and development. Despite these obstacles, however, academic institutions are employing AI to enhance student learning experiences. Through the development of chatbots, students can now access immediate assistance with homework, seek guidance, and receive timely feedback, facilitating a more efficient and supportive learning environment.

Despite the remarkable progress AI has made, there exist apprehensions and concerns regarding its widespread adoption. Issues such as job displacement, security vulnerabilities, misinformation, and the potential for fostering laziness are prevalent. However, the focus is not on restricting the use of AI but on finding intelligent ways to leverage its capabilities. By addressing these concerns and optimizing its application, we can ensure that AI serves as a transformative tool for positive change.

In summary, the current landscape of AI is marked by rapid evolution and transformative potential. By embracing innovation and adopting a strategic approach to utilization, we can navigate the challenges and uncertainties while maximizing the benefits of AI for all.


Transcript coming soon!


Back to Top



Kyle Lierer on AI and the Cloud


To learn more about how generative AI is used in the scope of data, security, and the Cloud, Drew Laikin and Brie Merritt from Cohort 11 interview Kyle Lierer, a Cloud Engineer at Slalom, to see how he uses AI in his role.


[Music]

[Drew] Hello, my name is Drew Laikin, and I'm with Cohort 11 of the Lilly Leadership Institute. Today, I'm joined by Brie and Kyle to talk about embracing AI for knowledge workers and business professionals. Kyle is an associate consultant at Slalom and specializes in Cloud engineering and AWS. So let's get into it. Kyle, you have a lot of experience with Cloud engineering. Can you tell us a little bit about how the Cloud is integrating AI tools and technologies, and how you might be using AI on a day-to-day basis in your job?

[Kyle] I would say the first big thing is probably the use of AI assistants. Across the board, for developers and IT professionals in general, AI assistants are becoming prevalent everywhere. With that being said, there's an importance and focus right now on AI assistants that take into account data governance. Roughly a year ago, when ChatGPT came out, it was pretty quickly adopted by cloud engineers, software engineers, and IT professionals for things like writing simple scripts and algorithms, and tools like GitHub Copilot are frequently used in the cloud space. Some of the recent AI news, specifically with AWS, has surrounded Amazon Q, a little AI assistant specifically designed to help with cloud-related tasks. That's probably the biggest thing I've seen as far as AI in the cloud space and helping with workflows. Outside of that it's pretty disconnected to a certain degree, but an assistant, when it comes to development, is almost a must-have right now. It helps streamline development efforts so much, and it's even better if it's customized to the code base you're working with, because something like ChatGPT only goes so far. Unfortunately, if you're working with a really large code base and complex cloud architecture, it'll often give suggestions that are blatantly wrong, versus something that's customized and trained on your code base, like Amazon Q is capable of doing.

[Brie] So that leads into our next question - what are the benefits of integrating AI into cloud-based systems and are there any potential risks to that?

[Kyle] The biggest point of consideration is going to be security. When it comes to development efforts, security is paramount. With code, you don't want to take a snippet, go ask an AI a question, and all of a sudden that snippet is showing up in other people's responses. That has been a really big concern that a lot of corporations have had in regards to AI: you do not want a tool where you ask a question, potentially expose PII or something along those lines, and now that's showing up in other responses. So when it comes to development efforts, and in particular the cloud, a lot of effort has been focused on making sure the models used don't expose data, and there's some pretty fun architecture going on that makes that happen and allows companies to customize those models further for their specific data while guaranteeing there's no data leakage.
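The data-leakage concern Kyle describes is often handled by redacting sensitive fields before a prompt ever leaves the company. This is a minimal, hypothetical sketch of that idea; the patterns and function names are illustrative, not from the interview, and a production system would rely on a vetted PII-detection service rather than a handful of regexes.

```python
import re

# Illustrative patterns only -- real PII detection is far more involved.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace recognizable PII with placeholder tokens before the
    prompt is sent to any external AI assistant."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Contact jane.doe@example.com or 555-123-4567 about ticket 42."
    print(redact(raw))
    # -> Contact [EMAIL] or [PHONE] about ticket 42.
```

The same gatekeeping step would sit in front of whatever model endpoint the organization has approved, so the snippet of code or record being discussed never leaves in raw form.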

[Drew] Yeah, that actually segues right into what we wanted to hear about next, and you touched on it a little bit there. When it comes to separating data in a shared architecture, so that one person can't write a prompt that gets someone else's information, can you tell us a little bit about any solutions you've heard floating around for those kinds of issues?

[Kyle] I've actually heard quite a few about those recently. I recently attended AWS re:Invent, which is Amazon's big cloud conference where they unveil a lot of fancy new technology, and one of the big tools that has been talked about recently is AWS Bedrock. Bedrock is a fun tool because it allows a developer to access kind of a single front end, if you will. It's not really a front end, but without going too technical, we'll say it's one door to multiple AI models, and what's really cool is that it provides a lot of customization for how those models work behind the scenes. If you go online to something like ChatGPT and start putting stuff into it, you can't really assume that information won't potentially be used by others, and you don't really know how that model is being used behind the scenes. In a cloud environment, say you're a company building an AI assistant and you decide you don't want your employees to use ChatGPT, you want your own in-house AI model. You can use a tool like AWS Bedrock, which allows you to essentially create a copy of what is referred to as a foundational model. Once you copy it, it is guaranteed that only you have access to that model, so as you work with it and put inputs into it, anything it learns from people interacting with it is owned by your organization. Additionally, say you're a large organization with thousands and thousands of documents and you want to get a model spun up really fast that knows about them: you can provide that data to AWS Bedrock.
It will fine-tune the model based on that data after creating a copy from a foundational model that, again, only you have access to. It's really cool; it solves that problem, and it makes it really easy for an organization to have its own fine-tuned model that its employees can use, which at times can be smarter than something like ChatGPT.
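Kyle's "one door to multiple models" description maps onto Bedrock's runtime API, where each model family expects its own request-body schema. The sketch below builds one such body; the model ID shown is only an example, and the actual `invoke_model` call is left commented out because it requires AWS credentials.

```python
import json

def build_claude_request(prompt: str, max_tokens: int = 256) -> str:
    """Build the JSON body for an Anthropic-style model behind Bedrock.
    In practice you would branch on the model ID, since each model
    family behind Bedrock's single 'door' has its own body schema."""
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    })

# The actual invocation requires boto3 and AWS credentials, so it is
# only sketched here:
#
#   import boto3
#   client = boto3.client("bedrock-runtime")
#   response = client.invoke_model(
#       modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # example ID
#       body=build_claude_request("Summarize our deployment runbook."),
#   )

if __name__ == "__main__":
    print(build_claude_request("Hello"))
```

Because the request goes through the organization's own AWS account, the isolation Kyle mentions (your inputs staying within your copy of the model) is enforced at the account boundary rather than by the application code.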

[Brie] So let's take a bit of a step back: what skills do cloud engineers need when working with AI? And we can broaden the question to what skills you, in your role, or the people you work with need when working with AI. You can take whichever route.

[Kyle] Let me think about that for a moment. I think the big thing is having a good understanding, at least at a surface level, of all of the tools in play. If you're working with an AI assistant on a daily basis, let's be fair, they're not always correct. There's a strong chance, at least right now, that they can hallucinate and spout out information that's blatantly false, and while certain models are getting significantly better at that, they're not always going to give you stuff that's truthful. So it's still important to have a really strong understanding of the fundamentals when you're working with all of this, so that if you ask it something, you can evaluate what it gave you back as a response, rather than just asking for something, copying and pasting whatever it does, and moving forward. If you're just copying and pasting whatever it's done, I don't foresee you being very successful in your efforts. But if you're able to look at it and go back and forth: does that make sense, does that not make sense, do I need to fine-tune it? Then it's going to streamline your workflow. At the end of the day, AI predicts and professionals decide. If you're going to be a professional, you need to understand the fundamental information in your field of choice to be able to make those decisions. If you're just trying to do prediction like the AI model, all that copying and pasting isn't going to get you far.

[Drew] So have you encountered any, I think you mentioned a few before, but any upcoming AI tools or technologies that either you or the people around you have been getting excited about?

[Kyle] Some of the tools I've seen recently that I'm excited about, let's get the language right on this one, are what's referred to as AI agents. Right now when you ask an AI model a question, it gives you a response, and that's kind of as far as it goes, especially for a lot of the stuff out there. AI agents are cool because they go a step beyond that. Say you're doing some form of work and you ask, what does my availability look like on a certain day as far as my calendar is concerned? It gives you a response back, and then maybe you say, I have availability during this window, can you schedule something for me? And that agent goes and schedules it. I'm excited about the possibility of that, and more than just the possibility: Amazon in particular has recently announced a tool for it. I don't think it's gotten into general availability yet, but it's in beta testing. I already think that the tools for generative AI in that ask-a-question, get-a-response-back sense haven't reached the full potential of all the solutions that could be built on top of them. But having AI agents, where you can ask it to do something and it goes and does it, and potentially integrates with a more complex architecture, I think there's a lot of potential there, and a lot of really cool solutions that could come from it without a lot of initial work.
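The agent pattern Kyle describes, a model that not only answers but acts, can be illustrated with a toy dispatch loop. Everything here is a hypothetical sketch: the "plan" is hard-coded, whereas a real agent (such as the Amazon tooling Kyle mentions) would generate the tool calls dynamically from the user's request.

```python
# Toy calendar "tools" the agent is allowed to call.
calendar = {"2024-03-01": ["09:00 standup"]}

def check_availability(day: str) -> list[str]:
    """Return existing entries for a day (empty list means free)."""
    return calendar.get(day, [])

def schedule(day: str, entry: str) -> str:
    """Add an entry to the calendar and report what was booked."""
    calendar.setdefault(day, []).append(entry)
    return f"booked {entry} on {day}"

TOOLS = {"check_availability": check_availability, "schedule": schedule}

def run_agent(plan: list[tuple[str, tuple]]) -> list:
    """Dispatch each (tool_name, args) step a model has 'decided' on.
    A real agent would produce this plan itself, step by step."""
    results = []
    for tool_name, args in plan:
        results.append(TOOLS[tool_name](*args))
    return results

if __name__ == "__main__":
    # "What's my availability on 2024-03-01? ... then book a 1:1 at 14:00."
    out = run_agent([
        ("check_availability", ("2024-03-01",)),
        ("schedule", ("2024-03-01", "14:00 1:1 with Drew")),
    ])
    print(out)
```

The key design point is the `TOOLS` allowlist: the agent can only act through functions it has been explicitly granted, which is also how production agent frameworks bound what the model is permitted to do.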

[Brie] For our next question, how are cloud engineers and people in similar roles evolving with AI? And, we kind of touched on this a little earlier, how can we prepare our cloud infrastructure to accommodate advancements in AI?

[Kyle] I'll start on the architecture side, because I think that one's a little more interesting as an initial answer. The big thing is that, moving forward, it's going to be really important to think about security and all of the security elements at play. This is already true of technical architecture in general, but in particular with AI models it's really difficult when, say, a go-to-market project wants to put a model out to the world, scale it, and then start to add security after the fact. It's a lot more difficult to do that after something has gotten out into the world versus thinking about security up front. If you're always prioritizing security last, it's always going to be this battle to catch up, so I think with a lot of the AI tools being developed, security has to be the first thing that's thought about, because there are a lot of areas where it can be really difficult to do it after the fact.
In general, I'd say that's also something cloud engineers need to be very conscientious of. You can no longer be like a regular software engineer who doesn't necessarily think about the nitty-gritty security details or networking infrastructure. It's important to have a baseline level of knowledge across all of these areas, development, networking, infrastructure, security, so you understand how they fit together. That way, if you're building something, you can ensure it's designed in a cost-effective manner but is also really secure. To a certain degree that's independent of the AI tools at play, but with AI it's just a lot harder to ensure these things are secure after they've gotten out into the world.

[Drew] So how do you envision AI changing over the next few years?

[Kyle] Let me think about that. Right now, where it stands, AI is a little bit in a tug of war, if you will. I believe the Pew Research Center put out a nice research piece a few months back that said something like 60% of people have significant concerns about AI being used, and the vast majority of people are uncomfortable with the idea of a professional in any capacity relying on AI. So I see it changing more from a cultural perspective than a technical one. Admittedly, from a technical perspective, a lot of what we're suddenly seeing within the last year, while maybe a little exciting on a technical front, is nothing shockingly new. Even by the time GPT-3 came out, there was already a GPT-2 and a 2.5 generally available to the world that people could play with, but it was GPT-3 that people got excited about, because it was accurate enough to be useful for daily application. I think where we're at right now, it starts to enter a diminishing-returns sort of phase, and as it does, we'll start to see other novel solutions that build on it. But I think we'll also just see a lot more cultural discussion about when it should be used and when it shouldn't. There are definitely a lot of instances where, while it might be cool to implement AI in certain areas, it's still subject to bias, it's still not perfect, and there can be edge-case scenarios. So while those will improve, I think there's going to be a lot more conversation surrounding when it can be used and when it shouldn't be. I don't immediately see anything dramatically big happening within the next few years, at least from a technical perspective; I think it's going to be mostly a cultural one.

[Brie] So, as someone working in a tech field, do you have any advice for business leaders on how they should transform their organizations to keep up with the advancements of AI?

[Kyle] I think the first big piece is going to be aligning your technical architecture to be flexible enough to adopt it, period. One of the things you see right now is that a large majority of organizations are in one of maybe two or three spots, and one of those spots is that they're not prepared for AI, in the sense that their architecture doesn't support the use of it. Say you're in the healthcare industry, for instance, and you want to use an AI assistant to go through some of your records, but you don't have all of those records in a spot where you could securely give them to an AI assistant without it potentially becoming a security issue. In that case, in order to adopt it and use it effectively, you have to make sure everything else is up to date enough that you can, kind of like data aggregation, if you will. If you can't aggregate that data, you can't really use it for AI. So one way a business leader can prepare is simply to get their stuff up to speed with current best practices, period. The other thing falls on the people side, and that is making sure your everyday employee knows how to use AI, knows when to use it, knows when not to use it, and knows not to overly rely on it. Again, at the end of the day, AI predicts and professionals decide.
At no point should someone become entirely reliant on it. At that point they're not really doing anything; they're just copying and pasting from an AI assistant. So it's still really important to be constantly learning and staying up to date on current trends in your field, because you can't fully rely on an AI model to do that. They're trained on past data, so if you're doing anything novel and cutting-edge, the AI model is still not going to be the best tool to use. It's going to save you time and improve your workflow, and that gives you more time to focus on the cutting-edge, really cool stuff.

[Drew] Fantastic. So, to wrap things up here, what would you say is the bottom line, the big takeaway, from all this we've been hearing about generative AI?

[Kyle] I think I've said it a few times, but AI predicts and professionals decide. One of the analogies I've recently heard that I really like, when it comes to generative AI in general, is that over the past decade or two, big data has been a pretty hot topic, and we're at a point now where there's so much data that it's really difficult to synthesize information and patterns from it. AI is a really useful tool because it's like a magnet. If we think of a company's giant set of data as a hay pile, and the needle as the pattern or information you're looking for, AI is the magnet that can find that needle in the haystack. That's not to say it should be the one thing that solves all problems, but it should make it easier to solve problems and easier to find patterns. At the end of the day, it should still be a professional deciding based on their expertise; the AI model should just be a tool that assists that professional and saves them time.

[Drew] It was fantastic to speak with you. We want to thank you for taking the time, and you really had a lot of great insights into AI, especially how it integrates with cloud architecture and some of the security concerns we've been talking about. So thank you very much, and enjoy the rest of your day.

[Kyle] Yeah, thank you.

[Music]


Back to Top



Generative AI and Workforce Collaboration with Bryce Williams


Jessica Gentles and Naomi Maurer from Cohort 11 of the Lilly Leadership Institute interview Bryce Williams from Eli Lilly & Company to gather his insights on generative AI and collaboration in the workplace.



[Music]

[Jessica] Hi, my name is Jessie Gentles. I'm a third-year software engineering major at Miami University of Ohio, and I'm also a member of Cohort 11 of the Lilly Leadership Institute.

[Naomi] And I'm Naomi Maurer. I'm a junior biomedical engineering major at Miami, and I'm also in Cohort 11 of the Lilly Leadership Institute. Today we are doing an interview for the 2024 Lilly Leadership Institute Embracing AI conference. The goal of this conference and project is to explore how professionals can embrace AI to work at their full potential, and today we are interviewing Bryce Williams. Bryce, can you introduce yourself?

[Bryce] Sure, great, thanks for having me. As you mentioned, my name is Bryce Williams and I work at Eli Lilly and Company. Specifically, my role is what we call a digital capabilities advisor, where I help people within the workforce figure out ways to apply digital collaboration, technology, and things of that sort to get their work done and run our business. The thing I love about my job is that I get to help any business area inside Lilly. I'm not limited to a specific segment; anyone who's trying to apply technology to get work done and become more productive and efficient, I get to help them and spend time consulting with them about the best ways to do so. And of course, given today's topic, lately a lot of opportunities have come up in terms of how we think about artificial intelligence tools, generative AI tools, to achieve just that goal: productivity, efficiency, and getting work done. So this is a really great topic, and I'm excited to join you and talk about it today.

[Bryce] I probably should have also mentioned I'm a Miami grad. I graduated in 1999 from the systems analysis program, and I actually came to Lilly right out of Miami, so this is my 25th year at Lilly, starting here on January 1. I've been working at Lilly for more than half of my life now, so there's life flashing before your eyes for you.

[Jessica] Congratulations on 25 years!

[Bryce] Yeah, thank you.

[Jessica] All right, I will go ahead and ask our first question of the day. As someone who currently works in technology, what are some fears about generative AI that you have heard from the people around you?

[Bryce] Sure, yeah, certainly. Anytime something new emerges, particularly something this potentially revolutionary, there are fears, there's hesitation, and there's reticence. As for the areas where I'm seeing this come up: most of what I've been working on has been thinking of AI as an assistant to your workday, right? I haven't done much with AI completely automating things and people getting out of the way. I'm mostly focused on, as a human, getting work done and using different AI capabilities to assist me in doing it better. So I haven't been so much in the spot, at least in my specific work, of fear of being replaced by AI. What I'm really trying to help people focus on is AI becoming an assistant that helps you be better, right? That it actually increases your value, increases your reputation, because you figured out how to use it effectively to get through the minutiae and the mundane work and get to the real creative and strategic work, the things that are going to bring a lot of value. So that fear of job loss, at least in my space, is a little bit less than what you hear; some people talk about that more with the automated-delivery kind of AI. What other additional fears are there? Certainly, there's always the fear of having to learn something new. Some people are more intimidated by technology than others; just getting them to use things other than email to communicate is sometimes difficult. So now throw on top of that all these AI capabilities, and someone hearing the terms artificial intelligence and large language models being told, oh, now you have to adopt this on top of everything you have going on in your workday. Just that fear of learning something new, with all the fanfare around it, I think people might be intimidated until they've seen it and had a chance to get used to it. So that's an initial fear.

[Bryce] I think a big fear for me is putting in the investment and the work to bring this to our workplace and preparing the culture of the workplace to accept it as a major aspect of how we get our work done together. Getting people to start shifting their mindset from the way things have been done historically to accepting, you know, if you ask me to write something, just know that this isn't 100% my words from scratch. I used AI to get it started, and then I revised it to make sure it's accurate and representative. There might be elements of how it sounds and how it looks that make that obvious, but it allowed me to get it done faster and meet the need. So: the acceptance of AI playing a role in the work we get done, where it's effective and where the human needs to step in, and preparing the culture for the readiness to do that. And, I alluded to it in that previous point, but then also the trust of the outputs. One, trusting the outputs to the right extent, but two, also being critical of the outputs you get from AI and bringing in your own perspective to make sure they're accurate, because not everything's going to be 100% accurate. It's giving you a starting point, an assistant to get past some of the more difficult pieces so you can get your work done more quickly, bring in insights you might not have thought of, but still get it to a point where you've put your brain on it to make sure it accurately reflects what you want that output to be, and that you've put your own flair on top of whatever it's presenting. It's a starting point instead of beginning from a blank page, for example. So, like I said, it's trying to find the right balance point in a workforce between trust of outputs and criticality of outputs, and the level of your own perspective that you bring to them.

[Bryce] And then the last fear I'll mention: in a lot of ways, to get value out of AI in a workplace, we have to make sure that a lot of the things we're doing create a heavier digital footprint, because the AI is reading digital outputs. For the meetings we have, like this one, I would need to make sure that what I'm saying is being recorded and captured so that AI can be used, for example, to summarize it for someone who couldn't attend, or to give a list of action items from the meeting. I've got to record that meeting and have a text transcription of it. I've got to make sure that things that historically were done entirely in in-person interactions have some type of digital footprint, to benefit from the AI outputs I might want to leverage. So there's a little bit of fear, from a business context, about the propagation of recorded information and content that before was just spoken in the hallways or in meeting rooms, and how you mitigate the risk that creates for various downstream reasons. As we work through the style of how we're trying to bring AI to the workforce, those would be some of the specific concerns I've heard in the early days.

[Jessica] And you touched a little bit on this already, but how do you believe these fears can be mitigated and addressed as workplaces move toward implementing more AI tools?

[Bryce] Yeah, so anytime you're trying to drive a major behavior shift, you have two competing priorities. One is that you want to get value out of it quickly, so you want people to move fast, and you want to incentivize and motivate them to use it well and bring it to the top of their skill set. But at the same time, you want to train them to use it appropriately and responsibly. So in this era, that means setting examples, providing training, and providing clear, easy-to-consume resources about ramping up your adoption, but also about applying these things in a responsible and ethical way. And, as I mentioned, at least as we're thinking about it at the moment, it means keeping the human between the AI and the outcome: not making everything automated outcomes where humans aren't the final decision-makers. In the space I'm working on, I can mitigate a lot of the fear by having AI be an assistant to the human, not a replacement for the human. Keeping the human responsible between what the AI produces and the ultimate work outcome means the responsibility, the training, and the judgment of those humans still result in good outcomes, while allowing that human to work at a much higher productivity level, produce more outputs in a shorter amount of time, and deliver a higher return on their own effort and a higher return on investment back to the company, because of their ability to get more done through the assistance of that technology. So that piece of training, combined with, at least right now, making sure there's some type of human check, I think is an important one.
I also think it's important, and each workforce is different depending on how large it is and what industry it's in, in terms of the layers of security or behavioral policy it might want to apply. But at least for me, for anyone in a workforce: make sure any AI capability you're attaching to something is a trusted source, where you know the inputs you're putting into it are safe and not exposing anything about your work or your company beyond the point it should be exposed. Make sure the results you're getting back come from a trusted source that you know has been vetted to be an effective tool for the purpose you're applying it to. Whether you're the one organizing the AI program for a company's workforce, or you're an employee trying to work at the leading edge of the technology while staying responsible with it, ask: what are some secure, trusted, reliable AI services and capabilities, vetted in that way, that you can make part of your arsenal? Then you know you're experimenting and creating results with them in a reliable and trusted way, the reliability and trustworthiness questions are out of the back of your mind, and you can focus on creating good outputs rather than creating additional problems for yourself.

[Bryce] And that's probably the last thing I would mention. I talked about the digital footprint, the additional digital footprint we're having to create to get AI benefit. A lot of times, maybe not at an individual level but at a company or workforce level, the question is: what are my retention policies for the digital footprint that gets created? It has value for a certain amount of time after it's captured; that makes sense. But then, do I create some type of retention or removal rules for when it goes beyond its useful life, so I'm not leaving things around that are a risk because they're outdated or could be misunderstood or misconstrued out of context? So again, balance the increase in digital footprint with appropriate retention and control over it. That digital information is being captured at a higher rate as a result, and not just the interactions you're running AI on, but also the AI outputs you're getting back that a human hasn't interacted with yet. Make sure you're thinking about ways to manage proper retention and removal of those artifacts over time.
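A retention rule of the kind Bryce describes can be expressed very simply. This is a hypothetical sketch with invented field names and retention windows, not any company's actual policy engine; it only illustrates the "remove artifacts past their useful life" idea.

```python
from datetime import date, timedelta

# Illustrative retention windows per artifact type.
RETENTION = {
    "meeting_recording": timedelta(days=90),
    "ai_summary": timedelta(days=365),
}

def expired(artifact: dict, today: date) -> bool:
    """True if the artifact has outlived its retention window and
    should be removed rather than left as a downstream risk."""
    window = RETENTION[artifact["type"]]
    return today - artifact["created"] > window

def sweep(artifacts: list[dict], today: date) -> list[dict]:
    """Keep only artifacts still inside their retention window."""
    return [a for a in artifacts if not expired(a, today)]

if __name__ == "__main__":
    items = [
        {"type": "meeting_recording", "created": date(2024, 1, 2)},
        {"type": "ai_summary", "created": date(2024, 1, 2)},
    ]
    # On 2024-06-01 the recording (90-day window) is gone,
    # the summary (365-day window) remains.
    print(sweep(items, today=date(2024, 6, 1)))
```

The point of separating `expired` from `sweep` is that the same rule can drive both a periodic cleanup job and an on-demand audit of what is currently at risk of being kept too long.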

[Jessica] Alright, thank you. And kind of getting a little more into the collaboration side, how do you think the use of these AI tools will affect workplace collaboration?

[Bryce] Yeah, so this is actually a perfect marriage, if you will, of the different things that I work on, right? Because I'm, I work heavily in the workforce collaboration space, and now all of a sudden I'm learning how to apply that, apply all these AI tools to how we get work done together. I've alluded to this a few times, whether it's the one that I think has the most value or not, I'm not sure, but I think where I'm seeing the most excitement is the application of AI to meetings. A lot of work cultures, particularly larger companies, have a very heavy meeting culture where people are spending a lot of their days, many hours going to meetings and discussing, you know, various aspects of their project work and their initiatives, making decisions together. And the more of those you have, it can be really taxing on your ability to actually get the real work done. And a value that we're seeing AI bring is if I'm able to generate summaries of meetings that occur so that one, I don't have to be distracted by taking notes during the meeting because I know those are just going to be captured and autogenerated. Two, I don't have to be worried about capturing action items or assignments or to-do lists because, again, I know that those can be autogenerated. I'm more focused during the meeting, so maybe I can have shorter meetings. The other outcome is a lot of times the reason people have so many meetings on their calendar is because they're invited for an FYI purpose only. They're not really actively contributing to the content of the meeting. So think about if all this information is being captured in kind of these short summaries or these autogenerated action items and decision lists, I don't need to attend a meeting anymore as an FYI. I can just be notified that it occurred and get the quick summary. 
It didn't take someone extra effort to go and type up the notes after it happened, and so now the number of people who need to attend the meeting can be limited to just the people who are actively contributing to those decisions and those action items. So I can have shorter meetings, I can have fewer meetings, I can have fewer people in meetings, and create a lot more opportunity to get to the real work. So for workforce collaboration in my space, I think there's a lot of opportunity just in the meeting culture impact. On the same line of thinking, though, and I've even done this a few times, I now have the ability to have a meeting instance that I'm sitting in just autogenerate text of what I spoke, or summaries of what I've said. There were times when I needed to write something that someone wanted to consume as a summary of a project, and I didn't feel like I had a lot of time to write it really well. So I actually just logged into a meeting all by myself and spoke for about three minutes, and then I said, now summarize what I just said in this meeting, and that became my little summary for my project. I didn't have to sit and write it; I just spoke out loud off the top of my head, and it created a written artifact that I could pass on to meet that need. So that increases the value of even asynchronous video interactions. Instead of synchronous meetings where we're all getting together, like the three of us are together right now having this discussion, I could just send people a video of me, and it can serve multiple purposes.
One, it can meet some of my needs for generating content, or two, if I send that video to someone and they didn't really feel like they had time to watch it, they could just say, hey, can you summarize the three things that I need to know that Bryce just said in that video? So it creates a little more value in that style, particularly for people with personality types that are less inclined to type a lot or write a lot of sentences. Some people prefer that style; some people prefer to talk. And so this brings, I think, a little more value to that asynchronous style of collaboration, in addition to our synchronous meeting style as well as our heavily written text style of collaboration. So that's one thing I think.

[Bryce] The other piece that's going to be really interesting in how it affects collaboration is that if I receive something from one of my co-workers, sometimes it might be hard to tell if they really wrote it or if they used AI to write it, and whether that actually matters or not, you know. Is it their words or not, as long as it's telling me what I need to know? So I'm starting to see that more, where I'll get emails or messages from people and I'll be like, did they really write that? And again, not that it matters, but it's just a little bit of a new dynamic of interpersonal communications. But actually, where I've used it a few times is that sometimes these generative AI tools can help you bring a little fun to things and a little creativity. Like, a lot of them will, if you tell it to, format something as a poem, tell the story in a poem, or write a song that rhymes about this work topic that I'm trying to describe, to make it a little bit more engaging to someone. These are all examples of things that we've actually done on my work team, trying to tell stories about the things that we're working on, and it's really easy and quick to do that. I don't have to tap into my left brain, which I don't use as often, to do something creative, but I can still create engaging outputs to bring people to the table a little more easily and be a little more entertaining, to help train and educate people beyond just the normal type of work that we do. So that's what I've really found interesting in collaboration: the ability to get a little more creative without as much of the mental hurdle and the effort to get to that stage.

[Bryce] Another one, if you ever deal with it: I tend to be very verbose, as if you couldn't tell based on this interview, but when I write things to people, I tend to use a lot of language to be very specific. And I've gotten feedback sometimes with emails or messages that I've written to people. They're like, Bryce, tl;dr, too long, didn't read. I need you to boil the basics down for me. What are you really trying to say here? Well, now they don't have to ask me that anymore, because I can still write exactly what I want, or generate what I want, and they can just use AI on their own end to boil that down and decide whether they want to dive into the full details of all the knowledge that I just imparted to them, or just ask for the three-bullet-point summary. So I think that's also going to help connect communication styles that maybe previously created some disconnection. Because I may want to be verbose, and someone else may want the executive summary, and now both of our styles can come together based on our own preferences, as opposed to what I choose to do in the moment of communicating something.

[Bryce] So, speaking of too long, didn't read as one of those famous cliches out there, here's the other one, if you've ever heard this before: someone reaches out to you and asks you a question that they very easily could have just searched for, so they didn't have to ask you. There used to be a site out there, I don't know if it still exists, called Let Me Google That for You, where you could type the search into Google and then send the person a link that actually shows Google searching for them. And so you kind of passive-aggressively say, hey, instead of asking me, you could have just run a search for that. The new version of that that I'm starting to cross in my head is let me AI that for you, right? You ask me a question, or you come to ask me to do something, that very easily could have been handled by going into an AI chat tool and asking that same question, whether it's the public internet as the source of that summary or that knowledge. You didn't have to come to me; that was a thing you could have done first. So the idea is AI first, then "hey." Then go to Bryce and say, hey, I just tried to do this with AI, and I still have some questions. Can you help me? Instead of coming straight to an expert or trying to ping an expert directly, think AI first, then "hey, I need your help," with that context. So I think that's another key one. We see that a lot in the workplace, the whole ping-the-expert pattern, where you play chat tag or phone tag with the experts until you get the right answer. I think AI can really get in front of that, again to reduce the distractions that a lot of us get if we're known as experts, because more knowledge is out there that can be retrieved and pulled in that AI type of behavior.
So that's a lot of different impacts that I listed off, but those are the ones off the top of my head that I think will have a big impact, at least on workplace culture and how people use digital capabilities to get work done together.

[Naomi] Expanding a little on that, how do you think generative AI can be integrated with existing collaboration tools to enhance productivity in the workplace?

[Bryce] Well, I kind of think of AI, again coming from this assistant side of it, as helping with four things. I'll break it into use cases. One is create, another is ask, the third is summarize, and the fourth, maybe synthesize might be a good word for it. So what do I mean by that? Anytime I'm looking at a blank page to either create something or reply to something, I don't need to do that anymore, right? I can ask gen AI to put something on that blank page to get me started, whether that's a longer-form document, an email I'm drafting, or a presentation that I'm creating. The capabilities are there now to just create something from scratch for me with just a little bit of input, using sources that it can search and find to pull to the table, to get me something to start with. So now, instead of the many minutes or hours that it takes for me to get from a blank page to something usable, I'm almost instantaneously at something usable, and I'm in the revising and perfecting stage, not the initial creation stage. So that's one piece in terms of my collaboration tools: making sure that that creation-draft piece is there for me.

[Bryce] The next one is ask. We capture a lot of knowledge, whether we're doing it purposefully and putting it in static places, like a knowledge base that I created specifically for this purpose. That's one way of creating knowledge. The other way is the organic interactions that we have every day through our chats, our emails, and the different threads and forums that we might be interacting with people in. That's capturing knowledge in the act of working, instead of having it happen in addition to our work through some specifically targeted knowledge management platform. So AI can really help with the ask by using that organically created knowledge to come back and give me answers based on things that have already been captured. Collaboration tools are where a lot of that knowledge is being captured in a very informal way. And so how can the ask scenario help me get to what's already been captured, instead of having to interrupt people who've already shared that knowledge over and over again? So the ask piece helps put AI a little bit in between me and the expert, like I described earlier, by being a capability that pulls from that organic knowledge source.

[Bryce] Summarize is another one I mentioned. Whether I've missed my email for a week because I tried to take a vacation and there's a thread that's fifteen reply-alls long, I don't have to go through and read it from the bottom up. I can just tell an AI tool that's in my email or in my chats to summarize the key things that I missed and need to know, or even ask specific questions of it: where was I mentioned in this, or were there any mentions of my specific project? Help me understand, out of this whole thread, which pieces I need to know, instead of having to traverse it. The same thing with documents or long pieces of content that might be shared with me: I don't have to read the whole thing if there's something specific I'm looking for. Build into my capability the ability to go in there and start asking questions specific to that piece of content.

[Bryce] I actually have a very specific use case from my work where someone had to review hundreds and hundreds of pages of documents that were tied to a series of meetings they had discussing that project work. And it was something like eight hours of recorded meetings, so they used a combination of AI, both to ask questions of the meeting recordings and to ask questions of all the hundreds of pages of documents, to rapidly accelerate the outcome of making the decision they needed to make from all those inputs. Before AI existed to allow them to do that, you were looking at employees scouring through a lot of information and watching those recordings over hours to take notes and try to bring it all together. Instead, they were able to use summarization features to get to that outcome much more quickly.
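As an illustration of the "summarize my missed thread" idea Bryce describes, here is a minimal sketch of how such a request might be assembled before being sent to a language model. The message format and prompt wording are assumptions for illustration, not any specific product's implementation; the actual model call is left out.

```python
# Build a single summarization prompt from a long reply-all chain.
# The dict format ({"sender": ..., "body": ...}) is a hypothetical
# stand-in for however a mail client exposes thread messages.

def build_thread_summary_prompt(messages: list[dict], focus_person: str) -> str:
    """Flatten a thread and ask for a summary, action items, and mentions."""
    thread = "\n".join(f"{m['sender']}: {m['body']}" for m in messages)
    return (
        f"Summarize the email thread below in three bullet points, "
        f"list any action items, and note anywhere {focus_person} is mentioned.\n\n"
        f"{thread}"
    )
```

The same pattern extends to documents: replace the flattened thread with the document text and ask questions specific to that content, as Bryce suggests.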

[Bryce] Synthesize, what I mean by that one, is kind of: give me insights on top of just summarizing something or telling me what it says. Actually add additional context by allowing me to converse with you, you being AI, sorry, I just personified it, I guess. As I get an answer back, I can continue to ask follow-up questions and say, can you bring more context to this, for example? How does this work in the context of this additional piece of information? And what that becomes, in its conversational manner, is something that helps trigger better ideas for me, because it brings concepts to the table that I might not have otherwise considered. So just being able to sit and converse with an AI helps me be a little more creative myself, as opposed to just getting static results back and then figuring out how I need to synthesize them myself. The art there becomes knowing what questions to ask the AI, how to prompt it effectively. So my skill has to shift from being a good deep reader and synthesizer of information to knowing how to prompt and ask questions that bring that information out to me in an effective way, so that I can get to those answers more quickly and more effectively. So when I talk about the collaboration capabilities that I want, it's the ones that help put all of that in front of me so that I can get my real work done more quickly.
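The conversational follow-up loop Bryce describes relies on keeping a running message history, so each new question carries the context of the answers before it. The sketch below assumes the role-tagged message format used by common chat-completion APIs; the actual model client is deliberately omitted.

```python
# Keep a chat history that accumulates context across turns, so a
# follow-up like "can you bring more context to this?" is interpreted
# against everything said so far.

def make_conversation() -> list[dict]:
    """Start a new chat history with an initial system instruction."""
    return [{"role": "system", "content": "You are a brainstorming partner."}]

def add_turn(history: list[dict], user_text: str, reply_text: str) -> None:
    """Record one question/answer exchange so later prompts can see it."""
    history.append({"role": "user", "content": user_text})
    history.append({"role": "assistant", "content": reply_text})
```

Sending the full history with each request is what lets the model build on its earlier answers rather than returning static, one-off results.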

[Bryce] Now again, I've kind of talked about keeping the human between the AI and the outputs. There's probably a lot of opportunity there too, to automate more things. Again, I kind of talked about the ask scenario. There's this idea I've come up with, that I haven't done anything with, but I'd love to have a Bryce Bot that anyone who wants to ask me a question could come to. They could start with Bryce Bot, because I've made sure that all the things I might know, or have expertise in, or think are important are funneling into it. That could be where they start, and then if Bryce Bot doesn't help you, you can come talk to real Bryce. Now, that loses a little bit of personality and relationship and network building, so there's got to be the right balance in terms of what you would do with something like that. But there's an opportunity there too, for allowing people to focus their work time partly on relationships, but also on the tasks they need to get done. Right now a lot of our work time is spent talking to people and answering questions, and then the actual amount of time to get the real work done is more limited. So how can you shift that balance a little to allow people to get more done on the heads-down kind of stuff? So, a long answer to your question, but.

[Naomi] Thank you so much for meeting with us today, Bryce, and for your amazing insights and thoughts on this new AI topic.

[Bryce] It's my pleasure. It's been fun.

[Music]


Back to Top



Taylor White on the Impact of AI


Jay Vo and Thatcher Lincheck interview Taylor White about the transformative impact of generative AI on various sectors, including his own business.

Taylor graduated from Miami in 2010, and moved to Madison, WI where he worked with Epic to bring innovation to the Healthcare industry. There, he developed tools and machine learning models to optimize hospital logistics and predict census in inpatient units. He led scheduling teams through various initiatives: reducing appointment wait times across the country, onboarding Mayo Clinic, and redesigning an application that more than 2 billion appointments are booked through each year. Taylor now runs an AI-powered software firm, OurGov, supporting trade associations, non-profits, and lobbying firms to work with state governments.

In the interview, Taylor discussed how generative AI has been integrated into his software company to automate tasks and improve efficiency, particularly in categorizing bills and summarizing government testimonies. He highlighted the democratization of AI technologies, making them accessible to a wider audience and changing how businesses operate. Taylor also touched on the broader implications of AI, including potential job displacement and the importance of embracing AI to remain competitive. He suggested future applications of AI beyond chatbots and content generation, like creating personalized meal plans or travel itineraries using AI-driven tools. The interview concluded with Taylor emphasizing the endless possibilities of generative AI and its role in shaping various industries.


[Jay] Hi, everyone. I'm Jay. I'm a computer science major at Miami University and a member of the Lilly Leadership Institute. And I'm joined by Thatcher. Thatcher, can you give a word of introduction?

[Thatcher] Yeah, my name is Thatcher Lincheck. I am a mechanical engineering major at Miami University with a minor in computer science. And I'm also a member of cohort 11 of the Lilly Leadership Institute.

[Jay] Thanks, Thatcher. And today we're here with Taylor for our series of professional interviews for our Embrace AI project at the Lilly Leadership Institute at Miami University of Ohio. The goal of this series is to learn more and get insights about generative AI from professionals like Taylor. Taylor, thank you so much for spending your time with us for this interview. Before we begin, Taylor, can you introduce yourself a bit more? What is your current role, and what's your experience so far with generative AI?

[Taylor] Yeah. Thanks, Jay. Thanks, Thatcher. Thanks for having me. So my name is Taylor White. I am an alumnus of Miami University. I was part of the first cohort of the Lilly Leadership Institute. Upon graduating, I spent some time at a large healthcare software company, during which time we did some predictive analytics around patient census, hospital occupancy, as well as likelihood of discharge for a patient within a unit. My current role is chief executive of OurGov, which is a small software firm helping trade associations, nonprofits, and lobbying firms work with state governments. And we're doing quite a bit with both generative AI and some other language processing models there. So that's my background.

[Jay] Cool, OK. So for the first question: what are some changes that you've noticed so far, for example in your business, the people around you, and the tech industry, since ChatGPT came out and caused the generative AI boom?

[Taylor] Oh, that's a good question. There have been a lot of changes. You can look at Google results for generative AI or language transformers and see that it's been an absolute explosion in terms of the number of people talking about it. You pretty much see it across any industry today. Probably about eight months ago, I was working with a healthcare scheduling company out of New York City. One of the things we were doing was putting up well-authored physician profiles, to say, you know, this physician works in the space of diabetes as part of the endocrinology specialty, and we actually went out and paid people to author these descriptions of each of these physicians. Had we been doing that project today, 100% we would have passed that through a ChatGPT engine. So even in the last six months it's dramatically changed how folks are working with AI and generating content. And I think it's really been a democratizing tool. Six months ago you might have seen a lot more people intimidated by AI, and I think with ChatGPT and its user interface, your everyday layperson has felt more comfortable and at least gotten to experience what it's actually doing.

[Thatcher] Yeah, that's really cool. So going a bit more into emerging technologies and how they're transforming society: in the past, each generation has witnessed new technologies, like social media, iPads, iPhones, and now generative AI. While fostering connectivity, these have also brought some harms, like increased screen time and mental health issues. How do you envision younger generations, especially students, benefiting from the wide adoption of generative AI, and what potential drawbacks are there?

[Taylor] Yeah. So, you know, like with all the things you just mentioned, Thatcher, there's always been innovation. Take the creation of the Internet: its adoption to 100 million users took significantly less time than it took TV to reach 100 million users, which in turn took significantly less time than radio. The march of progress cannot be deterred, right? So I think, you know, the nature of the project that you guys are putting on, calling it Embrace AI, gets at this. It's one of those things that we don't really have a choice about. It's going to progress whether or not folks embrace it, so I highly encourage that we figure out a way to embrace it. In terms of how it's going to benefit society, and in particular young people, I think what's going to be kind of interesting is this: I was just reading a Wall Street Journal article that came out even today talking about $200,000-salary jobs for prompt engineering, right? Folks writing prompts for generative AI tools to get the kind of output that a firm might be looking for. And the reality is, it's just another tool. Someone might have been creating an Excel report around some kind of accounting numbers. I think accounting is going to be one of the most interesting spaces to watch, because it's so heuristics-based. It's very similar to the medical professions in that it's really going to allow folks to do significantly more in their accounting profession. So I think as young folks like yourselves go through college and graduate, it's going to allow you to have a much higher output in terms of work per unit of time.
You know, provided you do embrace it. So I think it's important to think about how it is something that you can use to your benefit.

[Jay] And like, what are the potential drawbacks and harms that generative AI can do to this?

[Taylor] Yeah, so I guess this alludes to my earlier point around time for adoption. You know, we had the Industrial Revolution, right? There were folks out there spinning yarn and weaving, and that was a really big industry in the UK. As we invented machines to automate the textile industry, there ended up being far fewer workers in that industry than there were back then, but those workers had time to find different professions. So one thing I think is a concern, or something for society to stay aware of, is that there's a very large potential for significant displacement of jobs, and we're going to want to be retraining folks on how to adopt the new tools or find the different professions that are out there. Because you are potentially looking at something where, you know, displacing all trucking in the United States is not going to be done by generative AI, but it is going to be done by an AI system with autonomous vehicles, right? And the day that software system is deemed proficient, you're looking at 10 million truckers potentially out of a job. So I think that's something that, as a society, we need to be thinking about: where are the likely displacements, and what can we do as a group to help the folks in these different industries find different professions? Because there's always going to be a growth of jobs. I don't believe there's ever going to be a time where no one's working and there's nothing to do. But we need to find a pathway for folks to continue to grow.

[Jay] Right. I think you touched on this, but feel free to add more. You said that people have no choice but to embrace AI. So what are the consequences that may arise if people choose to neglect AI and not embrace it?

[Taylor] Yeah. So this is a little bit of game theory, right? Microsoft and Google and a bunch of the other big AI companies actually have an agreement, I think going back to the early 2010s, where they said, if any one of us takes over 50% of the worldwide GDP, right, 50% of all the money that's made in the world, we will disperse it to everyone across the world. The idea there was, we shouldn't be racing against one another to create the quickest AI system, because that might not be the safest, right? The AI system might not be aligned with our values. But the reality is, we can't as a society say we're not going to press forth with our AI research, because another institution might do so, and they might not have quite as benevolent viewpoints on how to use the system. So let's assume that AI is going to progress; I think at least all of us on this call seem to share that belief. If, let's say, Thatcher and I choose to use generative AI in producing our summary reports for the quarter and Jay, you don't, well, suddenly Thatcher and I might be doing three times more reports than you are. So I think the challenge is going to be, not that you'll necessarily be put out of a job, but you're certainly going to have less work to show, and it's going to be harder for you to compete in the industry. And I think that's the biggest concern for folks who aren't embracing the technologies as they come forward.

[Thatcher] Yeah, I think that's definitely a good point. I think we definitely need to be prepared for that. So you talked a little bit about what you're doing with OurGov. Do you want to go into a bit more detail and talk about how you have integrated AI into your personal life and your career with OurGov?

[Taylor] Yeah, sure. So in terms of what we're doing with AI today at OurGov: Jay, you were asking how things have changed. We used to spend a lot of time manually categorizing bills. When bill text comes through, we'll actually go through and read the text of a bill, so here's the text of Assembly Bill 64 here, and we would read through it and manually tag it. Then we started building language processing algorithms that would go through and try to suggest tags. And now we're actually using generative AI to read through the text of a bill and suggest categories, and then we'll approve them. But we also summarize various parts of government testimonies. What we're looking at here is a testimony about AB64 and AB131 within the Committee of Agriculture in the state of Wisconsin. We look at the testimony, the text and statements made by individual lobbyists or trade association representatives, and we summarize that. When you look at this testimony, it's hundreds of pages long, but someone can actually distill down the information of, say, what Matt Krueger's perspective is. And then we even use generative AI to do our classification modeling, which we previously would have done through models we developed ourselves. Earlier we were talking about how democratizing OpenAI's APIs have been. Historically, you might need your own data scientists, probably some of the most expensive engineers you can hire, to look through data and figure out how to build categorization models, and now we can really just send it through APIs with the right prompts.
So a little bit of that prompt engineering profession that you're starting to see job opportunities for, we're doing that here as well. We haven't done anything with chatbots yet; right now we're kind of controlling the interface, but there's lots of opportunity there as well, as we start to develop things like the ability for our users to create custom agendas when they bring all of their clients to the state capitol. Yeah, so this is kind of just an example where you can see some of that generative AI as it gets produced in real time.
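The bill-tagging workflow Taylor describes, asking a model to suggest categories and keeping a human in the loop to approve them, can be sketched as a prompt builder plus a parser that guards the model's reply against an allowed category list. The category names, function names, and response format below are illustrative assumptions, not OurGov's actual code, and the API call itself is omitted.

```python
# Hypothetical category list for state legislation; the real taxonomy
# would be much larger.
CATEGORIES = ["Agriculture", "Education", "Healthcare", "Taxation", "Transportation"]

def build_tagging_prompt(bill_text: str) -> str:
    """Assemble a classification prompt asking the model to pick categories."""
    return (
        "You are tagging state legislation. Choose every applicable category "
        f"from this list: {', '.join(CATEGORIES)}.\n"
        "Reply with a comma-separated list of categories only.\n\n"
        f"Bill text:\n{bill_text}"
    )

def parse_suggested_tags(reply: str) -> list[str]:
    """Keep only tags that match the known list, discarding anything the
    model invents; a human reviewer then approves the survivors."""
    suggested = [t.strip() for t in reply.split(",")]
    return [t for t in suggested if t in CATEGORIES]
```

Constraining the output to a fixed list and filtering the reply is one simple way to use a general-purpose API as the classification step that previously required a custom-built model.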

[Jay] For our final question: for our project, Professor Morman, your previous professor, said that we need to think of a use case for AI besides just a chatbot or summarizing content. So what are some other applications that generative AI or a large language model can achieve beyond what we have seen so far?

[Taylor] Yeah. So I think there are tons and tons of opportunities. I actually sent you guys one that a friend of mine is proposing to a large grocery chain here; I sent you a video a while back. But basically, one example would be that you've got an app on your phone, and you want to build your grocery list. You want to make a meal plan for the week. So you might tell it a certain set of preferences: hey, I like Mediterranean food. It might know that I'm a 21-year-old college student, so I don't have tons of money, but I want to get, ideally, a certain amount of my diet from Mediterranean-style food. And maybe I like some peanut butter as well. So what a generative AI bot can do is actually create a table that says, here's a rough meal plan for you for this week, here are the groceries, you can pick them up here, and here are the aisles they're in. It's really pulling from what they've been starting to do with OpenAI: they create what are called plugins for OpenAI language models to actually request data from external third-party sources. So you might ask it a math problem, and it says, hey, I know this is a math problem, I'm going to request a calculator plugin to do the actual operation. You can do the same thing with a tool like this, where you create essentially an application that develops a meal plan for you. A travel itinerary is another good example, where you say, I want to go to Belarus, and it knows what content relates to Belarus based on some of the statistical language modeling it's been doing, and then suggests different events based on what you described to it. I think the possibilities are quite endless. 
And going back to my earlier remarks, folks are starting to, I don't think it's the best tool, but folks are starting to use generative AI models for problems like classification, where previously they would have spent tons of time developing those models themselves.
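The calculator-plugin behavior Taylor mentions, where the model recognizes that a request needs an external tool and dispatches to it rather than answering from its training data, can be sketched as a tiny router. The routing regex and tool registry below are simplified stand-ins for the model's actual tool-choice step, which in real plugin systems is done by the model itself.

```python
import re

def calculator_tool(expression: str) -> str:
    """Evaluate a simple arithmetic expression (digits and + - * / ( ) . only).
    The regex guard keeps eval() restricted to arithmetic in this toy example."""
    if not re.fullmatch(r"[\d\s+\-*/().]+", expression):
        raise ValueError("unsupported expression")
    return str(eval(expression))

# Registry of available tools; a real system would also hold weather,
# search, or grocery-inventory plugins here.
TOOLS = {"calculator": calculator_tool}

def route_request(user_message: str) -> str:
    """Crude stand-in for the model's tool-choice step: arithmetic questions
    go to the calculator, everything else is answered by the model."""
    match = re.search(r"what is ([\d\s+\-*/().]+)\?", user_message.lower())
    if match:
        return TOOLS["calculator"](match.group(1).strip())
    return "(model answers directly from its training data)"
```

For example, `route_request("What is 12 * 4?")` dispatches to the calculator, while a meal-plan request would fall through to the model, which could in turn call a grocery-data plugin in the pattern Taylor outlines.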

[Thatcher] Yeah, I definitely agree. I think there's just so many ways that these large language models can be used. So thank you so much for coming to talk to us, Taylor, and spending your time just having this conversation with us. We certainly hope to talk to you sometime in the near future and just keep learning about Generative AI.

[Taylor] Yeah, definitely. I hope to see you guys next time down on campus.


Back to Top



Professional interview on use cases for AI & Machine Learning with Siva Chittajallu


In this video interview, Siva Chittajallu, the Global Head of Algorithms and Advanced Analytics at Roche Diabetes Care, discusses his role and experiences with artificial intelligence (AI) in healthcare. Siva introduces himself and outlines his responsibilities, including leading a global team to address mathematical needs in research and development (R&D) while staying updated on emerging technologies. He recounts his early exposure to AI at Purdue University and its implementation at Roche, emphasizing its various applications, including internal decision-making and customer-facing tools for diabetes management.

Siva elaborates on the challenges and priorities in developing AI solutions for healthcare, such as ensuring patient safety, device limitations, real-time responsiveness, and data privacy compliance. He explains the use of machine learning techniques like regression, classification, and forecasting for immediate decision-making and data privacy preservation through methods like synthetic data generation and federated learning.

Regarding concerns about AI adoption, Siva discusses the importance of explainability and accuracy in AI models, especially in healthcare, and proposes approaches to address these concerns. He emphasizes the integration of machine learning with traditional statistical methods and the cautious implementation of AI in product-facing applications.

Lastly, Siva offers advice to aspiring professionals in AI, suggesting a strong foundation in statistical machine learning principles before delving into advanced techniques. He encourages understanding the underlying concepts rather than solely relying on tools, emphasizing the need for a comprehensive grasp of AI for meaningful contributions in the evolving field.


[Music]

[Alex] Alright, thank you so much for being here today. I'm joined by Siva from Roche Diagnostics. So could you talk a little bit about your role and introduce yourself to people who are watching?

[Siva] Um, hi everybody. My name is Siva Chittajallu. I'm the global head of the Algorithms and Advanced Analytics function in Roche Diabetes Care. I'm responsible for coordinating a global team to address the mathematical needs of R&D. In addition to staying on top of the projects that my teams are supporting globally, I spend time keeping an eye out for emerging technologies and tools, and generally try to facilitate the removal of roadblocks so my team can do their functions.

[Alex] Okay, wonderful. So once again thank you for joining me today. Um, looking forward to hearing some of your perspectives that you've got. Um, so just to get us kind of started off, what does your experience look like with artificial intelligence? What have you been exposed to in the past?

[Siva] I was first exposed to artificial intelligence in the late nineties while I was a faculty member at Purdue University. I was using neural networks, and that was actually the technology I brought to Roche; it was the reason I was hired about 25+ years ago. I've used machine learning at Roche throughout my tenure for a lot of different use cases, some of them just for internal purposes, such as comparing different kinds of measurement methodologies, but also for creating tools that are customer-facing.

[Alex] Okay, wow, wonderful. Okay, so you've had a hand in this for a little while now. Okay, great. So then let's talk a little bit about those. What are some of the things that you have been keeping up with recently?

[Siva] Just to give you some background on the type of projects we're dealing with: we are building products for use by diabetics. These products are pager-size devices, just as a frame of reference, on which we put the necessary hardware and software to analyze the data that's coming in and provide meaningful information, so that both diabetics and their doctors can make meaningful decisions about managing the diabetes. Given that context, there are a number of things that are important. First, it's a device used to make healthcare decisions that can mean life and death, so it's very important that you are absolutely sure the decision you are making and recommending to the patient or physician is not going to cause any kind of harm to the person. Second, the footprint of a pager-size device, with its computational capability and memory, is limited in terms of the kinds of technologies we can use. Third, more often than not we need real-time information; we cannot take an hour to provide a response to a question that the physician or the patient is asking. And the last thing is the data privacy aspect: none of the data can flow in and out of the device, it all has to be resident on the device. So given that, the kinds of machine learning I have been involved in are primarily for making immediate decisions, involving regression, classification, forecasting, and things like fail-safes, which make sure the device is not operating in an abnormal fashion. If you're talking about internal, in-house usage of machine learning within Roche, we use it for things related to data privacy.
As you may know, there is a regulation called GDPR that was passed in the EU governing data privacy, and we have many such regulations all over the world, so the technologies we used 25 years ago no longer apply. There are some emerging ways we can do machine learning in the context of data privacy. For example, synthetic data generation and federated learning are methodologies that are very important for us. With synthetic data, you do your analysis not on the real data but on data that is created to mask the identity of the individual; it's a synthetic data set that you can freely share with other people without revealing the identity of the individual from whom the data comes. Another is physics-informed machine learning, which is a modelling paradigm that is very valuable in the biomedical industry, because you don't understand a lot of biological systems perfectly; you understand them partially and you need to fill in the gaps of what you don't understand. So combining some insight into how the system is behaving with machine learning to complement it is another kind of methodology that we use. These are some of them.
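The synthetic-data idea Siva describes can be illustrated with a deliberately tiny sketch: learn only summary statistics from the private readings, then sample fresh values from them, so no individual real record appears in the shared data set. Real synthetic-data generators are far more sophisticated; the Gaussian model here is purely a stand-in.

```python
import random
import statistics

def synthesize(private_readings, n):
    # Learn only aggregate statistics from the private data, then sample
    # new values from them; no individual real reading is ever shared.
    mu = statistics.mean(private_readings)
    sigma = statistics.stdev(private_readings)
    return [random.gauss(mu, sigma) for _ in range(n)]

real = [102, 118, 95, 130, 110, 108, 121]  # hypothetical private glucose readings (mg/dL)
shared = synthesize(real, 5)               # a stand-in data set safe to pass along
print(shared)
```

The shared samples preserve the overall distribution well enough for aggregate analysis while breaking the link to any one patient's actual values.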

[Alex] Yeah, great, okay. Thank you very much. Appreciate all the information. Um, you talked a little bit about kind of how data privacy is certainly a big issue. So being in the healthcare industry, there's a lot of talk nowadays about artificial intelligence or machine learning in all industries and so that includes healthcare. But when it comes to healthcare, there is that issue of people are concerned about patient privacy and confidentiality. So how do you approach using novel technologies whether it be AI or anything really while still being able to maintain data privacy?

[Siva] Google, about five years ago, came up with a type of machine learning called federated learning. Federated learning is a methodology where the learning takes place on the edge, so the data never flows from the customer to a central server located in the company; you do all your learning on the edge. What Google would do is have a component on your cell phone that could learn from what you're doing on your cell phone without the data ever travelling back to Google's servers. That's the same kind of thing that we, and biomedical companies generally, are thinking about. That's one way to do it. Another way, which I already mentioned, is synthetic data generation, where the real data never flows back to us but a synthesized version of it does. So those are a couple of things. The other obvious approaches are methodologies not particularly derived from machine learning but from general data security. You'll also hear a lot about homomorphic encryption, where the data is actually masked; it's similar to a public-private key kind of technology.
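The core of the federated learning Siva describes is often implemented as federated averaging: each device fits a model on its own data locally, and only the model parameters, never the raw data, travel back to the server to be combined. Below is a minimal sketch using a one-parameter linear model; it is a toy illustration, not Roche's or Google's actual implementation.

```python
# Toy federated averaging for a 1-D linear model y = w * x.
# Each "device" computes its own least-squares weight locally; only the
# weights (never the raw readings) are sent back and averaged.

def local_fit(xs, ys):
    # Closed-form least squares for y = w * x (no intercept, for brevity).
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def federated_average(device_datasets):
    weights = [local_fit(xs, ys) for xs, ys in device_datasets]  # on-device step
    return sum(weights) / len(weights)                           # server-side step

devices = [
    ([1, 2, 3], [2.1, 3.9, 6.0]),  # device A's private data, stays on device A
    ([1, 2, 4], [1.9, 4.1, 8.2]),  # device B's private data, stays on device B
]
w = federated_average(devices)
print(round(w, 2))  # → 2.02, the shared model, learned without sharing data
```

Production systems (e.g. federated deep learning) iterate this round many times and weight the average by local data size, but the privacy property is the same: only parameters leave the edge.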

[Alex] Okay, very neat. Thank you very much. So, with the rise of a lot of these novel technologies that are automating or increasing the pace of many tasks across all industries, there is often a lot of fear surrounding the implementation of AI or any sort of automation. So what are some ways that you've seen that it's beneficial, or ways that help combat some of those fears and help people understand that these things are beneficial?

[Siva] I talked a little bit about data privacy; that's one of the major things we are concerned about. There are two other things, among a slew of them, that are very important to us. Explainability is one. With AI, as you go to deep neural networks or any kind of modern machine learning tool that requires more than, say, 10 or 20 parameters, you lose touch with the relationship between the value of a particular parameter and its significance in explaining the response of the network. We are hyper-concerned about getting to a point where we build devices with black-box math, where you get a certain result but you don't know why the result is what it is. That's one thing. The other thing is accuracy. When we build an algorithm, we need to be confident that the algorithm's result is accurate; it has healthcare implications. So, let me talk a little bit about explainability. One approach that can be used is to combine machine learning with traditional statistical tools. You use large volumes of data to build a machine learning model, use the verification and validation of that model to give regulators some comfort that nothing is going wrong, and then use very simple, explainable components, where you can say, okay, this is a slope, this is an intercept, which are things a regulator would be perfectly comfortable with. As far as accuracy is concerned, it's a big deal. We are very conservative in the biomedical industry, at least in our field of diabetes care, in that we do not use any machine learning in a product-facing application without seriously thinking about the implications of a problem.
One concern that is always there when you build a machine learning model is that the model will occasionally produce an aberrant value that makes no sense, like predicting a glucose of minus 20, when blood glucose never goes negative, it's always positive. So one of the things we had done at one point is we built a machine learning model but capped the limits of what it can report, so if it reports minus 20 then we automatically clip it at 30, for example. That way you never see an aberrant value like minus 20. Clipping is a very brute-force way of addressing that accuracy problem.
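The clipping fail-safe Siva describes fits in a few lines. The floor of 30 comes from his example; the upper bound below is an illustrative assumption, not a real device specification.

```python
# Brute-force range clipping: whatever the model predicts, the reported
# glucose value is forced into a physiologically plausible range.

GLUCOSE_FLOOR = 30     # floor taken from the interview's example (mg/dL)
GLUCOSE_CEILING = 600  # illustrative upper bound, an assumption

def clip_prediction(raw_prediction):
    # Clamp the raw model output into [GLUCOSE_FLOOR, GLUCOSE_CEILING].
    return max(GLUCOSE_FLOOR, min(GLUCOSE_CEILING, raw_prediction))

print(clip_prediction(-20))  # → 30: the nonsensical negative value is clipped
print(clip_prediction(110))  # → 110: plausible values pass through unchanged
```

The appeal of this guard is exactly its crudeness: it is trivially verifiable, so a regulator can be certain the device never reports an impossible value regardless of what the black-box model does.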

[Alex] Okay, right, thank you very much. Um, so as we kind of finish out here, I just got one last question for you. Um, for anyone who is looking to learn more about artificial intelligence or trying to be the best professional that they can in an emerging world, um, what sort of lasting advice or summary do you have for them from here on out?

[Siva] So, as you know, Alex, the field of machine learning is changing so rapidly that in order to really follow that pace of evolution, you have to start with a solid foundation. I would always recommend that people not start out with the latest and greatest thing while neglecting the statistical foundations of machine learning. Make sure you learn the statistics of machine learning, and start with some ground-level machine learning concepts before you go further. Be cautious about becoming a person who just turns the crank. A lot of the current tendency is, hey, there's this fancy tool, I throw in the data and out comes a result. Try to get beyond just using a tool and get to a level of understanding, from a statistical machine learning background, where you can say, okay, here's essentially what's going on. You have to have some sense of what's going on, rather than using it purely as a tool.

[Alex] Alright, well thank you. That was some fantastic advice and a wonderfully refreshing perspective on artificial intelligence in the world today, so thank you very much for joining me.

[Music]


Back to Top