[Music]
[Jessica] Hi, my name is Jessie Gentles. I am a third-year software engineering major at Miami University of Ohio, and I am also a member of cohort 11 of the Lilly Leadership Institute.
[Naomi] And I'm Naomi Maurer. I'm a junior biomedical engineering major at Miami, and I'm also in cohort 11 of the Lilly Leadership Institute. Today we are doing an interview for the 2024 Lilly Leadership Institute Embracing AI conference. The goal of this conference and project is to explore how professionals can embrace AI to work at their full potential, and today we are interviewing Bryce Williams. Bryce, can you introduce yourself?
[Bryce] Sure, great. Thanks for having me. As you mentioned, my name is Bryce Williams and I work at Eli Lilly and Company. Specifically, my role there is what we call a digital capabilities advisor, where I help people within the workforce figure out ways to apply digital collaboration, technology, and things of that sort to get their work done and run our business. The thing I love about my job is that I get to help pretty much any business area inside Lilly. I'm not limited to a specific segment; anyone who's trying to apply technology to get work done and become more productive and efficient, I get to help them and spend time consulting with them about the best ways to do so. Of course, given today's topic, a lot of opportunities have come up lately in terms of how we think about artificial intelligence tools and generative AI tools to achieve just that goal: productivity, efficiency, and getting work done. So this is a really great topic, and I'm excited to join you both and talk about it today.
[Bryce] I probably should have also mentioned I'm a Miami grad. I graduated in 1999 from the systems analysis program, and I came to Lilly right out of Miami, so this is actually my 25th year at Lilly, having started here on January 1. I've now been working at Lilly for more than half of my life, so there's your life flashing before your eyes for you.
[Jessica] Congratulations on 25 years!
[Bryce] Thank you.
[Jessica] All right, I will go ahead and ask our first question of the day. As someone who currently works in technology, what are some fears about generative AI that you have heard from the people around you?
[Bryce] Sure, yeah, certainly. Anytime something new emerges, particularly something this potentially revolutionary, there are fears, there's hesitation, and there's reticence. As for the areas where I'm seeing this come up: most of what I've been working on has been thinking of AI as an assistant to your workday, right? I haven't done much with AI completely automating things and people getting out of the way. I'm mostly focused on me, as a human, getting work done and using different AI capabilities to assist me in doing it better. So, at least in my specific work, I haven't been so much in the spot of fearing being replaced by AI. What I'm really trying to help people focus on is AI becoming an assistant that helps you be better, right? It actually increases your value and increases your reputation, because you've figured out how to use it effectively to get through the minutiae and the mundane work and get to the real creative and strategic work, the things that are going to bring a lot of value. So that fear of job loss, at least in my space, is a little bit less than what you hear; some people talk about it more with the kind of automated-delivery AI, but less so in the things that I'm working on. What other additional fears are there? Certainly, there's always the fear of having to learn something new. Some people are more intimidated by technology than others; just getting them to use things other than email to communicate is sometimes difficult. Now throw on top of that the opportunity for all these AI capabilities, with someone hearing the terms artificial intelligence and large language models and being told, oh, now you have to adopt this on top of everything you have going on in your workday. So there's that fear of learning something new, and with a lot of the fanfare around it, I think people might be intimidated until they've seen it and had a chance to get used to it. So that's an initial fear.
[Bryce] I think a big fear for me is putting in the investment and the work to bring this to our workplace and preparing the culture of the workplace to accept it as a major aspect of how we get our work done together. It's getting people to start shifting their mindset from the way they've done things historically to accepting it, you know? Like, if you ask me to write something, just know that this isn't 100% my words from scratch. I used AI to get it started and then I revised it to make sure it's accurate and representative. There might be elements of how it sounds and how it looks that make it obvious that occurred, but it allowed me to get that done faster and meet the need. So there's the acceptance of AI playing a role in the work that we get done, of where it's effective and where the human needs to step in, and preparing the culture for the readiness to do that. I kind of alluded to it in that previous point, but there's also the trust of the outputs. One, trusting the outputs to the right extent, but two, also being critical of the outputs you get from AI and bringing in your own perspective to make sure they're accurate, because not everything is going to be 100% accurate. It's just giving you a starting point. It's giving you an assistant to get past some of the more difficult pieces so that you can get your work done more quickly and bring in insights you might not have thought of, but still get it to a point where you've put your brain on it to make sure it accurately reflects what you want that output to be, and to make sure you've put your own flair on top of whatever it's presenting for you. It's giving you a starting point instead of beginning from a blank page, for example. So, like I said, it's trying to find the right balance point in a workforce between trust in outputs and criticality of outputs, and the level of your own perspective that you bring to them.
[Bryce] And then the last fear I'll mention: in a lot of ways, to get value out of AI in a workplace, we have to make sure that a lot of the things we're doing create a heavier digital footprint, because the AI is reading digital outputs. So for the meetings we have, like this one, I would need to make sure that what I'm saying is being recorded and captured so that AI can be used, for example, to summarize it for someone who may not be able to attend, or to give a list of action items from the meeting. I've got to record that meeting and have a text transcription of it. I've got to make sure that things that historically were done in in-person interactions have some type of digital footprint so I can benefit from the AI outputs I might want to leverage. So there's a little bit of fear, from a business context, about the propagation of recorded information and content that before was just spoken in the hallways or in meeting rooms, and how you mitigate the risk that creates for various downstream reasons. So as I work through the style of how we're trying to bring AI to the workforce, those would be some of the specific concerns I've heard in the early days.
[Jessica] And you touched a little bit on this already, but how do you believe these fears can be mitigated and addressed as workplaces move towards implementing more AI tools?
[Bryce] Yeah, so anytime you're trying to drive a major behavior shift, you have two competing priorities. One is that you want to get value out of it quickly, so you want people to move fast; you want to incentivize and motivate them to use it well and bring it to the top of their skill set. At the same time, you want to train them on how to use it appropriately and responsibly. So I think in this era it's about setting examples, providing training, and providing resources that are very easy to understand and consume about ramping up your adoption, but also about how to apply these things in a responsible and ethical way. And again, I talked about this aspect earlier: at least as we're thinking about it at the moment, it's about keeping the human between the AI and the outcome, not making everything an automated outcome where humans aren't the final decision-makers. Again, in the space that I'm working in, I can mitigate a lot of the fear by having AI be an assistant to the human, not a replacement for the human. Keeping that human responsible between what the AI produces and the ultimate work outcome means you can make sure that the responsibility, the training, and the judgment of those humans still result in good outcomes, while allowing that human to work at a much higher productivity level, produce more outputs in a shorter amount of time, and generate a higher return on their own effort and a higher return on investment back to the company because of their ability to get more done through the assistance of that technology. So that piece of training, combined with, at least right now, making sure there's some type of human check, is an important one. I also think each workforce is different in terms of the layers of security or behavioral policy they might want to apply, depending on how large they are and what industry they're in. But for anyone in a workforce, make sure that any AI capability you're attaching to something is a trusted source, where you know the inputs you're putting into it are safe and not exposing anything about your work or your company beyond the point it should be exposed, and that the results you're getting back are coming from a trusted source that has been vetted to be an effective tool for the purpose you're trying to apply it to. So whether you're someone organizing the AI program for a specific company and its workforce, or an employee trying to stay at the leading edge of the technology while also staying responsible with it: what are some secure, trusted, reliable AI services and capabilities that you can make part of your arsenal, vetted in that way, so that you know you're experimenting and creating results with them in a reliable and trusted way? That way you get the questions of reliability, trustworthiness, and capability out of the back of your mind, and you can focus on creating good outputs and not creating additional problems for yourselves.
[Bryce] So that's another one. And the last thing I would mention: I talked about the digital footprint and the additional digital footprint we're having to create to get AI benefit. Again, this is often not at an individual level but at a company or workforce level: what are my retention policies for the digital footprint that gets created? It has value for a certain amount of time after it's captured, which makes sense, but then do I create some type of retention or removal rules for when it goes beyond its useful life, so that I'm not leaving things around that are a risk because they're outdated or could be misunderstood or misconstrued out of context? So again, it's balancing the increase in digital footprint with the appropriate retention and control over that digital information. That information is being captured at a higher rate as a result, and it's not just the interactions you're running AI on, but also the AI outputs you're getting back that a human hasn't interacted with yet. Make sure you're thinking about ways to manage proper retention and removal of those artifacts over time.
[Jessica] Alright, thank you. And kind of getting a little more into the collaboration side, how do you think the use of these AI tools will affect workplace collaboration?
[Bryce] Yeah, so this is actually a perfect marriage, if you will, of the different things that I work on, right? Because I work heavily in the workforce collaboration space, and now all of a sudden I'm learning how to apply all these AI tools to how we get work done together. I've alluded to this a few times, and whether it's the one with the most value or not, I'm not sure, but where I'm seeing the most excitement is the application of AI to meetings. A lot of work cultures, particularly at larger companies, have a very heavy meeting culture where people are spending many hours of their days going to meetings, discussing various aspects of their project work and their initiatives, and making decisions together. The more of those you have, the more taxing it can be on your ability to actually get the real work done. One value we're seeing AI bring is the ability to generate summaries of the meetings that occur, so that, one, I don't have to be distracted by taking notes during the meeting, because I know those are going to be captured and autogenerated, and two, I don't have to worry about capturing action items or assignments or to-do lists, because, again, I know those can be autogenerated. I'm more focused during the meeting, so maybe I can have shorter meetings. The other outcome: a lot of times the reason people have so many meetings on their calendar is because they're invited for an FYI purpose only. They're not really actively contributing to the content of the meeting. So think about it: if all this information is being captured in these short summaries and these autogenerated action items and decision lists, I don't need to attend a meeting anymore as an FYI. I can just be notified that it occurred and get the quick summary, and it didn't take someone extra effort to type up the notes after the fact. Now the number of people who need to attend the meeting can be limited to just the people who are actively contributing to those decisions and action items. So I can have shorter meetings, fewer meetings, and fewer people in meetings, and create a lot more opportunity to get to the real work. So for workforce collaboration in my space, I think there's a lot of opportunity just in the impact on meeting culture. On the same line of thinking, though, I've even done this a few times now that I have the ability to have a meeting instance I'm sitting in autogenerate text of what I spoke, or summaries of what I've said. There were times when I needed to write something that someone wanted to consume as a summary of a project, and I didn't feel like I had a lot of time to write it really well. So I actually just logged into a meeting all by myself, spoke for about three minutes, and then had it summarize what I had just said in that meeting, and that became my little summary for my project. I didn't have to sit and write it; I just spoke out loud off the top of my head, and it created a written artifact that I could pass on to meet that need. So that increases the value of even asynchronous video interactions. Instead of synchronous meetings where we all get together, like the three of us are right now having this discussion, I could just send people a video of me, and it can serve multiple purposes.
One, it can meet some of my needs for generating content, or two, if I send that video to someone and they don't really feel like they have time to watch it, they can just say, hey, can you summarize it: what are the three things I need to know that Bryce just said in that video? So it creates a little more value in that style, particularly for people with personality types that are less inclined to type a lot or write a lot of sentences. Some people prefer that style; some people prefer to talk. So I think this brings a little more value to that asynchronous style of collaboration, in addition to our synchronous meeting style as well as our heavily written, text-based style of collaboration. So that's one thing, I think.
[Bryce] The other piece that's going to be really interesting in how it affects collaboration: if I receive something from one of my co-workers, sometimes it might be hard to tell whether they really wrote it or they used AI to write it, and whether that actually matters or not, you know? Is it their words or not, as long as it's telling me what I need to know? I'm starting to see that more, where I'll get emails or messages from people and think, did they really write that? And again, not that it matters, but it's a little bit of a new dynamic in interpersonal communication. Where I've actually used it a few times is that these generative AI tools can help you bring a little fun and a little creativity to things. A lot of them will, if you tell them to, format something as a poem, tell the story in a poem, or write a song that rhymes about the work topic I'm trying to describe, to make it a little more engaging for someone. These are all examples of things we've actually done in my work team to try to tell stories about the things we're working on, and it's really easy and quick to do. I don't have to tap into the creative side of my brain, which I don't use as often, but I can still create engaging outputs to bring people to the table a little more easily and be a little more entertaining when helping train and educate people, compared with just the normal type of work that we do. So that's what I've really found interesting in collaboration: the ability to get a little more creative without as much of the mental hurdle and effort to get to that stage.
[Bryce] Another one: if you've ever dealt with it, I tend to be very verbose, as you could probably tell from this interview. When I write things to people, I tend to use a lot of language to be very specific, and I've gotten feedback sometimes on emails or messages I've written where people say, Bryce, TL;DR, too long, didn't read. I need you to boil it down to the basics for me. What are you really trying to say here? Well, now they don't have to ask me that anymore, because I can still write exactly what I want, or generate what I want, and they can just use AI on their own end to boil it down and decide whether they want to dive into the full details of all the knowledge I just imparted on them or just ask for the three-bullet-point summary. So I think that's also going to help connect communication styles that maybe previously created some disconnection. I may want to be verbose, someone else may want the executive summary, and now both of our styles can come together based on our own preferences, as opposed to whatever I choose to do in the moment of communicating something.
[Bryce] So speaking of "too long, didn't read" as one of those famous cliches out there, here's another one, if you've ever heard this before: someone reaches out to you and asks you a question that they very easily could have just searched for, and then they wouldn't have had to ask you. There used to be a site out there, I don't know if it still exists, called Let Me Google That For You, where you could type the search into Google and then send the person a link that actually shows Google running the search for them. So you kind of passive-aggressively say, hey, instead of asking me, you could have just run a search for that. The new version of that that's starting to cross my mind is "let me AI that for you," right? You ask me a question, or you come to ask me to do something, that very easily could have been handled by going into an AI chat tool and asking that same question, whether it's the public internet serving as the source of that summary or that knowledge. You didn't have to come to me; that was something you could have done first. So it's this idea of AI first, then "hey, I need your help": go to Bryce and say, hey, I just tried to do this with AI and I still have some questions, can you help me? Instead of coming straight to an expert or trying to ping an expert directly, think AI first, then ask for help with that context. I think that's another key one. We see it a lot in the workplace, the whole ping-the-expert pattern, where you play chat tag or phone tag with the experts until you get the right answer. I think AI can really get in front of that and reduce the distractions that a lot of us get if we're known as experts, because more knowledge is out there that can be retrieved and pulled in through that AI-first type of behavior. So that's a lot of different impacts I just listed off, but those are the ones off the top of my head that I think will have a big effect, at least on workplace culture and how people use digital capabilities to get work done together.
[Naomi] Expanding a little on that, how do you think generative AI can be integrated with existing collaboration tools to enhance productivity in the workplace?
[Bryce] Well, I think of AI, again coming from this assistant side of it, as helping with four things. I'll break it into use cases. One is create, another is ask, the third is summarize, and the fourth, maybe synthesize is a good word for it. So what do I mean by that? Anytime I'm looking at a blank page to either create something or reply to something, I don't need to do that anymore, right? I can ask gen AI to put something on that blank page to get me started, whether that's a longer-form document, an email I'm drafting, or a presentation I'm creating. The capabilities are there now to create something from scratch for me with just a little bit of input, using sources it can search and find to pull to the table and give me something to start with. So now, instead of the many minutes or hours it takes me to get from a blank page to something usable, I'm almost instantaneously at something usable, and I'm in the revising and perfecting stage, not the initial creation stage. So that's one piece in terms of my collaboration tools: making sure that creation and drafting piece is there for me.
[Bryce] The next one is ask. We capture a lot of knowledge, whether we're doing it purposefully and putting it in static places, like a knowledge base created specifically for that purpose, or through the organic interactions we have every day in our chats, our emails, and the different threads and forums where we interact with people. That second kind captures knowledge in the act of working, instead of it happening in addition to our work through some specifically targeted knowledge management platform. So AI can really help with ask by using that organically created knowledge to come back and give me answers based on things that have already been captured. Collaboration tools are where a lot of that knowledge is captured in a very informal way, so the ask scenario can help me get to what's already been captured instead of having to interrupt the people who've already shared that knowledge over and over again. The ask piece puts a little bit of a buffer between me and the expert, like I described earlier, by being a capability that pulls from that organic knowledge source.
[Bryce] Summarize is another one I mentioned. If I've missed my email for a week because I tried to take a vacation and there's a thread that's 15 reply-alls long, I don't have to go through and read it from the bottom up. I can just tell an AI tool in my email or my chats to summarize the key things I missed, or even ask it specific questions: where was I mentioned in this, were there any mentions of my specific project, help me understand which pieces of this whole thread I need to know, instead of having to traverse it myself. The same thing goes for documents or long pieces of content that might be shared with me: I don't have to read the whole thing if there's something specific I'm looking for. Build it into my capability to go in there and start asking questions specific to that piece of content.
[Bryce] I actually have a very specific use case from my work where someone had to review hundreds and hundreds of pages of documents that were tied to a series of meetings discussing that project work, and it was something like eight hours of recorded meetings. They used a combination of AI to ask questions of the meeting recordings as well as of all those hundreds of pages of documents, to rapidly accelerate reaching the decision they needed to make from all those inputs. Before AI existed to allow them to do that, you'd be looking at employees scouring through a lot of information and watching those recordings for hours to take notes and try to bring it all together, whereas here they were able to use summarization features to get to that outcome much more quickly.
[Bryce] Synthesize, what I mean by that one, is giving me insights on top of just summarizing something or telling me what it says: actually adding context by allowing me to converse with you, you being the AI, sorry, I just personified it, I guess. As I get an answer back, I can continue to ask follow-up questions and say, can you bring more context to this, for example, or how does this work in the context of this additional piece of information? In its conversational manner, that becomes something that helps trigger better ideas for me, because it brings concepts to the table that I might not have otherwise considered. So just being able to sit and converse with an AI helps me be a little more creative myself, as opposed to just getting static results back and then figuring out how to synthesize them on my own. The art there becomes knowing what questions to ask the AI and how to prompt it effectively, so my skill has to shift from being a good deep reader and synthesizer of information to knowing how to prompt and ask questions that bring that information out in an effective way, so I can get to those answers more quickly and more effectively. So when I talk about the collaboration capabilities I want, it's the ones that help put all of that in front of me so that I can get my work done more quickly.
[Bryce] Now again, I've talked about keeping the human between the AI and the outputs. There's probably a lot of opportunity there, too, to automate more things. Again, I talked about the ask scenario. There's this idea I've come up with, which I haven't done anything with yet, but I'd love to have something like a Bryce Bot that anyone who wants to ask me a question could come to, because I've made sure that all the things I might know, have expertise in, or think are important are funneling into it. That could be where they start, and then if Bryce Bot doesn't help, they can come talk to the real Bryce. Now, that loses a little bit of the personality and the relationship and network building, so there's got to be the right balance in what you would do with something like that. But there's an opportunity there, too, for allowing people to focus their work time partly on relationships but also on the tasks they need to get done. Right now a lot of our work time is spent talking to people, answering questions, and doing those kinds of things, and the actual amount of time to get the real work done is more limited. So how can you shift that balance a little to allow people to get more done on the heads-down kind of stuff? So, that's a long answer to your question.
[Naomi] Thank you so much for meeting with us today, Bryce, and for your amazing insights and thoughts on this new AI topic.
[Bryce] It's my pleasure. It's been fun.
[Music]