Host Yolanda Fintschenko, executive director of Daybreak Labs and i-GATE Innovation Hub, home of the Startup Tri-Valley (STV) Initiative, talks with Brian Spears, PhD, director of AI3, the AI Innovation Incubator at Lawrence Livermore National Laboratory (LLNL), and one of our 2024 Tri-Valley AI Summit panelists.
Brian is passionate about developing deep learning for applied science, especially inertial confinement fusion. A proud Tri-Valley resident, as director of AI3 he focuses on building public-private partnerships to accelerate the development and application of AI. The goal of AI3 is to advance AI for applied science at scale. AI3 expands LLNL's capabilities through industry collaborations, establishes visible leadership in AI for applied science, develops informed strategies for mission-driven AI investments, and coordinates investments focused on exploring and developing AI. Watch on YouTube.
Below are links to sources of information referenced in this podcast:
- Learn more about AI3 and AI at LLNL by visiting https://ai.llnl.gov
- Visit the LLNL Careers page to find open positions at LLNL
Read the Episode Transcript
Startup Tri-Valley Podcast – Brian Spears
Yolanda
This is the Startup Tri-Valley podcast, featuring in-depth conversations with the leaders who are making the Tri-Valley the go-to ecosystem for science-based startups. I'm Yolanda Fintschenko from Startup Tri-Valley.
Today, I'm pleased to welcome the director of the Artificial Intelligence Innovation Incubator from Lawrence Livermore National Labs. And before I hand the microphone over to Brian Spears, thank you so much for joining us. I'd like to tell you a little bit about how we met, which was through our AI summit, the Startup Tri-Valley AI Summit that we hosted in August of 2024 on transparency, bias, guardrails, and privacy. We were fortunate enough to get Brian Spears, the director of the AI incubator at Lawrence Livermore, on our panel, where he talked about a wide range of elements of what's technically possible and what's needed, and we were thrilled when he accepted our invitation to come on the pod. So welcome, Brian.
Brian
Yeah, thank you so much, Yolanda. It’s a pleasure to be here.
Yolanda
So why don't you kick off with telling us a little bit about yourself, your role, and what the Lawrence Livermore AI incubator is trying to accomplish?

Brian
Yeah, so I'm by training an engineer and scientist, and I've worked for a couple of decades trying to get fusion ignition at our National Ignition Facility, and that gave me a chance to build out a wide range of data science tools and techniques for doing really challenging national security science missions.
And then currently, my goal is to help set the strategy for our national laboratory at Lawrence Livermore National Lab. And one of the key tools that we use there is AI3, the Artificial Intelligence Innovation Incubator. The goal there is to look at what's going on in the outside world, where AI is transforming things at enormous scale and enormous pace, and then make impact both inside our laboratory for our national science missions and also outside the laboratory, where we're trying to build out techno-economic capability for the United States writ large. In our conversation today, there are a couple of scales that we're focusing on.
There's the very large scale, the NVIDIAs, the OpenAIs, the giant players out there in the world, and then there are the new smaller players that have some very deep niche technology that we want to learn from, and then we want to feed back on and drive forward.
Yolanda
Great. So, if I hear you correctly, it sounds like what you're trying to do is, from a national security standpoint, wrap your arms around this incredibly impactful but rapidly evolving technology along a few different axes, right?
From the actual security standpoint, from the economic security standpoint, and then also how to make this not just a public conversation but a public-private partnership with the large-scale innovators as well as smaller-scale innovators and users and all the pieces of the ecosystem, from what's become a really large part of our economy.
Brian
Yeah, that's exactly right. I mean, to put it in perspective, you know, no technology has moved this quickly at this scale across the span of the time we've been thinking about science for national security. Since the Second World War, our national laboratories have been building out capabilities and scanning the horizon to see what's coming and understand what's going on.
And this is a singular transformation, both in the opportunity that it presents, because we can go do amazing things for fusion energy, for medicine, for health technologies, you name it, but it also comes with threats as well. There's a chance that the world is going to evolve very quickly at an enormous scale.
And we have to have ready made technological solutions from very big transformational things to the small critical pieces that will be bottlenecks if they’re not solved against that larger ecosystem.
Yolanda
Right. And at the same time, trying to continue to allow all the innovation that's happening within the industry and as a result of the industry.
Brian
That’s right. That’s absolutely right.
Yolanda
Huge, huge, huge challenge.
Brian
Yeah. It is a challenge. And that's part of the reason that we have AI3, so that we can connect the multiple axes that you're talking about. There's what we understand inside the laboratory, there are the large players, there are the smaller players, and we're trying to build out a healthy collective ecosystem where all of that can go forward at pace and at scale. So we're reimagining public-private partnerships and the way that they will operate as something not small and not necessarily driven by any one player, but by the collective, at relatively large scale.
Yolanda
Great. So before we dive into exactly how you do that, how you operationalize that through AI3, I want to just take one step back and have you define how you are defining artificial intelligence and any other important terms of art for the space that you're working in.
Brian
Yeah, so AI is very specific and very general all at the same time. I think the best way to think about it is that there are jobs that we as humans have done for a very long time that have required our intelligence, our sort of cognitive labor, which is a useful term, and artificial intelligence is a set of software tools and algorithms that can help us do that cognitive labor without humans having to be involved.
So the best analogy is that during the Industrial Revolution, humans basically built out machines for doing physical labor, so that then we could steer those machines to move mass and energy in the real world at large scale without us having to physically do that. Now we're at the phase where we're thinking about the cognitive labor that we do, so that we can build out machines, large computers, that can help do the kinds of thinking that we would do, 24/7, all day, at an incredible computational speed, if we were able.
We're not, but we can design systems to do that for us. We will still be the drivers and the pilots of those systems. So AI is those tools that will help us think about things and can fundamentally answer questions for us. The commercial products that are out there in the world are a good example.
The Claude models from Anthropic and the ChatGPTs from OpenAI, but there are a whole host of other tools like that that will be integrated with physical systems to run experiments and fetch information from what we would call the edge, or the laboratory space, and pull it back into those models to think at a rate that humans would like to be able to do, but we just haven't in history.
And right now is the moment that we think we're reaching the place where these tools are. It may not be artificial general intelligence, a term that's also used, meaning a replacement for some level of human capability. But they are capable enough that we can offload some routine tasks that humans would do and make them automated so that they can run all the time without us having to do them in some repetitive fashion.
Yolanda
Thank you. That's, I think, a great, very accessible explanation that's grounded in the operations, which is perfect. And it does lead me to a question that I had actually, even in your introduction, when you were talking about how you came from NIF and looking at your background. You know, you're talking about having this great cognitive tool that can be applied to all kinds of things.
So it's applied in the software space, but it can be applied to things that people write, like with large language model applications, or it can even be applied in three-dimensional space, to how tools get developed, how tools get used in three-dimensional space. And you have a PhD in both mechanical engineering and data science, right?
Brian
Yeah, it’s in mechanical engineering. My emphasis was in applied math. Applied math.
Yolanda
Okay.
And so, again, you mentioned how this has spun out of what was necessary to develop NIF, and so I’m really, I’m very curious about your background and what got you to the point of leading this kind of, uh, venture.
Brian
Okay, sure.
Well, one thing that I'll mention is that that story is all tied up with the Tri-Valley itself, actually. So I'm a 22-year resident of the Tri-Valley, or longer, actually longer. I first arrived in 2000. The way that I come to this is through coming to graduate school at Berkeley.
And even when I was at Berkeley, I actually lived in Dublin doing my Ph.D. And when I moved from having done my graduate work to coming to Lawrence Livermore, there was this enormous challenge, which was how do you take energy from a laser, put it on a fusion target, and get more energy out than what you put in with the laser?
It was a huge grand challenge, and the way that the AI and the ML come onto the stage is that in these really challenging problems, there are often decisions that you have to make. If I want to make this laser cause this target to undergo fusion, where it produces more energy than what we put into the system, there are all kinds of parameters that I have to adjust.
I have to make a choice about, do I turn this knob up? Do I turn that knob down? Should the laser get brighter now or should it get brighter then? Should the target be slightly bigger, slightly smaller, slightly thicker? And all of those dimensions are things, going back to our earlier argument about cognitive labor, they're things that we know how to do in our heads.
But when you have 200 decisions to make and they all interact with one another, it's very difficult to do that as a human by ourselves. So we have teams of people who can help us do experiments to get answers to what happens when I change this knob or what happens when I change that one. We have teams of physicists who are running simulation codes, big numerical digital twins of what happens in the real world, that help us set those knobs. And then enter the third pillar, which is AI and ML, which are tools that can look at all of that data and understand how to pull all this stuff together. So my path was really sitting as a mechanical engineer as an undergraduate, looking out at the world and saying, it's really complicated.
I want to go do big things. I’m going to get some more education. And that brought me into the Tri Valley to go to Berkeley. But we orbited around Dublin for the entire time that I was there. And then you turn those skills that you learn onto enormous problems that matter for the country, which is what Lawrence Livermore was doing.
And then you find out that there’s the state of the art and there are these new tools on the horizon that can probably take it further. And that just puts everything in perspective. And you recognize that the combination of a little bit of ingenuity on the AI and ML side, some serious effort and some nation scale capabilities can really transform things.
And so that brought me into the Tri-Valley area, and that has kept me in the Tri-Valley area, because we have a set of singular resources here. There's the National Ignition Facility, which, for those who don't know, is a three-and-a-half-billion-dollar laser that's the size of a football stadium at the national lab here in Livermore. But you also have the environment that's here in the Tri-Valley: there are the people that I work with at the laboratory, there's the amazing environment of just dynamic thinking and folks building startups and working for fantastic companies. I mean, I walk down the block in my neighborhood and have conversations with people that ordinarily I would have to go to conferences to have a chat with. So that's part of what sort of brings us here into this current moment.
Yolanda
That's pretty exciting, and it's really very helpful to hear how just the region itself has played a role in your own evolution as a scientist and as an engineer, in terms of how you're thinking technically, but also in terms of your career. That's really cool.
So how did you, uh, get from NIF to AI3? Like what’s that evolution like? Cause that seems like a pretty big jump.
Brian
Yeah, it, it, it turns out it’s actually kind of a small jump. But it’s big in its ambition. So the problem at NIF is we have tons of data and it’s very difficult to analyze it.
We need tools to go solve that. And when you sit down to solve that with only subject matter experts in fusion, you recognize that you need folks that understand computer science and the things you can do with large computers and with AI algorithms. So you have to expand the scope a little further. And then you can start doing that with a public team, with the people who work at the national laboratories.
And as AI and ML techniques have accelerated really quickly, it became clear that we were missing strong collaboration with all the folks that are doing great work in the private sector, in the commercial sphere. So that needs to be added on. And then the same thing happens. You look out in the commercial sphere and ask, well, who do I need to partner with to make my effort at the national lab better?
Well, I need computer vendors. I need folks like NVIDIA and AMD to help me build out hardware specifically for my mission. And I don't just need hardware, I also need software. So I need partners from the OpenAIs, the Anthropics, the Metas, the xAIs to come in and help us build large language models and associated tools.
But I don't just need that. I also need all those that are working in the fusion space, and inertial fusion energy startups, to say, you have these capabilities at nation scale, but we're thinking about commercializing this. And so it sort of wraps around back to the subject matter expertise. Then, staring at that, we recognize we need all these players in the same conversation.
We need to have the hardware vendors, the software vendors, the subject matter expert companies that are applying this, all in some kind of ecosystem where we can together build things, spin that back into the national security space where we can do things for the nation, but also spin it back out into the larger economy to make sure that we're competitive and that no nation on the planet gets there first or before us.
And so AI3 is sort of born out of that recognition that it's science across the broad spectrum that we need to attack, it's the public partnership with the private partnership, and then somebody has got to spark that, and we're trying to do that.
Yolanda
That's amazing. So, how have you operationalized this?

Brian
So we've started, we're in the process. We're sort of two years old. We've got a variety of dynamic partnerships. The way that we've, I guess, our operating principle, really at the very fundamental level, is that the easiest thing to do is make it really easy to enter the partnership. So we have very low barriers to entry.
We start off with just an agreement to play nice together on exactly the same problem. So we can have a conversation where people make contributions in kind. We don't have to worry about signing very complicated, heavy legal agreements. We can just come in, have conversations on a regular basis, learn what each other's strengths are, and then decide, you know, is there something that we need to do together?

Yolanda
Often that's the first step, right?
Brian
Once you can see a clear problem, then it's usually a really quick acceleration. Like, oh, we've got a capability in a particular area of materials science, and we have a deep need there as well. We find a partner who's got a computational capability, building out data sets around those.
And then we partner together and we run as quickly as we can. And then there are a couple of routes that we can take, because it's easy to enter with sort of low legal barriers, but we want to protect intellectual property for the folks who are in it. So as soon as we run quickly enough that we can see something novel coming out of it, then that's the time to protect that in some kind of cooperative agreement, where we can describe what the long-term work is going forward. And it can go a couple of ways.
It could be what we call strategic partnerships, where the partner recognizes they want to pay the laboratory to do something because there's a capability that they can build. It can be the other way around, where we want a cooperative research and development agreement because we need something from the public sphere, and we want to work with them to co-design it.
Or it can be literal just co-design, where we think we've got an idea to build a thing that's never been seen before and we're going to drive forward on that conversation. So we have a variety of partnerships in flight that have different aspects of all those flavors.
Yolanda
And so you have a variety of ways to work together, and it sounds like, in building capabilities, it can be very streamlined at the beginning. And then, as you start to see that there's a there there and you can reasonably expect intellectual property or products, that's when you go in for different types of agreements to make sure that gets protected in such a way that every partner is satisfied with the outcome and there aren't questions later down the road.
Brian
That’s right.
Yolanda
In terms of how those relationships get started, it sounds to me like right now it's mainly Lawrence Livermore reaching out to partners? Is there a vision where the partners are reaching into Lawrence Livermore, or is that already happening?
Brian
Yeah, there's in-reach from the outside.
We are very aggressive in wanting to do these things. We also are super targeted in who we work with. So we do have sort of an initial screening, where we take a look at what the partner can do and what we can do and get an initial feel for: do we have an unmet need that we think the partner is going to meet? That's sort of one criterion for us. And then we want to be really clear about what our goals and ambitions are on the public side, so that the private partner can say, yeah, actually, that's going to do something for me to advance my effort.
The things that we want to avoid are a place where the external partner feels like they're doing work in the public interest that's not directly involved with accelerating the thing that they need to do, right? So we need a sensitivity to be clearly aligned at the start, and we want to make good use of people's time.
And we also want to make sure that what the private partner is bringing is something that the government hasn't already put together, so that we can identify a real, tangible gap in our capability and say, okay, that partner can go forward and work on things. So we're operating at a couple of scales with partners right now.
At the top-level scale, most companies are ones folks will recognize: the NVIDIAs and the OpenAIs and folks like that. There's another level that's slightly smaller, folks like Scale AI; we have discussions with them to understand how we can drive things forward. And then we're looking for partners at what I'll call the smaller scale, smaller only in the size of their market capitalization or how big their effort is right now, because when we look across the market, we think there are some very specific capabilities being built out for very, very niche things that we may not want to build out inside the laboratory. And so, this is a little bit arrogant of us, but we like to say at the laboratory we can do anything, but we can't do everything.
What we mean is we can have the full weight of the nation behind doing these efforts. We've got almost 10,000 employees on a square mile, and something like a third of those are PhDs. We have the brainpower to do lots of things, but when someone else already has a fantastic solution, we want to make sure that we don't reinvent that wheel.
So the early process is really making sure that we understand, for those players at all of the stages, from the largest players to the earliest and smaller ones, that they're getting value and that we're getting value. And that's just a conversation at the beginning. It's a low barrier to entry.
And then there's a plan to protect those things in the future.
Yolanda
So, so it sounds like, go ahead.

Brian
Actually, folks can reach into us. As part of having AI3, we announced in the Federal Register, with the government, that it actually exists, and you can reach into Lawrence Livermore through that.
We will field the interest from all comers, everyone who's coming, but we are selective about who we engage with. So there are a lot of conversations that are like, yeah, that's interesting, but it's not well aligned. Or, that's really great, but we've met that need. Or, what you're doing is fantastic, but the direction we're going to go, we don't think is going to accelerate you. But where is the door? The phone line is open, so to speak; folks can call into the lab and make contact.
Yolanda
That’s great. Yeah. And that’s what I was wondering is if there was a call for proposals or something like that.
And it sounds like it's the announcement in the Federal Register; if people want to go to that, they can see it, and I'll get the link and we can put it in the show notes. So people who hear the pod and are interested can find you, and they can find this easily at ai.llnl.gov. That'll get you to AI3, and then you can make connections from there.
Brian
That’s perfect. That’s neat, that is easy.
Yolanda
Yeah, and so I noticed there was a wide variety of fields, and you mentioned them earlier in our conversation and on the website. And of course, I'm very interested in a couple of the fields that align with what we've identified through i-GATE and Startup Tri-Valley as areas where we're trying to grow these ecosystems. So there's AI itself, obviously, and then you also have in there sort of medical and biological applications, as well as climate resilience, which I think taps into two of our areas, which are life sciences and climate technology.
And I’m curious, where, where are you on those kinds of projects? And again, like what, what are some of the goals that you have? And what are the kinds of companies or partners you’re looking for in those areas?
Brian
Yeah. So let me talk a little bit about the bio piece and then the climate piece. Let's take the bio first, and maybe only that one, because it's a pretty clear testbed for what we're talking about.
We have ambitions across a range of biological activities. That includes therapeutics, and below that there are proteins and antibody designs, there are small molecules for therapeutics, there are bioengineered materials for solutions in the health space. And otherwise there are interests in biological devices that include implantable things, wearable things, other kinds of measurement technologies.
Those pieces bring us to a place that I'm centrally interested in growing, around automation and autonomy, where we can make measurements at the edge. You can do things with robotic and automated operations driven by AI technologies that, I'll show, connect us back to the earlier discovery piece around molecules.
So proteins or small molecules. Increasingly, robotic chemistry is a thing. It's a capability. There are companies that we're partnering with here in the Tri-Valley area, like Unchained Labs, for example. Those kinds of technologies are things that we need to grow. And it's really important to think, in the bio example, how would I come up with a new idea?
What molecule am I going to try to invent? That maybe starts, as we would say, in silico; it starts on the computational side, where you use a physics and chemistry simulation to imagine what you want to do. So there's a piece of the local industry that we'd like to build up around the capability to predict what molecules you want to grow, all the way to the other end, which is, okay, I've got this idea for a molecule that I need to produce, but how do I do that at scale and quickly?
How do I automate that? How do I take AI as the driving thread that goes from having an idea to testing an idea in the laboratory space, doing all of that with a lead time or a looping time that is an order of magnitude or two or three faster than what we could do in the past? So those are parts of the ecosystem that we would really like to grow.
Think of it this way: there's the discovery phase where we're trying to find what we want to do. There's the design phase where we're trying to build out the system that we would use that for, whether it's a medicine or it's a material that we're going to build with the bio process. There's a manufacturing phase where we want to grow that automated capability to go very quickly.
And then there’s a deployment phase where we want to understand both, can we do this in relative real time? And on the long horizon, will these systems have the outputs and the effects that we want? So, in medicines, we want to make sure there are no long term side effects. If we’re building systems, they ought to stand up to the really harsh environments that we plan to put them in.
So, if you look at the sort of i-GATE world, the things that we want to emphasize are the discovery of ideas, the design and manufacture of those ideas and the deployment of them with an emphasis on AI throughout. And that includes the software piece, but also increasingly the connection to the edge, to automation and robotics and moving mass and energy in the real world.
Yolanda
Wow. Okay. So that’s very ambitious. And it sounds like you already have partners and local partners in, in doing that. You mentioned Unchained Labs. Yeah.
Brian
They're not part of AI3 formally, but they're in laboratories across Lawrence Livermore National Laboratory. And so they're part of what inspires us.
We have had conversations, and if Unchained is listening, we're here, and they know that we're having conversations.
Yolanda
Understood. Okay. So you're taking some existing tools, like some produced here in the Tri-Valley, and really exploring what you can do with AI and the edge, which I assume is where AI meets robotics and, what, meatspace?
Brian
Yeah, that’s right. Yeah.
That’s another term that we can define. The edge means lots of things to people, but I think the easiest thing to imagine is that there’s someplace that is far back and removed. That is the computational space where there’s an enormous supercomputer or somebody is thinking about the idea.
And then you push out to the edge, which is where we meet moving things around in the real world. So pouring liquids, mixing powders, making a measurement, cutting a piece, printing a thing with an advanced manufacturing printer, and, importantly, capturing data out there at the edge and then pulling it from that edge back into the computational system to say, oh, this is the thing I meant to build.
This is the thing I actually built. But they're different. What should I do? Go back and re-engineer? Can I correct that in real time with an advanced manufacturing method? How does this world actually play? So part of our goal in AI3 is to support that ecosystem. And I can give it a name for everybody. I mentioned there's the discovery, the design, the manufacturing, and the deployment phases.
DDMD. This is our framework for how we think about AI being injected into and transforming all phases of scientific discovery. So a great outcome for AI3, especially looking at the sort of local Tri-Valley idea, is that we do build partnerships that support that full range of DDMD, from the discover all the way to the deploy, so that we can loop things from idea to test to failure really quickly, back to redesign, and then get to some kind of novel discovery. Whether that is a system, a physical phenomenon, a chemical, or a material, the idea is to get to that product or system that's going to go change the world.
Yolanda
DDMD. So you heard it here.
This is the framework for AI, and specifically AI3, and working with the edge. And I love that. You know, I'm really glad you defined it in the way that you did, because I found myself thinking about the sort of computational piece and the real-world piece as a very one-way stream.
You're going to have this computational component, and then it somehow materializes to do something in the real world. And the data collection and return loop, I think, is a really important piece that I'm glad you highlighted, because I feel like so often we get focused on AI as this cognitive tool that gets you to something in the real space, and it can be easy to forget how much the data returned to the AI just creates this really virtuous cycle that makes it a much stronger relationship and tool.
Brian
Yeah. That's the vision that we want to support as well. And it's a place where I think all of the companies or businesses that are interested in this space can do themselves a service by recognizing that no matter what they're doing out at the edge, they're generating an asset, and capturing every bit of data that you're producing in an intelligent and well-designed format is an asset that's going to grow over time.
Even if you can't use it today or right now, you will be able to push it back, as you were saying, from the edge to the design loop in the future and use these emerging tools to capitalize on it. So I think there's no amount of effort that is wasted on defining what is your critical data, what format should it be captured in, how long should it survive, where should you save it, and then the really important thing, ultimately: what are we going to do with that data and how does it benefit us?
And the other thing that I think folks maybe are not thinking about is that data, even if you're capturing too much of it and you can't do something with it yourself, might be useful to another partner. So we have examples of our own within the national laboratory space where there's data that we've been capturing that we didn't think of as particularly valuable. With the onset of large language models and other AI tools, we now recognize that the totality of those data sets is actually critical.
There are things that we didn't know how to do with them, and now that we have them, we're going to go do things in the future. So this is my personal request to anybody who's thinking about building their own business: do not think of the data as anything other than a product, because it absolutely is.
It's a product for advancing what you're doing inside the business and the mission space. And it's potentially something you can externalize and get value from, even if you're not capitalizing on it yourself. And as a nation state, for the United States, we need all of our businesses and our efforts to be capturing that data in a way that we can automate these things, close the loop that you were just describing, going to the edge and back to the computational space, and do that faster, for innovation with more impact, than anybody else on the planet.
And it all starts with making sure that data doesn't get left on the manufacturing floor or the cutting room floor.
Yolanda
Okay. So collect your data, save your data, save your data. No, that's great advice. It's a form of documentation that we don't think about. And, you know, for any scientist, documenting your work is really important at whatever level you're working at.
Brian
And you're right. So often people leave the data on the table if data is not what they consider to be their product. And the fact of the matter is, with AI, data in any field, in any business, is a product that you can be benefiting from, either indirectly or directly in terms of your operations, or in a sort of revenue-generating model in terms of providing that data.
Yolanda
That's right, to create these closed loops of AI and edge. That's amazing. That's great advice.
Brian
Yeah. Well, we're trying to heed it ourselves. We recognize that after seven decades at Lawrence Livermore, some data is really well curated and some of it is not. And so it's a heavy lift to go back and re-engineer it.
So to all of those just starting: you're in a great place if you do it now. Don't try to do it later.
Yolanda
That's such a good point. And we are starting to see that, especially in life sciences. And you mentioned, in answer to the original question, what you're trying to do in life sciences and climate resilience.
You said you tackled life sciences because it's the easier one. So I have to ask, what about climate resilience, and why is it harder and not the first thing you tackle?
Brian
I guess it's harder because the way to produce systems and the way to interact with the climate is so much bigger and more nebulous and more complex.
But we have climate efforts that go all the way from making very strong predictions about what's going to happen, because our computational resources are superior. We just dedicated a two-exaflop computer. It's the fastest high-performance computing system on the planet. And one of the things that we can do with that is very detailed climate predictions.
But in the climate space, we can go all the way to producing climate-mitigating technologies. So there are ideas for reactors that are going to siphon carbon out of the atmosphere, and some that are less obvious, like the 3D manufacturing of reusable pellets that can capture CO2 from a building or office space.
So it has the benefit of, one, taking carbon out of the environment and, two, improving your indoor air quality so that it's a better place for you to live and breathe, and to feel more alert inside your building. So there is everything from predicting what the demand for climate solutions is going to be, to building out climate solutions for carbon removal.
There's an entire plan for being more efficient. So it couples back to computing, where we're thinking about the production of greenhouse gases through our compute capabilities. We have large efforts in understanding how to be more efficient with compute that we work on through public-private partnership with industry.
And then one thing that makes the climate piece a little more complicated is that you have to have a clear discussion about what you're doing to mitigate climate challenges and how much you're producing them at the same time. So in the AI space, we are aware that there are gigawatt data centers that are coming.
So to calibrate the audience, what does it mean to have a gigawatt data center? Well, the city of Livermore here in the Tri-Valley uses about 10 megawatts, as the mayor pointed out to me in a recent conversation at the dedication of our El Capitan supercomputer. Our El Capitan supercomputer uses about 30 or 35 megawatts of power.
So we use, in one computational system, just to do math, about three times as much power as the entire city of Livermore uses.
Okay. Now, AI across companies like Meta and X and others has this demand for not tens of megawatts but a thousand megawatts, a gigawatt, to do this. And so, coming back to the climate idea, the thing that makes AI for climate a complicated discussion is that there's both this forcing function that says we're going to go off and do new things, but it's going to put tons of carbon in the atmosphere, and also the promise that the solutions that come out of it are going to be great for mitigating those risks. So it's very interesting; there's a cart and a horse that we've got to get ordered in the right way to have that conversation.
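For listeners who want the rough arithmetic behind those comparisons, here is a back-of-the-envelope sketch using the approximate, rounded figures quoted above; none of these are exact measurements.

```latex
% Rough power-scale comparison using the approximate figures quoted in the conversation
\begin{align*}
P_{\text{Livermore}} &\approx 10\ \text{MW}, &
P_{\text{El Capitan}} &\approx 30\text{ to }35\ \text{MW}, &
P_{\text{GW data center}} &\approx 1000\ \text{MW} \\[4pt]
\frac{P_{\text{El Capitan}}}{P_{\text{Livermore}}} &\approx 3, &
\frac{P_{\text{GW data center}}}{P_{\text{El Capitan}}} &\approx 30, &
\frac{P_{\text{GW data center}}}{P_{\text{Livermore}}} &\approx 100
\end{align*}
```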
Yolanda
Right, because of the carbon cost due to the power requirements.
Brian
That’s right. So there’s a place where you could be thinking about very carbon costly solutions to reducing carbon production or to decarbonizing the atmosphere once you’ve already injected it in there. And we have to be very careful about the way that we balance those things. Our climate groups at the lab are excellent at thinking about that, helping to set plans at the national level, the state level, and the local level.
Yolanda
That's fantastic. That's such a good distinguishing factor to put in, that there is this element of carbon use, which really points to: if you're going to deploy AI for a climate problem, then the scale of the problem you're solving needs to be pretty big.
Brian
That’s right.
Yolanda
Yeah, to justify the application.
Brian
If you're going to throw a ton of compute at it, yeah. So that's something that people are thinking about these days: how capable is the supercomputer? What is its environmental footprint? What does it cost me to do this computation in terms of energy, time, and the carbon that we've produced?
Yolanda
Right.
Yolanda
And so that really brings me back to, again, you know, you talked about the criteria for partnership and the alignment. And one of the things we didn't talk about is: what are the unique tools and advantages that Lawrence Livermore, that the AI3 partnership, offers your partners? So, you know, I'm guessing El Capitan is on the table.
Brian
Well, resources like El Capitan are on the table. El Capitan itself is a national security machine, and so that goes into classified spaces, and most people will never see it again. We do have an unclassified counterpart called Tuolumne, which is smaller, but we think of it as the sort of test system that we use for open science.
It's still the tenth-largest supercomputer on the planet. So that is out there for us to use. But I'd say there's a top-level thing that Livermore brings in its engagement with the private sector, which is that through our national security missions, we bring a collection of interdisciplinary teams to attack a problem.
So when you're working with us, we've usually brought in high-end research staff from physics, from engineering, from math, from chemistry, from biology, all working together with a team of engineers who are themselves world class, to go reach a final product. And the ability to have access to all of those centers of expertise and specialties in one location, with a common awareness of the problem that you're trying to solve, is unique.
I think that only happens in the national laboratory spaces. So that's the thing that we bring. So if a partner is interested in the problem that we're talking about, often what they get out of the conversation is that they come in with one of those slices of expertise: they are computer science experts or they are biology experts.
And when they partner with the laboratory, they're suddenly plugged into that whole wide distribution of expertise. And then, we were talking about sort of virtuous cycles; there's one that happens within those teams, where the computer scientist mentions the challenge, which triggers an idea with the chemist, the chemist sits down with the biologist, now there's a computational solution, and everybody moves forward at a pace that they couldn't otherwise.
So that's a version of what we think of as scale, the scale across disciplines that can help folks advance. There's another notion of scale, which is that we just have the biggest and best toys on the planet.
So if you're trying to do computing, we have the best scientific computing facility on the planet; you get access to that if you're working with us. When it comes to experimental facilities, we have things like a three-and-a-half-billion-dollar laser that we can go use if we need to understand the way that a material is going to stand up in a radiation-rich environment, because somebody is thinking about building a fusion reactor for inertial fusion energy in the future.
We have the facilities that can show whether that is a real idea or whether it's science fiction, and you can't really get that anywhere else. So I really capture it as scale: scale across disciplines, scale in compute, scale in experimental capabilities. And then there's another aspect that I would think of on a small scale, which is that when you have access to all of those large tools, the way that we engage is very personal.
It's a bunch of people who are now all in the same working group, who are passionate about tackling the same problem. And so that small-scale interaction, with access to large-scale things, is something that I think is a real advantage for the partners that come to meet with us.
Yolanda
That's an enormous advantage, and I like that it's the sort of large-scale innovation on a human scale. Right. And that's the interesting piece of what you're trying to do with AI, because the AI piece, the actual in-physical-space tools piece, all of that is really nucleating on these very small human-to-human interactions within these teams.
And it sounds like that's the unfair advantage you're providing your partners.
Brian
And, I liked your term, unfair advantage. That's exactly what we're trying to do. We want, in the most moral and ethical way, to provide an overwhelming advantage to the U.S. techno-economic ecosystem that doesn't exist on the rest of the planet.
We want to do that from nation-scale efforts across the Department of Energy, and we want part of what we do in AI3, and the rest of our laboratory business, to be the spark for doing that inside. So we want to succeed in such a way that it's so overwhelming that it appears unfair. That would be perfect, because that's all for the good.
It's all for our advantage in the end.
Yolanda
That's wonderful. So this conversation around scale, and I love the framework that you have created in this conversation about scale and how to leverage it, kind of brings together this enormously impactful national lab, a national tool. And you talked a little bit at the beginning about the sort of human scale of being in Dublin, and how just physically being in the Tri-Valley allowed you to become aware of these organizations that could help you solve problems, large-scale, impactful problems, and also satisfy intellectual curiosity for yourself. So I'm curious: how do you feel that being located here in the Tri-Valley has positioned your effort for AI3, in terms of what you're trying to do, what you're trying to build?
Brian
Oh, it's fantastic for a couple of reasons. One, it's personally rewarding to me, because things we do motivated at the nation scale can have impacts at the local scale, which, being a local resident, I think is just fantastic. It also goes the other direction, because the Tri-Valley is such an innovative and exciting space.
When we are looking for external partners to solve really critical national security missions, we can look locally sometimes and find those answers. I mentioned a couple; you know, Unchained is an example. We wanted a critical capability, and it turns out some of that is offered here in the Tri-Valley.
There are a number of others that exist as well. So there's just having that ecosystem where you can lean in and look out and see what's going on, while also knowing that doing your job is going to build out the rest of what's going on in the Tri-Valley. It's this reciprocity that is fantastic.
And I would say that doesn't exist everywhere. Our sister laboratory, Los Alamos, in New Mexico, does not have the advantage that we do of being in the Bay Area, of being in the Tri-Valley. They chose their historical location to be literally in the middle of nowhere, so no one knew about the Manhattan Project.
We have the advantage of working on the very same mission space, but we got the environment that is full of targets of opportunity, which allows us to think and do things. So there are clear things that we could not do if we were not located here in the Tri-Valley.
Yolanda
That’s wonderful.
So nation-scale work that is enabled by local-scale talent, innovation, and mindset is what it sounds like. That's right. I'm very curious. This is a new effort; you said you've been working on it for about two years. So it's a startup, in a way, within Lawrence Livermore National Labs. What has surprised you the most about your current role?
Brian
I guess two things. One is that it's all about that scale of personal interaction, that everything starts with a relationship, and to be successful in building these things out and getting these things going, there are a million balls to put in the air at the same time, for me personally and for any organization, because you want to have a relationship with every business and every group that comes to the door.
The other challenge is that once you've built those relationships, it's incredible the pace at which things can advance. It's very difficult to track. I don't even know exactly how many relationships we have going on; it's something that I actually should know as the director of this effort, but we've got something on the order of a dozen critical relationships that are going on.
The demand for people to be able to keep up with the pace of innovation is really surprising. So we can spark ideas with a private partner. We can identify a goal that we have to go after in that early phase of starting up the collaboration. And then the amount of human effort that you could put to drive these things forward is just really shocking.
So that actually brings me to a thing that we need. We need a complete reimagining of public-private partnerships. We are imagining a world where we will rally around big transformational moonshots that are part of nation-state goals. So let's imagine bringing fusion power to the grid in a decade, something like that. To do that, we need partners on the local scale and the very large scale. We need partners in the computational space and in the actual subject matter expertise space. And we need funding that matches the pace of evolution of the demands.
So the demand for AI solutions to understand how to transform a space like fusion, it's very fast. Those technologies are changing so quickly that if another nation state captures that capability before we do, we will be permanently at a disadvantage to that nation state, and there are others that are trying to do it. So the opportunity is to accelerate and go as hard as you can, as fast as you can, inside this new idea of a public-private partnership, where it takes scale, again in terms of dollars, to go do something. As a laboratory, we can influence the ten-million-dollar scale relatively easily; our own sort of internal S&T startups range in that area. And if we want to do something at nation-state scale, so say the 2 billion dollar area, we can do that, but it takes a half decade to get that idea pushed through Congress to get it funded. In between is: what can I do in a year or two, at the 10 million to 100 million or billion dollar scale, that we can't do today?
And that's a new vision of public-private partnership. So what we're working to do is build out a set of value propositions from both public and private partners to say, this is why that niche needs to be filled. So the private partner needs to see, one, that they want to contribute to a national security mission, and two, that they're going to get reciprocal value out in a way that advances their mission, so that they are potentially willing to pay into that process through effort or through funding. Same thing for the public side: we've got to see that we can get that out through either contributions of effort, contributions of resources, or actual capital that we put into the system. So what has surprised me, besides that you've got to build out lots of relationships, because they have to come into this ecosystem, and they take lots of effort because we can advance these things quickly, is that they all need to converge into this new notion of a public-private partnership that is funded in advance of what the government can do, and funded at a scale that is outside what we can do as a laboratory or what most of these companies can do by themselves.
And then there's a unifying approach that helps us maintain all of those relationships and advance the effort according to the appetite that the technology has, but meet the pace of the mission, so that you're not developing something after someone else in the world has already done it. And this notion that there needs to be a transformed idea of public-private partnerships is both exciting and a little bit scary, because we will not get this moment in the world, to understand how to both invent AI and use it to do transformational things, for the first time ever again. Someone's going to figure out how to use that AI tool to develop fusion systems, or to develop a battery storage system, or to develop something else in the bio space to manufacture material that we couldn't otherwise. And once that advantage has been seized, it's going to be very hard to catch up.
Yolanda
Right? So clearly what I hear you saying is that we're at an inflection point in terms of applying AI, for example, to the fusion space, and that if this is something we want to own, not just as a local innovation economy but as a nation, we have to be able to apply partnering. And specifically, the problem here is basically that time is the enemy, right? We know we have to get there before someone else.
And we don't really necessarily know when. The goal here is: as fast as you can.
Brian
That's it. That's the goal. That's right. And we can understand how fast fast is right now. If you look at what's going on in the AI industry, we like to say that the sort of doubling time for advances is something like six months.
So if you're paying attention carefully to the industry, sometimes it's three months, sometimes seven months, but sort of every six months, plus or minus a couple, something comes out that fundamentally changes, for me personally, the way I strategically think about using AI tools to do things in the physical world.
So what we have to do is set out a challenge that we have to solve, like fusion. That's our fixed point. It doesn't change very frequently, but the path we take to get there, we're having to re-rack and re-stack every three to six months. And the entity that can advance on that three-month, six-month timescale, throw off the idea that was good but is already solved, and move on to the next thing toward that fixed point, like fusion, that's the entity that's going to go really quickly. And we want that for the Tri-Valley, we want that for Lawrence Livermore, we want that for the US, right?
But we don't have very long to do it, because the number of doubling times that you have in a three-year period, that's six. That's a lot of advance that you can't miss out on.
So there’s both this feeling of we’ve got to do something really different than what we’ve done before and we have to do it yesterday.
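As a rough check on that arithmetic, using the approximately six-month doubling time Brian describes (which he stresses is only a ballpark figure), six doublings over three years implies on the order of a 64-fold advance:

```latex
% Doublings in a three-year window at a ~6-month doubling time, and the implied growth factor
\[
n = \frac{36\ \text{months}}{6\ \text{months per doubling}} = 6,
\qquad 2^{n} = 2^{6} = 64
\]
```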
Yolanda
Right.
And that’s exciting and it’s a little scary and a lot of pressure there.
Brian
Yes, and that's what's motivating this sort of new type of public-private funding partnership, more in the ten-to-a-hundred-million-dollar range.
Because of the time it takes, literally, to get the 2 billion through Congress. That's right. Yeah.
And that’s not because the United States is not paying attention.
Right.
That we have a careful and measured system for making sure that we do the right thing. And there is a process to get it through.
It was not designed for the current moment.
Right. And so those of us who can think about it need to step up and make it happen quickly. And then when we’ve shown value, the government can pick it up and take it from there.
Right. That’s really exciting. And as you put it, it’s also a little bit scary.
Yolanda
It is.
Yeah. So looking at this kind of daunting goal, what has been the most helpful to you and for your AI incubator's growth?
Brian
Really the most helpful thing is just recognizing how much value there is in the external partner and recognizing that if you have the goal of making the partner better, you get better so much faster.
So I think there's a lot of, maybe it comes from the science world, but you're always looking to solve the problem, and you're looking for what you can get out of this situation so you can advance the problem. But if you turn it around and try to understand what you can bring to the table to advance the partner, then you find out all the things that they can do better than you can, faster than you can, and the acceleration is just that much greater. So there's this sort of, be super selfish about what you're doing for your partner, and in that way you make yourself better and faster. And that's something, I think, you know, as a scientist and engineer, not that we're selfish by training or by nature, but there's the focus on the problem.
And it turns out that the way that you can put more effort and force multiplication on the problem is to make the team stronger. So make your partner better and you advance more quickly on the goal.
Yolanda
That's amazing. So what I think I hear you saying is that one of the things you found the most helpful comes back to relationships, and really putting your partners in your relationships first: your partner's needs, making sure their needs get met. And the reason, in part, that that has been such an accelerator for you is because when you focus on your partner, you find out things about your partner, things that, if you had just been narrowly focused on your needs, wouldn't even have come up in a meeting. And now suddenly they're telling you about themselves, and you realize, wow, you thought you solved this problem for one thing, but it turns out that solution might work for the other.
Brian
That’s exactly right.
Yolanda
And now suddenly we don't have to develop it. We can just ask you about it and see how you guys can help us with this other thing.
Brian
That’s right. And, and in fact, one thing that is a corollary to that is that I’ve, I’ve started talking to our partners and asking them from the very earliest engagement, Please be as selfish as you possibly can in your articulation of what you want to get out of this that because What doesn’t move us quickly is when someone comes in and says well We want to we want to advance your national security mission We want to work on the things that are really important to you and we think we’ll get things out of it that helps But it’s much better if someone comes in and says Look, I’ve got a dying need for this kind of material.
I need a database of solutions that I can train an AI model on, and I need those things to be done with some kind of computational fidelity I can trust. Because those actually light up all kinds of things on my scientist dashboard. I’m like, Oh, well, we’ve got that. We can go do that. That’s a place where we can be generous with our partner.
And then we recognize that they can run so quickly. They go get a thing done, and it's useful to us in exactly the way you described. So weirdly, there's an element of being very generous by being self-interested. If you're one of the partners, that's not obvious, and
my most successful partnerships are ones where I've said to them up front, I want you to make sure that you've articulated what you need to get out of this, because that also makes the partnership stronger.
If they feel from step zero that they're going to get something out of the relationship, they can be invested in it with a sort of laser focus, rather than it being a peripheral nice-to-have. And it's weird: being altruistic and saying, oh, I kind of want to solve your mission, always makes it a priority-number-two kind of thing. Saying, I want to solve my mission and I want to do it with you, that makes it a priority-one thing. And then everybody can see what's going to happen.
Yolanda
So that comes back to what you're talking about, that tight alignment, where it turns out solving the problem together is on both of your critical paths.
And so that way it's not just that you're working together; you get the same prioritization.
Brian
That’s right. And I’ve had, I’ve had that conversation with Microsoft and I’ve had that conversation with employees one, two, and three.
Yolanda
That’s yeah, that is great advice. And, and it comes down to, I think this is a lot of advice you hear for startups.
There’s, even if you, even if you have a generally applicable tool, you get, you make so much more progress. If you solve a specific problem and it’s the specificity that actually leads to the innovation.
Brian
That’s right. And that’s right. And so I did that. That’s true for all our partnerships. We will, you won’t really ever hear us partner on building a capability like that’s in search. You know, we don’t want to be a hammer in search of a nail. So AI for accelerated discovery is not something you will hear us talking about. That’s our central focus, but you will hear us talking about AI for Advancing the direct-write printing of a particular component in a reactor for a climate solution.
Like that is a thing that focuses ideas that we can move forward on. The private partners can focus on building out that technology. We can go solve it. And then once that’s born, there are all the ideas that, Oh, this adds value to that mission as well. There’s a very clear, intellectual market for where you can solve the solution.
And then if you’re a private company, you hope that there’s a financial market for that as well.
Yolanda
That’s an exciting way to think about it. And I, and I think very useful for a lot of the people listening
Brian
Says the guy who comes from the public sphere.
Yolanda
So what opportunities and challenges do you see for your AI Innovation Incubator as you grow, you know, maybe generally, but also in the next five years, particularly with Tri-Valley companies?
Brian
Well, the opportunities are tremendous. There are undone things that we can articulate constantly.
And I think we talked about this, but it would actually be great to have a collection of Tri-Valley companies where we just came together and had an ideas day and said, look, these are the challenges that we can see that exist now, that didn't exist six months ago in this space, and they need a solution.
And they're not so big that a nation state has to do them by itself. They could be a local thing. So there are opportunities for us to go out and do these things collectively for the Tri-Valley. The challenge then is how to manage that process. It's really not always clear how to fund a problem once you've identified it, how to manage a relationship once you've started it, or how to transfer it and scale it from the initial idea to an actual productive operation.
Some of these are just the problems of startups, and we on the public side could learn how to navigate them from folks who are serial startup people, who know how to do that. But we can also offer, I think, a catalyst into that ecosystem, to say, look, part of our job as a public entity is to identify the problems and to give those ideas out. Either they're things that we've already started to solve, where we can say, here's a solution path,
and this can be picked up by someone who can, or sometimes it's just in the articulation of the problem: this is a very clear, well-posed problem that should be able to be solved, we don't have the bandwidth to do it, we have a need, and if somebody else could solve it, that would be great. So I think that's what we could do in the Tri-Valley: lay out all those targets of opportunity and say, these are things we could attack.
Yolanda
And then the challenges are: how do you go from recognizing that target to solving it, to actually converting it into a solution that's going to move that ball forward?
Brian
Right.
Yolanda
But those challenges are the work.
Brian
That’s right. Exactly.
You have to start down the path and then assemble everything that you need to keep going, which can include, as you said, the challenge of how you manage funding the solution. But those are probably, like you said, the problems; the challenges and the problems are the work.
That's what you do.
Yolanda
Yeah, that sounds like a great event to have with Startup Tri-Valley. Let's do it. You heard it here first. So it sounds like, you know, you guys are moving at breakneck speed, and I know that Lawrence Livermore National Labs is always hiring, but there are sort of two pieces.
How can someone who's already done, you know, the work to prepare for a professional career in the sorts of things that you hire for connect with you and find a job? And then for those who are preparing for their career, whether it's in AI or maybe another field that you actually see as very necessary for the future of what you're trying to build with AI3,
what kinds of things should they be doing to build that career?
Brian
Yeah. So let me take it in two pieces. First, for the folks who are fully minted professionals and ready to flex their muscle at the laboratory, I'll point you to two websites. careers.llnl.gov is the place to see all the open jobs that we have available.
I'd go to ai.llnl.gov to see the kinds of research that we're working on. We're a big government institution, and not everything that's available is listed on the careers page. So the best thing to do is to have an interest, have a passion about something, and then make a connection through ai.llnl.gov and see that, oh, there are these pieces happening there.
Here’s the PI that I should go talk to. And that person can do the air traffic control to, to route people around and say, here’s, here’s what’s actually going on. And then to, to the person that’s looking to build their career or, or add value to themselves. We are trying very hard to build the pipeline of thinkers to come into the national laboratory space.
And there are two things that I think you can think of on that path. One is getting yourself trained up before you come into the laboratory space. If you are a student, whether a high school student, an undergraduate, or a PhD student, we have partnerships with universities to help people work at the laboratory over the summer, understand what the environment is about, and get a clearer focus on the skills you need to do the job that you think your future self is going to want to be engaged in.
And we can work through academic partners for doing that. If you are already in a laboratory space, either there or in the commercial space, then things like AI3 are places where partnerships can help you understand what's going on. And then we often like to convert to staff people who we have relationships with and whose work we can see.
So I would say, if you're fully baked and you understand who you want to be when you grow up (I still don't know who I want to be when I grow up),
then if you already know, you can see our careers and our AI pages and come in through that door. If you're just growing, we have lots of outreach programs.
The lab brings in about 1,200 students every summer, so the average age of the population goes way down as the teens and twenty-somethings come onto the campus and keep us invigorated. So you can look for those things to guide your career, whether you're early on or whether you're a graduate student or postdoc.
And then if you’re already in the commercial market space the partnerships that we’ve been talking about are ways to understand how you can navigate that next phase of your career. So you can partner with us, you can see what the opportunities are. The other thing I’ll say actually is you should think about the laboratory as a place to train you for that next phase in your career as well.
So, we have to acknowledge that as a laboratory, we sit here in the Bay Area. There is a thriving technology scene and lots of people that we hire into the laboratory will find exciting things to do outside the laboratory. But we also have a return current of people who have gone to Silicon Valley, they’ve done very interesting things, and they realize that work in the public interest and for the national security mission is compelling, and they come back as well.
So you can think of the laboratory as a destination where you might build yourself through your education, but you can also think of it as furthering your education, where you can build skills and develop capabilities that are then, I'm sorry to say, very useful to companies across Silicon Valley, so that you can go work in that location as well.
There’s another reason that being in the Tri-Valley is pretty great because there’s a large component of remote work these days. The commute to Silicon Valley is really, you know, You know, not great, but it’s not bad a couple of days a week. And so the opportunity to start a career at Lawrence Livermore and go work somewhere else and then recognize that Livermore is the best place on the planet and come back, that’s a, that’s a path that everybody can pursue.
Yolanda
Or even work for some of the companies here in the Tri-Valley. Even in the Tri-Valley. Or start your own. That's right. I love that. Yeah, that's a great answer with a twist, which is Lawrence Livermore National Labs as part of your actual professional development on your way to your next thing.
Brian
That’s right.
Yolanda
And before we wrap up, I'm curious: what didn't I ask that you'd like to talk about?
Brian
I don’t know. You did such a great job covering so many of the things that I care about. One thing we didn’t talk about is, are some of the, what I’ll call threats in the AI space.
So there’s the, there’s the safety conversation, the security conversation and the threats of adversaries advancing faster than the United States. And in both of those, that’s one thing that the businesses that are starting up in the Tri Valley can shape that conversation and make sure that it’s balanced.
We’ve done a good job in the United States about talking about threats and how we will go forward safely with AI technologies. I don’t think we’ve had as fulsome a conversation on the opportunity side at the national level of these are the things that we’re going to go do. This is the transformation that we have to build.
This is why the government and the private sector together have to go out and achieve these positive and productive things. So I guess what I would like to say is let’s be really balanced. Conversation that says not only we’re going to do these things safely and securely, but these are the things that we absolutely must go do so that we have them for ourselves.
And so that we’re not beaten to the punch by, by an adversary. And having that conversation is something that we can sort of lead and drive forward. from the National Laboratory perspective, because we’re looking at those adversaries and at the global goals that we’re trying to accomplish. And we’re keeping our eye on the safety and security conversation because it’s our central mission.
But we need external partners to be thinking about and articulating both the opportunity and the threat that comes with it, and how to prioritize those things so we do the most good with the least risk that we can. That's not zero risk; we've got to understand where to stand. So we need external partners to think about that.
Yolanda
That’s fantastic. That’s a, I think that’s a great, great way to, to end the podcast and, uh, emphasizing once again that in every step of what, what you’re trying to build with AI3, you are looking for the unfair advantage of external partners.
Brian
That’s right. That is absolutely right. We should put that on a card.
Yolanda
Fantastic. Well, thank you so much, Brian, for being here today. It was great to have you.
Brian
Yeah, it’s great fun. Thank you. I appreciate it.