Video: Securing and scaling your cloud environment to support AI transformation | Duration: 3876s | Summary: Securing and scaling your cloud environment to support AI transformation | Chapters: Introduction to Webinar (45s), Introducing AI Cloud (93s), AI Native Cloud (165s), AI Native Cloud (449s), Smart AI Applications (616s), Serverless AI Platforms (824s), AI-Native Cloud Strategies (1021s), Cloud Scaling Challenges (1301s), Serverless and Scale (1408s), AI Security Considerations (1525s), Multi-Cloud AI Challenges (1925s), Security Skills Evolution (2136s), Concluding Security Insights (2326s)
Transcript for "Securing and scaling your cloud environment to support AI transformation":
Everyone, thanks so much for joining us today for our webinar on securing and scaling your cloud environment to support AI transformation. My name is Martin Sanchez. I'm a principal solutions marketing manager here at CloudFlare, focusing on our platform and AI stories. And we're really lucky to be joined today by Lee Sester from Forrester Research, who's going to talk us through some recent trends he's been observing around AI transformation and the AI native cloud. Really quickly, here's what we'll cover today: Lee's going to open us up with a discussion of some of those trends. After that, he and I will have a conversation exploring some of those trends a little bit further, and then I'll wrap things up by discussing the CloudFlare perspective. So you'll hear more from me in a little bit. But for now, Lee, thank you again so much for joining us, and take it away. Well, thanks, Martin, and thanks to everybody on the CloudFlare team for bringing me here. Really glad to be part of this discussion, and looking forward to exchanging some ideas about key trends in the industry over the next few minutes. A couple of things about me: I cover public cloud for Forrester, which is a broad remit, spanning the major cloud providers, the cloud-to-edge providers, and also cloud strategy, architecture, modernization, and the AI native cloud, which we'll be talking quite a bit about today as a focus for Forrester's enterprise clients. I also have a major focus on cloud native and Kubernetes. And I come from a security background; I hold on to my security certifications, which is always helpful, particularly when you're crossing a big technological threshold like the AI native cloud. So, going on from there: the rise of AI cloud. This is, yes, an AI-created image of a cloud, fairly obviously, although these days it would be very difficult to differentiate from the real thing.
This is something we've seen emerging at Forrester for some time, and I'm going to take you through it. Now, yes, there have been AI services in cloud for some time. There have been GPUs available, custom silicon available. There's been high-performance computing with an AI-related component around it. There have been data science platform teams stood up on cloud technologies and multi-cloud container platforms for quite some time now. But when we talk about the AI native cloud, we're talking about something that's physically, qualitatively new. We're talking about a big transformation that you can see across a lot of different infrastructures, and it's played out over time, with some initial breakthroughs just to get to cloud scale, like S3 storage, for example; that's obviously a fundamental one. Then came the rise of cloud native technologies based on Kubernetes, and the NVIDIA GPU and the impact it had being rolled out at scale, quickly becoming the focal point first for rules-based AI and later for generative AI. Predictive AI was taking shape, and low code was taking shape as well. AI-informed operations, or AIOps, was also germinating right around the time of the pandemic in 2020 and 2021, when most enterprises had to stand up a lot of changes and a lot of scale in a hurry, and they couldn't get that done anywhere but the cloud. That provided another big wave of investment in cloud technologies generally, and in databases, translytical and multimodal. And as GPUs became more prevalent, RDMA networking and GPUDirect emerged to enhance GPU capabilities, and custom silicon gathered momentum. So right around the time that ChatGPT and other large language models and foundation models started to emerge, you really had a takeoff point.
And very quickly, it became obvious that the conventional cloud providers as we've known them, with commodity cloud services based on virtualization that was in some ways very similar to the data center, on x86 and Arm technologies with some GPUs on the side, simply couldn't cut it. You're going to have to have qualitatively, physically new infrastructure to do this. You have to pour new concrete, add new power and new cooling, with the GPUs at the center of it all. And as that matured, it created a space for open source and then managed-services versions of things like supervised fine tuning for model training, reinforcement learning, and of course RAG, to accommodate enterprise needs around data quality and so forth. And now, of course, we're talking about agentic as the next phase. It's hard to believe that large language models are almost in the technical-debt category already, but that's where we are. And custom silicon is there too. To take a closer look at this: on the compute front, NVIDIA's impact has really changed the dynamics all around the data center. With storage, object storage has been around for a while, but something like GPUDirect gives it a whole new capability in the AI era. High-bandwidth, low-latency networking is obviously critical for these applications. Data infrastructure was already taking shape around cloud scale; now it's being marshaled in the service of the AI native cloud, which in turn creates the need for greater data management rigor. That's been percolating for some time with data warehouses and data lakes, and now we're talking about data management and data fabric as a precursor not just for generative AI and agentic, but really for the modern enterprise overall.
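To make the RAG pattern Lee mentions concrete, here is a minimal sketch of the idea: retrieve the enterprise documents most relevant to a question, then ground the model's prompt in them. This is an illustrative toy, not any vendor's API: a real system would use an embedding model and a vector database, whereas here a simple keyword-overlap score stands in for semantic retrieval, and all the function and document names are hypothetical.

```python
# Toy sketch of retrieval-augmented generation (RAG): retrieve relevant
# documents, then build a grounded prompt. Keyword overlap stands in for
# the embedding-based retrieval a production system would use.

def score(question: str, doc: str) -> int:
    """Count shared lowercase words between the question and a document."""
    q_words = set(question.lower().split())
    return len(q_words & set(doc.lower().split()))

def retrieve(question: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents with the highest overlap with the question."""
    ranked = sorted(docs, key=lambda d: score(question, d), reverse=True)
    return ranked[:k]

def build_prompt(question: str, docs: list[str]) -> str:
    """Assemble a grounded prompt: retrieved context first, then the question."""
    context = "\n".join(f"- {d}" for d in retrieve(question, docs))
    return f"Answer using only this context:\n{context}\nQuestion: {question}"

docs = [
    "Our object storage tier supports GPUDirect reads.",
    "Cafeteria menu changes every Monday.",
    "RDMA networking links the GPU nodes at low latency.",
]
prompt = build_prompt("How do GPU nodes talk over the network?", docs)
```

The point of the pattern is the prompt assembly step: the model only ever sees the retrieved slice of enterprise data, which is what addresses the data-quality and data-governance needs mentioned above.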
And AI services have really leapt ahead, from bespoke models to foundation models and fine tuning, and the ability for enterprise-class customers to create AI applications themselves is obviously taking shape. Yes, it's being delivered by SaaS, and it's being injected into all kinds of services people have had for some time, but cloud providers, as well as the open source community, are also handing enterprises the capabilities to do this on their own if they're inclined to work outside of managed services. And so you have platform operations being transformed in a key way as well, based on the automation of the cloud, the AI of the cloud, and that's really going to transform operations in the months and years ahead. One more look at this, close up. If you look at it side by side: I mentioned virtualization versus containerization characterizing the old; x86 and Arm versus GPU; new technology around networking; new translytical databases; conversational BI. And if you look at the data management and AI layers here, you start to see that this is the takeoff point for the next phase we're getting into with agentic: data fabric, which is integrated and use-case driven; conversational BI; and foundation models and prompt engineering. All of these are being delivered sometimes through the open source community, where people can shape their own fate if they're so inclined, and also through managed services. Increasingly, serverless agentic is going to be a key part of that landscape as well. And I mentioned operations overall and what Forrester calls TuringBots. AI-assisted code development has gotten a lot of headlines, but the whole software development transformation is going apace, with vibe coding and elsewhere.
And one of the key things I'd like to focus on before we go ahead is how this is taking place based on the presumption of a mature open source AI ecosystem. I won't go into all the details here, but my colleague Charlie and I did a couple of reports last year, in preparation for our reports on the AI native cloud, that went through this systematically. We interviewed a lot of different open source communities and technology vendors about what they were working on and what their key projects were, and we distilled those into discrete domains, which have varying levels of maturity. Some of them are tried and true around the infrastructure, in terms of cloud data and Kubernetes. Some of them are newer, in terms of AI model development, for example, and the changes around MLOps that have taken shape. But this open source foundation, even if you're consuming AI services in a managed fashion from a cloud provider, is just as foundational to the rise of the AI native cloud as the hardware and technology innovations delivered at scale that we talked about a few minutes ago. Moving ahead, if you look at the key differentiation here, what are we talking about? It's not just more and bigger. It's different. You're talking about how intelligent automation is going to transform infrastructure and application management. Instead of having an often disjointed observability or AIOps platform over here and your apps over there, and trying to map them together for system health, performance, troubleshooting, and developing technology roadmaps, these are now going to be much more integrated. You're going to be able to forecast and auto-scale resources.
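The "forecast and auto-scale resources" idea can be sketched in a few lines: predict the next interval's load from recent history, then size capacity to keep per-replica load under a target. This is a hedged illustration only; real platforms (for example the Kubernetes Horizontal Pod Autoscaler) use richer signals, and the capacity numbers here are made up.

```python
# Toy forecast-and-autoscale loop: moving-average load forecast, then a
# replica count sized so per-replica load stays under a capacity target.
import math

def forecast(history: list[float], window: int = 3) -> float:
    """Moving-average forecast of the next interval's request rate."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def target_replicas(history: list[float], per_replica_rps: float = 100.0,
                    min_replicas: int = 1) -> int:
    """Replicas needed so the forecast load fits under per-replica capacity."""
    predicted = forecast(history)
    return max(min_replicas, math.ceil(predicted / per_replica_rps))

rps_history = [220.0, 260.0, 300.0]      # requests/sec in recent intervals
replicas = target_replicas(rps_history)  # forecast 260 rps -> 3 replicas
```

The integration Lee describes is precisely that this loop stops being a bolt-on script and becomes a built-in behavior of the platform itself.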
You're going to have all kinds of routine tasks that still require a fair amount of heavy lifting from different IT teams basically built into the infrastructure, which then paves the way for platform teams to really focus on what is differentiated, drawing on the AI native community and AI native cloud providers. And then there's the intelligent development that comes out of that. Again, as I mentioned earlier, you've got a lot happening in the way of TuringBots, trying to democratize development with low-code and natural language approaches, streamlined development, enhanced productivity, and so forth. And now we're getting into smart applications. We've had AI APIs, predictive analytics, and personalized experiences developing for some time, delivered through SaaS. But this is going to be where a lot of the agentic initiatives start to move in as well. They're going to adapt over time. They're going to enable intuitive and context-aware services. They're going to drive different kinds of engagement and innovation, and companies will use them to seek a competitive edge. So the idea of a smart application gets beyond just the embedded AI processes in a given capability that the foundation models have been able to provide, the better chatbots and so forth. We're talking about something that's more dynamic, more intuitive, and more fused to a particular use case. AI is a technology of specificity. That's where we're arriving with the AI native cloud. Now, I've been speaking of the AI native cloud as one thing, and yes, there are common technological foundations for it, but it's actually available in multiple different ways. The open source AI ecosystem that I talked about a few minutes ago is vibrant. It's moving.
A lot of it is coherent around the Cloud Native Computing Foundation, which brought us Kubernetes and related projects, and which has both vendor sponsors and members as well as enterprise users of those technologies. That is still proceeding; there's a lot of investment and a lot of interest taking shape there. I'm also seeing something I'm calling the AI-centric NeoPaaS. Essentially, that's an effort by some technology providers to step in and provide a PaaS-type experience, where they abstract away the infrastructure below what the AI developers need and deliver it in a more turnkey fashion, so the platform team doesn't have to build it from scratch each time to deliver it up to a developer; it can be made available in some kind of modular fashion. It's kind of a halfway house between open source do-it-yourself and the managed services coming from the public cloud. All the big public clouds have some variety of managed services; I'm sure we're all familiar with them. Then there are the AI data cloud platforms. This is a group of companies that pretty much presume the existence of a hyperscaler that they run on top of as a platform. They take care of all the integrations deep into the infrastructure, and they provide their customers with data and analytics capabilities, AI development capabilities, and increasingly agentic capabilities, which customers then use to organize and orchestrate their next phase of AI native development. And we also, of course, have a category of AI infrastructure cloud platforms, also known as NeoClouds. Those are organizations that have come onto the scene in the last couple of years; some existing cloud organizations or technology providers have reoriented themselves to move into this space. These are organizations going straight to the GPU. They're not looking at commodity cloud services.
It's not a go-to-market for them. They're focusing on simply delivering the key AI services, usually leaning on the open source AI ecosystem to deliver them, because their focus is broad GPU power as opposed to specifics. I want to talk a little bit now about the role of serverless here, because I think it's very important. If you think about the complexity still involved in a lot of the efforts that have been underway, it's considerable. Essentially, we're building capabilities almost as fast as they can come to fruition in open source and be brought onto a cloud platform. There's still a fair amount of complexity associated with that for a lot of different use cases, especially if your use case is not actually conducive to running on a centralized major cloud hyperscaler and you're looking for a more decentralized or distributed approach. Serverless plays a role in a couple of ways here, because it allows serverless agent runtimes that move things ahead much more quickly than they otherwise would. You've got open source communities that can deliver that, like open source FaaS and the open source cloud AI ecosystem. You've got the AI-centric NeoPaaS providers I mentioned; they're focused a lot on serverless as well. Their whole priority is to differentiate themselves in the market by moving in this serverless direction. And, of course, the public cloud providers, cloud-to-edge providers, and AI service providers increasingly offer a lot of these capabilities as serverless. The AI data cloud platforms have obviously moved away from infrastructure management as well, with a serverless focus. And the AI infrastructure cloud platforms are focused on AI enablement, with serverless as an element of that.
They're not interested in giving you broad access to go shape your own commodity cloud service. They're focusing on the key GPU capabilities they want to provide, and serverless is core to that. In terms of the way this is playing out more specifically: inferencing needs low latency and concurrency, and the cloud-to-edge providers have specialized inferencing services. They are in a unique place to be able to do that. That is their calling card in this fast-changing market, and it has traction. Then there's the automation of focused data ingestion and preprocessing for edge scenarios. If you think it through, if AI is a technology of specificity, then inferencing at the edge, close to the customer, resolving some of the latency challenges, is a critical task. So automating some of that on a serverless platform is very important. They also offer low-complexity AI development. You can get going quickly. You have an intelligent edge, and you don't have to have the big AI/ML data pipelines, which have their place: they work at scale, they're very powerful, but they don't fit everything, and a low-complexity AI app in a serverless context can be very important. I won't go through all the detail, but you can see that the way we at Forrester think about serverless, it's far beyond function-as-a-service. It's a whole way of developing, building, and delivering applications over time, and it's increasingly capable, drawing upon a lot of supporting tech as well. And our numbers at Forrester have interestingly shown that the ramp-up of AI is taking place simultaneously with a ramp-up of serverless. We think of these things as mutually reinforcing: you want to move away from technology complexity when you can avoid it.
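The "low-complexity AI app in a serverless context" can be pictured as a single stateless handler function plus a small response cache so hot prompts are served at the edge without re-invoking the model. This is a hedged sketch under assumptions: the event shape, the `handler` entry point, and the stubbed model call are all illustrative and are not any vendor's actual serverless API.

```python
# Sketch of a serverless-style inference handler: a stateless entry point
# that validates input, calls a (stubbed) model, and caches responses so
# repeated prompts skip the model call entirely.
from functools import lru_cache

@lru_cache(maxsize=1024)
def run_model(prompt: str) -> str:
    """Stand-in for an inference call; cached so hot prompts are cheap."""
    return f"echo:{prompt}"  # a real handler would call a hosted model here

def handler(event: dict) -> dict:
    """Serverless-style entry point: validate, infer, return a JSON-able dict."""
    prompt = event.get("prompt", "").strip()
    if not prompt:
        return {"status": 400, "error": "missing prompt"}
    return {"status": 200, "answer": run_model(prompt)}

resp = handler({"prompt": "is item 42 on sale?"})
```

The appeal for the edge scenarios Lee describes is that nothing here manages infrastructure: the platform scales handler instances up and down, and the cache keeps repeat lookups inside the latency budget.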
There's plenty of complexity to deal with in AI even if you're consuming the most abstracted, serverless form of it. So the argument goes, on a growing number of platform teams: let's focus on that and differentiate where we need to. If we can draw upon serverless, functions as a service, and prebuilt or edge-oriented platforms, that's better for us, because it allows us to focus on making these AI capabilities something that's differentiating for us as an organization. Those are the numbers I described. Now, the common challenges for every AI cloud strategy: you're going to have networking bandwidth constraints. You're going to have sovereignty concerns, complicated by regulation and localization requirements. You're going to have security, which can be challenging due to model provenance and supply chain risk, as well as unauthorized access and privilege escalation. Those have to be tackled along the way by any organization, so security has to be front and center. My last slide is here. This is based on a report that came out a couple of years ago on the future of cloud, and in it, I think you can see the framework around which we think of the AI native cloud. We have Kubernetes infrastructure ops to the left on that axis. Essentially, what we're seeing is a common open source Kubernetes substrate enabling vertical stacks, which are increasingly shaped by AI. So you have a series of domains, many if not most of which in this chart are themselves the subject of AI-based innovation, and there are a number of them: autonomous worker operation, automation fabric, infrastructure as code supercharged by AI, TuringBots of course, and AI-driven DevOps, with another new order of automation driven by AI. That's all taking shape.
And this is something that we think is not an optional extra. We think it's something that every organization of size has to step into and evaluate, not for everything; commodity cloud still has its place. But if you fail to take note of this rapid technological change, there's a substantial risk that you'll miss opportunities your organization would otherwise benefit from. And that's really what we try to focus on at Forrester: where people can go. That is it for me. I want to thank you again, Martin and the CloudFlare team, for having me here. Well, thank you so much, Lee. That was super interesting, and I'm sure everyone who's listening is thinking really hard about how to make the right cloud investment to power their own AI goals. So this is really helpful for understanding how the market is changing and the different ways some of these big providers are trying to deliver on the emerging needs and the future promise. So, thank you so much. Like I said, all of these changes you're talking about are clearly driven by a changing selection of user needs around development, connectivity, security, et cetera. So I have a few questions focusing a little bit more on some of those core needs, because I think that could also be helpful for people trying to think about, ultimately: where should I invest? What sort of cloud environment, what sort of foundation should I be building for my future AI goals? The first question that came to mind for me: when people think about cloud and building on the cloud, the focus is often around that build question. What models do you have access to? Are you thinking about things in a serverless way?
But once you build the thing, you still have to make sure it scales, and that's certainly true with AI as well. So what are some common challenges you hear about organizations running into when it comes to scaling their new AI services, rather than just building them? Yeah, I think we're just starting to see that now, and I think that's why there's a greater interest in some of these serverless capabilities; we expect them, agentic in particular, to be increasingly available as serverless. That's going to be a way a lot of organizations shape their own approach. The challenge I see is that the cloud providers have a big last-mile problem. They don't necessarily go all the way to the edge, and they don't necessarily go deep into what all the customers need in detail. So customers have stepped into that in various ways over time, with different kinds of edge and IoT solutions, CDNs, and a panoply of other solutions. Now, with agentic, the question of scale becomes even more pronounced: how are we going to do this in a way that makes sense for us? One way I see this happening is that people look to partners, such as global systems integrators, to help them translate their business logic into agentic. But that phase may be giving way to one in which these services and capabilities are becoming more readily available, and a platform team that can put behind it some of the technical details it used to have to wrestle with can focus much more on agentic, and can move its remit from a centralized cloud provider to a more holistic, dynamic, edge-inclusive view. So I think that's where the scale comes in. Right now, I think we're almost at the architectural and design stage of that, and a kind of search for solutions.
Who's out there? Who can step in and do this for us? Is it going to be one vendor or two? Is it an ecosystem that we have to help knit together? I think those are some of the questions that face people right now. Well, thank you. That makes sense, and it definitely resonates with what we're hearing too: while there's obviously a lot of pressure for organizations to have a robust AI strategy yesterday, ideally, if not right now, there's still a lot they're figuring out in terms of design, architecture, and even some use cases. And speaking of use cases: despite the fact that the label AI gets thrown around very broadly, obviously not all AI use cases are created equal, and it's really important to think about both the specific business need and also the specific context of who's going to be using it, why, and how often. You've talked about edge computing and serverless. What are some specific AI use cases or contexts you're hearing about where you feel edge compute becomes really important to keep in mind? Yeah, I think some of them are really about customer-facing scenarios. One of my colleagues made the point that if you're in a retail store and you want to flash to someone that an item they want is on sale as they walk by, there's no time to ship that off to a cloud and back. The customer has already moved on. So the question is, where is that computation going to be located? And we've had IoT for a long time now, a lot of it running on older, difficult-to-upgrade technology.
So there's going to be a technology refresh and some investment there, but I think there are also capabilities being built into the environment by the cloud providers and by the people who already come from the CDN and edge space. People are now taking a look at this and saying: okay, if we need a more decentralized approach to AI and we're sitting at the edge, how can that be delivered? The telcos have something to say about this too. So it's a different ecosystem than some enterprise IT organizations might be used to thinking about, but they are thinking about it now. We at Forrester have always made the point that edge is a distinct category. It is not the same as cloud; it is not a variation of cloud. It is distinct. Having said that, there's greater and greater interoperation and interpenetration, but there are distinctions, and you need to have that distinction in mind in order to make sound architectural decisions well before you get to a vendor selection process or an implementation. Yeah, that makes sense. And the distinction you're drawing between customer-facing AI applications or use cases versus internal ones is a good one to keep in mind. You talked about these new types of environments emerging, and something that comes to mind there is security. You obviously come from a security background, so I'm sure you've thought about this a lot, and you mentioned security briefly, but what are some of the most pressing security implications you see with these new types of AI environments? Can you elaborate on that a little in terms of what organizations should be thinking about as they start building their own AI services?
I think some of them are general, whether it's at the edge, in the cloud, or anywhere else. There's the data privacy and leakage issue. And with edge AI in a remote location, you may not have the same physical security that you would have around a data center. So you have to have a partner who can demonstrate they have that capability, and I think that's important for people who say, well, let's just put this in a branch office here, here, and here. Maybe that can work, maybe not. Some of the issues are the more general ones that everybody's been talking about for some time: model integrity and tampering, prompt injection and jailbreaking. These are generic GenAI and agentic issues. And then, of course, you have decision risk if you're looking at autonomous agents in the network. So those all belong in the typical threat analysis that anybody would already undertake, and they're adding to it. My colleagues at Forrester have come out with a new AI security framework, which we call AEGIS, to tackle some of these questions. I think it's quite important reading for anyone who's concerned about these types of issues, and we all should be. Yeah, well, thank you for the recommendation; I think a lot of people here will definitely be looking forward to seeing that. You talked about data leakage, and in the realm of AI, that inevitably makes me think of agents. So maybe what I'm asking is: what do you think are some of the most pressing security considerations for organizations building agents in particular, and are you seeing some of these cloud providers respond to those needs in some way? Yeah.
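One concrete layer of defense against the prompt-injection risk mentioned above is to screen untrusted input for phrases that try to override system instructions before it ever reaches the model. This is deliberately a toy: real defenses are layered (the OWASP LLM Top 10 is a good map), and the pattern list below is illustrative, not exhaustive.

```python
# Toy heuristic screen for prompt-injection attempts in untrusted input.
# A hit should route the request to review, not silently proceed; this is
# one layer of a defense-in-depth strategy, never the whole strategy.
import re

INJECTION_PATTERNS = [
    r"ignore (all |any )?(previous|prior) instructions",
    r"disregard your system prompt",
    r"you are now (in )?developer mode",
]

def looks_like_injection(user_input: str) -> bool:
    """Return True if the input matches a known instruction-override phrase."""
    lowered = user_input.lower()
    return any(re.search(p, lowered) for p in INJECTION_PATTERNS)

safe = looks_like_injection("What is our refund policy?")             # False
risky = looks_like_injection("Ignore previous instructions and ...")  # True
```

Pattern matching like this catches only the crudest attacks; the stronger mitigations are architectural, such as separating trusted instructions from untrusted data and limiting what the model's output is allowed to trigger.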
I think it has to do with what kind of autonomy is granted and intended, and what is actually there. If a workflow is well understood, and the range of actions is highly predictable, and you can map an existing workflow exactly to what that agent does, you have a reasonable chance that, early on, it's going to move in the right direction. You then have to test that and see how it's going. Because the more variance you build in, the more options you try to bake in, the more challenging it's going to become. So I think where people have been most successful is when they've already got well-defined workflows and plans; maybe they want to take something that was previously in enterprise software and bring it into an agentic framework. And again, I think the other considerations are more generic to AI, as we've already discussed, though there may be more of a networking and API-abuse angle around agentic than people would have anticipated just a few months ago, when they were trying to stand up some of the foundation model use cases. Yeah, thank you. Maybe another way of thinking about the API and interconnectivity question is multi-cloud, because some organizations exploring AI might not be coming from a single cloud provider. So how does a multi-cloud approach affect organizations thinking about making these new types of investments? Are there any new types of complexity or challenge that you see AI in particular introducing? Yeah, I think that's where some of the interesting debate is about which protocol is going to win out.
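The constraint described above, granting an agent autonomy only over a well-understood workflow with a predictable range of actions, can be enforced mechanically with an allow-list: every action the agent proposes is validated before it runs. A minimal sketch follows; the workflow and action names are hypothetical, and a real system would also enforce this at the credential and API layer, not just in application code.

```python
# Sketch of allow-listed agent actions: an agent may propose anything,
# but only actions on the workflow's explicit allow-list ever execute,
# and every decision is written to an audit log.
ALLOWED_ACTIONS = {
    "refund_request": {"lookup_order", "check_eligibility", "issue_refund"},
}

def execute(workflow: str, proposed_action: str, audit_log: list[str]) -> bool:
    """Run an agent-proposed action only if the workflow allow-list permits it."""
    if proposed_action not in ALLOWED_ACTIONS.get(workflow, set()):
        audit_log.append(f"BLOCKED {workflow}:{proposed_action}")
        return False
    audit_log.append(f"RAN {workflow}:{proposed_action}")
    return True

log: list[str] = []
ok = execute("refund_request", "issue_refund", log)          # permitted
blocked = execute("refund_request", "delete_database", log)  # not permitted
```

The narrower the allow-list, the closer the agent stays to the existing workflow it was mapped from, which is exactly the condition under which early agentic projects have been most successful.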
Is it going to be Model Context Protocol, or agent-to-agent, or something else? There was an interesting debate about that at the Cloud Native Computing Foundation meeting in London earlier in the year, when people on the panel said: look, we all know that in the open source world, the thing that's best and cleanest on the whiteboard doesn't always make it; the first mover can often make it. I don't have a formal position, and I don't think Forrester has a formal position, on which should be the winner. But if those can take hold in some kind of standardized way, then the question of multi-cloud agentic operations, although challenging, is certainly solvable. So I think, again, we'll have a choice. Just as there's a rich open source cloud AI ecosystem that has engendered this whole effort, on one hand it's funneled into more proprietary approaches and managed services, which definitely have a market that people like, and in other ways it flows toward an open source ecosystem where something much closer to open source is brought together. I think a lot of the NeoClouds are moving in that direction, and I think agentic has some of that same potential. If it can be delivered in ways that are easily consumable for well-understood use cases, by the people who used to provide it as part of an enterprise app, maybe now with more agentic and customizable capabilities, maybe that can take off in some fashion. But organizations that really put a premium on multi-cloud, and on some degree of autonomy from any particular cloud provider or technology vendor generally, are going to be looking for ways to express their business logic and their priorities with agentic technologies in ways that don't limit them to a particular cloud or edge provider.
Rather, they would see agentic as a way of orchestrating some of the connectivity they already have. And multi-cloud integration is difficult. You can find third-party networking to integrate clouds, you can manage Kubernetes clusters in one cloud from another, or you can make an app active-active, but those are blunt instruments. Agentic may be the way multi-cloud really starts to come together in a more dynamic, as-needed way, as opposed to one big strategic decision that we're going to be multi-cloud and connect this way, this way, and this way up the stack. If a lighter-weight, use-case-driven way of arriving at multi-cloud emerges, and the surrounding problems around data and egress and so on can be managed, I could see that taking off. Well, I think that's a really interesting point, and a common thread I'm hearing from you is the importance of keeping the ultimate business logic as the guiding force, rather than making some arcane or abstract architectural decision, which I think is so important. I know we hear from a lot of customers who are just now getting started with their multi-cloud journey, for various reasons, and I think that's really good advice for everyone thinking about it to keep in mind. Final question for me. We could probably spend another entire half hour on this, but I want to at least gesture toward the people side, the organizational side, of some of these shifts you're seeing.
As organizations start adopting some of these new AI-native cloud models you're talking about, what are some new networking or security skills you see them needing to acquire along the way, to really take advantage of it? Yeah. In terms of security, a lot of the familiar issues are still there; they're just going to be handled differently. Software provenance has been an issue for some time, of course; bringing that down to the model level becomes more specialist knowledge. And there's always a one-to-many problem in security: there are always far more developers than there are people tasked with application security. The same goes for networking. What I do see happening, and I've seen this with cloud generally, is that there's a lot of self-learning material available from various technology vendors, and obviously better AI assistants can help as well. People are really upskilling and researching these areas in systematic ways. There are new security and audit certifications taking shape around these questions. They're new, but they're getting established; there's demand for them, and people are moving in those directions. So I think that's very important. The threat models are, I would say, fairly well understood. We have the OWASP Top 10 for LLM Applications, and we've got some others covering important security capabilities. That's still in a phase of assimilation, digestion, and articulation, and the Forrester effort is, I would say, one of the important contributions there, certainly not the only one; our community is very engaged in it. But it's far from a finished product, and it's something everybody has to take seriously. And I think that's where the question of a trusted technology vendor partner comes in.
Yes, there are some things you're going to tackle on your own, but if an organization has a track record around security and quality that you're familiar with, or that the market has validated, then you can be more confident in looking to them for some of the assistance you need. That's where you can find reference architectures, models, peer references, and so forth, which help so that you don't have to do everything yourself; you can get some context and set priorities based on that input. Yeah, I think that's such a good point. You've made this point a couple of times: when it comes to security and AI, or security and agents, it's not that there's a whole new conceptual framework of risk to keep in mind. It's the same questions of what is connected to what, who has access to what, and how you're monitoring data and information as it flows between those things. Like you said, those conceptual frameworks have existed for a long time, and technology has existed to deal with those things for a long time. So it comes down to that both simple and complicated question of whether you can do those things with low error rates, efficiently, and at scale. I just think that's such a good thing to keep in mind, and it has informed a lot of how we've approached this at Cloudflare as well. Well, Lee, that's the end of my questions, but thank you so much for all of these insights. Really appreciate you sharing your thoughts with us and submitting to my grilling a little bit too. So, really, thank you so much. Thank you again for having me. My pleasure. Yeah. So it falls to me to wrap us up really quickly by sharing the Cloudflare perspective on some of this.
And really, I think what we feel lines up super well with what it sounds like Lee is hearing as well: AI is this new thing, and organizations often have a real need to build AI applications that are fast, scalable, and cost-effective across a really global user base. It's that need that's driving a lot of the architectural changes Lee described when it comes to the AI-native cloud. Those changes are clearly happening, and there's clearly a lot of effort going in to make them happen as quickly as possible. But from the Cloudflare perspective, something we feel is that we provide a way of jumping ahead on some of that transformation. Our platform, which we call a connectivity cloud, really has all the tools you need to build, connect, and protect AI applications and agents, faster, with better scale, and with more efficient security. Essentially, we see ourselves as delivering a lot of the promise of the AI-native cloud, based on architectural decisions we've been making for a long time. I'll talk about some of those decisions in a second. But really quickly, for context, here are some of the services we provide to help you build AI apps and agents. On the storage side, you can store training data, vectors, and user-generated content. We also let you run over 50 models across our global network; I'll talk a lot more about that in a second. And we support retrieval-augmented generation; we have a kind of prebuilt RAG pipeline.
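The retrieval step of a RAG pipeline can be sketched in a few lines. This toy version uses hand-written three-dimensional "embeddings" and cosine similarity; a real pipeline, whether prebuilt or assembled from a vector store and a hosted embedding model, works the same way conceptually: embed the query, rank stored chunks by similarity, and prepend the top matches to the prompt. The documents and vectors below are invented for illustration.

```typescript
// Minimal retrieval-augmented-generation retrieval step, with toy embeddings.
type Doc = { text: string; embedding: number[] };

// Cosine similarity between two equal-length vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Return the k documents most similar to the query embedding.
function retrieve(query: number[], docs: Doc[], k: number): Doc[] {
  return [...docs]
    .sort((x, y) => cosine(query, y.embedding) - cosine(query, x.embedding))
    .slice(0, k);
}

const docs: Doc[] = [
  { text: "Serverless functions run close to users.", embedding: [0.9, 0.1, 0.0] },
  { text: "Object storage holds training data.",      embedding: [0.1, 0.9, 0.0] },
  { text: "Vector indexes hold embeddings.",          embedding: [0.0, 0.2, 0.9] },
];

const top = retrieve([0.85, 0.15, 0.0], docs, 1);
const prompt =
  `Context:\n${top.map((d) => d.text).join("\n")}\n\n` +
  `Answer the question using only the context above.`;
console.log(prompt);
```

The generation step then sends `prompt` to whatever model the platform runs; the retrieval logic itself is model-agnostic.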
And then when it comes to agents, again, we provide a lot of the storage, the compute, the interconnection Lee was talking about, and authentication as well on the security side. So from a nuts-and-bolts perspective, that's a lot of what we're providing. But here's a way of looking at that compared to some of the more familiar public cloud models you may have heard about. I'm not going to go down the list of every single one of these, but for folks who may want to refer to this later, it can be helpful to think about it in these more familiar categories, so I'd encourage everyone to take a look when they have a chance. Lee talked a lot about serverless and this idea of edge compute and distributed databases, and this is really where Cloudflare feels we let organizations shine and jump ahead of a lot of that growth from an AI perspective. Again, this comes down to the integration of our connectivity cloud platform, and really the integration and uniformity across the network that supports it. That network operates in more than 330 cities around the world, and over 200 of those cities can run AI inference, a number that's growing rapidly. All of the locations in that network can deliver a massive range of developer, security, and connectivity services, including the more AI-focused ones I just mentioned. With a few exceptions, every single one of those network locations can run every cloud service: serverless compute, traffic inspection and acceleration, caching, policy enforcement, and more.
One of those exceptions is AI inference, but as I said, we can already run that from over 200 cities in the network. So right away, that makes us a great platform for AI applications that are super performance-sensitive and super customer-facing, like Lee was talking about. And we've been building that entire platform in a very serverless way from its inception; it's a foundational way of how we think about building and delivering services, rather than just an isolated way of running code. But it's not only that network scale that we think makes Cloudflare a really good place to build AI. It's also some of the more granular architectural decisions we've made. Really quickly, without getting too into the terminology, I'll define a Cloudflare Worker: it's a lightweight, isolated execution environment for your code, essentially a serverless function. If you're familiar with AWS Lambda or Azure Functions, it's the same idea. But while Lambda and Azure Functions typically run on containers or virtual machines, Cloudflare Workers uses a V8 isolate architecture that's lightweight, secure, and really fast, in the same way some platforms run browser isolation, for example. This means we have pretty much zero, effectively instant, cold starts, spinning up in milliseconds; automatic scaling; and cost efficiency, which we'll talk more about in a second. Workers can scale up automatically to handle really massive traffic spikes thanks to the size of our network and, really importantly, scale back down when they're not in use.
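The programming model being described is a module that exports a `fetch` handler, invoked per-request inside a V8 isolate with no process to boot, which is where the near-zero cold starts come from. The sketch below follows that shape; the `/infer` route and its placeholder response are hypothetical, and it's runnable outside any edge runtime because `Request` and `Response` are standard web APIs available in Node 18+.

```typescript
// Sketch of the serverless fetch-handler model: one exported object, one
// handler, no server to start. The /infer route is invented for illustration;
// a real Worker would call a model binding there instead of returning a stub.
const worker = {
  async fetch(request: Request): Promise<Response> {
    const url = new URL(request.url);
    if (url.pathname === "/infer") {
      return new Response(JSON.stringify({ model: "placeholder", ok: true }), {
        headers: { "content-type": "application/json" },
      });
    }
    return new Response("hello from the edge");
  },
};

// Local usage: invoke the handler directly with a standard Request.
const res = await worker.fetch(new Request("https://example.com/infer"));
console.log(res.status);
```

Because the handler is just a function over standard `Request`/`Response` objects, the platform can instantiate it at whichever location a request arrives, which is what makes the "deploy everywhere" model work.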
And because the network is globally distributed, your Workers can automatically deploy and execute at the ideal locations everywhere in the world, and the intelligent routing features we have on the back end really add to that latency minimization and performance boost for your end users. Again, that's really important for those customer-facing AI applications. All of this speaks to some of the automation elements Lee was talking about earlier. I do want to dig in quickly on that scaling-up-and-down question, because it's super important for AI, and for agents in particular, since AI compute is really expensive at scale compared to other types of applications. Unfortunately, a lot of traditional hyperscaler pricing models complicate that. On a micro level, they may charge for wall-clock time, which can really make your inference costs spike. And on a macro level, hyperscalers might charge for large blocks of compute time, whether or not all of it is actually used. For agentic apps, the answer is often no, because agents might spend a lot of time waiting on a response, for example. Cloudflare provides an answer for both of those: we don't make organizations pay for unused GPUs, or for the time agents spend waiting to hear back from, say, an MCP server. Because of our ability to auto-scale up and down, that can really help you experiment and scale AI with a lot more confidence and flexibility. A few more things I'll cover. We alluded, at least, to this issue of velocity of development, of AI and in general.
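The wall-clock versus CPU-time distinction is easy to quantify. The rates below are made up, and identical per millisecond, precisely to isolate the structural difference: when billing tracks wall-clock duration, time spent waiting on an upstream model or tool server is billed; when it tracks CPU time, it isn't.

```typescript
// Toy cost comparison with hypothetical rates: wall-clock billing charges for
// the whole request duration, CPU-time billing only for compute actually used.
const RATE = 0.0000002; // $ per ms (invented; same unit rate for both models)

function wallClockCost(cpuMs: number, waitMs: number): number {
  return (cpuMs + waitMs) * RATE; // waiting on I/O is billed
}

function cpuTimeCost(cpuMs: number): number {
  return cpuMs * RATE; // waiting on I/O is free
}

// An agent step: 5 ms of actual compute, 2000 ms waiting on a tool call.
const wall = wallClockCost(5, 2000);
const cpu = cpuTimeCost(5);
const ratio = wall / cpu;
console.log(ratio.toFixed(0)); // prints "401"
```

For a workload that is mostly waiting, which describes many agent loops, the billing model dominates the bill far more than the unit rate does.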
So we have a lot of developer resources that can help organizations explore these things faster, including a software development kit for agents that makes it easier to build agents at scale. We also give people the ability to run MCP servers globally, across that entire network, in a very serverless way, which both simplifies a lot of those "where should this thing live" questions and makes performance an afterthought rather than a major consideration. Another interesting thing, which we didn't really talk about much but which I know concerns a lot of global organizations, or organizations thinking about expanding, is privacy and data locality. Because we run in such a serverless, anything-can-run-anywhere way, Cloudflare's connectivity cloud gives the people using it some really interesting abilities to control where those serverless functions operate. If you're an organization subject to the GDPR, that becomes super important. Our Data Localization Suite lets you do that with more than just serverless compute: with traffic inspection, policy enforcement, caching, and the whole range of things under the hood of our platform. We know there hasn't been too much public conversation yet around privacy and AI, or data locality and AI, but we're certainly expecting it to come to the fore as AI enters the public consciousness, and the regulatory consciousness, even more. Fortunately, that's something we can already help organizations with.
And then on the security side, again, we could spend an entire hour on this, but all of the same security services we provide for your web applications, and for your users who are using AI or corporate applications, apply to AI as well: protecting AI apps from external attacks, managing how they connect to different AI models, managing their permissions, and so on. All of that is built into our connectivity cloud and, again, runs everywhere on earth, which makes these questions of performance, deployment, and data locality much, much easier. So just to summarize really quickly, I think a lot of the people listening are probably curious: how should I be investing? What are the right cloud choices for different AI use cases? The differentiators I've been talking about, the network scale and our unique architecture, the cost-efficient, easy scaling-up-and-down model, the built-in security and privacy, and the developer tools and services, we feel make us a really good fit for performance-sensitive AI applications and for AI apps that might be growing really quickly. It's really useful for multi-cloud to have a single management layer that connects to your workloads wherever they are. And those same things make us really useful for agents, because performance, interconnectivity, and management of permissions become really important. And for organizations trying to move really quickly, the auto-scaling, the developer services, and the built-in performance give you a few less things to worry about as you launch your AI apps globally. The final thing I'll say is that we have a lot more to say on this.
Probably not surprising. I'd encourage people to go read some of our recent articles on theNet, our digital magazine, around scaling agents and scaling AI. We also have some ebooks available, which I believe you'll receive in a follow-up, on scaling AI and securing AI, speaking to a lot of these same themes. So look for those, and check them out on cloudflare.com if you're curious. Again, I want to thank Lee so much for sharing all of these insights. Really, really interesting; I know I learned a lot, and I hope everyone else did as well. So, Lee, again, thank you. Thank you. And everyone else, thanks for spending your time with us, and we look forward to talking to you all again soon.