Video: A farewell to legacy firewall helpers: Getting the protection your network deserves | Duration: 2288s | Summary: A farewell to legacy firewall helpers: Getting the protection your network deserves | Chapters: Cloudflare Digital Modernization (28.71s), Farewell Legacy Firewalls (81.15s), Conclusion and Q&A (2124.0298s)
Transcript for "A farewell to legacy firewall helpers: Getting the protection your network deserves":
You need to accelerate digital modernization to stay competitive, but complex tech stacks limit your modernization initiatives, forcing security and performance trade-offs and driving up costs. Use Cloudflare to consolidate, simplify, and modernize your applications, network, and cybersecurity. Cloudflare modernizes infrastructure, replaces inefficient point solutions, and connects IT domains with a unified interface so you can streamline operations, shorten incident response times, and accelerate time to market, all while driving down TCO. Visit us today to learn how Cloudflare can help you achieve the full value of your digital efforts. Alright, let's go ahead and get started. Hi, my name is Brian Tokayoshi. I'm director of network services here at Cloudflare, and today's topic is "A farewell to legacy firewall helpers: Getting the protection your network deserves." As a refresher for those of you who may be new to this series, this network modernization journey is about how organizations can take the traditional enterprise network design and move towards a more agile, cloud-delivered networking service. If you've joined us for previous sessions in this series, we talked about network simplification, about making the network simpler overall by taking out a lot of the complexity. We talked a lot about the outbound direction, which is covered by SASE, and about how organizations connect users to applications. In the future we'll talk about East-West traffic and how you connect cloud applications. But today's topic is specifically about inbound traffic: how do you protect your public-facing network infrastructure from all the things that are possibly out there on the Internet?
Looking at that problem space philosophically, and talking about the design and technology behind how we approach it, is the goal of today's session. Now, when we talk about public-facing infrastructure, there are a lot of different types that still exist today. Some of it's in the DMZ, some of it's in the public cloud, but in essence they have different functions. You have gateways in your DMZ to reach applications in the data center, so many types of servers exist there. These are products manufactured by a particular vendor and deployed in your DMZ so that you can gain access to your data center. But as you know, these carry their own risks: many of them have web interfaces, for example, and because all of these servers are public-facing, an exploit can be sent as soon as a vulnerability is found. There are also servers in the DMZ that provide IT services, and these include things like your on-prem directory, mail servers, DNS services, and so forth. You have the applications themselves in the DMZ, where you're actually hosting applications. And underlying all of this is the way organizations have been taking many of these services and moving them to the public cloud. Many of you are probably somewhere in between on that journey. Many organizations started completely in their own data center; some digital-native organizations are completely in the public cloud; but most organizations are somewhere in between, with both on-prem applications and data centers as well as public cloud applications.
Now, with that in mind, the spectrum of mitigations for public-facing threats usually starts with a firewall, and then a series of firewall helpers that provide functionality on top of the firewall. What I mean by that is that every organization has firewalls at the perimeter, because the firewall delineates between the untrusted network and your trusted network, and that's a perfectly valid reason for having one. Firewalls are also called upon to filter the traffic exchanged between those two areas, the untrusted network and the trusted network. Some of that is just blocking ports, which a firewall is a perfectly logical appliance to do. But other capabilities are needed when you think about what goes beyond network scanning and what is allowed inside your organization. There are limited resources in a firewall, so how much of its capability can you actually use if it's being consumed taking out garbage that never should have been inspected in the first place? And there are perfectly logical ways this has been abused in the past: threat actors send things such as DDoS attacks, or traffic to scan the network, hitting different layers that the firewall can't address. If you're talking about pure volume, about overwhelming the traffic that the firewall can process, the firewall can only process what the appliance itself is capable of delivering.
It doesn't have any flexible resources to call upon for additional protection and scale when needed. So organizations supplement the firewall with ISP filtering or scrubbing centers, which have their own problems that I'll talk about in a minute. There are some DDoS protections in a firewall, such as protocol inspection at L3 and L4. Every firewall has, for example, SYN flood protection. But again, it's only capable of dropping the traffic that it can process, and once its resources are consumed, it can't do anything more. So, once again, organizations supplement the firewall with a helper: ISP filtering and scrubbing centers. There are other types of attacks at the L7 layer, which can include abuse of the application itself, typically handled by a WAF. There are L7 DDoS attacks, typically handled by a DDoS appliance or, in some cases, by some of the capabilities in the scrubbing center. And then there are abuses of the APIs themselves, which require yet another stack of application security: API security. So when you think about it, you have your firewall plus all these additional technology choices added to supplement capabilities on top of it, and all of these things together add complexity to the environment. When you look at how it's all stacked together, your public-facing infrastructure has attackers sending traffic through the first firewall helper, which is a scrubbing center. And you have to use something that can protect against L7 abuses, so a WAF or WAAP, whichever your preferred acronym might be. The ISP is the choke point for all of this traffic, so it can provide some types of filtering on its own.
But all of this is hitting your perimeter firewall, and all these things layered on top of each other are not necessarily a good thing. Why? Because things like scrubbing centers are not optimized to address the dynamics of what people have today. Most scrubbing centers have a limited number of sites deployed worldwide, so depending on where your user and your data center are located, scrubbing centers add latency, because traffic has to trombone through one of the scrubbing sites to be processed. And since those sites only do DDoS scrubbing, any other security capability requires another proxy chain, another hop to another center, adding yet more latency. In fact, a lot of organizations use scrubbing centers as a last resort: only when they're actually under attack do they switch traffic to the scrubbing center, because they don't want to add latency for valid traffic. But then you need instrumentation to detect when an attack is happening so you can make that switch. So it's an inelegant solution for how people have been using scrubbing centers. I'll speak about WAF and WAAP as a separate topic, because those are inspections at L7. The problem with ISP filtering, though, is that all that traffic from around the world still converges on the singular point where it hits your ISP, which again has finite capacity. When the ISP's capacity is exhausted, your valid traffic doesn't pass through. Then, with this traffic delivered to your primary firewall, there's also the question of how organizations run their firewalls: most organizations don't like to run their firewalls at 100%, because it means there's no more headroom.
But anything less than 100% means you're using a less-than-optimal configuration. Many organizations run at less than 100% because, of course, you're buying firewalls for the period over which the appliance is depreciated. You need capacity for, let's say, the five-year lifespan of a firewall; you need capacity for the fifth year if you hope to make your investment last that long, but you're getting suboptimal use of resources in years one and two if you don't have the traffic to justify it. Again, firewalls are inelastic, so the processing capability is limited to the capacity that's in the box itself; nothing can scale beyond what the box can actually do. So as organizations think about doing this differently, the concept of using Cloudflare's connectivity cloud as the front door to public-facing infrastructure means that, by having all inbound traffic hit a cloud-delivered network, every Cloudflare data center around the world can act as a front door for the traffic that's passed to your public-facing infrastructure. In effect, by having traffic pass through Cloudflare, making every data center identical to one another, and anycasting all of those data centers, the Cloudflare network absorbs the attack, because each attacker sees the front door in the location nearest to them.
What I mean by that is that instead of everything going through the funnel of the ISP choke point and then hitting your data center, every attacker, every member of the botnet, sees a local Cloudflare data center as the front door, because the address is anycast with BGP, so their traffic routes to the Cloudflare data center closest to them. When organizations are protected this way, the volume of the DDoS attacks, the scans, the malicious traffic all hit Cloudflare first, where they're filtered before any traffic is passed to the organization's public-facing infrastructure. And that makes your own firewall, your own infrastructure, run more efficiently, because you're no longer processing traffic it never needed to process in the first place. It's garbage; it's traffic that is supposed to be dropped. If the firewall inspected it, it would drop it, but inspecting traffic with inelastic compute means a firewall performing that job has its resources consumed. That's why Cloudflare's network makes this kind of work more efficient: it provides these capabilities through an infrastructure far larger than anything any single organization could build for itself. In fact, Cloudflare's network spans 330 cities around the world, across 120 countries, including Mainland China. And as a side note, we've added GPUs to 150 of these cities already, and we're continuing to add them at a very aggressive pace.
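As a toy illustration of the anycast behavior just described, the sketch below routes each traffic source to its nearest point of presence, so attack volume splits across many front doors instead of funneling into one choke point. The PoP names and "distances" are invented for illustration; real anycast selection is done by BGP path selection, not a distance function.

```python
# Illustrative sketch of anycast attack absorption: every source routes to
# its nearest PoP, so no single site receives the whole attack volume.
# PoP names and positions on a 1-D "world" are invented for illustration.

POPS = {"tokyo": 0, "frankfurt": 100, "dallas": 200}  # anycast sites

def nearest_pop(source_position: int) -> str:
    """With anycast, each source is steered to the closest announcing site."""
    return min(POPS, key=lambda pop: abs(POPS[pop] - source_position))

def absorb(attack_sources: list[int]) -> dict[str, int]:
    """Count how many attacking sources land on each PoP."""
    load: dict[str, int] = {pop: 0 for pop in POPS}
    for position in attack_sources:
        load[nearest_pop(position)] += 1
    return load

# A botnet spread around the "world": its traffic splits across all three
# sites instead of hitting a single origin choke point.
botnet = [10, 40, 90, 110, 160, 190, 210]
print(absorb(botnet))  # → {'tokyo': 2, 'frankfurt': 2, 'dallas': 3}
```

The point of the sketch is the distribution: no single site sees all seven sources, which is the property that lets an anycast network absorb volumetric attacks regionally.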
These GPUs provide the support for AI capabilities that are both delivered to customers as part of their ability to build applications on Cloudflare, and that support our desire to add more AI capabilities within our protections themselves: AI-powered protections driven by these data centers. We are fifty milliseconds from 95% of the world's Internet-connected population. We are verifiably one of the largest networks in terms of the extent to which we are connected to other networks; our open peering policy and our connections to those networks are verifiable through public resources such as Hurricane Electric. And we have 296 terabits of network capacity, which continuously grows at an aggressive pace. This is all built on top of the data centers we have deployed around the world, along with backbone capabilities, a private backbone for carrying traffic from one part of the world to another. This is all part of the Cloudflare network, but what it delivers is the connectivity cloud. Let me explain the connectivity cloud as a concept, and then why it's useful, as two separate things on two different slides, so we can walk through them. The connectivity cloud is what we do with our Cloudflare network and our data centers. For network protection specifically, when you use the Cloudflare network this way, you can eliminate the operational headaches of multiple different types of technologies. This includes the firewall helpers I mentioned on the right side of the table we were looking at previously: the on-prem appliances supplementing the firewall, services like scrubbing centers, ISP filtering. All of those are operational headaches because they are different technology stacks.
But these capabilities also exist in the public cloud, and those create their own operational headaches because each public cloud does them slightly differently. What we do with the connectivity cloud is bring centralized enforcement of all policies, because the construct for the protections is the same everywhere. With that construct, when you want centralized enforcement of policy, you can do it in one place through Cloudflare, rather than implementing something for your public cloud that's different from your data center, which is different again from the way the ISP put protections in place. It also gives you flexibility by making the underlying network invisible to the public, so that when you change the underlying network, nothing changes in a public-facing manner; we'll talk about that in a second. And you benefit from the operational aspects of Cloudflare, because policy changes happen in seconds. You can deploy a change to a firewall policy and it's pushed around the world in seconds. It's not like traditional firewall architectures where, if you changed a firewall policy, you had to set up an outage window, run your commit, spend twenty minutes on the commit, and then hope the other firewalls come back up. And if they don't come up, you have to go back and think about how you're going to roll back policies, which can take lots and lots of time. With Cloudflare, you're pushing global policy changes in seconds. All of this is based on how Cloudflare's data centers deliver a couple of core concepts. The first is that every data center is autonomous, in the sense that every data center runs all services.
There are multiple reasons for running all services everywhere. The first is that you don't want the additional latency of making multiple hops, like other vendors do, where traffic is taken in at a point of presence and then carried to a data center afterward. Every Cloudflare data center runs all services, around the world. That means they are autonomous from one another, and no second hop is required to apply the additional services you may enable. Related to this, all these data centers work in concert with one another, so that when attacks happen in one part of the world, the information is fed out and the other data centers pick up knowledge of what's happening. Think about what happens with ISP filtering: it doesn't know an attack is happening until you are actually getting hit; you're in the middle of an attack and trying to solve the problem. Whereas Cloudflare takes the signaling from all of our data centers and coordinates it. So even if it's not your specific data center being hit, the attack is detected through the Cloudflare network and that intelligence is picked up. Then we deliver the actual protections themselves, whether from Magic Transit, which is our L3 and L4 protection, or from Spectrum. Those protections are delivered as composable services, which means that when you deploy with Cloudflare, you can enable these capabilities as you are ready to use them. It's not like traditional appliance insertion, where you bring the network down in order to insert a new appliance to get new functionality.
The connection to Cloudflare remains the same, and you add services by turning them on to get the additional protections. And all of this is underpinned by the firewall protections from Magic Firewall, which provides the ability to block traffic, taking out the traffic that doesn't need any further inspection. If you know you don't need that traffic at all, you can just drop it right away. Again, that removes the traffic you never needed, so that in this diagram your public-facing infrastructure, labeled here as the customer origin, receives the cleaned traffic, not whatever anybody in the world chooses to send. Now, inserting into the traffic path means attracting the traffic to Cloudflare first, and there are a couple of ways of doing that. The first is that you can take your prefix, with a minimum prefix length of /24, and advertise it through Cloudflare. That way, the BGP announcement through Cloudflare presents the front door around the world as being one of the Cloudflare data centers. And again, anycast means every data center is doing this: no matter where a user or an attacker is, they see the BGP announcement of your prefix coming from every Cloudflare data center, so they're attracted to the first one they see. That's how Cloudflare absorbs larger attacks: by using the breadth of the network to absorb the portion that any particular region might be contributing, rather than having it all hit a singular point. All of this is delivered with the full stack of services we talked about.
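The /24 minimum mentioned above can be sanity-checked with Python's standard ipaddress module. A /24 or shorter prefix (i.e. a larger block) is generally the smallest unit accepted for BGP advertisement on the public Internet; the helper function name here is ours, not part of any Cloudflare tooling.

```python
import ipaddress

def advertisable(prefix: str, min_prefixlen: int = 24) -> bool:
    """Return True if the block is a /24 or larger (a numerically shorter
    prefix length), the minimum size generally accepted for public BGP
    advertisement. Raises ValueError if host bits are set in the prefix."""
    network = ipaddress.ip_network(prefix, strict=True)
    return network.prefixlen <= min_prefixlen

print(advertisable("203.0.113.0/24"))   # True: a full /24 can be advertised
print(advertisable("203.0.113.0/25"))   # False: half a /24 is too small
print(advertisable("198.51.100.0/23"))  # True: larger blocks are fine
```

The example addresses are from the reserved TEST-NET documentation ranges.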
So the composable services, firewall-as-a-service, DDoS protection, load balancing, traffic acceleration, are all delivered through the Cloudflare data center. And then you maintain a connection between Cloudflare and your public-facing infrastructure through an IPsec tunnel or GRE tunnel. One other thing: with the option of advertising your prefix, we use an asymmetric traffic flow where the return traffic goes directly back to the user. That means the return traffic from your data center back to the user takes a direct path. Now, what if you want a different option? What if you don't have a /24, for example? That's where Cloudflare can be used with a leased IP. Cloudflare already has an address block that's being advertised, and you can lease from it. With those leased IP addresses, you connect to Cloudflare, and then you can use anycast GRE or IPsec tunnels, or Cloudflare CNI. I know I didn't mention CNI earlier, but that's a direct connection between Cloudflare and the customer's network: if you are present in the same facility, you can use a partner or a direct connection to a Cloudflare data center to reduce the number of steps between your network and Cloudflare. This supports a different way to attract traffic, and it steers the return traffic back through Cloudflare, so both inbound and egress traffic pass through Cloudflare in this configuration. Now, for the bigger picture, I wanted to give you some of the ideas and concepts rather than the specifics. This is part of the reference architectures that are available.
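To make the tunnel connectivity above concrete, here is a minimal sketch of the values a GRE tunnel definition between Cloudflare and an origin network needs. The field names are illustrative approximations, not a verbatim Cloudflare API schema; consult the Magic Transit documentation for the real endpoints and fields.

```python
# Sketch of describing a GRE tunnel between Cloudflare and an origin network.
# Field names are illustrative, not a verbatim Cloudflare API schema.

def gre_tunnel_config(name: str, cloudflare_endpoint: str,
                      customer_endpoint: str, interface_address: str) -> dict:
    """Assemble the handful of values a GRE tunnel definition needs:
    the two public tunnel endpoints and the inside interface addressing."""
    return {
        "name": name,
        "cloudflare_endpoint": cloudflare_endpoint,  # Cloudflare's anycast side
        "customer_endpoint": customer_endpoint,      # your router's public IP
        "interface_address": interface_address,      # inside addressing, e.g. a /31
        "ttl": 64,
        "mtu": 1476,  # 1500 minus 24 bytes of outer IP (20) + GRE (4) overhead
    }

tunnel = gre_tunnel_config(
    name="dc1-primary",
    cloudflare_endpoint="192.0.2.1",    # example address (TEST-NET-1)
    customer_endpoint="198.51.100.10",  # example address (TEST-NET-2)
    interface_address="10.255.0.0/31",
)
print(tunnel["mtu"])  # → 1476
```

The reduced MTU is the practical gotcha: GRE encapsulation adds 24 bytes of overhead, so origin-side interfaces and MSS clamping need to account for it.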
We have a lot of public-facing architecture documentation for those of you who are interested in digging in. But the concept I wanted to share is that over here on the left-hand side, you see the BGP announcement; that's how traffic is attracted to Cloudflare. What you see is that the underlying addressing goes to three destinations: one is cloud provider X, another is cloud provider Y, and the third is your own data center. These address spaces let you steer specific parts of the traffic to the specific destination by controlling policy from the Cloudflare edge. This gives you a lot of flexibility: as the architecture of where your applications actually run changes, you can intermix them, move from data center to cloud, move cloud to cloud, or adopt multi-cloud, however you see fit, without having to worry about the external addressing for any of it. You also gain consistency, because, again, if you were using cloud-provider-specific protections or different technology stacks for the functionality provided by firewall helpers, you would have at least three different places to make policy changes. With Cloudflare, you make changes at the Cloudflare edge to implement protections across all of your application stacks. That's how you get consistency, and that's how you restore some sanity to operations for how organizations approach network protection. Now, looking one step further into all this inbound traffic, what about getting visibility into it?
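The prefix-to-origin steering just described can be sketched with a small lookup table: one advertised block, with sub-prefixes mapped to different origins. The prefixes and origin names are invented for illustration; the real steering is configured as policy at the Cloudflare edge, not in application code.

```python
import ipaddress

# Sketch of edge steering: sub-prefixes of one advertised block mapped to
# two cloud providers and a data center. All values are illustrative.
STEERING_POLICY = {
    ipaddress.ip_network("203.0.113.0/26"):   "cloud-provider-x",
    ipaddress.ip_network("203.0.113.64/26"):  "cloud-provider-y",
    ipaddress.ip_network("203.0.113.128/25"): "own-data-center",
}

def steer(destination: str) -> str:
    """Pick the origin whose prefix contains the destination address,
    preferring the most specific (longest) match, as routing would."""
    addr = ipaddress.ip_address(destination)
    matches = [net for net in STEERING_POLICY if addr in net]
    if not matches:
        return "drop"  # no policy: traffic never reaches an origin
    return STEERING_POLICY[max(matches, key=lambda net: net.prefixlen)]

print(steer("203.0.113.10"))   # → cloud-provider-x
print(steer("203.0.113.200"))  # → own-data-center
```

Moving an application between origins then becomes a one-line policy change at the edge, with no change to the externally advertised addressing.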
The great thing here is that because all traffic for all destinations, your public cloud, your public-facing infrastructure, and your data centers, passes through Cloudflare, you get exposure to the traffic patterns through the Cloudflare tools. You gain visibility that was not achievable in the past without some process to unify the logs from disparate protections. Because Cloudflare, as the connectivity cloud layer, takes in all of this traffic, we present the reporting back to you so that you can get insight into traffic volumes, the types of attacks, the attack methodologies, the size of the attacks, and their duration. All of this information is available from your Cloudflare dashboard. So now you're getting consistent views across all of your public-facing infrastructure, not just the piecemeal views you'd see through the independent reports from the different public clouds or the different products used as firewall helpers. With all of that, these are the baselines for how organizations adopt Cloudflare's network protection. It's something you can deploy with Cloudflare's connectivity cloud, with products such as Magic Transit and Magic Firewall, as part of your network modernization journey. We have a couple of documents that I think you'll find interesting. One is a white paper on developing a strategy for your network modernization. That paper is a broad view of the network modernization journey, and, as we talked about at the beginning of today's call, it looks at inbound traffic, outbound traffic, and East-West traffic, applying network modernization in all those directions.
That's a great place to start if you're interested in the larger network modernization journey. And if you're specifically interested in some of the diagrams I showed you today and how to implement protections with the different models that are available, take a look at the reference architecture for Magic Transit. It's a lot of technical information to dig into, with many ways to get more detail about how to implement the concepts we talked about today. With that, we have time for a question or two; if you have any questions, feel free to type them in the Q&A box. It looks like a couple of questions came in while I was talking, so let's take a look at them. What if you want to eliminate the DMZ? I love this question, because I think the DMZ is a bit of an antiquated concept in some ways. In effect, you take a network zone partitioned off your firewall, carve it out to make it public-facing, and use that network segment to host servers, gateways, and things of that sort. But as you know, there have been exploits against both of those: the attacks on VPN infrastructure, for example, and the attacks levied as soon as a vulnerability is found in operating systems, application software, or the applications themselves. When I say the DMZ is an antiquated concept, what I mean is that DMZs actually serve a couple of different purposes. One is a DMZ role for your customers, for customer-facing apps like a web server. Those have largely been replaced by public cloud today.
So a lot of those use cases may have already moved to public cloud. Then you have apps for contractors or employees who may be working external to the organization. Those apps have largely been replaced by SaaS and some public cloud, through a VPC, a virtual private cloud. In those cases, you have some application modernization moving applications to the cloud, and what remains are the private applications in a VPC, accessed through protections such as ZTNA, zero trust network access, rather than through a VPN into the organization's DMZ. So I see the role of the DMZ diminishing in many cases. And if you think about all the infrastructure it takes to run a DMZ, maybe that's a good thing. All of the work you would normally do to monitor whether the DMZ had been attacked, all the load balancing and the setup and operation of a DMZ: if you can offload that by, one, moving applications to the cloud; two, using the protections we talked about today for your customer-facing applications and public-facing infrastructure; and three, using zero trust network access for private access to applications, it gives you a path to move away from the DMZ model entirely. The next question is: do you have an option for replacing the firewall? I tend to think of what Cloudflare does with Magic Firewall as a complement to your firewall: it filters out the large-scale volume of traffic that your firewall doesn't need to process, so the firewall runs more efficiently. So I think of them as complements to one another rather than a wholesale replacement model.
As organizations continue to evolve towards a cloud-first model, that's going to change, because in the long term I see the cloud-based firewall playing an important role as organizations go multi-cloud or fully public cloud; the role of a public-facing hardware firewall, a perimeter firewall, or a data center firewall starts to diffuse. As you move towards a fully cloud model, that's where I see more adoption of cloud-based firewall services. That said, I don't tell people to start wholesale turning off all of their firewalls, because they do play an important role in delineating the untrusted and trusted zones of your network. For organizations that are more conservative in their transition, you can take that adoption in the steps you want as you go from fully on-prem to fully cloud-based, rather than putting everything in one basket or the other. We have another question: our long-term strategy is building applications in public cloud; can't we use the cloud provider's protections? We already talked a bit about this, but if you take a singular cloud environment, let's say you're on AWS and you use only the protections in AWS, you have, in effect, one cluster of operations around one set of protections, which is fundamentally going to be different from the same protections in GCP or Azure. As organizations go multi-cloud, it becomes harder and harder to rationalize using one particular cloud provider's protections over another's, especially when the application components may be distributed across different cloud environments.
That's why I think using the connectivity cloud as the first stop for all traffic, again with the front door as the metaphor, makes it a much more logical place to put consistent protections: they are consistent across all cloud environments, they're delivered in a place that can be managed centrally, they maintain visibility across all traffic from one place, and you can change the underlying cloud components without disrupting the rest of your environment. Well, that looks like all the questions we have for today. I really appreciate your time, and I appreciate all of you joining us. That concludes today's session, and I hope you'll check out one of these papers for more information about how to use network protection with Cloudflare. Thank you.