The security perimeter is no more as the attack surface continues to expand

For a long time, security teams could mostly rely on the safety of a security perimeter, but with IoT, embedded development, and now remote and hybrid work, the notion of a defensible perimeter is gone.

Having all of these connected devices that don’t live on one network expands the attack surface that security teams need to worry about. This is especially true for remote or hybrid work, explained Ev Kontsevoy, CEO of Teleport, a company that provides tooling for remotely accessing computing resources.

Kontsevoy explained that the perimeter, in internet and application security terms, is breaking apart in two major ways. One perimeter surrounds the data center, where equipment such as servers actually lives; the other surrounds the office itself, where employees sit and need access to data and applications. This is where technology like firewalls comes in, he explained.

“That’s the traditional approach that now makes no sense whatsoever,” said Kontsevoy. “And the reason why it doesn’t make sense is because computers themselves are not in the same data center anymore. So we’re now doing computing globally.”

Kontsevoy used the example of Tesla. What is Tesla’s perimeter? Tesla deploys code to each of its charging stations, data centers, and cars. “Tesla deploys into planet Earth … Most organizations, they’re moving into the same direction. So computing itself is now becoming more and more global. So the notion of a perimeter makes no sense in a data center,” said Kontsevoy.

Conversely, no one is sitting in an office anymore. “Now, we have engineers, contractors, auditors, and interns, all sitting in different parts of the world, using computers that might not necessarily be company computers,” said Kontsevoy. “They can borrow an iPad from their partner to do a production deployment, for example. For that reason, traditional security and access solutions are just no longer applicable.”

According to Jeff Williams, chief technology officer at application security company Contrast Security, this idea of a perimeter had been dismantled long before COVID. In fact, he says people had a misguided sense of security in a perimeter that didn’t actually exist.

“Once any one computer inside the perimeter gets compromised then there’s what’s called the soft, chewy center where there’s nothing inside to prevent an attacker from moving around and doing whatever they want,” said Williams. “So the best strategy for a long time — since way before COVID — has been to really sort of consider your internal infrastructure as the same as your external infrastructure and lock it down.”

According to Williams, development machines are traditionally not very locked down and developers generally have the privileges to download any tools they need.

“They’re running, honestly, thousands of pieces of software that come from anywhere on their machines, all the libraries that they use run locally, all the tools that they use run locally, typically with privilege, and any of that code could potentially compromise the security of that company’s applications. So it’s something that DevSecOps programs really need to focus on,” said Williams.

Williams also believes the current speed at which DevOps teams want to move isn’t really compatible with the old way of doing security. For example, scanning tools, which have been around for over a decade, aren’t very accurate, don’t run very quickly, and don’t really work well with modern applications because they don’t work on things like APIs or serverless.

In order to move fast, companies will need to abandon these older tools and move on to newer ones, if they haven’t already. Interactive Application Security Testing (IAST) and Runtime Application Self-Protection (RASP) are two newer technologies that work quickly and fit into developers’ normal pipelines.

“As the developers write their code, they can get instant accurate feedback on what they’re writing,” said Williams. “And that allows them to make those fixes very quickly and inexpensively, so that the software that comes at the end of the pipeline is secure, even if they’re moving at very high speed.”

Lack of automation and integration becomes even more problematic 

Working remotely doesn’t by itself seem to make it harder for DevSecOps teams to work together. According to Brian Fox, CTO of software supply chain security company Sonatype, companies certainly need tools that make collaboration easier in a distributed setting, but he believes the core of DevSecOps remains the same.

However, when a company goes remote, one of the first things that happens is that the touch points that could cover up a lack of automation no longer exist, explained Sandy Carielli, principal analyst at Forrester.

“You don’t have those situations where you can walk to the next cube over and get a sign-off from someone on the security or legal team … So as you started to have more people forced to go remote, the importance of having better integration of security tools into the CI/CD pipeline, better automation, and better handoffs so that everything was integrated, and you could have sign-offs in tool stage gates, all of that becomes a lot more important,” she said.

According to Carielli, implementing tools that enable automation and integration between different security tools is a high priority.
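As a rough sketch of what a sign-off baked into a tool stage gate could look like, the short script below fails a pipeline stage whenever a scan report contains findings above an agreed severity. The report format, severity names, and threshold are assumptions for illustration rather than the output of any particular scanner.

```python
#!/usr/bin/env python3
"""Minimal CI stage gate: fail the build if a scan report has severe findings.

Assumes the scanner emits a JSON report shaped like
{"findings": [{"id": "...", "severity": "HIGH", "title": "..."}, ...]};
adjust the parsing for whatever tool your pipeline actually runs.
"""
import json
import sys

# Severities that should block a merge; agree on these with the security team.
BLOCKING = {"CRITICAL", "HIGH"}

def main(report_path: str) -> int:
    with open(report_path) as f:
        report = json.load(f)

    blockers = [f for f in report.get("findings", [])
                if f.get("severity", "").upper() in BLOCKING]

    for finding in blockers:
        print(f"[BLOCKED] {finding.get('id', '?')}: {finding.get('title', '')}")

    if blockers:
        print(f"{len(blockers)} blocking finding(s); failing the stage.")
        return 1
    print("No blocking findings; stage gate passed.")
    return 0

if __name__ == "__main__":
    sys.exit(main(sys.argv[1] if len(sys.argv) > 1 else "scan-report.json"))
```

A gate like this takes the place of walking to the next cube: the pipeline itself records whether the agreed security bar was met before the change moves on.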

Asynchronous DevSecOps

A new thing that has sprung up for remote teams is the notion of asynchronous communication, where individuals are not necessarily communicating in real time with their coworkers. They might send someone a message and then have to wait a little bit for a response.

DevSecOps is also becoming a bit asynchronous, according to Guy Eisenkot, VP of product and co-founder of Bridgecrew by Prisma Cloud, which provides security automation.

“I think three years ago, we may not even have had the tooling, but now we can just ping each other on Slack,” said Eisenkot. “You know, ask the developer, ‘Hey, did you intentionally commit this password or this access key into your code repository? Was that intentional?’ And the response can come in a conversational manner and come in at any hour of the day. So I think the position for security has changed pretty drastically with how well connected we are and how we’re much better at async communication.”

Now there’s a much stronger emphasis on when you should be available and when you’re expected to be responsive.
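To make that asynchronous workflow concrete, here is a minimal sketch of the kind of automation behind such a ping. It assumes a Slack incoming webhook supplied via a SLACK_WEBHOOK_URL environment variable, and the two patterns are purely illustrative; a real deployment would rely on a dedicated secret scanner rather than a couple of regexes.

```python
#!/usr/bin/env python3
"""Sketch of the async workflow Eisenkot describes: when a commit contains
something that looks like an access key, post a question to a Slack channel
instead of blocking the developer in real time.

Assumptions: SLACK_WEBHOOK_URL points to an incoming webhook you have set up,
and the patterns below are illustrative, not an exhaustive secret detector.
"""
import json
import os
import re
import subprocess
import urllib.request

# Illustrative patterns; a real deployment would use a dedicated secret scanner.
PATTERNS = {
    "AWS access key ID": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "hardcoded password assignment": re.compile(r"password\s*=\s*['\"][^'\"]+['\"]", re.I),
}

def latest_commit_diff() -> str:
    """Return the diff of the most recent commit in the current repository."""
    return subprocess.run(["git", "show", "HEAD", "--unified=0"],
                          capture_output=True, text=True, check=True).stdout

def notify(message: str) -> None:
    """Post an asynchronous question to Slack via an incoming webhook."""
    webhook = os.environ["SLACK_WEBHOOK_URL"]
    payload = json.dumps({"text": message}).encode()
    req = urllib.request.Request(webhook, data=payload,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)

def main() -> None:
    diff = latest_commit_diff()
    for label, pattern in PATTERNS.items():
        if pattern.search(diff):
            notify(f"Possible {label} in the latest commit. "
                   "Was committing this intentional?")

if __name__ == "__main__":
    main()
```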

Remote-first mindset and tooling help developers think about security

The tooling that companies have had to invest in to stay successful when remote has also had benefits for security, according to Eisenkot.

Employers and managers have been much more deliberate about the type of tooling they put on developers’ machines, allowing for more control of the linting and security tooling developers have locally, Eisenkot explained.

“Not only are we kind of protecting them with remote endpoint detection, but we can also now force them to use or enforce the usage of security tooling directly on the employee’s endpoint, which is something that I think was expedited by the fact that we’re no longer in the office and everybody had to now apply the same type of corporate policy on their work computers,” said Eisenkot.
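One way that kind of enforcement can show up on an endpoint is a mandatory pre-commit hook. The sketch below is only illustrative: the tools named (bandit, gitleaks) and their flags are examples that may vary by version and organization, but the pattern of refusing a commit unless the mandated local scanners are installed and pass is the general idea.

```python
#!/usr/bin/env python3
"""Sketch of enforcing security tooling on the developer endpoint: a git
pre-commit hook that refuses the commit unless the locally installed
scanners run and pass. Save as .git/hooks/pre-commit (or wire it up through
your hook manager of choice) and make it executable.

The commands below are placeholders; substitute whatever linters and
scanners your organization actually mandates.
"""
import shutil
import subprocess
import sys

# Checks the organization requires on every endpoint (illustrative examples).
REQUIRED_CHECKS = [
    ["bandit", "-q", "-r", "."],           # Python static security analysis
    ["gitleaks", "protect", "--staged"],   # secret scan of staged changes
]

def main() -> int:
    for cmd in REQUIRED_CHECKS:
        if shutil.which(cmd[0]) is None:
            print(f"Required security tool '{cmd[0]}' is not installed; "
                  "refusing to commit.")
            return 1
        if subprocess.run(cmd).returncode != 0:
            print(f"'{' '.join(cmd)}' reported problems; commit blocked.")
            return 1
    return 0

if __name__ == "__main__":
    sys.exit(main())
```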

Embedding security into development tooling is now easier than ever

In addition to remote tooling making it easier to enforce security, it is also getting easier and easier to embed controls directly into the development pipeline.

As an example, Eisenkot explained that both source control management and shipping pipelines are more accessible than they used to be and can be controlled remotely through publicly accessible APIs.

He believes development organizations should now find it much easier to incorporate things like secret scanning, open source package scanning, image scanning, and code scanning directly into the developer’s initial commit review process.

“Some of these in the past were just not accessible. So the fact that this tooling is much cheaper, most of it is actually open source, but much more accessible through those public APIs. I think that’s where I would start, by scanning either directly on developers’ individual workstations, that would be through extensions and IDEs, and then implement stronger and stricter controls on source control management,” said Eisenkot.
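As one example of the kind of publicly accessible API that makes open source package scanning cheap to wire into a commit review (OSV.dev is our illustration here, not a tool Eisenkot named), the sketch below checks the pinned packages in a Python requirements.txt file against the OSV vulnerability database. The parsing is deliberately simplistic and only handles exact ‘==’ pins.

```python
#!/usr/bin/env python3
"""Sketch of open source package scanning through a public API: query the
OSV.dev vulnerability database for each pinned dependency in requirements.txt.

A real integration would use a proper requirements parser and handle version
ranges, extras, and hashes; this only checks exact '==' pins.
"""
import json
import sys
import urllib.request

OSV_QUERY_URL = "https://api.osv.dev/v1/query"

def query_osv(name: str, version: str) -> list:
    """Return known vulnerabilities for a given PyPI package version."""
    payload = json.dumps({
        "package": {"name": name, "ecosystem": "PyPI"},
        "version": version,
    }).encode()
    req = urllib.request.Request(OSV_QUERY_URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("vulns", [])

def main(requirements_path: str) -> int:
    vulnerable = 0
    with open(requirements_path) as f:
        for line in f:
            line = line.split("#")[0].strip()
            if "==" not in line:
                continue  # only exact pins are checked in this sketch
            name, version = (part.strip() for part in line.split("==", 1))
            for vuln in query_osv(name, version):
                vulnerable += 1
                print(f"{name}=={version}: {vuln.get('id')} {vuln.get('summary', '')}")
    return 1 if vulnerable else 0

if __name__ == "__main__":
    sys.exit(main(sys.argv[1] if len(sys.argv) > 1 else "requirements.txt"))
```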

The fact that it’s easier than ever to place security controls on developers’ machines is extra important these days, since supply chain attacks are becoming more and more common. According to Sonatype’s Fox, attackers no longer want to get their malware into a shipped product, they want to get it into part of the development infrastructure.

“And once you understand that, you can’t look at perimeter defense in terms of application security the same way anymore because it moves all the way left into development,” said Fox.

Security as coaches to developers rather than ultimate authority

Another interesting thing that’s been happening in DevSecOps is that the role of security is changing. In the past, security acted more like a bottleneck, something that stood in the way of developers writing and pushing out code fast, but now security teams act more like coaches who empower developers to build code and do security themselves, said Contrast Security’s Williams.

It used to be that the Sec part of DevSecOps was like the central authority, or the judge. If they determined code wasn’t secure, it got sent back to the development team to fix.

“DevSecOps, when you do it right, is bringing development and security together so that they can have a common goal. They can work and they can sort of agree on what the definition of done is. And then they can work together on achieving that goal together,” said Williams.

When DevSecOps is done wrong, it’s more like trying to fit a square peg into a round hole, Williams said. Companies try to take their existing tools, like scanners that take a long time to run, and put them into their already existing DevOps pipelines, and it just doesn’t work.

“Usually, it doesn’t produce very good results. It’s trying to take your existing scanners that take a long time to run and don’t have very good results, and just kind of wedge them in or maybe automate them a little bit. But it’s not really DevSecOps; it’s really just trying to shove traditional security into a DevOps pipeline,” said Williams.

According to Williams, there are three key processes that companies need to have in place in order to have a successful DevSecOps organization. First, they need a process around code hygiene to make sure that the code the developers are writing is actually secure. Second, they need a process around the software supply chain in order to make sure that the libraries and frameworks that are being used are secure. Third, they need a process to detect and respond to attacks in production.

“If development and security can come together on those three processes and say ‘hey, let’s figure out how we can work together on those things. Let’s get some tools that are a little more compatible with the way that we build software,’ that will help get them moving quickly in development,” said Williams. “And then in the production environment, get some monitoring that’s a little more up to date than just something like a WAF, which is a kind of firewall that you have to keep tailoring and tuning all the time.”

Traditional challenges to DevSecOps remain

According to Sonatype’s Fox, the main challenge companies are facing when it comes to DevSecOps is understanding the components in their software. Log4j is a great example of this, since if you look at the download statistics from Maven Central, around 40% of the downloads are still of the vulnerable version.

“And that can’t be explained,” said Fox. “A lot of times, you can explain why people are not upgrading or doing things because well, the vulnerability doesn’t apply to them. Maybe they have mitigation controls in place, maybe they didn’t know about it otherwise, and so they didn’t know they needed to upgrade. For the most part, none of those things apply to the Log4j situation. And yet, we still see companies continuing to consume the vulnerable versions. The only explanation for that is they don’t even know they’re using it.”

This proves that many companies are still struggling with the basics of understanding what components are in their software.
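As an illustration of that basic visibility problem, a sketch like the following can be run against the output of mvn dependency:list to answer the narrower question of whether a build is still pulling in a vulnerable-era log4j-core. The 2.17.1 threshold and the assumed output layout (group:artifact:type:version:scope) are illustrative; adjust both to your own policy and build tooling.

```python
#!/usr/bin/env python3
"""Sketch of the 'do we even know we are using it?' check for Log4j.

Pipe `mvn dependency:list` output into this script; it flags any log4j-core
coordinate older than 2.17.1, the release that closed out the Log4Shell-era
CVEs for current Java versions. Treat the threshold and the line format as
assumptions to adapt.
"""
import re
import sys

SAFE_MINIMUM = (2, 17, 1)
COORD = re.compile(r"org\.apache\.logging\.log4j:log4j-core:\w+:([\d.]+)")

def version_tuple(version: str) -> tuple:
    return tuple(int(part) for part in version.split("."))

def main() -> int:
    flagged = 0
    for line in sys.stdin:
        match = COORD.search(line)
        if match and version_tuple(match.group(1)) < SAFE_MINIMUM:
            flagged += 1
            print(f"Vulnerable-era log4j-core found: {match.group(1)}")
    if not flagged:
        print("No log4j-core versions below the policy minimum were seen.")
    return 1 if flagged else 0

if __name__ == "__main__":
    sys.exit(main())
```

Usage would be something like: mvn dependency:list | python check_log4j.py, run per module or as a pipeline step.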

According to Fox, automation is important in providing this understanding.

“You need a set of tools, a platform that can help you precisely understand what’s inside your software and can provide policy controls over that, because what is good in one piece of software might be terrible in another piece of software,” said Fox. “If you think about license implications, something that’s distributed can trigger copyright clauses and certain types of licenses. Similar things happen with security vulnerabilities. Something run in a bunker doesn’t have the same connectivity as a consumer app, so policy controls to then have an opinion about whether the components that have been discovered are okay in their given context is important. Being able to provide visibility and feedback to the developer so they can make the right choices up front is even more important.”
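To make the idea of context-dependent policy concrete, here is a small sketch of what such a rule check might look like. The license list, severity thresholds, and context flags are illustrative assumptions, not Sonatype’s product behavior or legal guidance.

```python
"""Sketch of context-aware policy controls: the same component can be fine
in one application and unacceptable in another, depending on how that
application is shipped and exposed. All rules below are illustrative.
"""
from dataclasses import dataclass, field

# Licenses that commonly raise questions when software is distributed
# (illustrative; follow your own legal/compliance policy).
COPYLEFT_LICENSES = {"GPL-3.0", "AGPL-3.0"}

@dataclass
class Component:
    name: str
    version: str
    license: str
    vulnerability_severities: list = field(default_factory=list)

@dataclass
class AppContext:
    distributed: bool        # shipped to customers vs. run internally
    internet_facing: bool    # a consumer app vs. something "run in a bunker"

def policy_violations(component: Component, context: AppContext) -> list:
    """Return human-readable reasons this component fails policy in this context."""
    reasons = []
    if context.distributed and component.license in COPYLEFT_LICENSES:
        reasons.append(f"{component.license} may trigger obligations when distributed")
    blocking = {"CRITICAL", "HIGH"} if context.internet_facing else {"CRITICAL"}
    for severity in component.vulnerability_severities:
        if severity in blocking:
            reasons.append(f"{severity} vulnerability not acceptable in this context")
    return reasons

if __name__ == "__main__":
    comp = Component("example-lib", "1.2.3", "AGPL-3.0", ["HIGH"])
    ctx = AppContext(distributed=True, internet_facing=True)
    for reason in policy_violations(comp, ctx):
        print(f"{comp.name} {comp.version}: {reason}")
```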

According to Bridgecrew by Prisma Cloud’s Eisenkot, if you look back on the big supply chain-related security incidents over the last six to eight months, it’s apparent that companies have not properly configured the correct code ownership or code review process in their source control management.

He explained that those two things would make any source code much more secure, even in small development organizations.

Developer education is key

At the end of the day, Eisenkot emphasized, developer education and outreach are still among the most crucial parts of DevSecOps.

It’s important to implement controls and checkpoints in the tooling, but he also believes the tooling should be thought-provoking in a way that empowers developers to go out and educate themselves on security best practices.

“Eventually, lots of tooling can point to a vulnerable package or a potentially exploitable query parameter,” said Eisenkot. “But not every tool will be able to provide actionable advice, whether that’s a documentation page or an automatically generated piece of code that will save the developer the time needed to now learn the basic fundamentals of SQL injection as an example.”
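To illustrate the kind of actionable advice he means, the sketch below shows a pattern a tool might flag and the fix it might suggest: replacing string-built SQL with a parameterized query. Python’s sqlite3 module is used purely for illustration.

```python
"""Sketch of an actionable fix for a flagged query parameter: replace
string-built SQL with a parameterized query. sqlite3 is used only so the
example is self-contained and runnable."""
import sqlite3

def find_user_vulnerable(conn: sqlite3.Connection, username: str):
    # What a scanner might flag: user input concatenated into the SQL text,
    # so a value like "x' OR '1'='1" changes the query itself.
    query = "SELECT id, name FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_fixed(conn: sqlite3.Connection, username: str):
    # The suggested fix: a parameterized query keeps the input as data.
    return conn.execute("SELECT id, name FROM users WHERE name = ?",
                        (username,)).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    conn.execute("INSERT INTO users (name) VALUES ('alice')")
    conn.commit()
    print(find_user_fixed(conn, "alice"))
```

The point is less the specific database driver than the shape of the advice: a generated suggestion like this saves the developer from having to stop and study injection fundamentals mid-task, while still pointing them at the underlying concept.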
