The continuing and unstoppable rise of cloud computing, IaaS (Infrastructure-as-a-Service) and SaaS (Software-as-a-Service) means that a wider range of solutions than ever before is available to those responsible for delivering business improvement and transformation through applications.
The possibilities are endless, and the infrastructure necessary to deploy them is more affordable and scalable than ever before – but the problem faced by many security professionals (from analyst to CISO) is no longer purely technological; it is fast becoming a problem of visibility. How can SECOPS teams be expected to effectively analyse, monitor, and improve the security posture of an organisation that extends across multiple environments, without proper visibility data being fed to their existing tools?
10 Minute Podcast
Introducing Cloud Visibility & Security
Q&A with Darran Clare (Accelerate) & James Easton (Gigamon)
Data visibility is one of those concepts that appears, at first glance, simple to understand and capture: draw up all your sources on a whiteboard, or map out application flows and data locations. In practice, it is much harder to pull off and commonly creates blind spots.
Networks over the last 10 years have sprawled out from being centralised, hierarchical creations where at least the hardware could be physically inspected. These days, the functionality that allows businesses to serve their customers and expand their operations globally is found across a multitude of globally dispersed physical, virtualised, cloud and containerised infrastructures and applications. This is where our problem starts to become clear.
You need to know exactly what you have at your disposal. A lack of visibility in the public cloud is not only a business risk in terms of security, compliance and governance; it can also cost the business money through poor application performance and user experience. Blind spots in data visibility will hinder your ability to identify a root cause, or hide the clues needed to resolve performance issues quickly.
With the scalability, functionality, and computational power that IaaS and SaaS bring to the table typically comes a corresponding drop in east-west data visibility. With that drop in data visibility comes a corresponding drop in the accuracy of your reporting – and your reporting is where your critical security decision-making comes from. Bad decisions, and the breaches that result from them, usually spring from inaccurate information, and inaccurate information typically arises from a lack of data visibility. Cloud security visibility has moved from a “nice-to-have” to a “must-have” for security analysts and security decision-makers.
Gigamon’s Security platform solves this issue of data visibility across differing application platforms by offering workload-level data visibility through several forms of input: physical TAPs for on-premises environments, virtual agents for virtualised environments, and cloud-based agents for AWS, Azure and GCP infrastructures. The platform facilitates both north-south and the much-needed east-west visibility across your environments by receiving the data gathered from your network TAPs and virtual agents and transforming it into a format that better suits your company’s existing security toolset – all without overloading your tools with duplicate or excess traffic that they cannot process, or are not licensed to process.
This delivers a few major gains: increased data visibility by capturing previously invisible parts of your IT estate, more accurate reporting, and intelligent use of IT security budgets by sizing your security tools only for the traffic they care about. Overall, a single data visibility platform allows for truer trend analysis, and therefore security decision-making based on better information.
AWS, Azure, and other cloud providers have all developed their own visibility solutions but, depending on the provider, these either restrict visibility to north-south traffic (AWS) or have been deprecated (Azure) in favour of third-party solutions to provide the visibility required across the cloud tenant.
Gigamon allows you to build a single architecture, with clear ownership, that underpins your data visibility across any environment in any location, giving your NETOPS teams intelligent control of all network data through a single, common management interface – without having to learn new skills or manage disjointed data rules across multiple public cloud platforms.
As discussed earlier, the ideal scenario would be to visualise all your data sources by whiteboarding or mapping out application flows; realistically, this can only be achieved with a single visibility layer – Gigamon’s visibility platform – that understands what data you have and where you need to send it.
So, it is understandable that complex and fragmented environments have compounded the visibility challenge for IT security teams – not only through differing technologies and management, but also through the sheer number of security tools available, or required, to deliver the visibility needed. But should these dispersed environments also force organisations to duplicate security monitoring tools, or deploy multiple vendors’ data capture agents, to gain better visibility in the public cloud?
Deploying more tools does not necessarily equate to improved security. In many cases, repetitive data will impair your SECOPS team’s visibility and cause bigger headaches. Gigamon’s solution allows existing infrastructure and existing security/management tools to be aggregated into what Gigamon calls the Gigamon Visibility Tier.
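To illustrate the kind of de-duplication a visibility tier performs before traffic reaches your tools, here is a minimal, hypothetical sketch (not Gigamon’s actual implementation): identical packets captured at multiple points are collapsed by hashing their invariant fields, so each tool sees each packet only once.

```python
import hashlib

def packet_key(packet: dict) -> str:
    """Hash the fields that identify a packet regardless of capture point.
    Hop-dependent fields (TTL, capture timestamp) would be excluded here."""
    invariant = (packet["src"], packet["dst"], packet["proto"], packet["payload"])
    return hashlib.sha256(repr(invariant).encode()).hexdigest()

def deduplicate(packets: list) -> list:
    """Forward each unique packet once, dropping copies seen at other taps."""
    seen = set()
    unique = []
    for p in packets:
        key = packet_key(p)
        if key not in seen:
            seen.add(key)
            unique.append(p)
    return unique

# The same flow captured by two taps yields duplicate records:
captured = [
    {"src": "10.0.1.5", "dst": "10.0.2.9", "proto": "TCP", "payload": b"GET /"},
    {"src": "10.0.1.5", "dst": "10.0.2.9", "proto": "TCP", "payload": b"GET /"},  # copy from a second tap
    {"src": "10.0.2.9", "dst": "10.0.1.5", "proto": "TCP", "payload": b"200 OK"},
]
print(len(deduplicate(captured)))  # 2
```

The real benefit of doing this centrally, rather than per-tool, is that every downstream tool receives the already-cleaned stream and can be sized accordingly.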
By deploying a single visibility tier across your entire IT infrastructure estate, it is possible to package up only the visibility data that your tools require. Using metadata instead of raw packets, you can share and securely distribute captured traffic from any environment back to the security tools your SECOPS teams rely on.
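As a rough illustration of the metadata idea (a hypothetical sketch, not Gigamon’s actual format), summarising raw packets into per-flow records drastically reduces the volume sent to downstream tools while preserving the attributes they actually need:

```python
from collections import defaultdict

def summarise_flows(packets: list) -> dict:
    """Collapse raw packets into per-flow metadata records:
    the 5-tuple key plus packet and byte counters."""
    flows = defaultdict(lambda: {"packets": 0, "bytes": 0})
    for p in packets:
        key = (p["src"], p["sport"], p["dst"], p["dport"], p["proto"])
        flows[key]["packets"] += 1
        flows[key]["bytes"] += len(p["payload"])
    return dict(flows)

# Two raw packets from the same connection become one metadata record:
packets = [
    {"src": "10.0.1.5", "sport": 44321, "dst": "10.0.2.9", "dport": 443,
     "proto": "TCP", "payload": b"x" * 1200},
    {"src": "10.0.1.5", "sport": 44321, "dst": "10.0.2.9", "dport": 443,
     "proto": "TCP", "payload": b"x" * 800},
]
flows = summarise_flows(packets)
print(flows)  # one flow record: 2 packets, 2000 bytes
```

A SIEM ingesting records like these, instead of full packet captures, processes far less data per licensed unit while still seeing every conversation.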
By finally getting proper east-west data visibility from your cloud/IaaS/SaaS solutions, as well as your physical and virtual infrastructure, and making sure that you are sending that data to your tools in a format they can properly process, several benefits start to emerge.
Security incident triage time drops, as you now have greater confidence in your tools: they no longer have to work out whether an alert is a duplicate entry produced by an incorrect input configuration. Your analysts stop suffering alert fatigue and can place more confidence in the alerts that they do get. And finally, security decision-makers at C-suite level get reporting and analysis with a higher degree of accuracy and confidence than ever before, allowing them to make programme-level decisions that materially improve your organisation’s security posture.