Written by Roark Pollock and Presented by Ziften CEO Charles Leaver


According to Gartner, the public cloud services market surpassed $208 billion in 2016, roughly a 17% increase year over year. That is impressive given the ongoing data security concerns many cloud customers still have. Another particularly interesting Gartner finding is that cloud customers typically contract services from several public cloud providers.

According to Gartner, “most businesses are already using a combination of cloud services from different cloud providers”. While the commercial rationale for using multiple suppliers is sound (e.g., preventing vendor lock-in), the practice does add complexity to monitoring activity across an organization’s increasingly fragmented IT landscape.

While some providers offer better visibility than others (for example, AWS CloudTrail can monitor API calls across the AWS infrastructure), companies must understand and address the visibility issues that come with moving to the cloud, regardless of which cloud providers or services they work with.

Unfortunately, the ability to track application and user activity, and network communications, from each VM or endpoint in the cloud is limited.

Regardless of where computing resources reside, companies must answer the question: “Which users, machines, and applications are interacting with each other?” Organizations require visibility across the infrastructure so that they can:

  • Quickly identify and focus on problems
  • Speed root cause analysis and identification
  • Lower the mean time to repair problems for end users
  • Rapidly identify and eliminate security threats, minimizing overall dwell time

Conversely, poor visibility, or poor access to visibility data, can reduce the effectiveness of existing management and security tools.

Businesses accustomed to the ease, maturity, and relatively low cost of monitoring physical data centers are likely to be disappointed by their public cloud alternatives.

What has been missing is a simple, common, and elegant solution like NetFlow for public cloud infrastructure.

NetFlow, of course, has had roughly 20 years to become a de facto standard for network visibility. A typical implementation involves monitoring traffic and aggregating flows at network choke points, collecting and storing flow data from multiple collection points, and analyzing that flow data.

Flows consist of a standard set of source and destination IP addresses plus port and protocol information, typically gathered from a router or switch. NetFlow data is relatively cheap and easy to collect, provides nearly ubiquitous network visibility, and enables actionable analysis for both network monitoring and performance management applications.
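To make the flow concept concrete, here is a minimal sketch in Python (with illustrative field names, not any vendor's actual schema) of how packets sharing the same 5-tuple are aggregated into flow records, the way a router's NetFlow exporter does:

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class FlowRecord:
    # Counters accumulated for all packets sharing one 5-tuple.
    packets: int = 0
    bytes: int = 0

def aggregate(observed_packets):
    """Group observed packets into flow records keyed by the classic
    NetFlow 5-tuple: (src_ip, dst_ip, src_port, dst_port, protocol)."""
    flows = defaultdict(FlowRecord)
    for src_ip, dst_ip, src_port, dst_port, proto, size in observed_packets:
        rec = flows[(src_ip, dst_ip, src_port, dst_port, proto)]
        rec.packets += 1
        rec.bytes += size
    return dict(flows)

# Two packets of the same TCP connection collapse into a single flow;
# the DNS lookup becomes a separate flow.
observed = [
    ("10.0.0.5", "172.16.0.9", 44321, 443, "TCP", 1500),
    ("10.0.0.5", "172.16.0.9", 44321, 443, "TCP", 900),
    ("10.0.0.5", "10.0.0.2", 51000, 53, "UDP", 76),
]
flows = aggregate(observed)
```

This compression of many packets into a handful of per-connection records is what keeps flow data cheap to collect and store relative to full packet capture.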

Most IT personnel, particularly networking and some security teams, are very comfortable with the technology.

However, NetFlow was designed to solve what has become a rather limited problem: it gathers only network information, and only at a small number of potential collection points.

To make better use of NetFlow, two key changes are needed.

NetFlow to the edge: First, we need to expand the useful deployment scenarios for NetFlow. Instead of collecting NetFlow only at network choke points, let’s extend flow collection to the edge of the network (cloud instances, servers, and clients). This would considerably broaden the overall view that any NetFlow analytics can provide.

This would let companies leverage and extend existing NetFlow analytics tools to eliminate the growing blind spot around public cloud activity.

Rich, contextual NetFlow: Second, we need to use NetFlow for more than basic network visibility.

Instead, let’s use an extended version of NetFlow that includes information on the user, device, application, and binary responsible for each tracked network connection. That would let us quickly tie every network connection back to its source.
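As an illustration of the idea, here is a hedged sketch in Python of what such an enriched flow record might look like. The context fields (user, device, application, binary hash) are hypothetical names chosen for this example, not ZFlow's actual schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    # The standard NetFlow 5-tuple.
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    protocol: str

@dataclass(frozen=True)
class EnrichedFlow(Flow):
    # Hypothetical endpoint context an extended flow record could carry,
    # tying the connection back to who and what created it.
    user: str         # logged-in user that owns the connection
    device: str       # hostname or device identifier
    application: str  # process that opened the socket
    binary_hash: str  # hash of the executable, for attribution

flow = EnrichedFlow(
    "10.0.0.5", "172.16.0.9", 44321, 443, "TCP",
    user="alice",
    device="web-vm-01",
    application="nginx",
    binary_hash="sha256:ab12cd34",
)
```

With context like this attached, a suspicious connection in a flow report can be traced straight to a specific user, host, and binary instead of stopping at an IP address.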

In fact, these two changes to NetFlow are exactly what Ziften has delivered with ZFlow. ZFlow provides an extended version of NetFlow that can be deployed at the network edge, including as part of a VM or container image, and the resulting data can be ingested and analyzed with existing NetFlow tools. Beyond conventional NetFlow / Internet Protocol Flow Information Export (IPFIX) network visibility, ZFlow adds information on the application, device, user, and binary for each network connection.

Ultimately, this allows Ziften ZFlow to deliver end-to-end visibility between any two endpoints, physical or virtual, eliminating traditional blind spots such as east-west traffic in data centers and enterprise cloud deployments.
