Written by Dr. Al Hartmann and presented by Ziften CEO Charles Leaver
The traditional network boundary is dissolving quickly. So what happens to the endpoint?
Investment in perimeter security, as defined by firewalls, managed gateways, and intrusion detection/prevention systems (IDS/IPS), is changing. These investments are being questioned, as the returns can no longer overcome the cost and complexity of building, maintaining, and justifying these antiquated defenses.
More than that, the paradigm has changed: employees no longer work exclusively in the office. Many log hours from home or while traveling, and neither location sits under the umbrella of a firewall. Rather than keeping the cyber criminals out, firewalls often have the opposite effect: they keep authorized people from being productive. The irony? Once attackers breach the perimeter, it becomes a safe haven where they can hide for months before traversing to critical systems.
So What Exactly Has Changed?
The endpoint has become the last line of defense. With perimeter defenses failing and a "mobile everywhere" workforce, we must now enforce trust at the endpoint. Easier said than done, however.
In the endpoint space, identity and access management (IAM) systems are not a complete answer. Even innovative companies like Okta and OneLogin, and cloud proxy vendors such as Blue Coat and Zscaler, cannot overcome one simple truth: trust goes beyond simple identification, authentication, and authorization.
Encryption is a second attempt at safeguarding entire libraries and individual assets. In the most recent (2016) Ponemon study on data breaches, encryption reduced the cost per breached record by only 10% (from $158 to $142). It isn't the cure-all that some make it out to be.
The Whole Picture Is Changing
Organizations must be prepared to embrace new paradigms and new attack vectors. They still need to grant access to trusted groups and individuals, but they have to do so in a better way.
Critical business systems are now accessed from anywhere, at any time, not just from desks in corporate office buildings. And contractors (the contingent workforce) will soon make up over half of the total enterprise labor force.
On endpoint devices, the binary itself is usually the question. A likely benign event, such as an executable crash, could indicate something simple, like the Windows 10 Desktop Window Manager (DWM) restarting. Or it could be a deeper problem, such as a malicious file or an early indicator of an attack.
Trusted access doesn't fix this vulnerability. According to the Ponemon Institute, between 70% and 90% of all attacks are caused by human error, social engineering, or other human factors. This requires more than simple IAM; it requires behavioral analysis.
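To make the distinction concrete, here is a minimal sketch of what behavioral analysis adds beyond IAM: rather than only asking "is this user authenticated?", it asks "is this activity normal for this user?". The class name, threshold, and per-user process baseline are illustrative assumptions, not any vendor's actual implementation.

```python
from collections import Counter

class UserBaseline:
    """Tracks which executables a user normally runs and flags outliers."""

    def __init__(self, min_observations=20):
        self.process_counts = Counter()
        self.min_observations = min_observations

    def observe(self, process_name):
        self.process_counts[process_name] += 1

    def is_anomalous(self, process_name):
        total = sum(self.process_counts.values())
        if total < self.min_observations:
            return False  # not enough history to judge yet
        # Flag processes never (or almost never) seen for this user.
        return self.process_counts[process_name] / total < 0.01

baseline = UserBaseline(min_observations=5)
for proc in ["outlook.exe"] * 6 + ["excel.exe"] * 4:
    baseline.observe(proc)

print(baseline.is_anomalous("mimikatz.exe"))  # True: never seen before
print(baseline.is_anomalous("outlook.exe"))   # False: routine for this user
```

An authenticated, authorized user running a never-before-seen binary passes every IAM check; only a behavioral baseline like this one raises a flag.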
Rather than making the good better, perimeter and identity-access vendors simply made the bad faster.
When and Where Does the Good Part of the Story Start?
Taking a step back: Google (Alphabet) announced a perimeter-less network model in late 2014 and has made considerable progress since. Other enterprises, from corporations to governments, have pursued similar models (quietly and less aggressively), but BeyondCorp has done this and shown its efforts to the world. The key idea is the design approach: endpoint plus (public) cloud displacing the cloistered enterprise network.
This changes the entire conversation about the endpoint, be it a laptop, PC, workstation, or server, as subservient to the corporate network. The endpoint really is the last line of defense; it must be protected, yet also report its own activity.
Unlike the traditional perimeter security model, BeyondCorp doesn't gate access to tools and services based on a user's physical location or the originating network; instead, access policies are based on information about a device, its state, and its associated user. BeyondCorp considers both external and internal networks to be entirely untrusted, and gates access to applications by dynamically asserting and enforcing levels, or "tiers," of access.
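Tier-based gating can be sketched in a few lines: derive a trust tier from device state alone, and compare it against the tier an application requires. The tier names, device fields, and policy rules below are assumptions for illustration, not Google's actual schema.

```python
from dataclasses import dataclass

TIERS = ["untrusted", "basic", "privileged"]  # ordered low -> high

@dataclass
class Device:
    disk_encrypted: bool
    os_patched: bool
    cert_valid: bool

def device_tier(device: Device) -> str:
    """Derive a trust tier from device state, not from network location."""
    if not device.cert_valid:
        return "untrusted"
    if device.disk_encrypted and device.os_patched:
        return "privileged"
    return "basic"

def can_access(device: Device, required_tier: str) -> bool:
    """Gate an application on tier, regardless of where the request originates."""
    return TIERS.index(device_tier(device)) >= TIERS.index(required_tier)

laptop = Device(disk_encrypted=True, os_patched=False, cert_valid=True)
print(can_access(laptop, "basic"))       # True
print(can_access(laptop, "privileged"))  # False: OS not fully patched
```

Note what is absent: no IP range check, no VPN check. The same policy applies whether the laptop sits at headquarters or in a coffee shop.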
By itself, this sounds harmless. In reality, it is a radical new model, and an imperfect one. Access requirements have moved from network addresses to device trust levels, and the network is heavily segmented by VLANs, rather than the centralized model with its potential for breaches, hacks, and human-level threats (the "soft chewy center").
The bright side? Breaching the perimeter becomes very difficult for would-be attackers, and network pivoting becomes next to impossible once they are past the reverse proxy (a common technique among attackers today, proving that firewalls do a better job of keeping the bad guys in than letting the good guys out). The inverse model further applies to Google's cloud servers, presumably tightly managed inside the perimeter, versus client endpoints, which are just about everywhere.
Google has made some nice improvements to proven security techniques, notably 802.1X and RADIUS, and bundled them into the BeyondCorp architecture, which includes strong identity and access management (IAM).
Why Is This Important? What Are the Gaps?
Ziften believes in this approach because it emphasizes device trust over network trust. However, Google does not specifically describe a device security agent or stress any kind of client-side monitoring (apart from very strict configuration control). While there may be reporting and forensics, this is something every organization should be aware of, since it is a question of when, not if, bad things will happen.
As Google describes its Device Inventory Service: "Since implementing the initial phases of the Device Inventory Service, we've consumed billions of deltas from over 15 data sources, at a typical rate of about three million per day, totaling over 80 terabytes. Keeping historical data is essential in allowing us to understand the end-to-end lifecycle of a given device, track and analyze fleet-wide trends, and perform security audits and forensic investigations."
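The idea behind a delta-based inventory can be sketched simply: merge partial, timestamped updates into current device state, while keeping the raw deltas for audits and forensics. The data shapes below are illustrative assumptions, not the structure of Google's actual service.

```python
devices = {}   # device_id -> current merged state
history = []   # append-only log of (timestamp, device_id, delta)

def ingest_delta(timestamp, device_id, delta):
    """Merge a partial update and retain it for later audits/forensics."""
    history.append((timestamp, device_id, dict(delta)))
    devices.setdefault(device_id, {}).update(delta)

ingest_delta(1, "laptop-1", {"os": "10.1", "encrypted": False})
ingest_delta(2, "laptop-1", {"encrypted": True})

print(devices["laptop-1"])  # {'os': '10.1', 'encrypted': True}
print(len(history))         # 2: the full trail survives for forensic review
```

The cost Google cites (billions of deltas, 80+ terabytes) comes precisely from that append-only history: current state is cheap, but the lifecycle record is not.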
This is a costly, data-heavy process with two shortcomings. On ultra-high-speed networks (the kind used by Google, universities, and research organizations), there is enough bandwidth for this communication to occur without flooding the pipes. The first problem is that in more typical corporate and government environments, it would cause significant user disruption.
Second, endpoint devices must have the horsepower to continuously collect and transmit data. While most employees would be delighted to have current developer-class workstations at their disposal, the cost of the devices and the process of refreshing them regularly make this prohibitive.
A Lack of Lateral Visibility
Very few systems actually generate "enhanced" NetFlow, augmenting traditional network visibility with rich, contextual data.
Ziften's patented ZFlow™ provides network flow detail generated from the endpoint itself, which would otherwise require brute force (human labor) or expensive network appliances.
ZFlow serves as a "connective tissue" of sorts, extending and completing the end-to-end network visibility cycle. It adds context to on-network, off-network, and cloud servers and endpoints, enabling security teams to make faster, better-informed, and more accurate decisions. In essence, buying Ziften services yields a labor savings, plus gains in speed-to-discovery and time-to-remediation, with technology substituting for human resources.
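The difference between traditional and endpoint-enriched flow data can be shown with a sketch: a standard 5-tuple flow record, extended with the context only an endpoint agent can supply. The field names here are illustrative assumptions, not ZFlow's actual schema.

```python
from dataclasses import dataclass, asdict

@dataclass
class Flow:
    """What a network appliance sees: the classic 5-tuple."""
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    protocol: str

@dataclass
class EnhancedFlow(Flow):
    """Endpoint-supplied context: who and what generated the traffic."""
    process_name: str
    process_user: str
    binary_sha256: str

flow = EnhancedFlow(
    src_ip="10.0.0.12", dst_ip="203.0.113.7",
    src_port=49152, dst_port=443, protocol="tcp",
    process_name="powershell.exe",
    process_user="jsmith",
    binary_sha256="<hash of the launching binary>",  # placeholder value
)

# The appliance view stops at the 5-tuple; the endpoint adds the "who/what".
print(asdict(flow)["process_name"])  # powershell.exe
```

An outbound TLS connection on port 443 is unremarkable on its own; the same connection attributed to an unexpected process and user is an investigative lead.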
For companies moving to the cloud (as 56% plan to do by 2021, according to IDG Enterprise's 2015 Cloud Survey), Ziften provides unrivaled visibility into cloud servers, to better monitor and secure the overall infrastructure.
In Google's environment, only corporate-owned devices (COPE) are allowed, crowding out bring-your-own-device (BYOD). This works for a company like Google that can hand out new devices to all staff: phone, tablet, laptop, and so on. Part of the reason is that identity is vested in the device itself, in addition to the usual user authentication. The device must meet Google requirements, having either a TPM or a software equivalent of a TPM, to hold the X.509 certificate used to verify device identity and to facilitate device-specific traffic encryption. There must also be one or more agents on each endpoint to validate the device assertions called out in the access policy, which is where Ziften would need to partner with the systems management agent vendor, since agent cooperation is likely essential to the process.
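The device-identity step can be sketched as a lookup against a corporate inventory: the certificate presented by the device (held in the TPM or its software equivalent) must match the fingerprint and assigned owner on record. The inventory shape and field names below are illustrative assumptions.

```python
# Corporate device inventory: device_id -> expected cert fingerprint and owner.
inventory = {
    "laptop-4711": {"fingerprint": "AB:CD:EF:01", "owner": "jsmith"},
}

def verify_device(device_id, presented_fingerprint, user):
    """Grant access only if device cert and assigned user both match inventory."""
    record = inventory.get(device_id)
    if record is None:
        return False  # unknown device: deny by default
    return (record["fingerprint"] == presented_fingerprint
            and record["owner"] == user)

print(verify_device("laptop-4711", "AB:CD:EF:01", "jsmith"))   # True
print(verify_device("laptop-4711", "AB:CD:EF:01", "mallory"))  # False
```

This is why BYOD is crowded out: a personal device simply has no inventory record, so it is denied by default regardless of who is typing.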
In summary, Google has built a first-rate solution, but its applicability and practicality are limited to companies like Alphabet.
Ziften brings the same level of operational visibility and security protection to the masses, using a lightweight agent, metadata/network flow monitoring (from the endpoint), and a best-in-class console. For companies with specialized needs or incumbent tools, Ziften offers both an open REST API and an extension framework (to enhance data ingestion and to trigger response actions).
This delivers the benefits of the BeyondCorp model to the masses, while conserving network bandwidth and endpoint computing resources. Because organizations will be slow to move entirely away from the enterprise network, Ziften partners with firewall and SIEM vendors.
Finally, the security landscape is gradually shifting toward managed detection and response (MDR). Managed security service providers (MSSPs) offer traditional monitoring and management of firewalls, gateways, and perimeter intrusion detection, but this is insufficient. They lack both the skills and the technology.
Ziften's system has been tested, integrated, approved, and deployed by a number of the emerging MDR providers, highlighting the standardization (capability) and versatility of the Ziften platform to play a key role in remediation and incident response.