Written By Michael Vaughn And Presented By Ziften CEO Charles Leaver

 

Answers To Your Questions About WannaCry Ransomware

The WannaCry ransomware attack has infected more than 300,000 computers in 150 countries so far by exploiting a vulnerability in Microsoft’s Windows operating system.
In this quick video, Chief Data Scientist Dr. Al Hartmann and I talk about the nature of the attack, as well as how Ziften can help organizations protect themselves against the exploit known as “EternalBlue.”

As mentioned in the video, the issue with the Server Message Block (SMB) file-sharing service is that it ships with most Windows operating systems and is found in most environments. However, we make it simple to identify which systems in your environment have or haven’t been patched to date. Importantly, Ziften Zenith can also remotely disable the SMB file-sharing service entirely, giving organizations valuable time to make sure those computers are properly patched.
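If you want to sanity-check your own exposure while that patching happens, the sketch below is a minimal illustration (not Ziften’s implementation) that shells out from Python to built-in Windows PowerShell cmdlets (Get-HotFix, Get-SmbServerConfiguration, Set-SmbServerConfiguration) to check for an MS17-010 hotfix and optionally disable SMBv1. The KB numbers shown are examples for Windows 7 / Server 2008 R2 and should be verified against Microsoft’s advisory for your OS versions; the SMB cmdlets also require Windows 8 / Server 2012 or later and an elevated shell.

```python
import subprocess

# Hotfix IDs for MS17-010 vary by Windows version; these are examples for
# Windows 7 / Server 2008 R2 -- verify against Microsoft's advisory.
MS17_010_KBS = {"KB4012212", "KB4012215"}

def run_powershell(command: str) -> str:
    """Run a PowerShell command and return its stdout."""
    result = subprocess.run(
        ["powershell", "-NoProfile", "-Command", command],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

def installed_hotfixes() -> set[str]:
    """List installed hotfix IDs via the built-in Get-HotFix cmdlet."""
    output = run_powershell("Get-HotFix | Select-Object -ExpandProperty HotFixID")
    return {line.strip() for line in output.splitlines() if line.strip()}

def smb1_enabled() -> bool:
    """Check whether the SMBv1 server protocol is enabled."""
    output = run_powershell("(Get-SmbServerConfiguration).EnableSMB1Protocol")
    return output.strip().lower() == "true"

def disable_smb1() -> None:
    """Disable the SMBv1 server protocol (requires an elevated shell)."""
    run_powershell("Set-SmbServerConfiguration -EnableSMB1Protocol $false -Force")

if __name__ == "__main__":
    if not MS17_010_KBS & installed_hotfixes():
        print("MS17-010 patch not detected.")
        if smb1_enabled():
            print("SMBv1 is enabled -- disabling as a stopgap.")
            disable_smb1()
```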

If you’re curious about Ziften Zenith, our 20-minute demo includes a consultation with our specialists on how we can help your organization defend against the worst digital disaster to strike the internet in years.

Written By Roark Pollock And Presented By Ziften CEO Charles Leaver

 

The Endpoint Security Buyer’s Guide

The most common entry point for an advanced persistent attack or a breach is the endpoint. Endpoints are certainly the entry point for most ransomware and social engineering attacks. Using endpoint security products has long been considered a best practice for securing endpoints. Unfortunately, those tools aren’t keeping up with today’s threat environment. Advanced threats, and truth be told, even less advanced threats, are typically more than adequate for fooling the average employee into clicking something they shouldn’t. So organizations are looking at and evaluating a huge selection of next-generation endpoint security (NGES) solutions.

With that in mind, here are ten tips to consider if you’re evaluating NGES solutions.

Tip 1: Begin with the end in mind

Don’t let the tail wag the dog. A threat reduction strategy should always start by evaluating problems and then looking for possible solutions to those problems. However, all too often we get captivated by a “shiny” new technology (e.g., the latest silver bullet) and we wind up trying to shoehorn that technology into our environment without fully evaluating whether it solves an understood and identified problem. So what problems are you trying to solve?

– Is your current endpoint security tool failing to stop threats?
– Do you need better visibility into activities at the endpoint?
– Are compliance requirements dictating continuous endpoint monitoring?
– Are you trying to reduce the time and expense of incident response?

Define the problems to solve, and then you’ll have a yardstick for success.

Tip 2: Understand your audience. Who will be using the tool?

Understanding the problem that has to be fixed is an essential first step in understanding who owns the problem and who would (operationally) own the solution. Every functional group has its strengths, weaknesses, preferences, and biases. Define who will need to use the solution, and who else could benefit from its use. Is it:

– The security team,
– The IT team,
– The governance, risk and compliance (GRC) team,
– The helpdesk or end-user support team,
– Or perhaps the server team, or a cloud operations team?

Tip 3: Know what you mean by endpoint

Another often overlooked early step in defining the problem is defining the endpoint. Yes, we all used to know what we meant when we said endpoint, but today endpoints come in many more varieties than before.

Sure, we want to secure desktops and laptops, but what about mobile devices (e.g., smartphones and tablets), virtual endpoints, cloud-based endpoints, or Internet of Things (IoT) devices? And what about your servers? All of these devices, of course, come in several flavors, so platform support needs to be addressed as well (e.g., Windows only, Mac OS X, Linux, etc.). Also, consider support for endpoints even when they are remote or working offline. What are your needs and what are “nice to haves”?

Tip 4: Start with a foundation of continuous visibility

Continuous visibility is a foundational capability for solving a host of security and operational management problems on the endpoint. The old adage is true – you can’t manage what you can’t see or measure. Furthermore, you can’t secure what you can’t effectively manage. So it must start with continuous, always-on visibility.

Visibility is foundational to Management and Security

And think about what visibility means. Enterprises need one source of truth that, at a minimum, monitors, stores, and analyzes the following (a rough record sketch follows the list):

– System data – events, logs, hardware state, and file system details
– User data – activity logs and behavior patterns
– Application data – attributes of installed apps and usage patterns
– Binary data – attributes of installed binaries
– Process data – tracking information and statistics
– Network connection data – statistics and internal behavior of network activity on the host
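As a rough illustration of what such a single source of truth might store per endpoint, here is a minimal record sketch; the field names are hypothetical and are not Ziften’s schema.

```python
from dataclasses import dataclass, field
from datetime import datetime

# Hypothetical record layout for illustration only -- not Ziften's actual schema.
@dataclass
class EndpointSnapshot:
    host_id: str                      # unique endpoint identifier
    collected_at: datetime            # collection timestamp
    system_events: list[dict] = field(default_factory=list)   # events, logs, hardware state
    user_activity: list[dict] = field(default_factory=list)   # logons and behavior patterns
    installed_apps: list[dict] = field(default_factory=list)  # application attributes and usage
    binaries: list[dict] = field(default_factory=list)        # attributes of installed binaries
    processes: list[dict] = field(default_factory=list)       # process tracking info and stats
    connections: list[dict] = field(default_factory=list)     # per-host network connection stats

snapshot = EndpointSnapshot(host_id="LAPTOP-042", collected_at=datetime.utcnow())
snapshot.processes.append({"pid": 4312, "image": "outlook.exe", "cpu_pct": 2.1})
```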

Tip 5: Know where to store and analyze your visibility data

Endpoint visibility data can be stored and analyzed on premises, in the cloud, or some mix of both. There are advantages to each. The appropriate approach varies, but it is typically driven by regulatory requirements, internal privacy policies, the endpoints being monitored, and total cost considerations.

Know whether your organization needs on-premises data retention

Know whether your organization allows cloud-based data retention and analysis, or whether you are constrained to on-premises solutions only. At Ziften, 20-30% of our clients keep data on premises purely for regulatory reasons. However, if it is legally an option, the cloud can offer cost advantages (among others).

Tip 6: Know what is on your network

Understanding the problem you are trying to solve requires understanding the assets on the network. We have found that as many as 30% of the endpoints we initially discover on customers’ networks are unmanaged or unknown devices. This obviously creates a huge blind spot. Minimizing this blind spot is a vital best practice. In fact, SANS Critical Security Controls 1 and 2 are to perform an inventory of authorized and unauthorized devices and software attached to your network. So look for NGES solutions that can fingerprint all connected devices, track software inventory and usage, and perform ongoing continuous discovery.
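As a minimal sketch of the discovery idea (not how any particular product implements it), the snippet below compares MAC addresses seen in the local ARP cache against an authorized-asset list and flags the unknowns. The inventory file name is a made-up example, and a real product would fingerprint far more than the local network segment.

```python
import re
import subprocess

def arp_table() -> set[str]:
    """Return MAC addresses currently visible in the local ARP cache.

    Parses the output of `arp -a`, which works on Windows, macOS, and Linux.
    """
    output = subprocess.run(["arp", "-a"], capture_output=True, text=True).stdout
    macs = re.findall(r"(?:[0-9a-fA-F]{2}[:-]){5}[0-9a-fA-F]{2}", output)
    return {mac.lower().replace("-", ":") for mac in macs}

def load_inventory(path: str = "authorized_devices.txt") -> set[str]:
    """Load known/authorized device MACs, one per line (hypothetical file)."""
    with open(path) as handle:
        return {line.strip().lower() for line in handle if line.strip()}

if __name__ == "__main__":
    unknown = arp_table() - load_inventory()
    for mac in sorted(unknown):
        print(f"Unmanaged or unknown device on the network: {mac}")
```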

Tip 7: Know where you are exposed

After figuring out what devices you need to monitor, you need to make certain they are running up-to-date configurations. SANS Critical Security Control 3 advises secure configuration monitoring for laptops, workstations, and servers. SANS Critical Security Control 4 advises enabling continuous vulnerability assessment and remediation of these devices. So look for NGES solutions that provide continuous monitoring of the state or posture of each device, and it’s even more beneficial if they can help enforce that posture.

Also look for solutions that provide continuous vulnerability assessment and remediation.

Keeping your overall endpoint environment hardened and free of critical vulnerabilities prevents a huge number of security issues and removes a great deal of back-end pressure on the IT and security operations teams.
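As a toy illustration of posture monitoring, assuming an invented baseline and invented host-report fields (a real benchmark covers far more), configuration drift can be reported by diffing a host’s reported settings against a desired state:

```python
# Toy posture check: compare a host's reported settings to a desired baseline.
# The settings and values here are illustrative assumptions, not a real benchmark.
BASELINE = {
    "firewall_enabled": True,
    "disk_encryption": True,
    "smb1_enabled": False,
    "os_patch_level": "2017-05",
}

def posture_drift(reported: dict) -> dict:
    """Return the settings whose reported values differ from the baseline."""
    return {
        key: {"expected": expected, "actual": reported.get(key)}
        for key, expected in BASELINE.items()
        if reported.get(key) != expected
    }

host_report = {"firewall_enabled": True, "disk_encryption": False,
               "smb1_enabled": True, "os_patch_level": "2017-03"}
for setting, detail in posture_drift(host_report).items():
    print(f"{setting}: expected {detail['expected']}, found {detail['actual']}")
```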

Tip 8: Cultivate continuous detection and response

An essential objective for many NGES solutions is supporting continuous device state monitoring to enable effective threat or incident response. SANS Critical Security Control 19 recommends robust incident response and management as a best practice.

Look for NGES solutions that provide always-on or continuous threat detection, leveraging a network of global threat intelligence and multiple detection methods (e.g., signature, behavioral, machine learning, etc.). And look for incident response solutions that help prioritize identified threats and/or issues and provide workflow with contextual system, application, user, and network data. This can help automate the proper response or next steps. Finally, understand all the response actions that each solution supports, and look for a solution that provides remote access that is as close as possible to “sitting at the endpoint keyboard.”
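As a simplified sketch of the prioritization idea, the snippet below folds contextual endpoint data into a single triage score per alert; the weights and context fields are invented for illustration and are not any vendor’s scoring model.

```python
# Simplified alert triage: combine detection confidence with endpoint context.
# Weights and context fields are illustrative assumptions, not a vendor's model.
def triage_score(alert: dict) -> float:
    score = alert["detection_confidence"]          # 0.0 - 1.0 from the detection engine
    if alert.get("user_is_admin"):
        score += 0.2                               # privileged users raise the stakes
    if alert.get("host_is_server"):
        score += 0.2                               # servers are higher-value targets
    if alert.get("binary_unsigned"):
        score += 0.1                               # unsigned binaries are more suspect
    if alert.get("external_connections", 0) > 0:
        score += 0.1                               # possible command-and-control traffic
    return min(score, 1.0)

alerts = [
    {"id": "A-1", "detection_confidence": 0.6, "user_is_admin": True,
     "binary_unsigned": True, "external_connections": 3},
    {"id": "A-2", "detection_confidence": 0.7, "host_is_server": False},
]
for alert in sorted(alerts, key=triage_score, reverse=True):
    print(alert["id"], round(triage_score(alert), 2))
```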

Tip 9: Consider forensic data collection

In addition to incident response, organizations need to be prepared to address the need for forensic or historical data analysis. SANS Critical Security Control 6 recommends the maintenance, monitoring, and analysis of all audit logs. Forensic analysis can take many forms, but a foundation of historical endpoint monitoring data will be key to any investigation. So look for solutions that maintain historical data that supports (a small example follows the list):

– Tracing lateral threat movement through the network over time,
– Identifying data exfiltration attempts,
– Determining the source of breaches, and
– Determining proper remediation actions.
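As a rough sketch of that kind of historical query, the snippet below walks stored host-to-host connection records forward in time from a known-compromised machine to trace lateral movement; the record format and host names are invented for illustration.

```python
# Hypothetical historical connection records: (timestamp, source_host, dest_host).
history = [
    (1, "LAPTOP-042", "FILESRV-01"),
    (2, "FILESRV-01", "DB-07"),
    (3, "LAPTOP-099", "PRINT-02"),
    (4, "DB-07", "BACKUP-01"),
]

def lateral_movement(start_host: str) -> list[tuple[int, str, str]]:
    """Walk stored connections forward in time from a compromised host."""
    compromised, path = {start_host}, []
    for ts, src, dst in sorted(history):          # chronological order matters
        if src in compromised and dst not in compromised:
            compromised.add(dst)
            path.append((ts, src, dst))
    return path

for ts, src, dst in lateral_movement("LAPTOP-042"):
    print(f"t={ts}: {src} -> {dst}")
```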

Tip 10: Tear down the walls

IBM’s security team, which supports an outstanding ecosystem of security partners, estimates that the typical enterprise has 135 security tools in place and is dealing with 40 security vendors. IBM customers certainly tend to be large businesses, but it’s a common refrain (complaint) from companies of all sizes that security solutions don’t integrate well enough.

And the complaint is not just that security products don’t play well with other security solutions, but also that they don’t always integrate well with system management, patch management, CMDB, NetFlow analytics, ticketing systems, and orchestration tools. Organizations need to consider these (and other) integration points as well as the vendor’s willingness to share raw data, not just metadata, through an API.
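As a purely hypothetical sketch of such an integration, the snippet below pulls raw endpoint events over an API and hands them to a ticketing step; the URL, token, and JSON field names are invented, so consult your vendor’s actual API documentation.

```python
import json
import urllib.request

# Hypothetical vendor API -- URL, token, and field names are invented for illustration.
API_URL = "https://vendor.example.com/api/v1/endpoint-events?since=2017-05-12T00:00:00Z"
API_TOKEN = "replace-with-your-token"

def fetch_raw_events() -> list[dict]:
    """Pull raw (not just metadata) endpoint events from the vendor API."""
    request = urllib.request.Request(
        API_URL, headers={"Authorization": f"Bearer {API_TOKEN}"}
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)

def forward_to_ticketing(event: dict) -> None:
    """Placeholder for handing an event to a ticketing or orchestration tool."""
    print(f"Creating ticket for host {event.get('host_id')}: {event.get('summary')}")

if __name__ == "__main__":
    for event in fetch_raw_events():
        forward_to_ticketing(event)
```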

Bonus Tip 11: Prepare for customizations

Here’s a bonus tip. Assume that you’ll want to customize that shiny new NGES solution soon after you get it. No solution will meet all of your needs right out of the box, in default configurations. Find out how the solution supports (a tiny rule sketch follows the list):

– Custom data collection,
– Alerting and reporting with custom data,
– Custom scripting, or
– IFTTT (if this then that) functionality.
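For example, here is a tiny sketch of IFTTT-style rule handling; the trigger names and actions are invented and do not reflect any product’s rule language.

```python
# Minimal "if this then that" rule sketch; trigger and action names are invented.
RULES = [
    # (condition on an incoming event, action to run when it matches)
    (lambda e: e.get("type") == "new_unsigned_binary",
     lambda e: print(f"ALERT: unsigned binary {e['path']} on {e['host']}")),
    (lambda e: e.get("type") == "usb_inserted",
     lambda e: print(f"TICKET: USB device on {e['host']} -- review DLP policy")),
]

def handle_event(event: dict) -> None:
    """Run every action whose trigger condition matches the event."""
    for condition, action in RULES:
        if condition(event):
            action(event)

handle_event({"type": "new_unsigned_binary", "path": r"C:\temp\x.exe", "host": "LAPTOP-042"})
```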

You know you’ll want new paint or new wheels on that NGES solution soon – so make sure it will support your future customization projects easily enough.

Look for support for easy customization in your NGES solution

Follow the bulk of these tips and you’ll undoubtedly avoid many of the common pitfalls that plague others in their evaluations of NGES solutions.

Written By Ziften CEO Charles Leaver

 

Do you want to manage and protect your endpoints, your data center, the cloud, and your network? If so, Ziften can provide the ideal solution for you. We collect data, and allow you to correlate and use that data to make decisions – and stay in control of your enterprise.

The information that we receive from everyone on the network can make a real-world difference. Consider the proposition that the 2016 U.S. elections were influenced by cyber criminals from another nation. If that holds true, hackers can do almost anything – and the idea that we’ll accept that as the status quo is simply ridiculous.

At Ziften, we believe the best way to fight those threats is with greater visibility than you have ever had. That visibility crosses the entire enterprise, and links all the major players together. On the back end, that’s physical and virtual servers in the data center and the cloud. That’s applications and containers and infrastructure. On the front end, it’s laptops and desktops, regardless of how and where they are connected.

End-to-end – that’s the thinking behind everything we do at Ziften. From endpoint to cloud, all the way from a web browser to a DNS server. We connect all of that together, with all the other elements, to give your organization a complete solution.

We also capture and retain real-time data for up to 12 months to let you know exactly what’s happening on the network today, and to provide historical trend analysis and warnings if something changes.

That lets you find IT faults and security problems immediately, and lets you search out root causes by looking back in time to see where a breach or fault may have first occurred. Active forensics are an absolute requirement in security: after all, where a breach or fault tripped an alarm may not be where the problem started – or where a hacker is operating.

Ziften provides your security and IT teams with the visibility to understand your current security posture and identify where improvements are needed. Non-compliant endpoints? Found. Rogue devices? Found. Off-network penetration? Detected. Out-of-date firmware? Unpatched applications? All found. We’ll not just help you find the problem, we’ll help you fix it, and make sure it stays fixed.

End-to-end security and IT management. Real-time and historical active forensics. On-site, offline, in the cloud. Incident detection, containment, and response. We’ve got it all covered. That’s what makes Ziften better.

Written by Roark Pollock and Presented by Ziften CEO Charles Leaver

 

According to Gartner, the public cloud services market surpassed $208 billion in 2016. This represented about a 17% rise year over year. Pretty good when you consider the ongoing concerns most cloud customers still have regarding data security. Another particularly interesting Gartner finding is the common practice of cloud customers contracting services from multiple public cloud providers.

According to Gartner, “most businesses are already using a combination of cloud services from different cloud providers.” While the business rationale for using multiple providers is sound (e.g., avoiding vendor lock-in), the practice does create additional complexity in monitoring activity across an organization’s increasingly fragmented IT landscape.

While some providers support better visibility than others (for example, AWS CloudTrail can monitor API calls across the AWS infrastructure), companies have to understand and address the visibility issues related to moving to the cloud regardless of the cloud provider or providers they work with.

Unfortunately, the ability to track application and user activity and network communications from each VM or endpoint in the cloud is limited.

Regardless of where computing resources reside, companies must answer the question, “Which users, machines, and applications are communicating with each other?” Organizations require visibility across the infrastructure so that they can:

  • Quickly identify and prioritize problems
  • Speed root cause analysis and identification
  • Lower the mean time to resolution of problems for end users
  • Rapidly identify and eliminate security threats, minimizing overall dwell times.

Conversely, poor visibility or poor access to visibility data can reduce the effectiveness of existing management and security tools.

Businesses that are used to the ease, maturity, and relative cheapness of monitoring physical data centers are sure to be disappointed with their public cloud alternatives.

What has been missing is a simple, common, and elegant solution like NetFlow for public cloud infrastructure.

NetFlow, of course, has had roughly 20 years to become a de facto standard for network visibility. A typical implementation involves the monitoring of traffic and aggregation of flows at network choke points, the collection and storage of flow data from multiple collection points, and the analysis of this flow data.

Flows consist of a standard set of source and destination IP addresses plus port and protocol information that is usually gathered from a router or switch. NetFlow data is relatively cheap and easy to collect, provides almost ubiquitous network visibility, and enables actionable analysis for both network monitoring and performance management applications.
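As an informal illustration of what a single flow record carries, here is a simplified sketch whose fields loosely follow NetFlow/IPFIX conventions:

```python
from dataclasses import dataclass

# Simplified flow record, loosely modeled on NetFlow/IPFIX fields.
@dataclass
class FlowRecord:
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    protocol: int        # IANA protocol number, e.g. 6 = TCP, 17 = UDP
    packets: int
    bytes: int
    start_time: float    # flow start, seconds since epoch
    end_time: float      # flow end, seconds since epoch

flow = FlowRecord("10.1.4.22", "52.4.18.7", 49512, 443, 6, 18, 9400,
                  1494604800.0, 1494604803.5)
print(f"{flow.src_ip}:{flow.src_port} -> {flow.dst_ip}:{flow.dst_port} ({flow.bytes} bytes)")
```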

Most IT personnel, particularly networking and some security teams, are extremely comfortable with the technology.

However, NetFlow was developed to solve what has become a rather limited problem, in the sense that it only gathers network information and does so at a limited number of potential locations.

To make better use of NetFlow, two crucial changes are needed.

NetFlow to the Edge: First, we need to expand the useful deployment scenarios for NetFlow. Instead of just collecting NetFlow at network choke points, let’s expand flow collection to the edge of the network (cloud instances, servers, and clients). This would considerably broaden the overall view that any NetFlow analytics provide.

This would allow companies to augment and leverage existing NetFlow analytics tools to remove the growing blind spot in visibility into public cloud activity.

Rich, contextual NetFlow: Second, we need to use NetFlow for more than simple network visibility.

Instead, let’s use an extended version of NetFlow that includes information on the user, device, application, and binary responsible for each tracked network connection. That would allow us to rapidly tie every network connection back to its source.
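To make the idea concrete, here is an informal, self-contained sketch of such a context-enriched flow record; the fields are illustrative and are not the actual ZFlow schema.

```python
from dataclasses import dataclass

# Illustrative context-enriched flow record -- not the actual ZFlow schema.
@dataclass
class ContextFlowRecord:
    # conventional flow fields
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    protocol: int         # IANA protocol number, e.g. 6 = TCP
    bytes: int
    # added endpoint context
    device_id: str        # endpoint or VM that originated the connection
    user: str             # logged-in user responsible for the connection
    application: str      # application name, e.g. "chrome"
    binary_hash: str      # hash of the responsible binary for reputation lookups

flow = ContextFlowRecord(
    src_ip="10.1.4.22", dst_ip="52.4.18.7", src_port=49512, dst_port=443,
    protocol=6, bytes=9400, device_id="LAPTOP-042", user="jsmith",
    application="chrome", binary_hash="9f86d081884c7d65",
)
print(f"{flow.user}@{flow.device_id} ({flow.application}) -> {flow.dst_ip}:{flow.dst_port}")
```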

In fact, these two changes to NetFlow are exactly what Ziften has achieved with ZFlow. ZFlow provides an expanded version of NetFlow that can be deployed at the network edge, including as part of a VM or container image, and the resulting data can be consumed and analyzed with existing NetFlow tools. Over and above conventional NetFlow / Internet Protocol Flow Information eXport (IPFIX) visibility of the network, ZFlow offers greater visibility with the addition of information on the application, device, user, and binary for each network connection.

Ultimately, this allows Ziften ZFlow to deliver end-to-end visibility between any two endpoints, physical or virtual, eliminating traditional blind spots like east-west traffic in data centers and enterprise cloud deployments.