Written By Jesse Sampson And Presented By Charles Leaver CEO Ziften


In the first post on edit distance, we looked at hunting for malicious executables with edit distance (i.e., how many character edits it takes to make two text strings match). Now let's look at how we can use edit distance to hunt for malicious domains, and how we can build edit distance features that can be combined with other domain name features to pinpoint suspicious activity.

Case Study Background

What are bad actors up to with malicious domains? It might be as simple as using a close misspelling of a common domain name to trick careless users into viewing ads or picking up adware. Legitimate sites are gradually catching on to this technique, often called typo-squatting.

Other malicious domains are the product of domain generation algorithms, which can be used for all kinds of nefarious purposes, such as evading countermeasures that block known compromised sites, or overwhelming domain name servers in a distributed denial-of-service (DDoS) attack. Older variants use randomly generated strings, while more advanced ones add tricks like injecting common words, further confusing defenders.

Edit distance can help with both use cases; here we will find out how. First, we'll exclude common domain names, since these are usually safe. A list of popular domain names also provides a baseline for detecting anomalies. One good source is Quantcast. For this discussion, we will stick to domains and exclude subdomains (e.g. ziften.com, not www.ziften.com).

After data cleansing, we compare each candidate domain (input data observed in the wild by Ziften) to its potential neighbors in the same top-level domain (the last part of a domain name – classically .com, .org, and so on, though today it can be almost anything). The basic task is to find the nearest neighbor in terms of edit distance. By finding domains that are one step removed from their nearest neighbor, we can easily spot typo-ed domains. By finding domains far from their nearest neighbor (the normalized edit distance we introduced in Part 1 is useful here), we can also find anomalous domains in the edit distance space.

What Were the Results?

Let's look at how these results appear in practice. Be careful when browsing to these domains, since they may host malicious content!

Here are a few potential typos. Typo-squatters target popular domains, since there are more chances somebody will visit them. Many of these are flagged as suspect by our threat feed partners, but there are some false positives too, with charming names like "wikipedal".


Here are some odd-looking domains that are far from their nearest neighbors.


So now we have produced two useful edit distance metrics for hunting. Not only that, we have three features to potentially add to a machine learning model: rank of the nearest neighbor, distance from the nearest neighbor, and a flag for edit distance 1 from a neighbor, indicating a risk of typo shenanigans. Other features that could play well with these include other lexical features such as word and n-gram distributions, entropy, and string length – and network features like the number of failed DNS requests.
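To make those features concrete, here is a minimal sketch in Python of how such a feature vector might be assembled. The field names and the entropy helper are illustrative, not Ziften's actual model; the nn_* values are assumed to come from the nearest-neighbor search described above.

```python
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Character-level Shannon entropy in bits; algorithmically
    generated names tend to score higher than dictionary words."""
    counts = Counter(s)
    n = len(s)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def domain_features(domain: str, nn_rank: int, nn_edit_distance: int) -> dict:
    """Combine edit distance metrics with simple lexical features
    (illustrative feature names, not a production schema)."""
    return {
        "nn_rank": nn_rank,                        # popularity rank of nearest neighbor
        "nn_edit_distance": nn_edit_distance,      # raw distance to nearest neighbor
        "is_one_edit_away": nn_edit_distance == 1, # typo-squatting signal
        "length": len(domain),
        "entropy": shannon_entropy(domain),
    }

print(domain_features("wikipedal", nn_rank=5, nn_edit_distance=1))
```

A model can then consume these dicts directly, or they can be flattened into a numeric matrix for training.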

Simplified Code You Can Experiment With

Here is a simplified version of the code to play with! It was developed on HP Vertica, but this SQL should run on most modern databases. Note that the Vertica editDistance function may differ in other implementations (e.g. levenshtein in Postgres or UTL_MATCH.EDIT_DISTANCE in Oracle).
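For readers who want to experiment without a database, a rough Python stand-in for the same nearest-neighbor logic might look like the following. The domain list is illustrative, and the Levenshtein implementation plays the role of Vertica's editDistance.

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via dynamic programming, analogous to
    Vertica's editDistance or Postgres's levenshtein."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def nearest_neighbor(candidate: str, known: list):
    """Return (closest known domain, raw distance, normalized distance)."""
    best = min((d for d in known if d != candidate),
               key=lambda d: edit_distance(candidate, d))
    raw = edit_distance(candidate, best)
    return best, raw, raw / max(len(candidate), len(best))

# Illustrative popular second-level domains (same TLD assumed)
popular = ["google", "facebook", "wikipedia", "amazon"]
print(nearest_neighbor("gooogle", popular))  # a raw distance of 1 flags a likely typo
```

In production this runs as a self-join over the candidate and baseline domain tables, but the core computation is the same.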


Written by Charles Leaver Ziften CEO


If your enterprise computing environment is not properly managed, there is no way that it can be truly secure. And you can't effectively manage those complex business systems unless you are confident that they are secure.

Some may call this a chicken-and-egg situation, where you do not know where to start. Should you start with security? Or should you begin with systems management? That's the wrong approach. Think of it instead like a Reese's Peanut Butter Cup: it's not chocolate first. It's not peanut butter first. Rather, both are blended together – and treated as a single delicious treat.

Many organizations, I would argue most organizations, are structured with an IT management team reporting to a CIO, and a security management team reporting to a CISO. The CIO team and the CISO team don't know each other, talk to each other only when absolutely necessary, have separate budgets, certainly have different priorities, read different reports, and use different management platforms. On a day-to-day basis, what constitutes a task, a problem, or an alert for one team flies completely under the other team's radar.

That's bad, because both the IT and security teams must make assumptions. The IT team assumes that all assets are secure unless someone tells them otherwise. For instance, they assume that devices and applications have not been compromised, users have not escalated their privileges, and so on. Similarly, the security team assumes that the servers, desktops, and mobile devices are working properly, operating systems and apps are up to date, patches have been applied, and so on.

Given that the CIO and CISO teams aren't talking to each other, don't understand each other's roles and goals, and aren't using the same tools, those assumptions may not be valid.

And again, you cannot have a secure environment unless that environment is properly managed – and you can't manage that environment unless it's secure. Or to put it another way: an environment that is not secure makes anything you do in the IT organization suspect and unreliable, and means that you cannot know whether the information you are seeing is correct or manipulated. It may all be fake news.

How to Bridge the IT / Security Gap

How do you bridge that gap? It sounds easy, but it can be difficult: ensure that there is an umbrella covering both the IT and security teams. Both IT and security should report to the same person or structure somewhere. It might be the CIO, it might be the CFO, it might be the CEO. For the sake of argument here, let's say it's the CFO.

If the business doesn't have a secure environment, and there's a breach, the value of the brand and the business can be reduced to zero. Likewise, if the users, devices, infrastructure, applications, and data aren't managed well, the business can't work effectively, and the value drops. As we have discussed, if it's not properly managed, it can't be secured, and if it's not secure, it can't be well managed.

The fiduciary duty of senior executives (like the CFO) is to protect the value of company assets, which means making certain IT and security talk to each other, understand each other's concerns, and where possible, can see the same reports and data – filtered and displayed to be meaningful to their specific areas of responsibility.

That's the thinking we adopted in the design of our Zenith platform. It's not a security management tool with IT capabilities, and it's not an IT management tool with security capabilities. No, it's a Peanut Butter Cup, built equally around chocolate and peanut butter. To be less confectionery about it: Zenith is an umbrella that gives IT teams what they need to do their jobs, and gives security teams what they need as well – without coverage gaps that could undermine assumptions about the state of enterprise security and IT management.

We have to make sure that our business's IT infrastructure is built on a secure foundation – and also that our security is implemented on a well-managed base of hardware, infrastructure, software, and users. We can't operate at peak performance, and with full fiduciary responsibility, otherwise.

Written By Roark Pollock And Presented By Charles Leaver Ziften CEO


A study recently completed by Gallup found that 43% of employed US citizens worked remotely for at least some of their work time in 2016. Gallup, which has been surveying telecommuting trends in the USA for nearly a decade, continues to see more employees working outside of traditional offices, and more of them doing so for more days of the week. And, of course, the number of connected devices the average employee uses has grown as well, which reinforces the convenience of, and preference for, working away from the office.

This mobility surely produces happier, and one hopes more productive, employees, but the issues these trends present for both security and systems operations teams should not be dismissed. IT systems management, IT asset discovery, and threat detection and response functions all benefit from real-time and historical visibility into device, application, network connection, and user activity. And to be truly effective, endpoint visibility and monitoring should work no matter where the user and device are operating, be it on the network (local), off the network but connected (remote), or disconnected (offline). Current remote working trends are increasingly leaving security and operations teams blind to potential issues and threats.

The mainstreaming of these trends makes it much harder for IT and security teams to limit what was previously considered higher-risk user behavior, such as working from a coffee shop. But that ship has sailed, and today systems management and security teams must be able to adequately track user, device, application, and network activity, identify anomalies and inappropriate actions, and enforce the proper action or fix no matter whether an endpoint is locally connected, remotely connected, or disconnected.

Additionally, the fact that many employees now routinely access cloud-based assets and applications, and have backup USB or network-attached storage (NAS) drives at home, further magnifies the need for endpoint visibility. Endpoint controls often provide the only record of remote activity that no longer necessarily terminates in the organization's network. Offline activity presents the starkest example of the need for continuous endpoint monitoring. Clearly, network controls and network monitoring are of negligible use when a device is operating offline. The installation of a proper endpoint agent is critical to ensure the capture of all important security and system data.

As an example of the types of offline activities that can be detected, a customer was recently able to track, flag, and report unusual behavior on a company laptop. A high-level executive transferred large quantities of endpoint data to an unapproved USB stick while the device was offline. Because the endpoint agent was able to collect this behavioral data during the offline period, the customer was able to see this unusual action and follow up appropriately. Continuing to monitor the device, applications, and user behavior even while the endpoint was disconnected gave the customer visibility they never had before.

Does your organization have continuous monitoring and visibility when employee endpoints are on an island? If so, how do you achieve it?

Written By Roark Pollock And Presented By Ziften CEO Charles Leaver


If you are a student of history, you will notice many examples of extreme unintended consequences when new technology has been introduced. It often surprises people that new technologies can be put to nefarious uses in addition to the positive purposes for which they are brought to market, but it happens on a very regular basis.

For example, train robbers using dynamite ("You think you used enough dynamite there, Butch?") or spammers using email. More recently, the use of SSL to conceal malware from security controls has become more common as the legitimate use of SSL has made the technique more effective.

Because new technology is frequently appropriated by bad actors, we have no reason to believe this will not be true of the new generation of machine learning tools that have reached the market.

To what degree will these tools be misused? There are probably a number of ways in which hackers could use machine learning to their advantage. At a minimum, malware writers will test their new malware against the new class of advanced threat protection products in a bid to modify their code so that it is less likely to be flagged as malicious. The effectiveness of protective security controls always has a half-life because of adversarial learning. An understanding of machine learning defenses will help attackers become more proactive in reducing the effectiveness of machine learning based defenses. An example would be an attacker flooding a network with fake traffic with the intention of "poisoning" the machine learning model being built from that traffic. The goal of the attacker would be to fool the defender's machine learning tool into misclassifying traffic, or to create such a high rate of false positives that the defenders dial back the sensitivity of the alerts.
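As a toy illustration of the poisoning idea (purely hypothetical numbers, not a real detection model), consider a defender who fits a simple statistical threshold to "normal" traffic volumes. An attacker who floods the training window with oversized but benign-looking flows drags the threshold up until real exfiltration slips under it:

```python
def fit_threshold(samples, k=2.0):
    """Naive anomaly threshold: mean plus k standard deviations
    of the observed 'normal' traffic sizes."""
    mean = sum(samples) / len(samples)
    var = sum((x - mean) ** 2 for x in samples) / len(samples)
    return mean + k * var ** 0.5

clean = [100, 120, 90, 110, 105, 95]       # honest training window (KB per flow)
threshold_clean = fit_threshold(clean)

poisoned = clean + [900] * 6               # attacker-injected "normal" traffic
threshold_poisoned = fit_threshold(poisoned)

exfil = 700                                # later, an actual large transfer
print(exfil > threshold_clean)             # detected against the clean model
print(exfil > threshold_poisoned)          # missed after poisoning
```

Real models are far more sophisticated, but the underlying dynamic is the same: whoever controls the training data influences the decision boundary.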

Machine learning will likely also be used as an attack tool. For example, some researchers predict that attackers will use machine learning techniques to hone their social engineering attacks (e.g., spear phishing). Automating the effort it takes to customize a social engineering attack is particularly troubling given the effectiveness of spear phishing. The ability to automate mass customization of these attacks is a potent economic incentive for hackers to adopt the techniques.

Expect breaches of this type that deliver ransomware payloads to rise dramatically in 2017.

The need to automate tasks is a major driver of investment decisions for both attackers and defenders. Machine learning promises to automate detection and response and increase operational tempo. While the technology will increasingly become a standard part of defense-in-depth strategies, it is not a silver bullet. It should be understood that hackers are actively working on evasion techniques against machine learning based detection solutions, while also using machine learning for their own attack purposes. This arms race will require defenders to increasingly achieve incident response at machine speed, further exacerbating the need for automated incident response capabilities.

Written By Josh Harriman And Presented By Charles Leaver Ziften CEO


Repeating a theme when it comes to computer security is never a bad thing. As sophisticated as some attacks may be, you really have to watch for and understand the use of common, readily available tools in your environment. These tools are typically used by your IT staff, would more than likely be whitelisted, and can be missed by security teams mining through all the applications that "could" be executed on an endpoint.

Once somebody has breached your network, which can be done in a variety of ways and is another post for another day, signs of these tools/programs running in your environment should be examined to ensure appropriate use.

A few commands/tools and their functions:

Netstat – Information on the current network connections. Can be used to identify other systems within the network.

PowerShell – Built-in Windows command line utility that can perform a variety of actions, such as gathering key information about the system, killing processes, and adding or removing files.

WMI – Another powerful built-in Windows utility. Can move files around and collect key system information.

Route Print – Command to view the local routing table.

Net – Adding or modifying domains, groups, users, and accounts.

RDP (Remote Desktop Protocol) – Protocol for accessing systems remotely.

AT – Schedules tasks.

Looking for activity from these tools can consume a lot of time and at times be frustrating, but it is necessary to figure out who might be moving around in your environment. And not just what is happening in real time, but historically as well, to see the path somebody may have taken through the network. It's often not "patient zero" that is the target; once attackers get a foothold, they can use these tools and commands to begin their reconnaissance and eventually migrate to a high-value asset. It's that lateral movement you want to find.

You must have the ability to collect the details discussed above, and the means to sift through the data to find, alert on, and investigate it. You can use Windows Events to monitor various changes on a device and then filter that down.
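As a minimal sketch of what that filtering might look like, the snippet below flags watchlisted admin tools running under accounts outside the approved IT group. The record layout, tool list, and approved-user check are illustrative only, not a specific product's schema:

```python
# Hypothetical process-event records, e.g. distilled from Windows event logs.
WATCHLIST = {"netstat.exe", "powershell.exe", "wmic.exe",
             "route.exe", "net.exe", "mstsc.exe", "at.exe"}

def flag_watchlist_activity(events, approved_users):
    """Keep events where a watchlisted admin tool ran under an account
    that is not part of the approved IT group."""
    return [e for e in events
            if e["image"].lower() in WATCHLIST
            and e["user"] not in approved_users]

events = [
    {"image": "powershell.exe", "user": "it-admin"},  # routine IT push
    {"image": "net.exe",        "user": "jdoe"},      # worth a closer look
    {"image": "outlook.exe",    "user": "jdoe"},      # not on the watchlist
]
print(flag_watchlist_activity(events, approved_users={"it-admin"}))
```

In practice the same filter runs continuously over historical data as well, which is what makes the lateral-movement trail reconstructable after the fact.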

Looking at some screenshots below from our Ziften console, you can see a quick difference between what our IT group used to push out changes on the network, versus someone running a very similar command themselves. This may be much like what you would find when somebody did it remotely, say through an RDP session.





An interesting side note in these screenshots is that in all scenarios the Process Status is "Terminated". You would not see this detail during a live investigation, or if you were not continuously collecting the data. But since we are collecting all of the information continuously, you have this historical data to examine. If you observed the Status as "Running", it might indicate that someone is live on that system right now.

This only scratches the surface of what you need to be collecting and how to analyze what is right for your network, which of course will differ from that of others. But it's a good place to start. Malicious actors intent on doing you harm will usually look for the path of least resistance. Why try to create brand new and interesting tools when much of what they need is already there and ready to go?

Written By Roark Pollock And Presented By Ziften CEO Charles Leaver


There may be a joke somewhere about the forensic expert who was late to the incident response party. There is the seed of a joke in the idea, at least, but of course you need to understand the differences between forensic analysis and incident response to appreciate the potential for humor.

Forensic analysis and incident response are related disciplines that can leverage similar tools and related data sets, but they also have some important differences. There are four particularly important distinctions between forensic analysis and incident response:

– Goals.
– Data requirements.
– Team skills.
– Benefits.

The difference in goals is perhaps the most important. Incident response is focused on determining a fast (i.e., near real-time) response to an immediate threat or issue. For example, a house is on fire, and the firefighters who show up to put it out are doing incident response. Forensic analysis is usually performed as part of a scheduled compliance, legal discovery, or law enforcement investigation. For example, a fire investigator might examine the remains of that house fire to determine the overall damage, the cause of the fire, and whether the origin was such that other houses face the same risk. In other words, incident response is focused on containment of a threat or issue, while forensic analysis is focused on a full understanding and comprehensive remediation of a breach.

A second significant difference between the disciplines is the data resources required to achieve those goals. Incident response teams typically need only short-term data sources, often no more than a month or so, while forensic analysis teams typically need much longer-lived logs and files. Keep in mind that the average dwell time of a successful attack is somewhere between 150 and 300 days.

While there is commonality in the personnel skills of incident response and forensic analysis teams, and in fact incident response is often considered a subset of the broader forensic discipline, there are important differences in job requirements. Both kinds of work require strong log analysis and malware analysis skills. Incident response requires the ability to quickly isolate an infected device and to determine how to remediate or quarantine it. Communications tend to be with other security and operations team members. Forensic analysis generally requires interaction with a much broader set of departments, including legal, compliance, operations, and HR.

Not surprisingly, the perceived benefits of these activities also differ.

The ability to eliminate a threat on one machine in near real-time is a major factor in keeping breaches isolated and limited in impact. Incident response, along with proactive threat hunting, is the first line of defense in security operations. Forensic analysis is incident response's less glamorous relative. However, the benefits of this work are undeniable. A comprehensive forensic investigation enables the removal of all threats through careful analysis of the entire attack chain of events. And that is no laughing matter.

Do your endpoint security processes allow for both immediate incident response and long-term historical forensic analysis?

Written By Jesse Sampson And Presented By Charles Leaver CEO Ziften


Why are the same tricks used by attackers all the time? The simple answer is that they still work. For example, Cisco's 2017 Cybersecurity Report tells us that after years of decline, spam email with malicious attachments is once again on the rise. Within that conventional attack vector, malware authors typically mask their activities by using a filename similar to a common system process.

There is not necessarily a connection between a file's name and its contents: anyone who has tried to hide sensitive information by giving it a boring name like "taxes", or changed the extension on a file attachment to circumvent email rules, knows this principle. Malware creators understand it too, and will often name their malware to resemble common system processes. For instance, "explorer.exe" is the well-known Windows Explorer, but "explore.exe" missing the final "r" could be anything. It's easy even for professionals to overlook this small difference.

The opposite problem, well-known .exe files running in unusual places, is simple to solve using string functions and SQL set operations.
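A check along these lines, sketched here in Python rather than SQL, flags well-known names observed outside their usual directories. The expected-path table is illustrative; a real deployment would build it from a software inventory:

```python
import ntpath

# Illustrative expected locations for a few well-known Windows binaries.
EXPECTED_DIRS = {
    "svchost.exe": r"c:\windows\system32",
    "explorer.exe": r"c:\windows",
}

def unusual_locations(observed_paths):
    """Flag known executable names seen outside their expected directory."""
    hits = []
    for path in observed_paths:
        directory, name = ntpath.split(path.lower())
        expected = EXPECTED_DIRS.get(name)
        if expected is not None and directory != expected:
            hits.append(path)
    return hits

paths = [r"C:\Windows\System32\svchost.exe",
         r"C:\Users\jdoe\AppData\Local\Temp\svchost.exe"]
print(unusual_locations(paths))  # only the Temp copy is flagged
```

The SQL equivalent is a join between observed paths and the expected-location table, keeping rows where the directories disagree.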


What about the other case, finding close matches to an executable name? Most people begin their search for near string matches by sorting the data and visually scanning for discrepancies. This typically works well for a small data set, maybe even a single system. Finding these patterns at scale, however, requires an algorithmic approach. One established technique for "fuzzy matching" is to use edit distance.
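A hedged sketch of the idea in Python: the Levenshtein implementation below is the textbook dynamic-programming version, and the masquerading names used as examples are drawn from the discussion in this post.

```python
def edit_distance(a: str, b: str) -> int:
    """Standard Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def normalized(a: str, b: str) -> float:
    """Edit distance scaled by the longer string's length."""
    return edit_distance(a, b) / max(len(a), len(b))

# Masquerading names sit very close to the real thing:
print(edit_distance("explorer.exe", "explore.exe"))        # 1
print(round(normalized("cvshost.exe", "svchost.exe"), 2))  # 0.18, under a 0.2 threshold
```

Sorting candidate names by this distance to their closest legitimate neighbor surfaces the suspicious near-misses automatically, instead of relying on a human eyeball.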

What's the best way to compute edit distance at scale? For Ziften, our technology stack includes HP Vertica, which makes this task easy. The internet is full of data scientists and data engineers singing Vertica's praises, so it will suffice to mention that Vertica makes it easy to develop custom functions that take full advantage of its power – from C++ power tools to statistical modeling scalpels in R and Java.

This Git repo is maintained by Vertica enthusiasts working in industry. It's not an official offering, but the Vertica team is certainly aware of it, and moreover is thinking every day about how to make Vertica better for data scientists – a good space to watch. Best of all, it includes a function to compute edit distance! There are also other natural language processing tools here, like word tokenizers and stemmers.

By applying edit distance to the top executable paths, we can quickly find the closest match to each of our top hits. This is an interesting dataset, as we can sort by distance to find the closest matches across the whole data set, or we can sort by frequency of the top path to see the nearest match to our most frequently used processes. This data can also surface on contextual "report card" pages to show, for example, the top five closest strings for a given path. Below is an example to give a sense of the usage, based on real data ZiftenLabs observed in a customer environment.


Setting a threshold of 0.2 seems to produce good results in our experience, but the takeaway is that these can be tuned to fit specific use cases. Did we find any malware? We notice that "teamviewer_.exe" (should be just "teamviewer.exe"), "iexplorer.exe" (should be "iexplore.exe"), and "cvshost.exe" (should be "svchost.exe", unless perhaps you work for CVS pharmacy…) all look odd. Since we're already in our database, it's also trivial to pull the associated MD5 hashes, Ziften suspicion scores, and other attributes to do a deeper dive.


In this particular real-life environment, it turned out that teamviewer_.exe and iexplorer.exe were portable applications, not known malware. We helped the customer with further investigation of the user and system where we observed the portable applications, since the use of portable apps on a USB drive could be evidence of suspicious activity. The more disturbing find was cvshost.exe. Ziften's intelligence feeds indicate that this is a suspicious file. Searching VirusTotal for the file's MD5 hash confirms the Ziften data, suggesting that this is a potentially serious Trojan infection that could be part of a botnet or doing something even more harmful. Once the malware was found, however, it was easy to fix the problem, and ensure it stays fixed, using Ziften's ability to kill and persistently block processes by MD5 hash.

Even as we develop sophisticated predictive analytics to detect malicious patterns, it is important that we continue to improve our ability to hunt for known patterns and old tricks. Just because new threats emerge doesn't mean the old ones disappear!

If you liked this post, keep watching this space for Part 2 of this series, where we will apply this technique to hostnames to identify malware droppers and other malicious sites.

Written By Roark Pollock And Presented By Ziften CEO Charles Leaver


In the very recent past, everybody knew what you meant if you raised the issue of an endpoint. If someone wanted to sell you an endpoint security product, you knew exactly what devices that software was going to protect. But when I hear somebody casually mention endpoints today, The Princess Bride's Inigo Montoya comes to mind: "You keep using that word. I do not think it means what you think it means." Today an endpoint could be almost any kind of device.

In fact, endpoints are so diverse these days that people have taken to calling them "things." According to Gartner, at the close of 2016 there were more than six billion "things" connected to the internet. The consulting firm forecasts that this number will shoot up to twenty-one billion by 2020. The business use of these things will be both generic (e.g. connected light bulbs and HVAC systems) and industry-specific (e.g. oil rig safety monitoring). For IT and security teams charged with connecting and protecting endpoints, however, this is only half of the new challenge. The embrace of virtualization technology has redefined what an endpoint is, even in environments where these teams have traditionally operated.

The last decade has seen a huge change in the way end users access information. Physical devices continue to become more mobile, with many information workers now doing most of their computing and communication on laptops and mobile phones. More importantly, everyone is becoming an information worker. Today, better instrumentation and monitoring have enabled levels of data collection and analysis that can make the insertion of information technology into almost any job worthwhile.

At the same time, more traditional IT assets, particularly servers, are being virtualized to remove some of the conventional constraints of having those assets tied to physical devices.

These two trends together will impact security teams in important ways. The universe of "endpoints" will include billions of long-lived and insecure IoT endpoints, alongside billions of virtual endpoint instances that will be scaled up and down as needed, and moved to different physical locations on demand.

Organizations will have very different concerns about these two basic types of endpoints. Over their lifetimes, IoT devices will need to be protected from a host of threats, some of which have yet to be dreamed up. Monitoring and protecting these devices will require sophisticated detection capabilities. On the positive side, it will be possible to keep detailed log data to enable forensic investigation.

Virtual endpoints, on the other hand, present their own important concerns. The ability to move their physical location makes it much harder to ensure that proper security policies remain attached to the endpoint. The practice of re-imaging virtual endpoints can make forensic investigation difficult, as important data is often lost when a new image is applied.

So no matter what word or phrase you use to describe your endpoints – endpoint, user device, system, client device, mobile phone, server, virtual machine, container, cloud workload, IoT device, and so on – it is essential to understand exactly what someone means when they use the term endpoint.

Written By Dr Al Hartmann And Presented By Charles Leaver CEO Ziften


If Prevention Has Failed, Then Detection Is Crucial

The final scene of the well-known Vietnam War film Platoon depicts a North Vietnamese Army regiment in a surprise night attack breaching the concertina wire perimeter of an American Army battalion, overrunning it, and slaughtering the startled defenders. The desperate company commander, grasping their dire defensive plight, orders his air support to strike his own position: "For the record, it's my call – dump everything you've got left on my position!" Minutes later the battlefield is immolated in a napalm hellscape.

Although this is a physical conflict, it illustrates two aspects of cybersecurity: (1) you have to deal with inevitable perimeter breaches, and (2) it can be bloody hell if you don't detect early and respond forcefully. MITRE Corporation has been leading the call for rebalancing cybersecurity priorities to place due emphasis on detecting breaches in the network interior, rather than merely concentrating on penetration prevention at the network perimeter. Instead of defense in depth, the latter produces a flawed "tootsie pop" defense – hard, crunchy shell, soft chewy center. Writing in a MITRE blog post, "We could see that it wouldn't be a question of if your network would be breached but when it would be breached," explains Gary Gagnon, MITRE's senior vice president, director of cybersecurity, and chief security officer. "Today, organizations are asking 'How long have the intruders been inside? How far have they gotten?'"

Some call this the "assumed breach" approach to cybersecurity, or as posted to Twitter by F-Secure's Chief Research Officer:

Q: How many of the Fortune 500 are compromised? A: 500.

This is based on the likelihood that any sufficiently complex cyber environment harbors an existing compromise, and that Fortune 500 businesses are all of magnificently complex scale.

Shift the Burden of Perfect Execution from the Defenders to the Attackers

The conventional cybersecurity viewpoint, derived from the legacy perimeter defense model, has been that the attacker only needs to be right once, while the defender must be right every time. A sufficiently resourced and persistent attacker will eventually achieve penetration. And time to successful penetration decreases with increasing size and complexity of the target enterprise.

A perimeter- or prevention-reliant cyber defense model essentially demands perfect execution by the defender, while granting success to any sufficiently sustained attack – a blueprint for certain cyber disaster. For example, a leading cybersecurity red team reports successful enterprise penetration in under three hours in more than 90% of their client engagements – and these white hats are restricted to ethical means. Your enterprise's black hat attackers are not so constrained.

To be practical, the cyber defense strategy must turn the tables on the attackers, shifting to them the unattainable burden of perfect execution. That is the rationale for a strong detection capability that continuously monitors endpoint and network behavior for any anomalous indications or observed attacker footprints inside the perimeter. The more sensitive the detection capability, the more care and stealth the attackers must exercise in executing their kill chain sequence, and the more time, labor, and talent they must invest. The defenders need only observe a single attacker footfall to discover their tracks and unravel the attack kill chain. Now the defenders become the hunter, the attackers the hunted.

The MITRE ATT&CK Model

MITRE supplies a detailed taxonomy of attacker footprints, covering the post-compromise segment of the kill chain, known by the acronym ATT&CK, for Adversarial Tactics, Techniques, and Common Knowledge. ATT&CK project team leader Blake Strom says, "We decided to focus on the post-compromise period [portion of the kill chain outlined in orange below], not just because of the strong likelihood of a breach and the dearth of actionable detail there, but also because of the many opportunities and intervention points available for effective defensive action that do not necessarily depend on prior knowledge of adversary tools."




As shown in the MITRE figure above, the ATT&CK model provides additional granularity on the post-compromise phases of the attack kill chain, breaking these out into ten tactic categories. Each tactic category is further detailed into a list of techniques an attacker may use in carrying out that tactic. The January 2017 update of the ATT&CK matrix lists 127 techniques across its ten tactic categories. For example, Registry Run Keys / Start Folder is a technique in the Persistence category, Brute Force is a technique in the Credential Access category, and Command-Line Interface is a technique in the Execution category.
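To make the tactic/technique structure concrete, here is a minimal sketch in Python. The three examples named above come from the ATT&CK matrix; the extra technique entries are a small illustrative slice, not the full 127-technique matrix:

```python
# Illustrative slice of the ATT&CK matrix: tactic category -> example techniques.
# The January 2017 matrix lists 127 techniques across ten tactic categories.
ATTACK_MATRIX = {
    "Persistence": ["Registry Run Keys / Start Folder", "Scheduled Task"],
    "Credential Access": ["Brute Force", "Credential Dumping"],
    "Execution": ["Command-Line Interface", "PowerShell"],
}

def tactics_for(technique):
    """Return the tactic categories under which a technique appears."""
    return [tactic for tactic, techniques in ATTACK_MATRIX.items()
            if technique in techniques]

print(tactics_for("Brute Force"))  # ['Credential Access']
```

A lookup in this direction is useful because a single observed behavior (say, a suspicious command-line invocation) immediately suggests which tactic the attacker is pursuing.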

Leveraging Endpoint Detection and Response (EDR) in the ATT&CK Model

Endpoint Detection and Response (EDR) solutions, such as Ziften provides, offer crucial visibility into attacker usage of techniques listed in the ATT&CK model. For example, Registry Run Keys / Start Folder technique usage is reported, as is Command-Line Interface usage, since both involve readily observable endpoint behavior. Brute Force usage in the Credential Access category should be blocked by design in any authentication architecture and be observable from the resulting account lockout. But even here the EDR product can report events such as failed login attempts, where an attacker may take a few guesses while staying under the account lockout attempt limit.
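The low-and-slow guessing pattern described above can be sketched as a simple detection rule. This is a hypothetical illustration (not Ziften's actual detection logic, and the threshold is an assumed policy value): flag accounts with repeated login failures that stay just under the lockout limit.

```python
from collections import Counter

LOCKOUT_THRESHOLD = 5  # assumed policy: account locks after 5 failures

def near_lockout_accounts(failed_logins, threshold=LOCKOUT_THRESHOLD):
    """Given (account, timestamp) failed-login events, return accounts with
    repeated failures that stayed under the lockout limit - the pattern of
    an attacker pacing guesses to avoid triggering a lockout."""
    counts = Counter(account for account, _ in failed_logins)
    return sorted(a for a, n in counts.items() if 2 <= n < threshold)

events = [("svc_backup", t) for t in range(4)] + [("jsmith", 0)]
print(near_lockout_accounts(events))  # ['svc_backup']
```

In practice such a rule would also window the counts by time, since a handful of failures spread over months is ordinary user behavior.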

For vigilant defenders, any technique usage may be the attack giveaway that unravels the entire kill chain. EDR products compete on their technique observation, reporting, and alerting abilities, as well as their analytics capacity to perform more of the attack pattern detection and kill chain reconstruction in support of the security analysts staffing the enterprise SOC. Here at Ziften we will detail more EDR product capabilities in support of the ATT&CK post-compromise detection model in future blogs in this series.

Written By Michael Vaughan And Presented By Charles Leaver Ziften CEO


More customized products are needed by security, network, and operations teams in 2017

Many of us have attended security conventions over the years, but none bring the same high level of excitement as RSA – where security is discussed by the world. Of all the conventions I have attended and worked, nothing comes close to the enthusiasm for new technology people exhibited this past week in downtown San Francisco.

After taking a couple of days to digest the dozens of conversations about the requirements and limitations of current security solutions, I've been able to distill a single theme among participants: people want customized solutions that fit their environment and will work across several internal teams.

When I use the term "people," I mean everybody in attendance regardless of technology sector. Operations specialists, security professionals, network veterans, and user behavior analysts all frequently visited the Ziften booth and shared their experiences.

Everybody seemed more willing than ever to discuss the needs and wants of their environment. These participants had their own set of objectives they wanted to achieve within their department, and they were hungry for answers. Because the Ziften Zenith solution provides such broad visibility into enterprise devices, it's not surprising that our booth stayed crowded with people eager to learn about a new, refreshingly simple endpoint security technology.

Attendees came with complaints about myriad enterprise-centric security issues and sought deeper insight into what's really happening on their network and on devices traveling in and out of the office.

End users of old-school security solutions are on the lookout for newer, more essential software.

If I could pick just one of the frequent questions I got at RSA to share, it's this one:

"Just what is endpoint discovery?"

1) Endpoint discovery: Ziften reveals a historical view of unmanaged devices that have connected to other enterprise endpoints at some point in time. Ziften allows users to discover known and unknown entities that are active, or have interacted with known endpoints.

a. Unmanaged Asset Discovery: Ziften uses our extension platform to expose these unknown entities operating on the network.

b. Extensions: These are custom-fit solutions tailored to the user's particular wants and needs. The Ziften Zenith agent can execute the assigned extension one time, on a schedule, or on a continuous basis.

Almost always, after the above explanation came the real reason they were attending:

People are searching for a wide range of solutions for numerous departments, including executives. This is where working at Ziften makes answering this question a treat.

Only a portion of the RSA attendees are security specialists. I met with dozens of network, operations, and endpoint management professionals, vice presidents, general managers, and channel partners.

They clearly all use and understand the need for quality security software, but seemingly find the translation to business value missing among security vendors.

NetworkWorld's Charles Araujo phrased the issue rather well in an article last week:

Enterprises must also rationalize security data in a business context and manage it holistically as part of the overall IT and business operating model. A group of vendors is also attempting to tackle this challenge…

Ziften was among only three companies mentioned.

After listening to the needs and wants of people from various business-critical backgrounds and explaining to them the capabilities of Ziften's extension platform, I typically described how Ziften could tailor an extension to address their need, or I gave a brief demo of an extension that would let them overcome an obstacle.

2) Extension Platform: Tailored, actionable solutions.

a. SKO Silos: Extensions based on fit and requirement (operations, network, endpoint, etc.).

b. Customized Requests: Need something you can't see? We can fix that for you.

3) Enhanced Forensics:

a. Security: Threat management, risk assessment, vulnerabilities, suspicious metadata.

b. Operations: Compliance, license rationalization, unmanaged assets.

c. Network: Ingress/egress IP movement, domains, volume metadata.

4) Visibility within the network – not just what goes in and out.

a. ZFlow: Finally see the network traffic inside your enterprise.

Needless to say, everyone I spoke with at our booth quickly grasped the critical value of having a tool such as Ziften Zenith running across their enterprise.

Forbes writer Jason Bloomberg said it best when he recently described the future of enterprise security software and how all signs point toward Ziften leading the way:

Perhaps the broadest disruption: vendors are enhancing their ability to understand how bad actors behave, and can thus take steps to prevent, detect, or mitigate their malicious activities. In particular, today's vendors understand the 'Cyber Kill Chain' – the steps a skilled, patient hacker (known in the biz as an advanced persistent threat, or APT) will take to accomplish his or her nefarious objectives.

The product of U.S. defense contractor Lockheed Martin, the Cyber Kill Chain contains seven links: reconnaissance, weaponization, delivery, exploitation, installation, establishing command and control, and actions on objectives.
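The seven links above form an ordered progression, which is what makes the model useful for triage: a later-phase observation means a deeper intrusion. A minimal sketch (the phase names come from the Lockheed Martin model quoted above; the comparison helper is illustrative):

```python
from enum import IntEnum

class KillChain(IntEnum):
    """The seven links of the Lockheed Martin Cyber Kill Chain, in order."""
    RECONNAISSANCE = 1
    WEAPONIZATION = 2
    DELIVERY = 3
    EXPLOITATION = 4
    INSTALLATION = 5
    COMMAND_AND_CONTROL = 6
    ACTIONS_ON_OBJECTIVES = 7

def deepest_phase(observations):
    """Return the furthest kill chain phase among observed events -
    a rough estimate of how far an intrusion has progressed."""
    return max(observations)

print(deepest_phase([KillChain.DELIVERY, KillChain.INSTALLATION]).name)
# INSTALLATION
```

Using an ordered enum means phase comparisons (earlier vs. later in the chain) come for free, which mirrors how analysts reason about how far along an attack is.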

Today's more innovative vendors target one or more of these links, with the goal of preventing, detecting, or mitigating the attack. Five vendors at RSA stood out in this category.

Ziften offers an agent-based approach to tracking the behavior of users, devices, applications, and network elements, both in real time and across historical data.

In real time, analysts use Ziften for threat identification and prevention, while they use the historical data to uncover steps in the kill chain for mitigation and forensic purposes.