Written By Dr Al Hartmann And Presented By Ziften CEO Charles Leaver


The traditional network perimeter is dissolving quickly. So what happens to the endpoint?

Investment in perimeter security, as defined by firewalls, managed gateways, and intrusion detection/prevention systems (IDS/IPS), is changing. These investments are being questioned, as the returns cannot overcome the cost and complexity of building, maintaining, and justifying these antiquated defenses.

More than that, the paradigm has changed – employees no longer work exclusively in the office. Many people log hours from home or while traveling, and neither location sits under the umbrella of a firewall. Instead of keeping cyber criminals out, firewalls often have the opposite effect – they keep authorized people from being productive. The paradox? They create a safe haven where attackers can breach, hide for months, and then traverse to critical systems.

So What Has Changed So Much?

The endpoint has become the last line of defense. With the aforementioned failure of perimeter defense and a “mobile everywhere” workforce, we must now enforce trust at the endpoint. Easier said than done, however.

In the endpoint space, identity and access management (IAM) systems are not the complete answer. Even innovative companies like Okta and OneLogin, and cloud proxy vendors such as Blue Coat and Zscaler, cannot overcome one simple truth: trust goes beyond basic identification, authentication, and authorization.

Encryption is a second attempt at safeguarding entire libraries and individual assets. In the most recent (2016) Ponemon study on data breaches, encryption saved only 10% of the cost per breached record (from $158 to $142). It isn't the cure-all that some make it out to be.

The Whole Picture Is Changing

Organizations need to be prepared for new paradigms and attack vectors. While they must still provide access to trusted groups and individuals, they have to solve this problem in a better way.

Critical business systems are now accessed from anywhere, at any time, not just from desks in corporate office buildings. And contractors (the contingent workforce) will soon make up over half of the total enterprise labor force.

On endpoint devices, the binary is usually the issue. Likely benign events, such as an executable crash, could indicate something simple, like the Windows 10 Desktop Window Manager (DWM) restarting. Or they could point to a deeper problem, such as a malicious file or the early indicators of an attack.

Trusted access doesn't fix this vulnerability. According to the Ponemon Institute, between 70% and 90% of all attacks are caused by human error, social engineering, or other human factors. That demands more than simple IAM; it demands behavioral analysis.

Rather than making good better, perimeter and identity access vendors have made bad faster.

When and Where Does the Good Part of the Story Start?

Taking a step back: Google (Alphabet) announced a perimeter-less network model, BeyondCorp, in late 2014 and has made considerable progress since. Other enterprises – from corporations to governments – have pursued this quietly and less radically, but Google has revealed its efforts to the world. The design approach is the key idea: endpoint plus (public) cloud displacing the cloistered enterprise network.

This changes the entire discussion about the endpoint – be it a laptop, PC, workstation, or server – as subservient to the corporate or private enterprise network. The endpoint really is the last line of defense; it must be protected, yet must also report its activity.

Unlike the standard perimeter security model, BeyondCorp doesn't gate access to tools and services based on a user's physical location or the originating network; instead, access policies are based on information about a device, its state, and its associated user. BeyondCorp considers both external networks and internal networks to be completely untrusted, and gates access to applications by dynamically asserting and enforcing levels, or “tiers,” of access.
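To make the tiered model concrete, here is a minimal sketch of how such an access decision might be expressed in code. The tier names, device attributes, and thresholds are our own illustrative assumptions, not Google's actual policy engine.

```python
# Illustrative sketch of a BeyondCorp-style access decision.
# Tier names, attributes, and thresholds are assumptions for illustration,
# not Google's actual policy engine.
from dataclasses import dataclass

@dataclass
class DeviceState:
    in_inventory: bool        # device is known to the device inventory
    cert_valid: bool          # holds a valid device certificate
    disk_encrypted: bool
    patch_age_days: int       # days since last OS/security patch

def access_tier(device: DeviceState, user_authenticated: bool) -> str:
    """Return the highest access tier this device/user pair qualifies for."""
    if not (user_authenticated and device.in_inventory and device.cert_valid):
        return "untrusted"            # no access to internal apps
    if device.disk_encrypted and device.patch_age_days <= 30:
        return "full"                 # sensitive apps allowed
    return "basic"                    # low-risk apps only

# Example: a known, certified laptop that is 45 days behind on patches
print(access_tier(DeviceState(True, True, True, 45), user_authenticated=True))  # -> "basic"
```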

On its face, the BeyondCorp model seems harmless. In reality it is a radical new model, and an imperfect one. Access requirements have shifted from network addresses to device trust levels, and the network is heavily segmented by VLANs, in place of a centralized model exposed to breaches, hacks, and human-level threats (the “soft chewy center”).

The bright side? Breaching the perimeter is very difficult for would-be attackers, and network pivoting is next to impossible once they are past the reverse proxy (a common technique used by attackers today – proof that firewalls do a better job of keeping the bad guys in than letting the good guys out). The inverted design applies even more to Google's cloud servers, presumably tightly managed inside the perimeter, versus client endpoints, which are just about everywhere.

Google has made some nice improvements to proven security techniques, notably 802.1X and RADIUS, and bundled them into the BeyondCorp architecture, including strong identity and access management (IAM).

Why is this important? What are the gaps?

Ziften believes in this approach because it emphasizes device trust over network trust. However, Google does not specifically describe a device security agent or stress any kind of client-side monitoring (apart from very rigorous configuration control). There may be reporting and forensics, but this is something every organization should be aware of, given that it's a question of when – not if – bad things will happen.

Google reports that since implementing the initial phases of its Device Inventory Service, it has ingested billions of deltas from over 15 data sources, at a typical rate of about three million per day, totaling over 80 terabytes. Keeping historical data is essential for understanding the end-to-end life cycle of a particular device, tracking and analyzing fleet-wide trends, and performing security audits and forensic investigations.

This is a costly and data-heavy process with two shortcomings. On ultra-high-speed networks (the kind used by Google, universities, and research organizations), there is enough bandwidth for this level of communication without flooding the pipes. The first problem is that in more pedestrian corporate and government settings, it would cause significant user disruption.

Second, computing devices must have the horsepower to continuously collect and send data. While most employees would be delighted to have current developer-class workstations at their disposal, the cost of the hardware and of refreshing it regularly makes this prohibitive.

A Lack of Lateral Visibility

Very few systems actually generate ‘enhanced’ NetFlow, augmenting traditional network visibility with rich, contextual data.

Ziften's patented ZFlow™ provides network flow detail generated from the endpoint, which is otherwise achieved only through brute force (human labor) or expensive network appliances.

ZFlow serves as a “connective tissue” of sorts, extending and completing the end-to-end network visibility cycle and adding context to on-network, off-network, and cloud servers and endpoints, so security teams can make faster, better informed, and more accurate decisions. In essence, purchasing Ziften services yields labor savings, plus improvements in speed-to-discovery and time-to-remediation, because technology stands in for human resources.
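To illustrate what “flow plus endpoint context” looks like in practice, the sketch below pairs a conventional 5-tuple flow record with process- and user-level attribution from the endpoint. The field names are hypothetical and are not ZFlow's actual schema.

```python
# Hypothetical enriched flow record: a standard NetFlow/IPFIX-style 5-tuple
# plus endpoint context (process, user, binary hash). Field names are
# illustrative only and do not reflect ZFlow's actual schema.
from dataclasses import dataclass

@dataclass
class FlowRecord:                      # what a network appliance can see
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    protocol: str
    bytes_sent: int

@dataclass
class EnrichedFlowRecord(FlowRecord):  # what endpoint attribution adds
    hostname: str
    process_name: str
    process_sha256: str
    username: str

rec = EnrichedFlowRecord("10.1.2.3", "203.0.113.7", 49152, 443, "tcp", 18_400,
                         hostname="pos-17", process_name="updater.exe",
                         process_sha256="ab12...", username="SYSTEM")
print(f"{rec.process_name} ({rec.username}) -> {rec.dst_ip}:{rec.dst_port}")
```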

For companies migrating to the cloud (as 56% plan to do by 2021, according to IDG Enterprise's 2015 cloud survey), Ziften provides unrivaled visibility into cloud servers to better monitor and secure the overall infrastructure.

In Google's environment, only corporate-owned devices (COPE) are permitted, crowding out bring your own device (BYOD). This works for a company like Google that can hand out new devices to all staff: phone, tablet, laptop, and so on. Part of the reason is that identity is vested in the device itself, plus user authentication as usual. The device must meet Google requirements, carrying either a TPM or a software equivalent, to hold the X.509 certificate used to verify device identity and to support device-specific traffic encryption. There must also be one or more agents on each endpoint to validate the device assertions called out in the access policy, which is where Ziften would need to partner with the systems management agent vendor, since agent cooperation is likely essential to the process.


In summary, Google has developed a first-rate solution, but its applicability and practicality are limited to companies like Alphabet.

Ziften offers the same level of operational visibility and security protection to the masses, using a lightweight agent, metadata and network flow monitoring (from the endpoint), and a best-in-class console. For companies with specialized needs or incumbent tools, Ziften provides both an open REST API and an extension framework (to enrich data ingestion and trigger response actions).

This brings the advantages of the BeyondCorp model to the masses, while conserving network bandwidth and endpoint computing resources. Because organizations will be slow to move entirely away from the enterprise network, Ziften partners with firewall and SIEM vendors.

Finally, the security landscape is gradually shifting toward managed detection and response (MDR). Managed security service providers (MSSPs) offer traditional monitoring and management of firewalls, gateways, and perimeter intrusion detection, but this is inadequate; they lack both the skills and the technology.

Ziften's system has been tested, integrated, approved, and deployed by a number of the emerging MDRs, highlighting the capability and versatility of the Ziften platform to play an essential role in remediation and incident response.

Written By Dr Al Hartmann And Presented By Ziften CEO Charles Leaver

Ransomware customized for enterprise attack campaigns has emerged in the wild. This is an obvious evolution of consumer-grade ransomware, driven by the larger bounties that enterprises are able to pay, coupled with the sheer scale of the attack surface (internet-facing endpoints and unpatched software). To the hacker, your business is a tempting target with a big fat wallet just asking to be upended.

Your Organization is an Attractive Target

Simple Google queries may already have identified unpatched internet-facing servers by the score across your domain, or your credulous users may already be opening “spear phishing” emails crafted just for them, apparently authored by people they know.

Weaponized invoices go to your accounting department, weaponized legal notices go to your legal department, weaponized resumes go to your human resources department, and weaponized trade articles go to your public relations firm. That should cover it, for starters. Add the watering-hole drive-bys planted on industry sites your employees frequent, the social media attacks targeting your key executives and their families, the infected USB sticks strewn around your facilities, and the compromises of your suppliers, customers, and business partners.

Enterprise compromise isn't an “if” but a “when” – the when is continual, the who is legion.

Targeted Ransomware Has Arrived

Malware analysts are now reporting on enterprise-targeted ransomware, a natural evolution in the monetization of enterprise cyber intrusions. Christiaan Beek and Andrew Furtak describe it in an excerpt from Intel Security Advanced Threat Research, February 2016:

“During the past few weeks, we have received information about a new campaign of targeted ransomware attacks. Instead of the normal modus operandi (phishing attacks or drive-by downloads that lead to automatic execution of ransomware), the attackers gained persistent access to the victim's network through vulnerability exploitation and spread their access to any connected systems that they could. On each system, several tools were used to find, encrypt, and delete the original files as well as any backups.”

Careful reading of this quote immediately suggests steps to take. Initial penetration was by “vulnerability exploitation,” as is often the case, so a sound vulnerability management program with tracked and enforced exposure tolerances (measured in days) is mandatory. Since the attackers “spread their access to any connected systems,” robust network segmentation and access controls are also required; think of them as the watertight compartments of a warship that keep it from sinking when the hull is breached. Of special note, the attackers “delete the original files as well as any backups,” so a compromised system should have no delete access to its backup files – systems should only be able to append to their backups.
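Here is a minimal sketch of that append-only principle, assuming a hypothetical backup store interface: the credential used by production systems can add new backup objects but has no delete or overwrite capability, so ransomware on a compromised host cannot destroy prior backup generations.

```python
# Minimal sketch of an append-only backup interface. Hypothetical API:
# the credential used by production hosts can add new backup objects,
# but has no delete or overwrite capability.
import hashlib, time

class AppendOnlyBackupStore:
    def __init__(self):
        self._objects = {}          # key -> immutable bytes

    def append(self, data: bytes) -> str:
        """Store a new backup object under a content-addressed, timestamped key."""
        key = f"{int(time.time())}-{hashlib.sha256(data).hexdigest()[:16]}"
        if key in self._objects:
            raise PermissionError("objects are immutable once written")
        self._objects[key] = bytes(data)
        return key

    def list_keys(self):
        return sorted(self._objects)

    # Deliberately no delete() or overwrite() exposed to backup clients;
    # retention/pruning would run under a separate, isolated credential.

store = AppendOnlyBackupStore()
store.append(b"nightly dump 2016-02-01")
store.append(b"nightly dump 2016-02-02")
print(store.list_keys())
```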

Your Backups Are Current, Aren't They?

Obviously, there must be current backups of any files that are to survive an enterprise intrusion. Paying the ransom is not an effective option, since any files produced by malware are inherently suspect and must be considered tainted. Enterprise auditors or regulators cannot accept files excreted from some malware orifice as legally valid, the chain of custody having been thoroughly broken. Financial data may have been altered with fraudulent transactions, configuration data may have been tampered with, infections may have been planted for later re-entry, or the malware's file handling may simply have had errors or omissions. There is no way to place any confidence in such data, and accepting it as legitimate may further compromise all future downstream data dependent upon or derived from it. Treat ransomed data as garbage. Either have a robust backup strategy – regularly tested and verified – or prepare to suffer your losses.

What Is Your Preparation for a Breach?

Even with sound backups, the confidentiality of the affected data must be assumed breached, since it was read by malware. Even with comprehensive network logs, it would be unwise to claim that no data had been exfiltrated. In a targeted attack the adversaries generally take stock of the data, assessing at least samples of it to gauge its potential value – otherwise they could be leaving money on the table. The ransom demand may simply be the final monetization stage of an enterprise breach, after all other value has been mined from the intrusion, since the demand itself exposes the compromise.

Have a Thorough Remediation Plan

One must assume that competent attackers have arranged multiple, cunningly concealed avenues of re-entry at staggered time points (well after your crisis team has stood down and the pricey consultants have flown off to their next gig). Any stray evidence left behind was carefully staged to mislead investigators and deflect blame. Costly re-imaging of systems must be exceedingly thorough, touching every sector of the disk across its entire recording surface and re-creating master boot records (MBRs) and volume boot records from scratch; some ransomware is known to compromise MBRs.

Likewise, do not assume the system firmware has not been compromised. If you can update the firmware, so can the hackers. It isn't hard for hacking organizations to explore firmware attack options when their enterprise targets standardize on system hardware configurations, allowing a little lab effort to go a long way. The industrialization of cybercrime enables the development and sale of firmware hacks on the dark web to a wider criminal market.

Help Is on Offer with Good EDR Tools

After all of this bad news, there is an answer. When it comes to targeted ransomware attacks, proactive preparation is far less painful than reactive clean-up, and a good Endpoint Detection and Response (EDR) tool helps on both ends. EDR tools are good at identifying exposed vulnerabilities and active applications; some applications have such a notorious history of vulnerabilities that they are best removed from the environment entirely (Adobe Flash, for instance). EDR tools are also good at tracking all significant endpoint events, so that investigators can identify “patient zero” and trace the pivot activity of targeted, enterprise-spreading ransomware. Attackers depend on endpoint opacity to hide their actions from security staff, but EDR enables open visibility of the significant endpoint events that can indicate an attack in progress. And EDR isn't limited to the old antivirus convict-or-acquit model, which lets newly remixed attack code evade AV detection.
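As a simple illustration of the “remove notoriously vulnerable applications” point, the sketch below checks a software inventory against a blocklist. The blocklist contents and inventory format are illustrative assumptions, not a vendor-supplied list.

```python
# Illustrative sketch: flag installed applications with a notorious
# vulnerability history. The blocklist and inventory format are assumptions,
# not an official Ziften or vendor-supplied list.
BLOCKLIST = {"adobe flash player", "java 6", "quicktime"}

def flag_risky_apps(installed_apps):
    """Return the subset of installed applications that match the blocklist."""
    return [app for app in installed_apps
            if any(bad in app.lower() for bad in BLOCKLIST)]

inventory = ["Microsoft Office 2013", "Adobe Flash Player 20.0",
             "Google Chrome 48", "QuickTime 7.7"]
for app in flag_risky_apps(inventory):
    print(f"REMOVE OR UPDATE: {app}")
```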

Good EDR tools are always alert, always reporting, always tracking, and available when you need them: now or retroactively. You would not turn a blind eye to enterprise network activity, so do not turn a blind eye to enterprise endpoint activity.

Written By Josh Linder And Presented By Ziften CEO Charles Leaver

The market for enterprise behavioral analytics is evolving – again – to support the security use case. In the current Gartner User and Entity Behavior Analytics (UEBA) trends report, Ziften is delighted to be listed as a “Vendor to Watch.” We believe our established relationships with threat intelligence feeds and visualization tools explain our inclusion in this research note.

In the UEBA market report, analysts Eric Ahlm and Avivah Litan describe a prospective convergence of the advanced threat and analytics markets. The notion of UEBA – which extends user behavioral analytics to include entities, business processes, and autonomous devices such as the Internet of Things – requires deep understanding and the ability to react quickly and effectively.

Our platform provides threat detection across multiple behavior vectors, rather than relying on a single-threaded signature feed. With integrations to orchestration and response systems, Ziften uniquely couples signature-based and behavioral analysis, bridging the gap from protecting the endpoint to securing the entity. Continuous monitoring from the endpoint – including network flow – is crucial to understanding the complete threat landscape and vital to a holistic security architecture.

We commend Gartner for identifying four areas for security and analytics vendors to focus on: user behavior, host/application behavior, network behavior, and external communications behavior. We are the only endpoint vendor – today – to monitor both network behavior and external communications behavior. Ziften's ZFlow™ uses network telemetry to go beyond standard IPFIX flow data, augmenting it with Layer 4 and Layer 5 operating system and user behavior. Our threat intelligence integrations – with Blue Coat, iSIGHT Partners, AlienVault, and the National Vulnerability Database – are second to none. In addition, our unique relationship with ReversingLabs provides binary analysis directly within the Ziften administration console.

Ultimately, our continuous endpoint visibility system is pivotal in detecting behavioral threats that are hard to correlate without advanced analytics.

Gartner Report

Six additional technology trend takeaways that Gartner readers should consider:

– Application of Analytics to Breach Detection Varies
– Data Science for Analytics Technologies Is Still Up and Coming
– The Need for Extended Telemetry Drives Analytics Market Convergence
– Convergence Between Analytics-Based Detection Vendors and Orchestration/Response Vendors Is Likely
– SIEM Technologies Are Positioned to Be Central to Consolidation of Analytics Detection
– Advanced Behavioral Analytics Vendors Are Extending Their Reach to Security Buyers


Gartner does not endorse any vendor, product, or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner's research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

Written By Michael Bunyard And Presented By Ziften CEO Charles Leaver

The truth of modern life is that if cyber attackers want to breach your network, it is only a matter of time before they succeed. The endpoint is the most common vector of attack, and people are the biggest point of vulnerability in any organization. The endpoint device is where people interact with whatever information an attacker seeks: intellectual property, credentials, cyber ransom, and so on. New Next Generation Endpoint Security (NGES) systems, where Ziften is a leader, provide the visibility and insight needed to reduce the likelihood or duration of an attack. Prevention methodologies include shrinking the attack surface by removing known vulnerable applications, curbing version proliferation, killing malicious processes, and ensuring compliance with security policies.

But prevention can only go so far. No solution is 100% effective, so it is important to take a proactive, real-time approach to your environment: watching endpoint behavior, detecting when breaches have occurred, and responding immediately with the necessary action. Ziften also offers these capabilities, typically referred to as Endpoint Detection and Response, and organizations should shift their mindset from “How can we prevent attacks?” to “We are going to be breached, so what do we do then?”

To understand the true breadth and depth of an attack, organizations need the ability to look back and reconstruct the conditions surrounding a breach. Security investigators need answers to the following six questions, and they need them fast, because incident response personnel are outnumbered and working within limited time windows to mitigate damage.

Where was the cyber attack activity initially seen?

This is where the ability to look back to the point of initial infection is crucial. To do this effectively, organizations have to be able to go as far back in history as necessary to identify patient zero. The unfortunate state of affairs, according to Gartner, is that when a breach occurs, the typical dwell time before it is detected is a stunning 205 days. According to the 2015 Verizon Data Breach Investigations Report (DBIR), in 60% of cases attackers were able to penetrate companies within minutes. That is why NGES systems that do not continuously monitor and record activity, but instead periodically poll or scan the endpoint, can miss the critical initial penetration. The DBIR also found that 95% of malware types showed up for less than four weeks, and four out of five didn't last seven days. You need the ability to continuously monitor endpoint activity, look back in time (however long ago the attack occurred), and reconstruct the initial infection.
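A toy sketch makes the polling gap obvious: a process that runs for one second every five minutes is essentially invisible to a scanner that samples the endpoint every ten minutes, while a continuous event recorder captures every execution. The intervals below are illustrative.

```python
# Toy illustration: short-lived malware activity vs. a periodic scanner.
# Intervals are illustrative. A continuous recorder logs every process start;
# a poller only sees what is running at the instant it samples.
MALWARE_PERIOD = 300      # process starts every 5 minutes...
MALWARE_LIFETIME = 1      # ...and exits after 1 second
POLL_INTERVAL = 600       # scanner samples every 10 minutes
DAY = 24 * 3600

starts = list(range(17, DAY, MALWARE_PERIOD))   # first run at t=17s, then every 5 min
polls = range(0, DAY, POLL_INTERVAL)
seen_by_poller = [t for t in polls
                  if any(s <= t < s + MALWARE_LIFETIME for s in starts)]

print(f"continuous monitoring recorded {len(starts)} executions")
print(f"periodic polling observed {len(seen_by_poller)} executions")
```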

How did it behave?

What happened, step by step, after the initial infection? Did the malware execute for a second every five minutes? Was it able to obtain escalated privileges? A continuous picture of what happened behaviorally at the endpoint is essential to get an investigation started.

How and where did the attack spread after initial compromise?

Usually the attacker isn't after the information available at the point of infection, but instead uses it as an initial beachhead to pivot through the network and find a path to the valuable data. Endpoints include the servers that the endpoints connect to, so it is essential to see a complete picture of any lateral movement that happened after the infiltration, in order to know exactly which assets were compromised and possibly also infected.

How did the infected endpoint(s) change behavior?

What was going on before and after the infection? What network connections were being attempted? How much network traffic was flowing? What processes were active before and after the attack? Immediate answers to these questions are vital for fast triage.

What user activity took place, and was there any potential insider involvement?

What actions did the user take before and after the infection occurred? Was the user present at the computer? Was a USB drive used? Was the time of day outside their normal usage pattern? These and many more artifacts need to be available to paint a full picture.

What mitigation is needed to resolve the attack and prevent another one?

Reimaging the infected machine(s) is a time-consuming and costly remedy, but sometimes it is the only way to know for sure that all malicious artifacts have been removed (although state-sponsored attacks may embed into system or drive firmware and survive even reimaging). With a clear picture of all activity that occurred, however, lesser actions such as removing malicious files from all affected systems may suffice. Re-examining security policies will probably be necessary, and NGES solutions can help automate future responses should similar scenarios occur. Automatable actions include sandboxing, cutting off network access from infected devices, killing processes, and much more.
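As an illustration of automating those responses, the sketch below maps alert categories to response actions through a simple playbook. The action functions are hypothetical stand-ins for whatever EDR or NGES API an organization actually has, not a specific product's interface.

```python
# Illustrative response playbook: map an alert category to automated actions.
# The action functions are hypothetical stand-ins for a real EDR/NGES API.
def isolate_host(host):        print(f"[action] cutting network access for {host}")
def kill_process(host, pid):   print(f"[action] killing pid {pid} on {host}")
def submit_to_sandbox(sha256): print(f"[action] sandboxing binary {sha256}")

PLAYBOOK = {
    "malicious_binary": ["kill_process", "submit_to_sandbox", "isolate_host"],
    "c2_traffic":       ["isolate_host"],
}

def respond(alert: dict):
    """Run the configured actions for this alert's category."""
    for action in PLAYBOOK.get(alert["category"], []):
        if action == "kill_process":
            kill_process(alert["host"], alert["pid"])
        elif action == "submit_to_sandbox":
            submit_to_sandbox(alert["sha256"])
        elif action == "isolate_host":
            isolate_host(alert["host"])

respond({"category": "malicious_binary", "host": "hr-laptop-042",
         "pid": 4172, "sha256": "de9f..."})
```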

Don't wait until after a cyber attack happens and you have to hire an army of consultants and spend valuable time and money piecing the facts together. Make sure you are prepared to answer these six crucial questions and have all the answers at your fingertips within minutes.

Written By Michael Steward And Presented By Ziften CEO Charles Leaver


Internal Revenue Service Attackers Make Early Returns Due to Previous External Attacks


The IRS breach was the most distinctive cyber attack of 2015. Classic attacks today involve phishing emails intended to gain initial access to target systems, followed by lateral movement until data exfiltration takes place. But the IRS hack was different – much of the data needed to carry it out had been obtained previously. In this case, all the hackers had to do was walk in the front door and file the returns. How could this happen? Here's what we know:

The IRS website has a “Get Transcript” feature that lets users retrieve prior tax return information. As long as the requester can provide the right information, the system will return previous and current W-2s, old income tax returns, and so on. With anyone's SSN, date of birth, and filing status, the attackers could begin retrieving past filing years' information. The system also had a Knowledge-Based Authentication (KBA) step, which asked questions based on the requested user's credit history.

KBA isn't foolproof, though. The questions it asks can often be guessed based on other information already known about the user. The system asks questions such as “Which of the following streets have you lived on?” or “Which of the following cars have you owned?”

After the dust settled, it was estimated that the hackers attempted to pull 660,000 transcripts of prior taxpayer information via Get Transcript and succeeded in 334,000 of those attempts. The unsuccessful attempts appear to have gotten as far as the KBA questions, where the hackers failed to supply the correct answers. It's estimated that the hackers got away with over $50 million. So how did they do it?

Security researchers believe the hackers used information from previous attacks – SSNs, dates of birth, addresses, and filing statuses – to try to obtain prior income tax return information on their target victims. If they succeeded and answered the KBA questions correctly, they filed a return for the 2015 calendar year, often inflating the withholdings amount on the form to get a bigger refund. As discussed previously, not all attempts succeeded, but over 50% of them did, leading to major losses for the IRS.

Detection and response systems like Ziften's are aimed at identifying compromised endpoints (for example, via phishing attacks). We do this by providing real-time visibility into Indicators of Compromise (IoCs). If the theories are correct and the attackers used details obtained from previous attacks outside the IRS, the compromised companies could have benefited from the visibility Ziften provides and mitigated the mass data exfiltration. Ultimately, the IRS appears to be the vehicle, rather than the initial victim, of these cyber attacks.

Written By Michael Pawloski And Presented By Ziften CEO Charles Leaver

Comcast Customers Are Victims of Data Exfiltration and Shared Hacks Via Other Companies

The personal information of roughly 200,000 Comcast customers was compromised, as announced on November 5th, 2015. Comcast was forced to make the announcement when it came to light that a list of 590,000 Comcast customer emails and passwords could be purchased on the dark web for a token $1,000. Comcast maintains that its own network was not breached, and that the data came from past, shared hacks of other businesses. Comcast further claims that only 200,000 of the 590,000 accounts still exist in its system.

Less than two months earlier, Comcast had already been slapped with a $22 million fine over its inadvertent publication of nearly 75,000 customers' personal information. Somewhat ironically, these customers had specifically paid Comcast for “unlisted voice-over-IP,” a line item on the Comcast bill stipulating that each customer's information would be kept private.

Comcast instituted a mass reset of the 200,000 customer passwords, since attackers may have accessed these accounts before the list was put up for sale. While a simple password reset will to some extent safeguard these accounts going forward, it does nothing to protect customers who may have reused the same email and password combination for banking and credit card logins. If the customer accounts were accessed before the sale came to light, it is entirely possible that other personal information – such as automated payment details and home addresses – had already been obtained.

The conclusion: assuming Comcast wasn't attacked directly, it was the victim of various other hacks that exposed data associated with its customers. Detection and response solutions like Ziften's can prevent mass data exfiltration and often reduce the damage done when these inevitable attacks happen.

Written By Matthew Fullard And Presented By Ziften CEO Charles Leaver

Trump Hotels Point-of-Sale Vulnerability Highlights the Need for Faster Detection of Anomalous Activity

Trump Hotels suffered a data breach between May 19th, 2014 and June 2nd, 2015. The point of infection was malware, which infected front desk computers, POS systems, and restaurant systems. Nevertheless, in their own words they claim they “did not discover any evidence that any customer information was stolen from our systems.” While it's reassuring that no evidence was found, if malware is present on point-of-sale systems it is probably there to steal information from the payment cards being swiped, or increasingly tapped, inserted, or waved. A lack of evidence does not mean the absence of a crime, and to Trump Hotels' credit, they have offered free credit monitoring services.

If you examine a point-of-sale (POS) system as an administrator, however, you'll notice one thing in abundance: these systems rarely change, and the software is nearly uniform across the deployment environment. This presents both positives and negatives when securing such an environment. Software changes are slow to happen, require rigorous testing, and are hard to roll out.

However, because such an environment is so homogeneous, it is also much easier to notice when something new has changed.

At Ziften we monitor every executing binary and network connection that occurs within an environment, the second it happens. If a single point-of-sale system started making new network connections, or began running new software, whatever its intent, it would be flagged for further review and evaluation. Ziften also collects unlimited historical data from your environment: if you want to know exactly what happened six to twelve months ago, that is not a problem. Dwell times and antivirus detection rates can now be measured using our integrated threat feeds, along with our binary collection and submission technology. We'll also tell you which users launched which applications at what time throughout this historical record, so you can find your initial point of infection.
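A minimal sketch shows why a homogeneous POS fleet is easy to watch: record a baseline of observed binaries and remote destinations, then flag anything outside it. The data shapes below are illustrative, not Ziften's actual telemetry format.

```python
# Minimal sketch of baseline-and-diff monitoring for a homogeneous POS fleet.
# Data shapes are illustrative, not Ziften's actual telemetry format.
BASELINE_BINARIES = {"pos_register.exe", "payment_bridge.exe", "svchost.exe"}
BASELINE_DESTINATIONS = {"payments.example-processor.com"}

def review_events(events):
    """Yield alerts for any binary or destination not seen in the baseline."""
    for ev in events:
        if ev["binary"] not in BASELINE_BINARIES:
            yield f"{ev['host']}: new binary executed: {ev['binary']}"
        if ev["dest"] and ev["dest"] not in BASELINE_DESTINATIONS:
            yield f"{ev['host']}: new outbound connection to {ev['dest']}"

events = [
    {"host": "pos-03", "binary": "pos_register.exe",
     "dest": "payments.example-processor.com"},
    {"host": "pos-07", "binary": "winupdt32.exe", "dest": "198.51.100.22"},
]
for alert in review_events(events):
    print("ALERT:", alert)
```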

Point-of-sale issues continue to afflict the retail and hospitality industries, which is a pity given how straightforward such environments are to monitor with detection and response.

Written By Andy Wilson And Presented By Ziften CEO Charles Leaver

US retail outlets still appear to be an appealing target for cyber criminals seeking credit card data, as Marriott franchisee White Lodging Services Corp confirmed a data breach in the spring of 2015 affecting customers at 14 hotels across the country from September 2014 to January 2015. This breach came after White Lodging suffered a similar cyber attack in 2014. The attackers in both cases were reportedly able to compromise the point-of-sale systems of the Marriott lounges and restaurants at numerous locations run by White Lodging. They were able to obtain the names printed on customers' credit or debit cards, the card numbers, security codes, and expiration dates. POS systems were also the target of recent breaches at Target, Neiman Marcus, Home Depot, and others.

Traditionally, point-of-sale (POS) systems at many United States retail outlets were “locked down” Windows computers running a small set of applications geared toward their function: ringing up the sale and processing the transaction with the credit card merchant or bank. Modern POS terminals are essentially PCs that run email clients, web browsers, and remote desktop tools in addition to their transaction software. To be fair, they are usually deployed behind a firewall, but they are still ripe for exploitation. The best defenses can and will be breached if the target is valuable enough; for example, the remote control tools used to manage and update POS systems are frequently hijacked by hackers for their own ends.

The credit card or payment processing network is a completely separate, air-gapped, encrypted network. So how did hackers manage to steal the credit card data? They stole it while it sat in memory on the POS terminal during payment processing. Even if retailers don't store credit card details, the data can exist in an unencrypted state on the POS machine while the payment transaction is authorized. Memory-scraping POS malware such as PoSeidon, FindPOS, FighterPOS, and PunKey is used by data thieves to harvest the card data in its unencrypted state. The stolen data is then typically encrypted and staged for retrieval by the hackers, or sent out to the Internet where the thieves collect it.

Ziften's solution provides continuous endpoint visibility that can detect and remediate these kinds of threats. Ziften's MD5 hash analysis can detect new and suspicious processes or .dll files running in the POS environment. Ziften can also kill the process and collect the binary for further action or analysis. It is likewise possible to detect POS malware by alerting on command-and-control traffic: Ziften's integrated threat intel and custom threat feed options let customers alert when POS malware talks to C&C nodes. Finally, Ziften's historical data lets customers begin the forensic investigation of how the malware got in, what it did after it was installed and executed, and which other devices are infected.
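To illustrate the hash-analysis step, the sketch below computes the MD5 of an observed binary and checks it against known-good and known-bad sets. The hashes and file path are placeholders; in a real deployment these would come from a software inventory and integrated threat intelligence feeds.

```python
# Illustrative MD5 hash check for binaries observed in a POS environment.
# The known-good and known-bad sets are placeholders; in practice they would
# come from a software inventory and integrated threat intelligence feeds.
import hashlib
from pathlib import Path

KNOWN_GOOD = {"9e107d9d372bb6826bd81d3542a419d6"}   # placeholder hashes
KNOWN_BAD  = {"e2fc714c4727ee9395f324cd2e7f331f"}   # placeholder malware hashes

def md5_of(path: Path) -> str:
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def classify(path: Path) -> str:
    digest = md5_of(path)
    if digest in KNOWN_BAD:
        return "malicious"      # kill process, collect binary for analysis
    if digest in KNOWN_GOOD:
        return "known-good"
    return "unknown"            # flag for review / submit for binary analysis

# Example (placeholder path): classify(Path("C:/pos/updater.exe"))
```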

It's past time for merchants to step up their game and look for new solutions to protect their customers' payment cards.

Written By Josh Applebaum And Presented By Ziften CEO Charles Leaver

Experian Must Learn from Past Mistakes and Implement a Continuous Monitoring Solution

Working in the security industry, I've always felt my job was hard to explain to the average person. Over the last few years, that has changed. Sadly, we see a new data breach revealed every few weeks, with many more kept quiet. These breaches are getting front-page attention, and I can now explain to my friends what I do without losing them after a couple of sentences. However, I still wonder what we're learning from all of this. As it turns out, many businesses are not learning from their own mistakes.

Experian, the international credit reporting agency, is a company with a lot to learn. Several months ago, Experian revealed it had discovered that its servers had been breached and that consumer data had been taken. In announcing the breach, Experian assured customers that “our consumer credit database was not accessed in this event, and no credit card or banking information was obtained.” Although Experian took the time in its announcement to reassure customers that their financial information had not been taken, it went on to detail what data actually was stolen: customers' names, addresses, Social Security numbers, dates of birth, driver's license numbers, military ID numbers, passport numbers, and additional information used in T-Mobile's own credit assessment. This is frightening for two reasons: first, the kind of data that was stolen; second, the fact that this isn't the first time this has happened to Experian.

Although the cyber criminals didn't leave with “payment card or banking information,” they did leave with personal data that could be exploited to open new credit card, banking, and other financial accounts. That alone is reason for the affected T-Mobile customers to be concerned. But all Experian customers should be a little worried.

As it turns out, this isn't the first time Experian's servers have been compromised by hackers. In early 2014, T-Mobile announced that a “relatively small” number of its customers had their personal details stolen when Experian's servers were breached. Brian Krebs has a well-written post about how the hackers breached the Experian servers the first time, so we won't go into too much detail here. In that first breach, hackers exploited a vulnerability in the company's support ticket system, which was left exposed without requiring a user to authenticate before using it. Now to the frightening part: although it became widely known that the attackers used a vulnerability in the support ticket system to gain access, it wasn't until shortly after the second hack that the support ticket system was shut down.

It is hard to believe it was a coincidence that Experian chose to shut down its support ticket system mere weeks after announcing it had been breached. If it wasn't a coincidence, then let's ask: what did Experian learn from the first breach, in which criminals got away with sensitive consumer data? Companies that store their customers' sensitive information must be held accountable not only for protecting that data, but also for ensuring that, if breached, they patch the holes discovered while investigating the attack.

When businesses investigate a breach (or potential breach), it is essential that they have access to historical data so investigators can piece back together the puzzle of how the attack unfolded. At Ziften, we offer a solution that gives our customers a constant, real-time view of everything happening in their environment. In addition to providing real-time visibility for detecting attacks as they occur, our continuous monitoring system records all historical data, allowing customers to “rewind the tape” and reconstruct what happened in their environment, no matter how far back they have to look. With this visibility, it is possible not only to learn that a breach took place, but also to learn why it happened, and hopefully to learn from past mistakes and keep them from happening again.

Written By Craig Hand And Presented By Ziften CEO Charles Leaver

UCLA Health Data Breach Likely Due to Poor Security

UCLA Health announced on July 17th, 2015 that it was the victim of a health data breach affecting as many as 4.5 million patients of the four hospitals it operates in the Southern California region. According to UCLA Health officials, Personally Identifiable Information (PII) and Protected Health Information (PHI) was accessed, but no evidence yet indicates that the data was taken. The data went as far back as 1990. Officials also stated that there was no evidence, at this time, that any credit card or financial data was accessed.

“At this time” is key here. The information accessed (or potentially taken – it's certainly hard to know at this point) is good for practically the life of the individual, and possibly still useful after that individual's death. The information available to the criminals included names, addresses, phone numbers, Social Security numbers, medical conditions, medications prescribed, medical procedures performed, and test results.

Little is known about this cyber attack, like so many others we hear about but never get any real information on. UCLA Health discovered unusual activity in parts of its network in October 2014 (although access may have started a month earlier) and immediately called the FBI. By May 2015 – a full seven months later – investigators determined that a data breach had occurred. Once again, officials claim the attackers are probably highly sophisticated and based outside the USA. Finally, we the public got to learn about the breach a full two months after that, on July 17th, 2015.

It has been said many times before that we as security professionals need to be right 100% of the time, while the cyber criminals only need to find the 1% we may not have been able to address. Based on our research into the breach, the bottom line is that UCLA Health had poor security practices. One indicator is the simple fact that the accessed data was not encrypted. We have had HIPAA for some time now, and UCLA is a well-regarded bastion of higher education, yet it still failed to safeguard data in the most basic ways. The claim that these were highly sophisticated attackers is also suspect, as so far no real evidence has been disclosed. After all, when was the last time a breached organization claimed it wasn't the victim of a “sophisticated” cyber attack? Even if they claim to have such evidence, we as members of the public won't get to see and vet it properly.

Because there isn't enough disclosed information about the breach, it's hard to determine whether any solution would have helped detect it sooner rather than later. However, if the breach began with malware delivered to and executed by a UCLA Health network user, the probability that Ziften could have helped find the malware, and potentially stop it, would have been reasonably high. Ziften could also have alerted on suspicious, unknown, or known malware, as well as on any communications the malware made in order to spread internally or exfiltrate data to an external host.

When are we going to learn? As we all know, it's not a matter of if, but when, companies will be attacked. Smart companies are preparing for the inevitable with detection and response solutions that mitigate the damage.