Windows Server 2019 RDSH is a go


Originally Seen: April 17, 2018

UPDATE: Microsoft on April 24 released the next preview build of Windows Server 2019, which includes RDSH. “Because of a bug, the RDSH role was missing in previous releases of Windows Server 2019 – this build fixes that,” the company said in a blog post announcing Build 17650.

Remote Desktop Session Host is coming to the Windows Server 2019 preview and official release, Microsoft has confirmed.

The Remote Desktop Session Host (RDSH) role was not available in the first preview build of Windows Server 2019 that Microsoft released to the Insiders Program in March. At that time, experts said they did not expect the company to include RDSH when the operating system becomes generally available later this year.

In a statement to SearchVirtualDesktop this week, however, a company spokesperson said: “The RDSH role will be in the preview build available for Insiders soon. Windows Server 2019 will have the [Remote Desktop Services] roles like in Windows Server 2016.”

Mixed messages on Windows Server 2019 RDSH

Until now, the messaging from Microsoft around RDSH in Windows Server 2019 had caused confusion and frustration among some in the IT community. The company declined to officially comment on the future of RDSH in March, although some members of the Windows Server team posted on Twitter about the issue.

Jeff Woolsey, principal program manager for Windows Server, said in March that Remote Desktop Services (RDS) — the set of technologies that provide remote desktop and application access — was “not gone.” Last week, he reiterated that statement, and Scott Manchester, Microsoft group manager for RDS, said RDSH would be coming to the Windows Server 2019 preview in about two weeks.

IT administrators and industry observers wondered why Microsoft had not clarified earlier that Windows Server 2019 would indeed have the RDSH role.

“Microsoft was disconcertingly quiet about the feature omission,” said Jeff Wilhelm, CTO at Envision Technology Advisors, a solutions provider in Pawtucket, R.I. “There was much speculation.”

One possibility is that the code for the RDSH role simply wasn’t ready, and instead of releasing something incomplete or buggy in the preview, Microsoft removed it altogether.

Other speculation focused on a potential new multi-user Windows 10 feature. Microsoft has not commented on that, but it may continue to be a possibility for session-hosted desktops without RDSH.

The news that RDSH will be in the next Insider build should mean “a sigh of relief” for service providers and IT admins, Wilhelm said in an email.

“RDSH provides an important feature to users at many organizations, and the announced improvements, including HTML5 support, are a welcome addition,” he said.


Protecting safety instrumented systems from malware attacks


Originally seen: February 2018

Trisis malware targets safety instrumented systems and puts industrial control systems at risk. Expert Ernie Hayden reviews what to know about SIS and its security measures.

A newly discovered attack on industrial control systems has the security world uncovering more questions than answers.

The 2018 S4 Conference even included presentations and multiple side conversations about the attack called Trisis/Triton/HatMan.

The first public awareness of this attack came after cybersecurity company FireEye published a blog post about it in mid-December 2017. The company’s moniker for this malware was Triton. Close on the heels of the FireEye announcement, Dragos CEO Robert Lee published a white paper analyzing the malware that he called Trisis because it targeted Schneider Electric’s Triconex Safety Instrumented Systems.

On Dec. 18, 2017, the U.S. Department of Homeland Security’s National Cybersecurity and Communications Integration Center (NCCIC) published its industrial control systems (ICS) CERT malware analysis report, MAR-17-352-01, “HatMan — Safety System Targeted Malware,” which included its summary of the Triton/Trisis/HatMan malicious code.

Almost daily, new commentaries and analyses of Triton/Trisis/HatMan are published. It is obvious that the attack has raised more questions than answers, including: Who orchestrated the attack? Why did they develop this code? What was the attack’s purpose? Are there more malware attacks to come?

One thing is certain, according to Lee, “Trisis is the first ever [attack] to target safety instrumented systems, and it is the one that gives me the most concern.”

What is a safety instrumented system?

A simple, but not perfect, way to think about safety instrumented systems (SIS) is to consider them part of a dead man’s switch configuration.

Dead man’s switch (DMS) mechanisms are used in a variety of operating environments, such as locomotives, lawn mowers, chainsaws, snowblowers, and even for aircraft refueling. The idea is that the DMS must be continuously held or pressed by an operator, and, if the switch/handle is released during operation, the machine will either stop running or transition to a safer state, such as idling.

A DMS control in a locomotive can be a floor pedal, trigger handle or push-button where the device must be continuously held or pressed to enable the locomotive to move forward. If the engineer driving the train is incapacitated for any reason, the release of the DMS causes the engine to idle and, in some locomotives, the emergency brakes are applied. The system fails safe.

Traditional dead man’s switches in trains can be overridden using duct tape, heavy bricks or other methods, and, in rare cases, the switches can fail to engage when an incapacitated engineer slumps forward.

Safety instrumented systems are more complicated than the dead man’s switch described above. However, the SIS is installed — optimally in its own dedicated network zone — so that plant operations can be shut down under extreme plant conditions without human intervention. In other words, the plant can fail safe.

Emerson Process Management literature notes that safety instrumented systems “are specifically designed to protect personnel, equipment, and the environment by reducing the likelihood or the impact severity of an identified emergency event.”

An SIS is composed of a combination of sensors, logic solvers and final elements that are separate and distinct from the other plant controls. If the plant is out of control, the SIS is there to shut the plant down with no reliance on human intervention.

Some SIS configurations include dedicated sensors that shut down plant operations — such as refineries — when certain pressures or temperatures are exceeded. Another example of an SIS working is a nuclear reactor automatic shutdown — called a SCRAM — when coolant flow is below a minimum rate, etc. Again, no human intervention is necessary.

According to standards established by the International Electrotechnical Commission in IEC 61511 and the International Society of Automation ISA S84.01, safety instrumented systems must be separate and distinct — independent — from other control systems that operate and control the same equipment/systems. The controls and control systems contained within the SIS are devoted solely to the proper operation of the safety system. There is no reliance on outside controls or sensor input for the SIS to trip.

Safety instrumented systems architecture

Some ICS security experts may refer to the Purdue reference model — or the Purdue model — when discussing ICS network architecture. The Purdue model is a part of the Purdue Enterprise Reference Architecture, which provides a framework for designing manufacturing systems, and which was developed in the 1990s by Theodore Williams and the members of the Industry-Purdue University Consortium for Computer Integrated Manufacturing.

The Purdue model is intended to help users understand a production network. The model organizes an industrial plant network architecture into four levels — plus the underlying physical process level — and is illustrated below.

  • Level 0: The physical process — This is where the physical work in the plant gets done.
  • Level 1: Intelligent devices — This level includes the sensors, programmable logic controllers (PLCs) and the actuators. This level has its own distinct and separate sub-zone specifically for SIS.
  • Level 2: Control systems — This level is where the production is monitored, controlled and electronically supervised, and it includes the production video screens that display the Human Machine Interface and real-time controls and software.
  • Level 3: Manufacturing operations systems — This level is essentially the brains of the manufacturing operations. It includes manufacturing execution systems, maintenance and plant performance management systems, data historians, and middleware.
  • Level 4: Business logistics systems — This is on the enterprise side of the plant. At this level, business-related manufacturing activities are performed, primarily relying on enterprise resource planning software, such as SAP. This is where the plant production schedule is generated and material inventory, shipping and use are monitored and modified.

Basically, when you are in the manufacturing plant, you are looking primarily at Levels 0, 1, 2 and 3.
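To make the layering concrete, here is a minimal sketch, in Python, of how the Purdue levels and the dedicated SIS sub-zone at Level 1 might be captured when documenting a plant network. The asset names are illustrative examples of my own, not a prescriptive inventory.

# Illustrative only: a simple map of the Purdue reference model levels.
# Asset names are hypothetical examples, not a prescriptive inventory.
PURDUE_MODEL = {
    0: {"name": "Physical process", "examples": ["valves", "pumps", "burners"]},
    1: {"name": "Intelligent devices",
        "examples": ["sensors", "PLCs", "actuators"],
        "sub_zones": {"SIS": ["safety sensors", "logic solvers", "final elements"]}},
    2: {"name": "Control systems", "examples": ["HMI screens", "real-time control software"]},
    3: {"name": "Manufacturing operations systems",
        "examples": ["MES", "data historians", "maintenance systems"]},
    4: {"name": "Business logistics systems", "examples": ["ERP (e.g., SAP)", "production scheduling"]},
}

def describe(level):
    """Return a one-line description of a Purdue level."""
    entry = PURDUE_MODEL[level]
    return "Level {}: {} - e.g., {}".format(level, entry["name"], ", ".join(entry["examples"]))

for lvl in sorted(PURDUE_MODEL):
    print(describe(lvl))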

[Figure: The Purdue model of an ICS network (Ernie Hayden)]

In the Purdue model, the SIS is located at Level 1 and comprises its own stand-alone network zone. The safety and protection systems monitor the manufacturing processes and, under emergency conditions, activate and return the plant to a safe state by closing valves, shutting off burners, increasing cooling water flow, etc. These safety and protection systems also include tools that monitor manufacturing and alert an operator of impending unsafe conditions.

SIS controller versus a PLC

You should understand that, visually, an SIS and an off-the-shelf programmable logic controller (PLC) or other industrial PC may look the same; however, they have different functions and different implementation schemes. Vendors typically use their current line of PLCs and modify them to fill the SIS role.

According to Clint Bodungen’s Hacking Exposed: Industrial Control Systems, “… the SIS typically uses a complicated series of both analog and digital 1-out-of-2 or 2-out-of-3 voting systems to monitor and respond to adverse process conditions. Normally, SISs are designated to provide only a few core functions …” when compared to the normal, multifunctional PLC in the manufacturing plant.
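To illustrate the voting idea (this is a conceptual sketch, not vendor code), a 2-out-of-3 scheme trips the safety function only when at least two of the three redundant sensors agree that a limit has been exceeded:

# Minimal sketch of 2-out-of-3 (2oo3) voting logic as used conceptually in an SIS.
# The threshold and readings are hypothetical; a real SIS runs this logic on a
# certified logic solver, not a general-purpose computer.

def vote_2oo3(readings, trip_limit):
    """Return True (trip / fail safe) if at least 2 of 3 sensors exceed the limit."""
    votes = sum(1 for value in readings if value > trip_limit)
    return votes >= 2

# Example: three redundant pressure sensors and a trip limit of 120 psi.
sensors_psi = [118.0, 123.5, 124.1]   # one sensor disagrees (or has failed low)
if vote_2oo3(sensors_psi, trip_limit=120.0):
    print("TRIP: drive the plant to its safe state (close valves, shut off burners).")
else:
    print("Normal operation: no trip.")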

Safety instrumented systems may not be installed in every plant; however, they will be included in plants where an attack by hackers, terrorists or insiders could result in serious consequences, such as death, injury or environmental releases. So you may not see an SIS in a benign manufacturing facility, but the refinery next door would have more than one SIS in place.

The crisis of a violated SIS

As Lee noted above, the demonstration that malware can violate an SIS is very serious. Trisis/Triton/HatMan in the SIS may not do anything catastrophic by itself; however, the NCCIC observes “… it could be very damaging when combined with malware that impacts the (manufacturing) process in tandem.”

Essentially, anything that disables, modifies or inhibits the ability of an SIS to fail safely can result in physical consequences, environmental impact, injuries and even death.

In the Dragos blog, a FAQ entry rhetorically asks, “Is Trisis a Big Deal?” The answer is yes.

  • Trisis is the fifth known ICS-tailored malware following Stuxnet, Havex, BlackEnergy2 and CrashOverride.
  • Trisis is the first publicly known ICS-tailored malware to target SIS.
  • Lastly, because SISs are specifically designed and deployed to ensure the safety of the manufacturing process, environment and human life, an assault on SIS is “… bold and unsettling.”

Yes, the effect of Trisis is disconcerting and brings added attention to the security and integrity of the SIS. However, an SIS can be defeated without exotic malware by placing the SIS controller in bypass, placing the logic solver in an infinite loop, changing the trip and alarm set points, disconnecting the output from the logic, spoofing the inputs, etc., according to Secure the SIS by William L. Mostia. So waiting for SIS malware to prompt you to protect your SIS may not be enough.

The SIS is critical to safe plant operations and needs to be designed, implemented and maintained with utmost care and oversight. The Trisis/Triton/HatMan attack has certainly awakened the ICS security community.

As observed by Dale Peterson, founder and CEO of Digital Bond, a control system security company based in Sunrise, Fla., and producer of the annual S4 Conference, we are in the early stages of analyzing this SIS malware attack.

The industry should remember how much we eventually learned about Stuxnet from the detailed work of Ralph Langner, et al.; it took time and resources before we really knew the details. We may need to be patient with the ICS researchers who can tell us more about this new SIS malware and how we can best protect ourselves.

AT&T mobile 5G network falling short


Originally Seen: TechTarget April 2018

The latest update on AT&T’s mobile 5G network trials indicates the company will need to work faster to meet its goal of launching a commercial service by the end of the year.

AT&T’s latest update on its mobile 5G trials indicates the carrier has significant hurdles to clear to achieve its goal of launching a commercial service based on the high-speed wireless technology by the end of the year.

AT&T this week published a blog post describing its progress in the mobile 5G network trials in Austin and Waco, Texas; Kalamazoo, Mich.; and South Bend, Ind. The company started the tests roughly 18 months ago in Austin, adding the other cities late last year.

AT&T, along with Verizon and other carriers, is spending billions of dollars to develop fifth-generation wireless networks for business, consumer and internet of things applications. But the latest metrics published by AT&T were not what analysts would expect from technology for delivering mobile broadband to smartphones, tablets and other devices.


“When I look at how AT&T is characterizing these tests, it doesn’t look like mobile 5G to me,” said Chris Antlitz, an analyst at Technology Business Research Inc., based in Hampton, N.H. “It seems like there are some inconsistencies there.”

AT&T plans to deliver mobile 5G over the millimeter wave (mmWave) band, which is a spectrum between 30 gigahertz (GHz) and 300 GHz. MmWave allows for data rates up to 10 Gbps, which comfortably accommodates carriers’ plans for 5G. But before service providers can use the technology, they have to surmount its limitations in signal distance and in traveling through obstacles, like buildings.

AT&T’s mobile 5G network challenges

AT&T’s update indicates mmWave’s constraints remain a challenge. In Waco, for example, AT&T delivered 5G to a retail business roughly 500 feet away from its cellular transmitter. That maximum distance would require more transmitters than the population outside of major cities could support, Antlitz said.

AT&T, however, could provide a fixed wireless network that sends a 5G signal to residences and businesses as an alternative to wired broadband, Antlitz said. AT&T rival Verizon plans to offer that product by the end of the year.

Other shortcomings include AT&T’s limited success in sending a 5G signal from the cellular transmitter through the buildings, trees and other obstacles likely to stand in the signal’s path. In the trial update, AT&T said it achieved gigabit speeds only in “some non-line of sight conditions.” A line of sight typically refers to an unobstructed path between the transmitting and receiving antennas.

Distance and piercing obstacles are challenges for any carrier using mmWave for a mobile 5G network. Buildings and other large physical objects can block the technology’s short, high-frequency wavelengths. Also, gases in the atmosphere, rain and humidity can weaken mmWave’s signal strength, limiting the technology’s reach to six-tenths of a mile or less.

AT&T’s achievement in network latency also falls short of what’s optimal for a mobile 5G network. The carrier’s reported latency of 9 to 12 milliseconds seems “a little high,” Antlitz said. “I would expect that on LTE, not 5G. 5G should be lower.”

While AT&T has likely made some progress in developing mobile 5G, “a lot of work needs to be done,” said Rajesh Ghai, an analyst at IDC.

Delays possible in AT&T, Verizon 5G offerings

Meanwhile, Verizon is testing its fixed wireless 5G network — a combination of mmWave and proprietary technology — in 11 major metropolitan areas. So far, the features Verizon has developed place the carrier “fairly far ahead of AT&T in terms of maximizing the capabilities of 5G,” Antlitz said.

Nevertheless, neither Verizon nor AT&T is a sure bet for launching a commercial 5G network this year.

“Some of this stuff might wind up getting pushed into 2019,” Antlitz said. “There are so many things that could throw a monkey wrench in their timetable. The probability of something doing that is very high.”

HP keylogger: How did it get there and how can it be removed?


Originally seen: October 2017 TechTarget.

A keylogging flaw found its way into dozens of Hewlett Packard laptops. Nick Lewis explains how the HP keylogger works and what can be done about it.

More than two dozen models of Hewlett Packard laptops were found to contain a keylogger that recorded keystrokes into a log file. HP released patches to remove the keylogger and the log files. How did the HP keylogger vulnerability get embedded in the laptops? And is there anything organizations can do to test new endpoint devices?

When it comes to security, having high expectations for security vendors and large vendors with deep pockets is reasonable, given that customers usually pay a premium believing the vendors will devote significant resources to securing their products. Unfortunately, like most security teams, vendors often don’t have enough resources or organizational fortitude to ensure security is incorporated into all of their software development.

But even the most secure software development can enable security issues to slip through the cracks. When you add in an outsourced hardware or software development team, it’s even easier for something to go unnoticed.

So while vendors might talk a good talk when it comes to security, monitoring them to ensure they uphold their end of your agreement is absolutely necessary.

One case where a vulnerability apparently escaped notice was uncovered when researchers at Modzero AG, an information security company based in Winterthur, Switzerland, found that a bug had been introduced into HP laptops by a third-party driver installed by default.


The vulnerability was discovered in the Conexant HD Audio Driver package, where the driver monitors for certain keystrokes used to mute or unmute audio. The keylogging functionality, complete with the ability to write all keystrokes to a log file, was probably introduced to help the developers debug the driver.

We can hope that the HP keylogger vulnerability was left in inadvertently when the drivers were released to customers. Modzero found metadata indicating the HP keylogger capability had been present in HP computers since December 2015, if not earlier.

It’s difficult to know whether static or dynamic code analysis tools could have detected this vulnerability. However, given the resources available to HP in 2015, including a line of business related to application and code security, as well as the expectations of its customers, it might be reasonable to assume HP could have incorporated these tools into its software development practices. However, the transfer of HP’s information security businesses to a new entity, Hewlett Packard Enterprise, began in November 2015 and was completed in September 2017, when HPE’s software business was spun off and merged with Micro Focus.

It’s possible that Modzero found the HP keylogger vulnerability while evaluating a potential new endpoint for an enterprise customer. They could have been monitoring for open files, or looking for which processes had the files open to determine what the process was doing. They could have been profiling the individual processes running by default on the system to see which binaries to investigate for vulnerabilities. They could even have been monitoring to see if any processes were monitoring keystrokes.
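As a hedged illustration of that kind of endpoint profiling, the sketch below uses the third-party psutil package to list processes holding open handles to log files in world-readable locations. The directories and the ".log" heuristic are placeholders of my own, not the actual artifacts Modzero examined.

# Rough endpoint-profiling sketch: list processes that hold open handles to
# log files in world-readable locations. Requires psutil (pip install psutil).
# The watched directories and the ".log" heuristic are illustrative only.
import psutil

SUSPICIOUS_DIRS = ("C:\\Users\\Public", "C:\\ProgramData")  # hypothetical watch list

def processes_with_open_logs():
    hits = []
    for proc in psutil.process_iter(["pid", "name"]):
        try:
            for handle in proc.open_files():
                if handle.path.lower().endswith(".log") and handle.path.startswith(SUSPICIOUS_DIRS):
                    hits.append((proc.info["pid"], proc.info["name"], handle.path))
        except (psutil.AccessDenied, psutil.NoSuchProcess):
            continue  # skip processes we are not allowed to inspect
    return hits

for pid, name, path in processes_with_open_logs():
    print("PID {} ({}) has {} open".format(pid, name, path))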

Enterprises can take these steps on their own or rely on third parties to monitor their vendors. Many enterprises will install their own image on an endpoint before deploying it on their network; the known good images used for developing specific images for target hardware could be analyzed with a dynamic or runtime application security tool to determine whether any common vulnerabilities are present.

Ransomware recovery methods: What does the NIST suggest?


Originally seen: TechTarget by Judith Myerson

Knowing what ransomware recovery methods are available is important as the threat continues to grow. Expert Judith Myerson outlines what the NIST recommends for enterprises.

 

Since the WannaCry outbreak, ransomware has attracted a great deal of attention. In response, the National Institute of Standards and Technology, or NIST, published draft guidance on ransomware recovery methods. What methods has the NIST recommended?

Ransomware maliciously encrypts all of a victim’s documents and files so that the victim can’t decrypt them without the attacker’s key. To help enterprises with ransomware recovery, the NIST recommends corruption testing, logging analysis and data backups.

The corruption testing component of Tripwire Enterprise can be used to detect changes in file systems on servers and desktops, as well as when and which files were maliciously modified or overwritten.
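The idea behind corruption testing can be sketched in a few lines: hash every file once to build a baseline, then re-hash later and flag anything that changed. The sketch below is a toy stand-in for the concept, not how Tripwire Enterprise is actually implemented, and the paths are placeholders.

# Toy file-integrity check illustrating the corruption-testing idea:
# build a baseline of SHA-256 hashes, then report files that have changed.
import hashlib
import json
from pathlib import Path

def hash_file(path):
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_baseline(root):
    return {str(p): hash_file(p) for p in Path(root).rglob("*") if p.is_file()}

def find_changes(root, baseline):
    current = build_baseline(root)
    return [p for p, digest in current.items() if baseline.get(p) != digest]

# Run once and store the baseline somewhere the ransomware cannot reach.
baseline = build_baseline("/srv/data")                     # placeholder path
Path("baseline.json").write_text(json.dumps(baseline))

# ...later, after suspected ransomware activity:
stored = json.loads(Path("baseline.json").read_text())
print(len(find_changes("/srv/data", stored)), "files modified since baseline")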

Another tool that can be used for ransomware recovery is HPE ArcSight Security Enterprise Manager. The logging component of this tool collects security logs for analysis and reporting. This component is used to filter, search and manage the logs generated by the corruption testing component.

The corruption testing and logging components of this tool work together to provide information about the files that were encrypted by the ransomware. That information includes what programs were used and which users ran them.

Another helpful tool for ransomware recovery is the backup capability provided by IBM Spectrum Protect, which can be used to restore files hosted in physical, virtual or cloud environments. If a system fails due to ransomware, the operating system and the IBM Spectrum Protect client need to be physically reinstalled so that all files — including system files — can be restored to their previous state.

However, frequent backups require more resources and more space on the server. A file that is backed up only infrequently may lose more data during the recovery process, because the restoration only covers up to a certain point in time and will not reflect recent changes to the file. Also, if a backup is made after a ransomware attack, it will include encrypted data, so it is very important to properly label backups to ensure that the versions from before the attack are used.

The issue with these ransomware recovery recommendations is that they fail to mention server vulnerabilities, such as a breach of Apache Struts servers that leads to the installation of a threat like the Cerber ransomware on locally networked computers.

 

 

VPNFilter malware infecting 500,000 devices is worse than we thought


Malware tied to Russia can attack connected computers and downgrade HTTPS.

Originally seen:  – 

Two weeks ago, officials in the private and public sectors warned that hackers working for the Russian government infected more than 500,000 consumer-grade routers in 54 countries with malware that could be used for a range of nefarious purposes. Now, researchers from Cisco’s Talos security team say additional analysis shows that the malware is more powerful than originally thought and runs on a much broader base of models, many from previously unaffected manufacturers.

The most notable new capabilities found in VPNFilter, as the malware is known, come in a newly discovered module that performs an active man-in-the-middle attack on incoming Web traffic. Attackers can use this ssler module to inject malicious payloads into traffic as it passes through an infected router. The payloads can be tailored to exploit specific devices connected to the infected network. Pronounced “essler,” the module can also be used to surreptitiously modify content delivered by websites.

Besides covertly manipulating traffic delivered to endpoints inside an infected network, ssler is also designed to steal sensitive data passed between connected end-points and the outside Internet. It actively inspects Web URLs for signs they transmit passwords and other sensitive data so they can be copied and sent to servers that attackers continue to control even now, two weeks after the botnet was publicly disclosed.

To bypass TLS encryption that’s designed to prevent such attacks, ssler actively tries to downgrade HTTPS connections to plaintext HTTP traffic. It then changes request headers to signal that the end point isn’t capable of using encrypted connections. Ssler makes special accommodations for traffic to Google, Facebook, Twitter, and YouTube, presumably because these sites provide additional security features. Google, for example, has for years automatically redirected HTTP traffic to HTTPS servers. The newly discovered module also strips away gzip data compression because plaintext traffic is easier to modify.

All your network traffic belongs to us

The new analysis, which Cisco is expected to detail in a report to be published Wednesday morning, shows that VPNFilter poses a more potent threat and targets more devices than was reported two weeks ago. Previously, Cisco believed the primary goal of VPNFilter was to use home and small-office routers, switches, and network-attached storage devices as a platform for launching obfuscated attacks on primary targets. The discovery of ssler suggests router owners themselves are a key target of VPNFilter.

“Initially when we saw this we thought it was primarily made for offensive capabilities like routing attacks around the Internet,” Craig Williams, a senior technology leader and global outreach manager at Talos, told Ars. “But it appears [attackers] have completely evolved past that, and now not only does it allow them to do that, but they can manipulate everything going through the compromised device. They can modify your bank account balance so that it looks normal while at the same time they’re siphoning off money and potentially PGP keys and things like that. They can manipulate everything going in and out of the device.”

While HTTP Strict Transport Security and similar measures designed to prevent unencrypted Web connections may help prevent the HTTP downgrade from succeeding, Williams said those offerings aren’t widely available in Ukraine, where a large number of the VPNFilter-infected devices are located. What’s more, many sites in the US and Western Europe continue to provide HTTP as a fallback for older devices that don’t fully support HTTPS.
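One way to check whether a given site opts in to HSTS, and would therefore resist this kind of downgrade on repeat visits, is to look for the Strict-Transport-Security response header. A quick sketch using the third-party requests library (the site names are just examples):

# Check whether a site sends an HSTS header, which tells browsers to refuse
# plain-HTTP connections on later visits. Uses the requests library.
import requests

def hsts_policy(hostname):
    response = requests.get("https://{}/".format(hostname), timeout=10)
    return response.headers.get("Strict-Transport-Security")

for site in ("google.com", "example.com"):
    policy = hsts_policy(site)
    print("{}: {}".format(site, policy if policy else "no HSTS header"))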

(Much) bigger attack surface

Talos said VPNFilter also targets a much larger number of devices than previously thought, including those made by ASUS, D-Link, Huawei, Ubiquiti, UPVEL, and ZTE. The malware also works on new models from manufacturers previously known to be targeted, including Linksys, MikroTik, Netgear, and TP-Link. Williams estimated that the additional models put 200,000 additional routers worldwide at risk of being infected. The full list of targeted devices is:

Asus Devices:
RT-AC66U (new)
RT-N10 (new)
RT-N10E (new)
RT-N10U (new)
RT-N56U (new)
RT-N66U (new)

D-Link Devices:
DES-1210-08P (new)
DIR-300 (new)
DIR-300A (new)
DSR-250N (new)
DSR-500N (new)
DSR-1000 (new)
DSR-1000N (new)

Huawei Devices:
HG8245 (new)

Linksys Devices:
E1200
E2500
E3000 (new)
E3200 (new)
E4200 (new)
RV082 (new)
WRVS4400N

Mikrotik Devices:
CCR1009 (new)
CCR1016
CCR1036
CCR1072
CRS109 (new)
CRS112 (new)
CRS125 (new)
RB411 (new)
RB450 (new)
RB750 (new)
RB911 (new)
RB921 (new)
RB941 (new)
RB951 (new)
RB952 (new)
RB960 (new)
RB962 (new)
RB1100 (new)
RB1200 (new)
RB2011 (new)
RB3011 (new)
RB Groove (new)
RB Omnitik (new)
STX5 (new)

Netgear Devices:
DG834 (new)
DGN1000 (new)
DGN2200
DGN3500 (new)
FVS318N (new)
MBRN3000 (new)
R6400
R7000
R8000
WNR1000
WNR2000
WNR2200 (new)
WNR4000 (new)
WNDR3700 (new)
WNDR4000 (new)
WNDR4300 (new)
WNDR4300-TN (new)
UTM50 (new)

QNAP Devices:
TS251
TS439 Pro
Other QNAP NAS devices running QTS software

TP-Link Devices:
R600VPN
TL-WR741ND (new)
TL-WR841N (new)

Ubiquiti Devices:
NSM2 (new)
PBE M5 (new)

Upvel Devices:
Unknown Models* (new)

ZTE Devices:
ZXHN H108N (new)

Incredibly targeted

Wednesday’s Talos report also provides new insights into a previously found packet sniffer module. It monitors traffic for data specific to industrial control systems that connect over a TP-Link R600 virtual private network. The sniffer module also looks for connections to a pre-specified IP address. It also looks for data packets that are 150 bytes or larger.

“They’re looking for very specific things,” Williams said. “They’re not trying to gather as much traffic as they can. They’re after certain very small things like credentials and passwords. We don’t have a lot of intel on that other than it seems incredibly targeted and incredibly sophisticated. We’re still trying to figure out who they were using that on.”

Wednesday’s report also details a self-destruct module that can be delivered to any infected device that currently lacks that capability. When executed, it first removes all traces of VPNFilter from the device and then runs the command “rm -rf /*,” which deletes the remainder of the file system. The module then reboots the device.

Despite the discovery of VPNFilter and the FBI seizure two weeks ago of a key command and control server, the botnet still remains active, Williams said. The reason involves the deliberately piecemeal design of the malware. Stage 1 acts as a backdoor and is one of the few known pieces of router malware that can survive a reboot. Meanwhile, stages 2 and 3, which provide advanced functions for things such as man-in-the-middle attacks and self-destruction capabilities, have to be reinstalled each time an infected device is restarted.

To compensate for this limitation, stage 1 relies on a sophisticated mechanism to locate servers where stage 2 and stage 3 payloads were available. The primary method involved downloading images stored on Photobucket.com and extracting an IP address from six integer values used for GPS latitude and longitude stored in the EXIF field of the image. When Photobucket removed those images, VPNFilter used a backup method that relied on a server located at ToKnowAll.com.
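Cisco describes that encoding only at a high level, so the exact math is not reproduced here, but the general shape of the technique, pulling the six GPS integers out of an image's EXIF data and recombining them into an IPv4 address, looks roughly like the sketch below using the Pillow imaging library. The derive_ip() recombination step is a made-up placeholder, not the real VPNFilter algorithm.

# Sketch of the general technique: read the GPS latitude/longitude values from
# an image's EXIF data and derive an IPv4 address from them. derive_ip() is a
# placeholder; the actual VPNFilter encoding is documented by Cisco Talos.
from PIL import Image
from PIL.ExifTags import GPSTAGS, TAGS

def gps_integers(image_path):
    """Return the six GPS degree/minute/second values from an image's EXIF."""
    exif = Image.open(image_path)._getexif() or {}   # legacy EXIF helper
    gps = {}
    for tag_id, value in exif.items():
        if TAGS.get(tag_id) == "GPSInfo":
            gps = {GPSTAGS.get(k, k): v for k, v in value.items()}
    lat = [int(float(x)) for x in gps.get("GPSLatitude", ())]
    lon = [int(float(x)) for x in gps.get("GPSLongitude", ())]
    return lat + lon   # six integers in total

def derive_ip(values):
    """Placeholder recombination: NOT the real VPNFilter math."""
    if len(values) != 6:
        return None
    return ".".join(str(v % 256) for v in values[:4])

print(derive_ip(gps_integers("photo.jpg")))   # "photo.jpg" is a placeholder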

Even with the FBI’s seizure of ToKnowAll.com, devices infected by stage 1 can still be put into a listening mode that allows attackers to use specific trigger packets that manually install later VPNFilter stages. That means hundreds of thousands of devices likely remain infected with stage 1, and possibly stages 2 and 3.

There is no easy way to know if a router is infected. One method involves searching through logs for indicators of compromise listed at the end of Cisco’s report. Another involves reverse engineering the firmware, or at least extracting it from a device, and comparing it with the authorized firmware. Both of those tasks are beyond the abilities of most router owners. That’s why it makes sense for people to simply assume a router may be infected and disinfect it. Researchers still don’t know how routers initially become infected with stage 1, but they presume it’s by exploiting known flaws for which patches are probably available.
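As a hedged example of the log-search approach, the sketch below scans a text log for domains named in the public reporting around VPNFilter's payload retrieval. The log path and format are placeholders; the authoritative indicator list is the one at the end of Cisco's report.

# Toy indicator-of-compromise search: scan a text log for domains associated
# with VPNFilter's payload retrieval in public reporting. The log path is a
# placeholder; use the indicator list from Cisco's report in practice.
INDICATORS = ("toknowall.com", "photobucket.com")

def scan_log(path):
    hits = []
    with open(path, "r", errors="replace") as log:
        for lineno, line in enumerate(log, start=1):
            lowered = line.lower()
            for ioc in INDICATORS:
                if ioc in lowered:
                    hits.append((lineno, ioc, line.strip()))
    return hits

for lineno, ioc, line in scan_log("/var/log/router/dns.log"):   # placeholder path
    print("line {}: matched {}: {}".format(lineno, ioc, line))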

Steps to fully disinfect devices vary from model to model. In some cases, pressing a recessed button on the back to perform a factory reset will wipe stage 1 clean. In other cases, owners must reboot the device and then immediately install the latest available authorized firmware from the manufacturer. Router owners who are unsure how to respond should contact their manufacturer, or, if the device is more than a few years old, buy a new one.

Router owners should always change default passwords and, whenever feasible, disable remote administration. For extra security, people can always run routers behind a proper security firewall. Williams said he has seen no evidence VPNFilter has infected devices running Tomato, Merlin WRT, and DD-WRT firmware, but that he can’t rule out that possibility.

Two weeks ago, however, the FBI recommended that all owners of consumer-grade routers, switches, and network-attached storage devices reboot their devices. While the advice likely disrupted VPNFilter’s advance and bought infected users time, it may also have created the mistaken belief that rebooting alone was enough to fully remove VPNFilter from infected devices.

“I’m concerned that the FBI gave people a false sense of security,” Williams said. “VPNFilter is still operational. It infects even more devices than we initially thought, and its capabilities are far in excess of what we initially thought. People need to get it off their network.”

HOW CREATIVE DDOS ATTACKS STILL SLIP PAST DEFENSES


Originally Seen: March 12, 2018 on Wired.

Distributed denial of service attacks, in which hackers use a targeted hose of junk traffic to overwhelm a service or take a server offline, have been a digital menace for decades. But in just the last 18 months, the public picture of DDoS defense has evolved rapidly. In fall 2016, a rash of then-unprecedented attacks caused internet outages and other service disruptions at a series of internet infrastructure and telecom companies around the world. Those attacks walloped their victims with floods of malicious data measured up to 1.2 Tbps. And they gave the impression that massive, “volumetric” DDoS attacks can be nearly impossible to defend against.

The past couple of weeks have presented a very different view of the situation, though. On March 1, Akamai defended developer platform GitHub against a 1.3 Tbps attack. And early last week, a DDoS campaign against an unidentified service in the United States topped out at a staggering 1.7 Tbps, according to the network security firm Arbor Networks. Which means that for the first time, the web sits squarely in the “terabit attack era,” as Arbor Networks put it. And yet, the internet hasn’t collapsed.

One might even get the impression from recent high-profile successes that DDoS is a solved problem. Unfortunately, network defenders and internet infrastructure experts emphasize that despite the positive outcomes, DDoS continues to pose a serious threat. And sheer volume isn’t the only danger. Ultimately, anything that causes disruption and affects service availability by diverting a digital system’s resources or overloading its capacity can be seen as a DDoS attack. Under that conceptual umbrella, attackers can generate a diverse array of lethal campaigns.

“DDoS will never be over as a threat, sadly,” says Roland Dobbins, a principal engineer at Arbor Networks. “We see thousands of DDoS attacks per day—millions per year. There are major concerns.”

Getting Clever

One example of a creative interpretation of a DDoS is the attack Netflix researchers tried out against the streaming service itself in 2016. It works by targeting Netflix’s application programming interface with carefully tailored requests. These queries are built to start a cascade within the middle and backend application layers the streaming service is built on—demanding more and more system resources as they echo through the infrastructure. That type of DDoS only requires attackers to send out a small amount of malicious data, so mounting the offensive would be cheap and efficient, but clever execution could cause internal disruptions or a total meltdown.

“What creates the nightmare situations are the smaller attacks that overwork applications, firewalls, and load balancers,” says Barrett Lyon, head of research and development at Neustar Security Solutions. “The big attacks are sensational, but it’s the well-crafted connection floods that have the most success.”


These types of attacks target specific protocols or defenses as a way of efficiently undermining broader services. Overwhelming the server that manages firewall connections, for example, can allow attackers to access a private network. Similarly, deluging a system’s load balancers—devices that manage a network’s computing resources to improve speed and efficiency—can cause backups and overloads. These types of attacks are “as common as breathing,” as Dobbins puts it, because they take advantage of small disruptions that can have a big impact on an organization’s defenses.

Similarly, an attacker looking to disrupt connectivity on the internet in general can target the exposed protocols that coordinate and manage data flow around the web, rather than trying to take on more robust components.

That’s what happened in the fall of 2016 to Dyn, an internet infrastructure company that offers Domain Name System services (essentially the address book routing structure of the internet). By DDoSing Dyn and destabilizing the company’s DNS servers, attackers caused outages by disrupting the mechanism browsers use to look up websites. “The most frequently attacked targets for denial of service is web servers and DNS servers,” says Dan Massey, chief scientist at the DNS security firm Secure64, who formerly worked on DDoS defense research at the Department of Homeland Security. “But there are also so many variations on and so many components of denial of service attacks. There’s no such thing as one-size-fits-all defense.”

Memcached and Beyond

The type of DDoS attack hackers have been using recently to mount enormous attacks is somewhat similar. Known as memcached DDoS, these attacks take advantage of unprotected memcached servers, database caching systems that aren’t meant to be exposed on the internet. And they capitalize on the fact that they can send a tiny customized packet to a memcached server, and elicit a much larger response in return. So a hacker can query thousands of vulnerable memcached servers multiple times per second each, and direct the much larger responses toward a target.
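The amplification math is what makes this cheap for the attacker. Here is a back-of-the-envelope sketch; the request size, response size, server count and query rate are illustrative assumptions, not measurements from any specific attack.

# Back-of-the-envelope amplification math for a reflection attack.
# All sizes and counts below are illustrative assumptions.
request_bytes = 60              # tiny spoofed UDP query sent to one memcached server
response_bytes = 60 * 10_000    # reflected response can be thousands of times larger
amplification = response_bytes / request_bytes

servers = 5_000                 # exposed servers the attacker queries
queries_per_second = 10         # queries per server, per second

target_bps = servers * queries_per_second * response_bytes * 8
attacker_bps = servers * queries_per_second * request_bytes * 8

print("Amplification factor: {:,.0f}x".format(amplification))
print("Traffic at the target: {:.1f} Gbps".format(target_bps / 1e9))
print("Attacker bandwidth needed: {:.1f} Mbps".format(attacker_bps / 1e6))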

This approach is easier and cheaper for attackers than generating the traffic needed for large-scale volumetric attacks using a botnet—the platforms typically used to power DDoS assaults. The memorable 2016 attacks were famously driven by the so-called “Mirai” botnet. Mirai infected 600,000 unassuming Internet of Things products, like webcams and routers, with malware that hackers could use to control the devices and coordinate them to produce massive attacks. And though attackers continued to refine and advance the malware—and still use Mirai-variant botnets in attacks to this day—it was difficult to maintain the power of the original attacks as more hackers jockeyed for control of the infected device population, and it splintered into numerous smaller botnets.


While effective, building and maintaining botnets requires resources and effort, whereas exploiting memcached servers is easy and almost free. But the tradeoff for attackers is that memcached DDOS is more straightforward to defend against if security and infrastructure firms have enough bandwidth. So far, the high-profile memcached targets have all been defended by services with adequate resources. In the wake of the 2016 attacks, foreseeing that volumetric assaults would likely continue to grow, defenders seriously expanded their available capacity.

As an added twist, DDoS attacks have also increasingly incorporated ransom requests as part of hackers’ strategies. This has especially been the case with memcached DDoS. “It’s an attack of opportunity,” says Chad Seaman, a senior engineer on the security intelligence response team at Akamai. “Why not try and extort and maybe trick someone into paying it?”

The DDoS defense and internet infrastructure industries have made significant progress on DDoS mitigation, partly through increased collaboration and information-sharing. But with so much going on, the crucial point is that DDoS defense is still an active challenge for defenders every day. “When sites continue to work it doesn’t mean it’s easy or the problem is gone,” Neustar’s Lyon says. “It’s been a long week.”

Look-Alike Domains and Visual Confusion


Originally Seen: March 8th, 2018 on krebsonsecurity.

How good are you at telling the difference between domain names you know and trust and impostor or look-alike domains? The answer may depend on how familiar you are with the nuances of internationalized domain names (IDNs), as well as which browser or Web application you’re using.

For example, how does your browser interpret the following domain? I’ll give you a hint: Despite appearances, it is most certainly not the actual domain for software firm CA Technologies (formerly Computer Associates Intl Inc.), which owns the original ca.com domain name:

https://www.са.com/

Go ahead and click on the link above or cut-and-paste it into a browser address bar. If you’re using Google Chrome, Apple’s Safari, or some recent version of Microsoft’s Internet Explorer or Edge browsers, you should notice that the address converts to “xn--80a7a.com.” This is called “punycode,” and it allows browsers to render domains with non-Latin alphabets like Cyrillic.

Below is what it looks like in Edge on Windows 10; Google Chrome renders it much the same way. Notice what’s in the address bar (ignore the “fake site” and “Welcome to…” text, which was added as a courtesy by the person who registered this domain):

IE, Edge, Chrome and Safari all will convert https://www.са.com/ into its punycode output (xn--80a7a.com), in part to warn visitors about any confusion over look-alike domains registered in other languages. But if you load that domain in Mozilla Firefox and look at the address bar, you’ll notice there’s no warning of possible danger ahead. It just looks like it’s loading the real ca.com:

The domain “xn--80a7a.com” pictured in the first screenshot above is punycode for the Ukrainian letters for “s” (which is represented by the character “c” in Russian and Ukrainian), as well as an identical Ukrainian “a”.

It was registered by Alex Holden, founder of Milwaukee, Wis.-based Hold Security Inc. Holden has been experimenting with how the different browsers handle punycodes in the browser and via email. Holden grew up in what was then the Soviet Union and speaks both Russian and Ukrainian, and he’s been playing with Cyrillic letters to spell English words in domain names.

Letters like A and O look exactly the same and the only difference is their Unicode value. There are more than 136,000 Unicode characters used to represent letters and symbols in 139 modern and historic scripts, so there’s a ton of room for look-alike or malicious/fake domains.

For example, “a” in Latin is the Unicode value “0061” and in Cyrillic is “0430.”  To a human, the graphical representation for both looks the same, but for a computer there is a huge difference. Internationalized domain names (IDNs) allow domain names to be registered in non-Latin letters (RFC 3492), provided the domain is all in the same language; trying to mix two different IDNs in the same name causes the domain registries to reject the registration attempt.
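Both points, identical-looking glyphs with different code points and the punycode conversion itself, can be reproduced in a few lines with Python's built-in IDNA codec; the xn--80a7a.com output matches the look-alike domain discussed above.

# Identical-looking letters, different code points, and the resulting punycode.
latin_a = "a"        # U+0061
cyrillic_a = "а"     # U+0430
print(hex(ord(latin_a)), hex(ord(cyrillic_a)))   # 0x61 vs 0x430
print(latin_a == cyrillic_a)                     # False: different characters

# The look-alike domain (Cyrillic letters) converts to punycode for DNS:
lookalike = "са.com"
print(lookalike.encode("idna"))                  # b'xn--80a7a.com'
print("ca.com".encode("idna"))                   # b'ca.com' (plain ASCII is unchanged)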

So, in the Cyrillic alphabet (Russian/Ukrainian), we can spell АТТ, УАНОО, ХВОХ, and so on. As you can imagine, the potential opportunity for impersonation and abuse are great with IDNs. Here’s a snippet from a larger chart Holden put together showing some of the more common ways that IDNs can be made to look like established, recognizable domains:

Holden also was able to register a valid SSL encryption certificate for https://www.са.com from Comodo.com, which would only add legitimacy to the domain if bad guys were to use it in phishing attacks against CA customers, for example.

A SOLUTION TO VISUAL CONFUSION

To be clear, the potential threat highlighted by Holden’s experiment is not new. Security researchers have long warned about the use of look-alike domains that abuse special IDN/Unicode characters. Most of the major browser makers have responded in some way by making their browsers warn users about potential punycode look-alikes.

With the exception of Mozilla, which by most accounts is the third most-popular Web browser. And I wanted to know why. I’d read the Mozilla Wiki’s “IDN Display Algorithm FAQ,” so I had an idea of what Mozilla was driving at in their decision not to warn Firefox users about punycode domains: Nobody wanted it to look like Mozilla was somehow treating the non-Western world as second-class citizens.

I wondered why Mozilla doesn’t just have Firefox alert users about punycode domains unless the user has already specified that he or she wants a non-English language keyboard installed. So I asked that in some questions I sent to their media team. They sent the following short statement in reply:

“Visual confusion attacks are not new and are difficult to address while still ensuring that we render everyone’s domain name correctly. We have solved almost all IDN spoofing problems by implementing script mixing restrictions, and we also make use of Safe Browsing technology to protect against phishing attacks. While we continue to investigate better ways to protect our users, we ultimately believe domain name registries are in the best position to address this problem because they have all the necessary information to identify these potential spoofing attacks.”

If you’re a Firefox user and would like Firefox to always render IDNs as their punycode equivalent when displayed in the browser address bar, type “about:config” without the quotes into a Firefox address bar. Then in the “search:” box type “punycode,” and you should see one or two options there. The one you want is called “network.IDN_show_punycode.” By default, it is set to “false”; double-clicking that entry should change that setting to “true.”

Incidentally, anyone using the Tor Browser to anonymize their surfing online is exposed to IDN spoofing, because the Tor Browser is based on Firefox. I could definitely see spoofed IDNs being used in targeted phishing attacks aimed at Tor users, many of whom have significant assets tied up in virtual currencies. Fortunately, the same “about:config” instructions work just as well in the Tor Browser to display punycode in lieu of IDNs.

Holden said he’s still in the process of testing how various email clients and Web services handle look-alike IDNs. For example, it’s clear that Twitter sees nothing wrong with sending the look-alike CA.com domain in messages to other users without any context or notice. Skype, on the other hand, seems to truncate the IDN link, sending clickers to a non-existent page.

“I’d say that most email services and clients are either vulnerable or not fully protected,” Holden said.

For a look at how phishers or other scammers might use IDNs to abuse your domain name, check out this domain checker that Hold Security developed. Here’s the first page of results for krebsonsecurity.com, which indicate that someone at one point registered krebsoṇsecurity[dot]com (that domain includes a lowercase “n” with a tiny dot below it, a character used by several dozen scripts). The results in yellow are just possible (unregistered) domains based on common look-alike IDN characters.

I wrote this post mainly because I wanted to learn more about the potential phishing and malware threat from look-alike domains, and I hope the information here has been interesting if not also useful. I don’t think this kind of phishing is a terribly pressing threat (especially given how far less complex phishing attacks seem to succeed just fine for now). But it sure can’t hurt Firefox users to change the default “visual confusion” behavior of the browser so that it always displays punycode in the address bar (see the solution mentioned above).

The security concerns of cloud cryptomining services


Originally seen on: TechTarget

Cloud cryptomining as a service is a security risk to users. Expert Frank Siemons discusses cloud mining service providers and what to look out for if you use one.

One of the more interesting news stories over the last year has been the rise and, currently, the fall of cryptocurrencies.

Bitcoin is the best-known variety, but other cryptocurrencies, such as Litecoin, Ripple and Ethereum, also saw dramatic increases in their worth during 2017. While some of this value dropped off in the first few weeks of 2018, there exists significant value in these currencies.

Mining these virtual coins, or processing their transactions, earns a fee, though some coin varieties are more profitable to mine than others. Bitcoin, for instance, has passed the stage where mining at home returns a profit. The complexity and the mining workload have increased so much that the electricity costs far outweigh the value of the mined coins.

To avoid individual initial setup costs and to benefit from some of the efficiency increases that large specialized clusters bring, prospective miners can sign up with a cloud mining service provider.

Cloud mining service providers

The main benefit cloud cryptomining providers offer is their economy of scale. Primarily, these providers operate large data centers filled with specialized mining rigs. Everything from purpose-built hardware and software to power consumption is built around gaining maximum efficiency for cryptomining operations.

This significant investment has already been made, and the customer rents a small part of the processing power — expressed in mega or giga hashes per second — based on their expectation that the currency will be at a certain price point during the rental period.
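To see why the rented-hashpower bet hinges on the coin's future price, here is a rough break-even sketch; every figure in it (contract price, hash rate, payout rate, price movement) is a made-up assumption for illustration, not a quote from any provider.

# Rough break-even sketch for a cloud-mining contract. All figures are
# made-up assumptions; real returns depend on network difficulty, provider
# fees and the coin's market price over the contract period.
contract_cost_usd = 500.0        # up-front price of the hashpower rental
contract_days = 365
rented_hashrate_ths = 5.0        # terahashes per second rented
usd_per_ths_per_day = 0.18       # assumed net payout after provider fees

revenue = rented_hashrate_ths * usd_per_ths_per_day * contract_days
print("Estimated revenue over the contract: ${:,.2f}".format(revenue))
print("Estimated profit/loss: ${:,.2f}".format(revenue - contract_cost_usd))

# The payout rate moves with the coin's price, so a 30% price drop roughly
# turns the same contract into:
print("After a 30% price drop: ${:,.2f}".format(revenue * 0.7 - contract_cost_usd))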

Security concerns for cloud cryptomining

The mined virtual coins need to be stored in a digital wallet eventually. Home miners are advised to store this wallet on an encrypted offline medium, such as a detachable USB drive, or to use a secure online digital wallet service.

However, both options carry the risk of losing the stored cryptocurrency. This could be due to the theft or loss of the USB drive, a compromised computer, or a hack or bug within a digital wallet service, for instance.

A cloud cryptomining provider is not bound by the same regulations as a traditional bank. This lack of regulation brings with it significant risk. The providers potentially hold a significant amount of value in the form of virtual money, which makes them an attractive target for cybercriminals.

Some research into where data centers are located and under which jurisdiction they fall is fundamental. After all, technically these data centers could hold a significant investment in their virtual vault. Even physical security is an essential factor to consider.

Because cloud cryptomining services depend on distributed networks and require access to the internet, fully air-gapped storage is not possible in a cloud system. This opens up an entry point for external attackers, which is what the NiceHash hackers exploited when they stole an estimated $64 million worth of bitcoin in 2017.

The attackers gained access to a corporate machine through an engineer’s VPN account and started making transactions via NiceHash’s payment system. This simply could not have happened if an offline wallet was used, as is often the case in smaller, individual setups.

Of course, attacks do not need to come from the outside. When relying on a company that is located in another country, the risk of internal fraud is high because it is handling a large amount of money without the protection of banking regulations. Several cases have been reported where either a staff member ran off with a significant amount of virtual currency or the entire cloud mining company was based on a scam.

Several provider comparison sites exist that discuss the reputations of cloud cryptomining companies. It is also advised to check online forums and social media channels before committing to any investment. Research is critical.

Conclusion

Where there is money, there is crime. The substantial increase in cryptocurrency investments and their meteoric rise in value over the recent months have paved the way for many scams and breaches that are traditionally linked to banks and investment schemes.

Does this mean cloud cryptomining is always unsafe? It does not, but it is essential to look at the providers with at least the same amount of scrutiny as one would use when looking at a more traditional investment firm.

Probably even more scrutiny should be applied because of the lack of proper regulation at this point. As always, technology has outpaced policy.

 

Google Bans Cryptocurrency-Related Ads


Originally seen on: Bleepingcomputer.com

Google has decided to follow in Facebook’s footsteps and ban cryptocurrency-related advertising. The ban will enter into effect starting June 2018, the company said today on a help page.

In June 2018, Google will update the Financial services policy to restrict the advertisement of Contracts for Difference, rolling spot forex, and financial spread betting. In addition, ads for the following will no longer be allowed to serve:
‧  Binary options and synonymous products
‧  Cryptocurrencies and related content (including but not limited to initial coin offerings, cryptocurrency exchanges, cryptocurrency wallets, and cryptocurrency trading advice)

The ban will enter into effect across all of Google’s advertising network, including ads shown in search results, on third-party websites, and YouTube.

Some ads will be allowed, but not many

But the ban is not total. Google said that certain entities will be able to advertise a limited set of the banned services, including “cryptocurrencies and related content.”

These advertisers will need to apply for certification with Google. The downside is that the “Google certification process” will only be available for advertisers located in “certain countries.”

Google did not provide a list of countries, but said the advertisers will have to be licensed by the relevant financial services authorities and “comply with relevant legal requirements, including those related to complex speculative financial products.”

Prices for almost all cryptocurrencies fell across the board today after Google’s announcement, and most coins continued to lose value.

 

Scams and phishing sites to blame

While Google did not explain the reasons it banned cryptocurrency ads, they are likely to be the same as the ones cited by Facebook — misleading ads being abused to drive traffic to financial scams and phishing sites.

There’s been a surge in malware and phishing campaigns targeting cryptocurrency owners ever since Bitcoin’s price surged in December 2016. Just last month, Cisco Talos and Ukrainian police disrupted a cybercriminal operation that made over $50 million by using Google ads to drive traffic to phishing sites.

[Image: Malicious ads for cryptocurrencies]

 

A report published by “Big Four” accounting firm Ernst & Young in December 2017 reveals that 10% of all ICO (Initial Coin Offering) funds were lost to hackers and scams, and cryptocurrency phishing sites made around $1.5 million per month. The company says that cryptocurrency hacks and scams are a big business, and estimates that crooks made over $2 billion by targeting cryptocoin fans in recent years.

Furthermore, a Bitcoin.com survey revealed that nearly half of 2017’s cryptocurrencies had already failed.

The recent trend of using the overhyped cryptocurrency market and ICOs for financial scams is also the reason why the US Securities and Exchange Commission (SEC) has started investigating and charging people involved in these practices.

This constant abuse of the cryptocurrency theme was the main reason why Facebook banned such ads on its platform, and is, most likely, the reason why Google is getting ready to implement a similar ban in June.