Month: August 2018

SMARTPHONE VOTING IS HAPPENING, BUT NO ONE KNOWS IF IT’S SAFE


Originally seen on Wired by Emily Dreyfuss

When news hit this week that West Virginian military members serving abroad will become the first people to vote by phone in a major US election this November, security experts were dismayed. For years, they have warned that all forms of online voting are particularly vulnerable to attacks, and with signs that the midterm elections are already being targeted, they worry this is exactly the wrong time to roll out a new method. Experts who spoke to WIRED doubt that Voatz, the Boston-based startup whose app will run the West Virginia mobile voting, has figured out how to secure online voting when no one else has. At the very least, they are concerned about the lack of transparency.

“From what is available publicly about this app, it’s no different from sending voting materials over the internet,” says Marian Schneider, president of the nonpartisan advocacy group Verified Voting. “So that means that all the built-in vulnerability of doing the voting transactions over the internet is present.”

And there are a lot of vulnerabilities when it comes to voting over the internet. The device a person is using could be compromised by malware. Or their browser could be compromised. In many online voting systems, voters receive a link to an online portal in an email from their election officials—a link that could be spoofed to redirect to a different website. There’s also the risk that someone could impersonate the voter. The servers that online voting systems rely on could themselves be targeted by viruses to tamper with votes or by DDoS attacks to bring down the whole system. Crucially, electronic votes don’t create the paper trail that allows officials to audit elections after the fact, or to serve as a backup if there is in fact tampering.

But the thing is, people want to vote by phone. In a 2016 Consumer Reports survey of 3,649 voting-age Americans, 33 percent of respondents said that they would be more likely to vote if they could do it from an internet-connected device like a smartphone. (Whether it would actually increase voter turnout is unclear; a 2014 report from an independent panel on internet voting in British Columbia concluded that, when all factors are considered, online voting doesn’t actually lead more people to vote.)

Thirty-one states and Washington, DC, already allow certain people, mostly service members abroad, to file absentee ballots online, according to Verified Voting. But in 28 of those states—including Alaska, where any registered voter can vote online—online voters must waive their right to a secret ballot, underscoring another major risk that security experts worry about with online voting: that it can’t protect voter privacy.

“Because of current technological limitations, and the unique challenges of running public elections, it is impossible to maintain separation of voters’ identities from their votes when Internet voting is used,” concludes a 2016 joint report from Common Cause, Verified Voting, and the Electronic Privacy Information Center. That’s true whether those votes were logged by email, fax, or an online portal.

Enter Voatz

Voatz says it’s different. The 12-person startup, which raised $2.2 million in venture capital in January, has worked on dozens of pilot elections, including primaries in two West Virginia counties this May. On a website FAQ, it notes, “There are several important differences between traditional Internet voting and the West Virginia pilot—mainly, security.”

Voatz CEO Nimit Sawhney says the app has two features that make it more secure than other forms of online voting: the biometrics it uses to authenticate a voter and the blockchain ledger where it stores the votes.

The biometrics part occurs when a voter authenticates their identity using a fingerprint scan on their phone. The app works only on certain Android phones and recent iPhones with that feature. Voters must also upload a photo of an official ID—which Sawhney says Voatz verifies by scanning its barcode—and a video selfie, which Voatz matches to the ID using facial-recognition technology. (“You have to move your face and blink your eyes to make sure you are not taking a video of somebody else or taking a picture of a picture,” Sawhney says.) It’s up to election officials to decide whether a voter must upload a new selfie or fingerprint scan each time they access the app or just the first time.
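To make that sequence of checks concrete, here is a minimal sketch of the gating logic being described. It is not Voatz’s code; the helper functions are hypothetical stand-ins for the barcode, face-matching and liveness services the company refers to, stubbed out with placeholder results.

# Hypothetical sketch of the enrollment checks described above -- not Voatz's implementation.
# The three helpers are placeholder stand-ins for real barcode, face-match and liveness services.
def scan_id_barcode(id_photo: bytes) -> bool:
    return True   # stand-in: decode and validate the barcode on the official ID
def match_face_to_id(id_photo: bytes, selfie_video: bytes) -> bool:
    return True   # stand-in: facial recognition between the ID photo and a selfie frame
def check_liveness(selfie_video: bytes) -> bool:
    return True   # stand-in: require blinking/head movement, not a picture of a picture
def verify_voter(id_photo: bytes, selfie_video: bytes, fingerprint_ok: bool) -> bool:
    # Every check must pass: the on-device fingerprint gate plus the three remote checks.
    return (fingerprint_ok
            and scan_id_barcode(id_photo)
            and match_face_to_id(id_photo, selfie_video)
            and check_liveness(selfie_video))
print(verify_voter(b"id.jpg", b"selfie.mp4", fingerprint_ok=True))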


The blockchain comes in after the votes are entered. “The network then verifies it—there’s a whole bunch of checks—then adds it to the blockchain, where it stays in a lockbox until election night,” Sawhney says. Voatz uses a permissioned blockchain, which is run by a specific group of people with granted access, as opposed to a public blockchain like Bitcoin. And in order for election officials to access the votes on election night, they need Voatz to hand deliver them the cryptographic keys.
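For readers unfamiliar with the mechanics, here is a minimal illustration of what an append-only, hash-chained ledger does. It is a generic sketch, not a claim about Voatz’s actual implementation, which the company has not published.

# Generic illustration of a hash-chained, append-only ledger -- not Voatz's implementation.
# Each entry commits to the previous entry's hash, so altering an earlier record
# changes every later hash and is detectable by anyone replaying the chain.
import hashlib, json, time

class Ledger:
    def __init__(self):
        self.entries = []
        self.prev_hash = "0" * 64   # genesis value
    def append(self, payload: dict) -> str:
        record = {"payload": payload, "prev": self.prev_hash, "ts": time.time()}
        digest = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
        self.entries.append((digest, record))
        self.prev_hash = digest
        return digest
    def verify(self) -> bool:
        prev = "0" * 64
        for digest, record in self.entries:
            recomputed = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
            if record["prev"] != prev or recomputed != digest:
                return False
            prev = digest
        return True

ledger = Ledger()
ledger.append({"ballot_id": "anon-123", "choices": "encrypted-blob"})
print(ledger.verify())   # True until someone edits an already-recorded entry

Note what such a chain does and does not protect: it makes tampering with already-recorded entries evident, but it says nothing about whether an entry was correct when it arrived, which is the concern Poorvi Vora raises later in the piece about malware altering a vote before it ever reaches the blockchain.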

Sawhney says that election officials print out a copy of each vote once they access them, in order to do an audit. He also tells WIRED that in the version of the app that people will use in November, Voatz will add a way for voters to take a screenshot of their vote and have that separately sent to election officials for a secondary audit.

To address concerns about ballot secrecy, Sawhney says Voatz deletes all personal identification data from its servers, assigns each person a unique but anonymous identifier within the system, and employs a mix of network encryption methods. “We feel like that extra level of anonymization on the phone and on the network makes it really really hard to reverse-engineer,” he says.
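As a rough illustration of the pseudonymization Sawhney describes, and not Voatz’s published design, the idea is to record ballots under a random identifier and keep (or discard) the identity mapping separately:

# Rough illustration of ballot pseudonymization -- not Voatz's actual scheme.
import uuid

identity_map = {}   # voter identity -> anonymous ID; deleted (or never stored) after check-in
ballots = {}        # anonymous ID -> ballot contents, with no reference to the voter's name

def check_in(voter_name: str) -> str:
    anon_id = str(uuid.uuid4())        # random identifier carrying no personal data
    identity_map[voter_name] = anon_id
    return anon_id

def cast_ballot(anon_id: str, choices: dict) -> None:
    ballots[anon_id] = choices

anon = check_in("Jane Q. Voter")
cast_ballot(anon, {"governor": "Candidate A"})
identity_map.clear()                   # "deleting personal identification data"
print(ballots)

The hard part, as the Common Cause and Verified Voting report quoted above argues, is guaranteeing that no one, including the vendor, can ever re-link the two sides; a sketch like this does not solve that.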

Experts Are Concerned

Very little information is publicly available about the technical architecture behind the Voatz app. The company says it has had a security audit done by three third-party security firms, but the results of that audit are not public. Sawhney says the audit contains proprietary and security-sensitive information that can’t be released to the public. He invites any security researcher who wants to see the audit to come to Boston and view it in Voatz’s secure room after signing an NDA.

This lack of transparency worries people who’ve been studying voting security for a long time. “In over a decade, multiple studies by the top experts in the field have concluded that internet voting cannot be made secure with current technology. VOATZ claims to have done something that is not doable with current technology, but WON’T TELL US HOW,” writes Stanford computer scientist and Verified Voting founder David Dill in an email to WIRED.

Voatz shared one white paper with WIRED, but it lacks the kind of information experts might expect—details on the system architecture, threat tests, how the system responds to specific attacks, verification from third parties. “In my opinion, anybody purporting to have securely and robustly applied blockchain technology to voting should have prepared a detailed analysis of how their system would respond to a long list of known threats that voting systems must respond to, and should have made their analysis public,” Carnegie Mellon computer scientist David Eckhardt wrote in an email.

Ideally, experts say, Voatz would have held a public testing period for its app before deploying it in a live election. Back in 2010, for example, Washington, DC, was developing an open-source system for online voting and invited the public to try to hack it in a mock trial. Researchers from the University of Michigan were able to compromise the election server within 48 hours and change all the vote tallies, according to their report afterward. They also found evidence of foreign operatives already in the DC election server. This kind of testing is now considered best practice for any online voting implementation, according to Eckhardt. Voatz’s trials, by contrast, have been in real primaries.

“West Virginia is handing over its votes to a mystery box.”

DAVID DILL, STANFORD UNIVERSITY

 

Voatz’s use of blockchain does not reassure security experts, either; most dismissed it as marketing. When asked for his thoughts on Voatz’s blockchain technology, University of Michigan computer scientist Alex Halderman, who was part of the group that threat-tested the DC voting portal in 2010, sent WIRED a recent XKCD cartoon about voting software. In the last panel, a stick figure with a microphone tells two software engineers, “They say they’ve fixed it with something called ‘blockchain.’” The engineers’ response? “Aaaaa!!!” “Whatever they’ve sold you, don’t touch it.” “Bury it in the desert.” “Wear gloves.”

“Voting from an app on a mobile phone is as bad an idea as voting online from a computer,” says Avi Rubin, technical director of the Information Security Institute at Johns Hopkins, who has studied electronic voting systems since 1997. “The fact that someone is throwing around the blockchain buzzword does nothing to make this more secure. This is as bad an idea as there is.”

Blockchain has its own limitations, and it’s far from a perfect security solution for something like voting. First of all, information can be manipulated before it enters the chain. “In fact, there is an entire industry in viruses to manipulate cryptocurrency transactions before they enter the blockchain, and there is nothing to prevent the use of similar viruses to change the vote,” says Poorvi Vora, a computer scientist and election security expert at George Washington University.

She adds that if the blockchain is a permissioned version, as Voatz’s is, “It is possible for those maintaining the blockchain to collude to change the data, as well as to introduce denial of service type attacks.”

Click on this iOS phishing scam and you’ll be connected to “Apple Care”


Scam website launched phone call, connected victims to “Lance Roger at Apple Care.”

Originally seen on ArsTechnica

India-based tech support scams have taken a new turn, using phishing emails targeting Apple users to push them to a fake Apple website. This phishing attack also comes with a twist—it pops up a system dialog box to start a phone call. The intricacy of the phish and the formatting of the webpage could convince some users that their phone has been “locked for illegal activity” by Apple, luring them into clicking to complete the call.

Scammers are following the money. As more people use mobile devices as their primary or sole way of connecting to the Internet, phishing attacks and other scams have increasingly targeted mobile users. And since so much of people’s lives are tied to mobile devices, they’re particularly attractive targets for scammers and fraudsters.

“People are just more distracted when they’re using their mobile device and trust it more,” said Jeremy Richards, a threat intelligence researcher at the mobile security service provider Lookout. As a result, he said, phishing attacks against mobile devices have a higher likelihood of succeeding.

This particular phish, targeted at email addresses associated with Apple’s iCloud service, appears to be linked to efforts to fool iPhone users into enrolling in rogue mobile device management services, which let bad actors push compromised applications to victims’ phones as part of a fraudulent Apple “security service.”

I attempted to bluff my way through a call to the “support” number to collect intelligence on the scam. The person answering the call, who identified himself as “Lance Roger from Apple Care,” became suspicious of me and hung up before I could get too far into the script.

Running down the scam

In a review of spam messages I received this weekend, I found an email with the subject line “[username], Critical alert for your account ID 7458.” Formatted to look like an official cloud account warning (but easily discernible, to me at least, as a phish), the email warned, “Sign-in attempt was blocked for your account [email address]. Someone just used your password to try to sign in to your profile.” A “Check Activity” button below linked to a webpage on a compromised site for a men’s salon in southern India.

That page, using obfuscated JavaScript, forwards the victim to another website, which in turn forwards to applesecurityrisks.xyz, a fake Apple Support page. JavaScript on that page then uses a programmed “click” event to activate a link that invokes the tel:// uniform resource identifier (URI) handler. On an iPhone, this brings up a dialog box to start a phone call; on iPads and other Apple devices, it attempts to launch a FaceTime session.

Meanwhile, an animated dialog box urges the target to make the call because their phone has supposedly been “locked due to illegal activity.” A script on the site reads the “user agent” string sent by the browser to determine what type of device is viewing the page:

window.defaultText='Your |%model%| has been locked due to detected illegal activity! Immediately call Apple Support to unlock it!';
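The |%model%| placeholder is presumably filled in from that user-agent string. The following is a hedged sketch of that kind of substitution; the real page does it in JavaScript, and the patterns below are illustrative guesses, not the scammers’ actual code.

# Sketch of a user-agent -> device-model substitution like the one the scam page performs.
# Illustrative only; the real site does this in JavaScript.
import re

def device_model(user_agent: str) -> str:
    for pattern, label in [("iPhone", "iPhone"), ("iPad", "iPad"),
                           ("Macintosh", "Mac"), ("Android", "Android phone")]:
        if re.search(pattern, user_agent):
            return label
    return "device"

template = ("Your |%model%| has been locked due to detected illegal activity! "
            "Immediately call Apple Support to unlock it!")
ua = "Mozilla/5.0 (iPhone; CPU iPhone OS 11_4 like Mac OS X) AppleWebKit/605.1.15"
print(template.replace("|%model%|", device_model(ua)))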

While the site is still active, it is now marked as deceptive by Google and Apple. I passed technical details of the phishing site to an Apple security team member.

The scam is obviously targeted at the same sort of audience as the Windows tech support scams we’ve reported on. But it doesn’t take much imagination to see how schemes like this could be used against people at a specific company, customers of a particular bank, or users of a certain cloud platform to mount much more tailored social engineering attacks.

Windows Server 2019 RDSH is a go


Originally Seen: April 17, 2018

[UPDATE: Microsoft on April 24 released the next preview build of Windows Server 2019, which includes RDSH. “Because of a bug, the RDSH role was missing in previous releases of Windows Server 2019 – this build fixes that,” the company said in a blog post announcing Build 17650.]

Remote Desktop Session Host is coming to the Windows Server 2019 preview and official release, Microsoft has confirmed.

The Remote Desktop Session Host (RDSH) role was not available in the first preview build of Windows Server 2019 that Microsoft released to the Insiders Program in March. At that time, experts said they did not expect the company to include RDSH when the operating system becomes generally available later this year.

In a statement to SearchVirtualDesktop this week, however, a company spokesperson said: “The RDSH role will be in the preview build available for Insiders soon. Windows Server 2019 will have the [Remote Desktop Services] roles like in Windows Server 2016.”

Mixed messages on Windows Server 2019 RDSH

Up until now, the messaging from Microsoft around RDSH in Windows Server 2019 caused confusion and frustration among some in the IT community. The company declined to officially comment on the future of RDSH in March, although some members of the Windows Server team posted on Twitter about the issue.

Jeff Woolsey, principal program manager for Windows Server, said in March that Remote Desktop Services (RDS) — the set of technologies that provide remote desktop and application access — was “not gone.” Last week, he reiterated that statement, and Scott Manchester, Microsoft group manager for RDS, said RDSH would be coming to the Windows Server 2019 preview in about two weeks.

IT administrators and industry observers wondered why Microsoft had not clarified earlier that Windows Server 2019 would indeed have the RDSH role.

“Microsoft was disconcertingly quiet about the feature omission,” said Jeff Wilhelm, CTO at Envision Technology Advisors, a solutions provider in Pawtucket, R.I. “There was much speculation.”

One possibility is that the code for the RDSH role simply wasn’t ready, and instead of releasing something incomplete or buggy in the preview, Microsoft removed it altogether.

Other speculation focused on a potential new multi-user Windows 10 feature. Microsoft has not commented on that, but it may continue to be a possibility for session-hosted desktops without RDSH.

The news that RDSH will be in the next Insider build should mean “a sigh of relief” for service providers and IT admins, Wilhelm said in an email.

“RDSH provides an important feature to users at many organizations, and the announced improvements, including HTML5 support, are a welcome addition,” he said.

Protecting safety instrumented systems from malware attacks


Originally seen: February 2018

Trisis malware targets safety instrumented systems and puts industrial control systems at risk. Expert Ernie Hayden reviews what to know about SIS and its security measures.

A newly discovered attack on industrial control systems has the security world uncovering more questions than answers.

The 2018 S4 Conference even included presentations and multiple side conversations about the attack called Trisis/Triton/HatMan.

The first public awareness of this attack came after cybersecurity company FireEye published a blog post about it in mid-December 2017. The company’s moniker for this malware was Triton. Close on the heels of the FireEye announcement, Dragos CEO Robert Lee published a white paper analyzing the malware that he called Trisis because it targeted Schneider Electric’s Triconex Safety Instrumented Systems.

On Dec. 18, 2017, the U.S. Department of Homeland Security’s National Cybersecurity and Communications Integration Center (NCCIC) published its industrial control systems (ICS)-CERT malware analysis report, MAR-17-352-01, “HatMan — Safety System Targeted Malware,” which summarized the Triton/Trisis/HatMan malicious code.

Almost daily, new commentaries and analyses of Triton/Trisis/HatMan are published. It is obvious that the attack has raised more questions than answers, including: Who orchestrated the attack? Why did they develop this code? What was the attack’s purpose? Are there more malware attacks to come?

One thing is certain, according to Lee, “Trisis is the first ever [attack] to target safety instrumented systems, and it is the one that gives me the most concern.”

What is a safety instrumented system?

A simple, but not perfect, way to think about safety instrumented systems (SIS) is to consider them part of a dead man’s switch configuration.

Dead man’s switch (DMS) mechanisms are used in a variety of operating environments, such as locomotives, lawn mowers, chainsaws, snowblowers, and even for aircraft refueling. The idea is that the DMS must be continuously held or pressed by an operator, and, if the switch/handle is released during operation, the machine will either stop running or transition to a safer state, such as idling.

A DMS control in a locomotive can be a floor pedal, trigger handle or push-button where the device must be continuously held or pressed to enable the locomotive to move forward. If the engineer driving the train is incapacitated for any reason, the release of the DMS causes the engine to idle and, in some locomotives, the emergency brakes are applied. The system fails safe.
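As a toy illustration of that fail-safe behavior, a hold-to-run control can be modeled as logic that keeps the machine running only while the switch input is present and drops to a safe state otherwise. This is a teaching sketch, not real locomotive control code.

# Toy model of a dead man's switch: the machine runs only while the switch is held.
# Releasing the switch (or losing the signal entirely) always falls back to the safe state.
def next_state(switch_held: bool) -> str:
    return "RUNNING" if switch_held else "IDLE, BRAKES APPLIED"

for second, held in enumerate([True, True, True, False, False]):
    print(second, next_state(held))   # the moment the switch is released, the system fails safe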

Traditional dead man’s switches in trains can be overridden using duct tape, heavy bricks or other methods, and, in rare cases, the switches can fail to engage when an incapacitated engineer slumps forward.

Safety instrumented systems are more complicated than the dead man’s switch described above. However, the SIS is installed — optimally in its own dedicated network zone — so that plant operations can be shut down under extreme plant conditions without human intervention. In other words, the plant can fail safe.

Emerson Process Management literature notes that safety instrumented systems “are specifically designed to protect personnel, equipment, and the environment by reducing the likelihood or the impact severity of an identified emergency event.”

An SIS is composed of a combination of sensors, logic solvers and final elements that are separate and distinct from the other plant controls. If the plant is out of control, the SIS is there to shut the plant down with no reliance on human intervention.

Some SIS configurations include dedicated sensors that shut down plant operations — such as refineries — when certain pressures or temperatures are exceeded. Another example of an SIS working is a nuclear reactor automatic shutdown — called a SCRAM — when coolant flow is below a minimum rate, etc. Again, no human intervention is necessary.

According to standards established by the International Electrotechnical Commission in IEC 61511 and the International Society of Automation ISA S84.01, safety instrumented systems must be separate and distinct — independent — from other control systems that operate and control the same equipment/systems. The controls and control systems contained within the SIS are devoted solely to the proper operation of the safety system. There is no reliance on outside controls or sensor input for the SIS to trip.
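A minimal sketch of that independence: the trip decision below depends only on the SIS’s own dedicated sensor reading and a fixed setpoint, with no input from the plant’s regular control system or its operators. It is illustrative only; real SIS logic runs on certified hardware and is far more involved.

# Illustrative SIS trip logic: dedicated sensor, fixed setpoint, no outside input.
TRIP_SETPOINT_PSI = 350.0   # example high-pressure trip point, chosen for illustration

def sis_scan(dedicated_pressure_psi: float) -> str:
    # The SIS reads only its own sensor; the basic process control system cannot veto the trip.
    if dedicated_pressure_psi >= TRIP_SETPOINT_PSI:
        return "TRIP: close feed valves, shut off burners"
    return "normal"

for reading in (320.0, 349.9, 355.2):
    print(reading, "->", sis_scan(reading))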

Safety instrumented systems architecture

Some ICS security experts may refer to the Purdue reference model — or the Purdue model — when discussing ICS network architecture. The Purdue model is a part of the Purdue Enterprise Reference Architecture, which provides a framework for designing manufacturing systems, and which was developed in the 1990s by Theodore Williams and the members of the Industry-Purdue University Consortium for Computer Integrated Manufacturing.

The Purdue model is intended to help users understand a production network. The model organizes an industrial plant network architecture into four levels — plus the underlying physical process level — and is illustrated below.

  • Level 0: The physical process — This is where the physical work in the plant gets done.
  • Level 1: Intelligent devices — This level includes the sensors, programmable logic controllers (PLCs) and the actuators. This level has its own distinct and separate sub-zone specifically for SIS.
  • Level 2: Control systems — This level is where the production is monitored, controlled and electronically supervised, and it includes the production video screens that display the Human Machine Interface and real-time controls and software.
  • Level 3: Manufacturing operations systems — This level is essentially the brains of the manufacturing operations. It includes manufacturing execution systems, maintenance and plant performance management systems, data historians, and middleware.
  • Level 4: Business logistics systems — This is on the enterprise side of the plant. At this level, business-related manufacturing activities are performed, primarily relying on enterprise resource planning software, such as SAP. This is where the plant production schedule is generated and material inventory, shipping and use are monitored and modified.

Basically, when you are in the manufacturing plant, you are looking primarily at Levels 0, 1, 2 and 3.

Figure: ICS network, or the Purdue model (source: Ernie Hayden)

In the Purdue model, the SIS is located at Level 1 and comprises its own stand-alone network zone. The safety and protection systems monitor the manufacturing processes and, under emergency conditions, activate and return the plant to a safe state by closing valves, shutting off burners, increasing cooling water flow, etc. These safety and protection systems also include tools that monitor manufacturing and alert an operator of impending unsafe conditions.

SIS controller versus a PLC

You should understand that, visually, an SIS and an off-the-shelf programmable logic controller (PLC) or other industrial PC may look the same; however, they have different functions and different implementation schemes. Vendors typically use their current line of PLCs and modify them to fill the SIS role.

According to Clint Bodungen’s Hacking Exposed: Industrial Control Systems, “… the SIS typically uses a complicated series of both analog and digital 1-out-of-2 or 2-out-of-3 voting systems to monitor and respond to adverse process conditions. Normally, SISs are designated to provide only a few core functions …” when compared to the normal, multifunctional PLC in the manufacturing plant.
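For illustration, 2-out-of-3 (2oo3) voting simply means the trip fires when at least two of three redundant channels demand it, so a single failed or spurious sensor neither blocks a real trip nor causes a false one. The sketch below shows the idea, not Triconex logic.

# Sketch of 2-out-of-3 (2oo3) voting: trip when at least two of three channels demand it.
def vote_2oo3(channel_a: bool, channel_b: bool, channel_c: bool) -> bool:
    return (channel_a + channel_b + channel_c) >= 2   # booleans sum as 0 or 1

print(vote_2oo3(True, False, False))   # False: one channel alone, likely a faulty sensor
print(vote_2oo3(True, True, False))    # True: two channels agree, initiate the trip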

Safety instrumented systems are not installed in every plant; however, they will be found in plants where hackers, terrorists or internal attackers could cause serious harm, such as death, injury or environmental releases. So you may not see an SIS in a benign manufacturing facility, but the refinery next door would have more than one SIS in place.

The crisis of a violated SIS

As Lee’s comment above makes clear, the demonstration that malware can violate an SIS is very serious. Trisis/Triton/HatMan in the SIS may not do anything catastrophic by itself; however, the NCCIC observes “… it could be very damaging when combined with malware that impacts the (manufacturing) process in tandem.”

Essentially, the capacity to disable, modify or inhibit an SIS’s ability to fail safely can result in physical consequences, environmental impact, injuries and even death.

In the Dragos blog, a FAQ entry rhetorically asks, “Is Trisis a Big Deal?” The answer is yes.

  • Trisis is the fifth known ICS-tailored malware following Stuxnet, Havex, BlackEnergy2 and CrashOverride.
  • Trisis is the first publicly known ICS-tailored malware to target SIS.
  • Lastly, because SISs are specifically designed and deployed to ensure the safety of the manufacturing process, environment and human life, an assault on SIS is “… bold and unsettling.”

Yes, the effect of Trisis is disconcerting and brings added attention to the security and integrity of the SIS. However, an SIS can be defeated without exotic malware by placing the SIS controller in bypass, placing the logic solver in an infinite loop, changing the trip and alarm set points, disconnecting the output from the logic, spoofing the inputs, etc., according to Secure the SIS by William L. Mostia. So waiting for SIS malware to prompt you to protect your SIS may not be enough.

The SIS is critical to safe plant operations and needs to be designed, implemented and maintained with utmost care and oversight. The Trisis/Triton/HatMan attack has certainly awakened the ICS security community.

As observed by Dale Peterson, founder and CEO of Digital Bond, a control system security company based in Sunrise, Fla., and producer of the annual S4 Conference, we are in the early stages of analyzing this SIS malware attack.

The industry should remember how much we learned about Stuxnet from the detailed work done by Ralph Langner and others; even so, it took time and resources before we really knew the details. We may need to be patient with the ICS researchers who can tell us more about this new SIS malware and how we can best protect ourselves.

AT&T mobile 5G network falling short


Originally Seen: TechTarget April 2018

The latest update on AT&T’s mobile 5G network trials indicates the company will need to work faster to meet its goal of launching a commercial service by the end of the year.

AT&T’s latest update on its mobile 5G trials indicates the carrier has significant hurdles to clear to achieve its goal of launching a commercial service based on the high-speed wireless technology by the end of the year.

AT&T this week published a blog post describing its progress in the mobile 5G network trials in Austin and Waco, Texas; Kalamazoo, Mich.; and South Bend, Ind. The company started the tests roughly 18 months ago in Austin and added the other cities late last year.

AT&T, along with Verizon and other carriers, is spending billions of dollars to develop fifth-generation wireless networks for business, consumer and internet of things applications. But the latest metrics published by AT&T were not what analysts would expect from technology for delivering mobile broadband to smartphones, tablets and other devices.


“When I look at how AT&T is characterizing these tests, it doesn’t look like mobile 5G to me,” said Chris Antlitz, an analyst at Technology Business Research Inc., based in Hampton, N.H. “It seems like there are some inconsistencies there.”

AT&T plans to deliver mobile 5G over the millimeter wave (mmWave) band, spectrum between 30 gigahertz (GHz) and 300 GHz. The technology allows for data rates of up to 10 Gbps, which comfortably accommodates carriers’ plans for 5G. But before service providers can use it, they have to surmount its limitations in signal distance and in traveling through obstacles, like buildings.

AT&T’s mobile 5G network challenges

AT&T’s update indicates mmWave’s constraints remain a challenge. In Waco, for example, AT&T delivered 5G to a retail business roughly 500 feet away from its cellular transmitter. That maximum distance would require more transmitters than the population outside of major cities could support, Antlitz said.

AT&T, however, could provide a fixed wireless network that sends a 5G signal to residences and businesses as an alternative to wired broadband, Antlitz said. AT&T rival Verizon plans to offer that product by the end of the year.

Other shortcomings include AT&T’s limited success in sending a 5G signal from the cellular transmitter through the buildings, trees and other obstacles likely to stand between it and its destination. In the trial update, AT&T said it achieved gigabit speeds only in “some non-line of sight conditions.” A line of sight typically refers to an unobstructed path between the transmitting and receiving antennas.

Distance and piercing obstacles are challenges for any carrier using mmWave for a mobile 5G network. Buildings and other large physical objects can block the technology’s short, high-frequency wavelengths. Also, gases in the atmosphere, rain and humidity can weaken mmWave’s signal strength, limiting the technology’s reach to six-tenths of a mile or less.
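A back-of-the-envelope calculation shows why frequency matters so much. Using the standard free-space path-loss formula, and ignoring the rain, foliage and building losses that make mmWave even worse, a 28 GHz signal (a band commonly used in early mmWave trials; the article does not specify AT&T’s exact band) loses roughly 23 dB more than a 1.9 GHz LTE-band signal over the same 500-foot path:

# Free-space path loss: FSPL(dB) = 20*log10(d_meters) + 20*log10(f_hertz) - 147.55
# Illustrative comparison only; 28 GHz is an assumed mmWave band, 1.9 GHz a typical LTE band.
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    return 20 * math.log10(distance_m) + 20 * math.log10(freq_hz) - 147.55

d = 152.4                      # 500 feet, the Waco distance cited above, in meters
lte = fspl_db(d, 1.9e9)        # roughly 82 dB
mmwave = fspl_db(d, 28e9)      # roughly 105 dB
print(f"LTE band: {lte:.1f} dB  mmWave: {mmwave:.1f} dB  difference: {mmwave - lte:.1f} dB")

Making up that extra loss takes higher-gain antennas or a much denser grid of transmitters, which is the economic concern Antlitz raises about areas outside major cities.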

AT&T’s achievement in network latency also falls short of what’s optimal for a mobile 5G network. The carrier’s 9 to 12 milliseconds seem “a little high,” Antlitz said. “I would expect that on LTE, not 5G. 5G should be lower.”

While AT&T has likely made some progress in developing mobile 5G, “a lot of work needs to be done,” said Rajesh Ghai, an analyst at IDC.

Delays possible in AT&T, Verizon 5G offerings

Meanwhile, Verizon is testing its fixed wireless 5G network — a combination of mmWave and proprietary technology — in 11 major metropolitan areas. So far, the features Verizon has developed place the carrier “fairly far ahead of AT&T in terms of maximizing the capabilities of 5G,” Antlitz said.

Nevertheless, neither Verizon nor AT&T is a sure bet for launching a commercial 5G network this year.

“Some of this stuff might wind up getting pushed into 2019,” Antlitz said. “There are so many things that could throw a monkey wrench in their timetable. The probability of something doing that is very high.”

HP keylogger: How did it get there and how can it be removed?


Originally seen: October 2017 TechTarget.

A keylogging flaw found its way into dozens of Hewlett Packard laptops. Nick Lewis explains how the HP keylogger works and what can be done about it.

More than two dozen models of Hewlett Packard laptops were found to contain a keylogger that recorded keystrokes into a log file. HP released patches to remove the keylogger and the log files. How did the HP keylogger vulnerability get embedded in the laptops? And is there anything organizations can do to test new endpoint devices?

When it comes to security, having high expectations for security vendors and large vendors with deep pockets is reasonable, given that customers usually pay a premium believing those vendors will devote significant resources to securing their products. Unfortunately, like most security teams, vendors often don’t have enough resources or organizational fortitude to ensure security is incorporated into all of the enterprise’s software development.

But even the most secure software development can enable security issues to slip through the cracks. When you add in an outsourced hardware or software development team, it’s even easier for something to go unnoticed.

So while vendors might talk a good talk when it comes to security, monitoring them to ensure they uphold their end of your agreement is absolutely necessary.

One case where a vulnerability apparently escaped notice was uncovered when researchers at Modzero AG, an information security company based in Winterthur, Switzerland, found that a bug had been introduced into HP laptops by a third-party driver installed by default.


The vulnerability was discovered in the Conexant HD Audio Driver package, where the driver monitors for certain keystrokes used to mute or unmute audio. The keylogging functionality, complete with the ability to write all keystrokes to a log file, was probably introduced to help the developers debug the driver.

We can hope that the HP keylogger vulnerability was left in inadvertently when the drivers were released to customers. Modzero found metadata indicating the HP keylogger capability was present in HP computers since December 2015, if not earlier.

It’s difficult to know whether static or dynamic code analysis tools could have detected this vulnerability. However, given the resources available to HP in 2015, including a line of business related to application and code security, as well as the expectations of its customers, it might be reasonable to assume HP could have incorporated these tools into its software development practices. Complicating matters, the transfer of all of HP’s information security businesses to a new entity, Hewlett Packard Enterprise, began in November 2015 and was completed in September 2017, when Micro Focus merged with HPE.

It’s possible that Modzero found the HP keylogger vulnerability while evaluating a potential new endpoint for an enterprise customer. They could have been monitoring for open files, or looking for which processes had the files open to determine what the process was doing. They could have been profiling the individual processes running by default on the system to see which binaries to investigate for vulnerabilities. They could even have been monitoring to see if any processes were monitoring keystrokes.
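One of those checks, finding which processes hold suspicious files open, is easy to approximate with off-the-shelf tooling. Below is a minimal sketch using the third-party psutil library; the .log suffix filter is a generic example for illustration, not the actual Conexant log-file name.

# Sketch: list processes that currently hold .log files open, a quick way to notice
# something like an audio driver writing keystrokes to disk.
# Requires the third-party psutil package (pip install psutil); run with sufficient privileges.
import psutil

def processes_with_open_logs(suffix: str = ".log"):
    hits = []
    for proc in psutil.process_iter(["pid", "name"]):
        try:
            for f in proc.open_files():
                if f.path.lower().endswith(suffix):
                    hits.append((proc.info["pid"], proc.info["name"], f.path))
        except (psutil.AccessDenied, psutil.NoSuchProcess):
            continue   # some system processes will not let us inspect their open handles
    return hits

for pid, name, path in processes_with_open_logs():
    print(pid, name, path)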

Enterprises can take these steps on their own or rely on third parties to monitor their vendors. Many enterprises will install their own image on an endpoint before deploying it on their network — the known good images used for developing specific images for target hardware could have their unique aspects analyzed with a dynamic or runtime application security tool to determine if any common vulnerabilities are present.