Calibration Attack Drills Down on iPhone, Pixel Users

Originally seen: Threatpost on May 23rd, 2019 by Tara Seals

A new way of tracking mobile users creates a globally unique device fingerprint that browsers and other protections can’t stop.

A proof-of-concept for a new type of privacy attack, dubbed “calibration fingerprinting,” uses data from Apple iPhone sensors to construct a globally unique fingerprint for any given mobile user. Researchers said that this provides an unusually effective means to track people as they browse across the mobile web and move between apps on their phones.

Further, the approach also affects Pixel phones from Google, which run on Android.

A research team from the University of Cambridge in the UK released their findings this week, showing that data gathered from the accelerometer, gyroscope and magnetometer sensors found in the smartphones can be used to generate the calibration fingerprint in less than a second – and that it never changes, even after a factory reset.

The attack also can be launched by any website a person visits via a mobile browser, or any app, without needing explicit confirmation or consent from the target.

In Apple’s case, the issue results from a weakness in iOS 12.1 and earlier, so iPhone users should update to the latest OS version as soon as possible. Google has not yet addressed the problem, according to the researchers.

A device fingerprint allows websites to detect return visits or track users, and in its innocuous form, can be used to protect against identity theft or credit-card fraud; advertisers often also rely on this to build a user profile to serve targeted ads.

Fingerprints are usually built with pretty basic info: The name and version of your browser, screen size, fonts installed and so on. And browsers are increasingly using blocking mechanisms to thwart such efforts in the name of privacy: On Apple iOS for iPhone for instance, the Mobile Safari browser uses Intelligent Tracking Prevention to restrict the use of cookies, prevent access to unique device settings and eliminate cross-domain tracking.

However, on any iOS device running a version below 12.2, including the latest iPhone XS, iPhone XS Max and iPhone XR, it’s possible to get around those protections by taking advantage of the fact that the motion sensors in modern smartphones are micro-electromechanical systems (MEMS), built using microfabrication to emulate the mechanical parts found in traditional sensor devices, according to the paper.

“MEMS sensors are usually less accurate than their optical counterparts due to various types of error,” the team said. “In general, these errors can be categorized as deterministic and random. Sensor calibration is the process of identifying and removing the deterministic errors from the sensor.”
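
The calibration step the researchers describe can be pictured with a toy model (all values here are invented for illustration; they are not Apple's actual firmware parameters): the firmware subtracts a per-device bias and divides out a per-device gain to remove the deterministic error.

```python
import numpy as np

# Toy model of MEMS gyroscope calibration (illustrative numbers only).
# The deterministic errors are a per-axis gain and a per-axis zero bias;
# factory calibration estimates them so the firmware can undo them.
true_rate = np.array([0.10, -0.05, 0.02])   # rad/s, ground-truth rotation
gain = np.array([1.013, 0.987, 1.021])      # hypothetical per-device gains
bias = np.array([0.004, -0.002, 0.001])     # hypothetical per-device biases

raw = gain * true_rate + bias               # what the uncalibrated sensor reports
calibrated = (raw - bias) / gain            # firmware's calibration correction

assert np.allclose(calibrated, true_rate)   # deterministic error removed
```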

Websites and apps can access sensor data without any special permission from users. In analyzing this freely accessible information, the researchers found it was possible to infer the per-device factory calibration data that manufacturers embed in the smartphone’s firmware to compensate for these systematic manufacturing errors. That calibration data can then serve as the fingerprint, because despite perceived homogeneity, every Apple iPhone is just a little bit different – even if two devices come from the same manufacturing batch.

“We found that the gyroscope and magnetometer on iOS devices are factory-calibrated and the calibration data differs from device-to-device,” the researchers said. “Extracting the calibration data typically takes less than one second and does not depend on the position or orientation of the device.”
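
A simplified sketch of the core observation (the numbers are made up and the real attack is more involved): if firmware converts integer ADC counts to physical units by multiplying by a secret per-device gain, the values a web page can read sit on a lattice whose spacing *is* that gain.

```python
import numpy as np

# Hypothetical sketch, not the paper's exact method: reported sensor values
# are integer ADC counts times a per-device calibration gain.
rng = np.random.default_rng(0)
device_gain = 0.061545                     # made-up per-device gain
adc_counts = np.concatenate(
    [rng.integers(-500, 500, 200), [100, 101]]  # ensure two adjacent counts occur
)
reported = adc_counts * device_gain        # what JavaScript could observe

# An attacker recovers the gain as the smallest spacing between distinct
# reported values -- a stable per-device constant usable as a fingerprint.
recovered_gain = np.diff(np.unique(reported)).min()
assert abs(recovered_gain - device_gain) < 1e-9
```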

Creating a globally unique calibration fingerprint requires adding a little more information, however – for instance, from traditional fingerprinting sources.

“We demonstrated that our approach can produce globally unique fingerprints for iOS devices from an installed app — around 67 bits of entropy for the iPhone 6S,” they said. “Calibration fingerprints generated by a website are less unique (~42 bits of entropy for the iPhone 6S), but they are orthogonal to existing fingerprinting techniques and together they are likely to form a globally unique fingerprint for iOS devices.”
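
The arithmetic behind that claim is simple: entropies of independent (orthogonal) fingerprinting signals add in bits. The 42-bit figure is from the paper; the entropy assigned here to traditional browser fingerprinting is an assumed, illustrative value.

```python
# Entropy of independent fingerprint signals adds in bits.
sensorid_bits = 42        # website-based SensorID entropy (from the paper)
traditional_bits = 20     # ASSUMED entropy of classic browser fingerprinting

combined = sensorid_bits + traditional_bits   # 62 bits

# 2^62 distinguishable states dwarfs the iOS install base (on the order
# of 2^30 devices), so the combined fingerprint is very likely unique.
assert combined == 62
assert 2 ** combined > 2 ** 30
```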

A longitudinal study also showed that the calibration fingerprint, which the researchers dubbed “SensorID,” doesn’t change over time or vary with conditions.

“We have not observed any change in the SensorID of our test devices in the past half year,” they wrote. “Our dataset includes devices running iOS 9/10/11/12. We have tested compass calibration, factory reset, and updating iOS (up until iOS 12.1); the SensorID always stays the same. We have also tried measuring the sensor data at different locations and under different temperatures; we confirm that these factors do not change the SensorID either.”

In terms of how applicable the SensorID approach is, the research team found that both mainstream browsers (Safari, Chrome, Firefox and Opera) and privacy-enhanced browsers (Brave and Firefox Focus) are vulnerable to the attack, even with the fingerprinting protection mode turned on.

Further, motion sensor data is accessed by 2,653 of the Alexa top 100,000 websites, the research found, including more than 100 websites exfiltrating motion-sensor data to remote servers.

“This is troublesome since it is likely that the SensorID can be calculated with exfiltrated data, allowing retrospective device fingerprinting,” the researchers wrote.

However, it’s possible to mitigate the calibration fingerprint attack on the vendor side by adding uniformly distributed random noise to the sensor outputs before calibration is applied at the factory level – something Apple did starting with iOS 12.2.

“Alternatively, vendors could round the sensor outputs to the nearest multiple of the nominal gain,” the paper said.
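
Both mitigations can be sketched in a few lines (all gains and the noise range are invented values; real vendors choose their own parameters):

```python
import numpy as np

rng = np.random.default_rng(1)
nominal_gain = 0.061          # published nominal gain, identical on every device
device_gain = 0.061545        # secret per-device calibration gain (made up)
counts = np.concatenate([rng.integers(-500, 500, 1000), [100, 101]])
calibrated = counts * device_gain

# Mitigation 1 (what the researchers say Apple shipped in iOS 12.2):
# add uniform random noise so outputs no longer sit on a per-device lattice.
noisy = calibrated + rng.uniform(-nominal_gain / 2, nominal_gain / 2, counts.size)

# Mitigation 2 (the paper's alternative): round outputs to the nearest
# multiple of the *nominal* gain, which carries no per-device information.
rounded = np.round(calibrated / nominal_gain) * nominal_gain

# After rounding, the output spacing reveals only the nominal gain.
spacing = np.diff(np.unique(rounded)).min()
assert abs(spacing - nominal_gain) < 1e-9
```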

Privacy-focused mobile browsers, meanwhile, can add an option to disable access to motion sensors via JavaScript.

“This could help protect Android devices and iOS devices that no longer receive updates from Apple,” according to the paper.

Although most of the research focused on iPhone, Apple is not the only vendor affected: The team found that the accelerometer of Google Pixel 2 and Pixel 3 can also be fingerprinted by the approach.

That said, the fingerprint has less individual entropy and is unlikely to be globally unique – meaning other kinds of fingerprinting data would also need to be gathered for full device-specific tracking.

The paper also noted that other factory-calibrated Android devices might be vulnerable, but they were outside the scope of testing.

While Apple addressed the issue, Google, which was notified in December about the attack vector, is still in the process of “investigating this issue,” according to the paper.

Threatpost has reached out to the internet giant for comment.

Opening this image file grants hackers access to your Android phone

Originally seen on: Zdnet by Charlie Osborne, February 7th, 2019

Be careful if you are sent an image from a suspicious source.

Opening a cute cat meme or innocent landscape photo may seem harmless enough, but if it happens to be in a .PNG format, your Android device could be critically compromised due to a new attack.

In Google’s Android security update for February, the tech giant’s advisory noted a critical vulnerability which exists in the Android operating system’s framework.

All it takes to trigger the bug is for attackers to send a crafted, malicious Portable Network Graphic (.PNG) file to a victim’s device. Should the user open the file, the exploit is triggered.

Remote attackers are then able to execute arbitrary code in the context of a privileged process, according to Google.

Android versions 7.0 to 9.0 are impacted.

The vulnerability was one of three bugs impacting Android Framework — CVE-2019-1986, CVE-2019-1987, and CVE-2019-1988 — and is the most severe security issue in the February update.

There are no current reports of the vulnerability being exploited in the wild. However, given the ease with which the bug can be exploited, users should accept incoming updates to their Android builds as soon as possible.

Because vendors using the Android operating system roll out security patches and updates at different rates, Google has declined to reveal the technical details of the exploit, to mitigate the risk of attack.

Google’s bulletin also outlined remote code execution flaws impacting the Android library, system files, and Nvidia components. Elevation of privilege and information disclosure security holes have also been resolved.

Source code patches for the .PNG issue, alongside other security problems raised in the bulletin, have also been released to the Android Open Source Project (AOSP) repository.

In January, researchers revealed the existence of a new malvertising group called VeryMal. The scammers specifically target Apple users and bury malicious code in digital images using steganography techniques to redirect users from legitimate websites to malicious domains controlled by the attackers.

Smartphone Voting Is Happening, But No One Knows If It’s Safe

Originally seen on Wired by Emily Dreyfuss

When news hit this week that West Virginian military members serving abroad will become the first people to vote by phone in a major US election this November, security experts were dismayed. For years, they have warned that all forms of online voting are particularly vulnerable to attacks, and with signs that the midterm elections are already being targeted, they worry this is exactly the wrong time to roll out a new method. Experts who spoke to WIRED doubt that Voatz, the Boston-based startup whose app will run the West Virginia mobile voting, has figured out how to secure online voting when no one else has. At the very least, they are concerned about the lack of transparency.

“From what is available publicly about this app, it’s no different from sending voting materials over the internet,” says Marian Schneider, president of the nonpartisan advocacy group Verified Voting. “So that means that all the built-in vulnerability of doing the voting transactions over the internet is present.”

And there are a lot of vulnerabilities when it comes to voting over the internet. The device a person is using could be compromised by malware. Or their browser could be compromised. In many online voting systems, voters receive a link to an online portal in an email from their election officials—a link that could be spoofed to redirect to a different website. There’s also the risk that someone could impersonate the voter. The servers that online voting systems rely on could themselves be targeted by viruses to tamper with votes or by DDoS attacks to bring down the whole system. Crucially, electronic votes don’t create the paper trail that allows officials to audit elections after the fact, or to serve as a backup if there is in fact tampering.

But the thing is, people want to vote by phone. In a 2016 Consumer Reports survey of 3,649 voting-age Americans, 33 percent of respondents said that they would be more likely to vote if they could do it from an internet-connected device like a smartphone. (Whether it would actually increase voter turnout is unclear; a 2014 report conducted by an independent panel on internet voting in British Columbia concludes that, when all factors are considered, online voting doesn’t actually lead more people to vote.)

Thirty-one states and Washington, DC, already allow certain people, mostly service members abroad, to file absentee ballots online, according to Verified Voting. But in 28 of those states—including Alaska, where any registered voter can vote online—online voters must waive their right to a secret ballot, underscoring another major risk that security experts worry about with online voting: that it can’t protect voter privacy.

“Because of current technological limitations, and the unique challenges of running public elections, it is impossible to maintain separation of voters’ identities from their votes when Internet voting is used,” concludes a 2016 joint report from Common Cause, Verified Voting, and the Electronic Privacy Information Center. That’s true whether those votes were logged by email, fax, or an online portal.

Enter Voatz

Voatz says it’s different. The 12-person startup, which raised $2.2 million in venture capital in January, has worked on dozens of pilot elections, including primaries in two West Virginia counties this May. On a website FAQ, it notes, “There are several important differences between traditional Internet voting and the West Virginia pilot—mainly, security.”

Voatz CEO Nimit Sawhney says the app has two features that make it more secure than other forms of online voting: the biometrics it uses to authenticate a voter and the blockchain ledger where it stores the votes.

The biometrics part occurs when a voter authenticates their identity using a fingerprint scan on their phones. The app works only on certain Androids and recent iPhones with that feature. Voters must also upload a photo of an official ID—which Sawhney says Voatz verifies by scanning their barcodes—and a video selfie, which Voatz will match to the ID using facial-recognition technology. (“You have to move your face and blink your eyes to make sure you are not taking a video of somebody else or taking a picture of a picture,” Sawhney says.) It’s up to election officials to decide whether a voter should have to upload a new selfie or fingerprint scan each time they access the app or just the first time.

The blockchain comes in after the votes are entered. “The network then verifies it—there’s a whole bunch of checks—then adds it to the blockchain, where it stays in a lockbox until election night,” Sawhney says. Voatz uses a permissioned blockchain, which is run by a specific group of people with granted access, as opposed to a public blockchain like Bitcoin. And in order for election officials to access the votes on election night, they need Voatz to hand deliver them the cryptographic keys.
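
A hash-chained ledger of this general kind can be sketched in a few lines. This is purely illustrative and says nothing about Voatz's actual, unpublished design; note that it detects tampering only *after* a record has entered the chain, which is exactly the experts' concern below.

```python
import hashlib
import json

def add_block(chain, ballot):
    """Append a ballot record linked to the previous block's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = {"ballot": ballot, "prev": prev_hash}
    digest = hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()
    ).hexdigest()
    chain.append({"ballot": ballot, "prev": prev_hash, "hash": digest})

def verify(chain):
    """Recompute every hash; any post-hoc edit breaks the chain."""
    prev = "0" * 64
    for rec in chain:
        payload = {"ballot": rec["ballot"], "prev": prev}
        if rec["hash"] != hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode()
        ).hexdigest():
            return False
        prev = rec["hash"]
    return True

chain = []
for b in ["candidate-A", "candidate-B", "candidate-A"]:
    add_block(chain, b)
assert verify(chain)

chain[1]["ballot"] = "candidate-C"   # tampering after entry...
assert not verify(chain)             # ...is detected, but tampering
                                     # *before* entry would not be.
```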

Sawhney says that election officials print out a copy of each vote once they access them, in order to do an audit. He also tells WIRED that in the version of the app that people will use in November, Voatz will add a way for voters to take a screenshot of their vote and have that separately sent to election officials for a secondary audit.

To address concerns about ballot secrecy, Sawhney says Voatz deletes all personal identification data from its servers, assigns each person a unique but anonymous identifier within the system, and employs a mix of network encryption methods. “We feel like that extra level of anonymization on the phone and on the network makes it really really hard to reverse-engineer,” he says.

Experts Are Concerned

Very little information is publicly available about the technical architecture behind the Voatz app. The company says it has done a security audit with three third-party security firms, but the results of that audit are not public. Sawhney says the audit contains proprietary and security information that can’t leak to the public. He invited any security researchers who want to see the audit to come to Boston and view it in Voatz’s secure room after signing an NDA.

This lack of transparency worries people who’ve been studying voting security for a long time. “In over a decade, multiple studies by the top experts in the field have concluded that internet voting cannot be made secure with current technology. VOATZ claims to have done something that is not doable with current technology, but WON’T TELL US HOW,” writes Stanford computer scientist and Verified Voting founder David Dill in an email to WIRED.

Voatz shared one white paper with WIRED, but it lacks the kind of information experts might expect—details on the system architecture, threat tests, how the system responds to specific attacks, verification from third parties. “In my opinion, anybody purporting to have securely and robustly applied blockchain technology to voting should have prepared a detailed analysis of how their system would respond to a long list of known threats that voting systems must respond to, and should have made their analysis public,” Carnegie Mellon computer scientist David Eckhardt wrote in an email.

Ideally, experts say, Voatz would have held a public testing period of its app before deploying it in a live election. Back in 2010, for example, Washington, DC, was developing an open-source system for online voting and invited the public to try to hack the system in a mock trial. Researchers from the University of Michigan were able to compromise the election server in 48 hours and change all the vote tallies, according to their report afterward. They also found evidence of foreign operatives already in the DC election server. This kind of testing is now considered best practice for any online voting implementation, according to Eckhardt. Voatz’s trials have been in real primaries.

“West Virginia is handing over its votes to a mystery box.”

DAVID DILL, STANFORD UNIVERSITY

 

Voatz’s use of blockchain itself does not inspire security experts, either, who dismissed it mostly as marketing. When asked for his thoughts on Voatz’s blockchain technology, University of Michigan computer scientist Alex Halderman, who was part of the group that threat-tested the DC voting portal in 2010, sent WIRED a recent XKCD cartoon about voting software. In the last panel, a stick figure with a microphone tells two software engineers, “They say they’ve fixed it with something called ‘blockchain.’” The engineers’ response? “Aaaaa!!!” “Whatever they’ve sold you, don’t touch it.” “Bury it in the desert.” “Wear gloves.”

“Voting from an app on a mobile phone is as bad an idea as voting online from a computer,” says Avi Rubin, technical director of the Information Security Institute at Johns Hopkins, who has studied electronic voting systems since 1997. “The fact that someone is throwing around the blockchain buzzword does nothing to make this more secure. This is as bad an idea as there is.”

Blockchain has its own limitations, and it’s far from a perfect security solution for something like voting. First of all, information can be manipulated before it enters the chain. “In fact, there is an entire industry in viruses to manipulate cryptocurrency transactions before they enter the blockchain, and there is nothing to prevent the use of similar viruses to change the vote,” says Poorvi Vora, a computer scientist and election security expert at George Washington University.

She adds that if the blockchain is a permissioned version, as Voatz’s is, “It is possible for those maintaining the blockchain to collude to change the data, as well as to introduce denial of service type attacks.”

AT&T mobile 5G network falling short

Originally seen: TechTarget, April 2018

The latest update on AT&T’s mobile 5G network trials indicates the company will need to work faster to meet its goal of launching a commercial service by the end of the year.

AT&T’s latest update on its mobile 5G trials indicates the carrier has significant hurdles to clear to achieve its goal of launching a commercial service based on the high-speed wireless technology by the end of the year.

AT&T this week published a blog post describing its progress in the mobile 5G network trials in Austin and Waco, Texas; Kalamazoo, Mich.; and South Bend, Ind. The company started the tests roughly 18 months ago in Austin, adding the other cities late last year.

AT&T, along with Verizon and other carriers, is spending billions of dollars to develop fifth-generation wireless networks for business, consumer and internet of things applications. But the latest metrics published by AT&T were not what analysts would expect from technology for delivering mobile broadband to smartphones, tablets and other devices.

“When I look at how AT&T is characterizing these tests, it doesn’t look like mobile 5G to me,” said Chris Antlitz, an analyst at Technology Business Research Inc., based in Hampton, N.H. “It seems like there are some inconsistencies there.”

AT&T plans to deliver mobile 5G over the millimeter wave (mmWave) band, which is a spectrum between 30 gigahertz (GHz) and 300 GHz. MmWave allows for data rates up to 10 Gbps, which comfortably accommodates carriers’ plans for 5G. But before service providers can use the technology, they have to surmount its limitations in signal distance and in traveling through obstacles, like buildings.

AT&T’s mobile 5G network challenges

AT&T’s update indicates mmWave’s constraints remain a challenge. In Waco, for example, AT&T delivered 5G to a retail business roughly 500 feet away from its cellular transmitter. That maximum distance would require more transmitters than the population outside of major cities could support, Antlitz said.

AT&T, however, could provide a fixed wireless network that sends a 5G signal to residences and businesses as an alternative to wired broadband, Antlitz said. AT&T rival Verizon plans to offer that product by the end of the year.

Other shortcomings include AT&T’s limited success in sending a 5G signal from the cellular transmitter through the buildings, trees and other obstacles likely to stand between it and its destination. In the trial update, AT&T said it achieved gigabit speeds only in “some non-line of sight conditions.” A line of sight typically refers to an unobstructed path between the transmitting and receiving antennas.

Distance and piercing obstacles are challenges for any carrier using mmWave for a mobile 5G network. Buildings and other large physical objects can block the technology’s short, high-frequency wavelengths. Also, gases in the atmosphere, rain and humidity can weaken mmWave’s signal strength, limiting the technology’s reach to six-tenths of a mile or less.
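
The distance penalty falls out of the standard Friis free-space path-loss formula. The frequencies below are typical values for LTE and mmWave 5G, not AT&T's disclosed bands, and atmospheric absorption and rain fade are not modeled.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def fspl_db(distance_m, freq_hz):
    """Free-space path loss (Friis) in dB: 20*log10(4*pi*d*f/c)."""
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / C)

d = 150.0                 # roughly the 500-foot Waco trial distance
lte = fspl_db(d, 2e9)     # typical LTE carrier frequency
mmw = fspl_db(d, 28e9)    # typical mmWave 5G frequency

# At the same distance, 28 GHz loses about 23 dB more than 2 GHz
# (20*log10(28/2) ~= 22.9 dB), before obstacles and weather add more.
assert abs((mmw - lte) - 20 * math.log10(14)) < 1e-6
```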

AT&T’s achievement in network latency also falls short of what’s optimal for a mobile 5G network. The carrier’s 9 to 12 milliseconds seem “a little high,” Antlitz said. “I would expect that on LTE, not 5G. 5G should be lower.”

While AT&T has likely made some progress in developing mobile 5G, “a lot of work needs to be done,” said Rajesh Ghai, an analyst at IDC.

Delays possible in AT&T, Verizon 5G offerings

Meanwhile, Verizon is testing its fixed wireless 5G network — a combination of mmWave and proprietary technology — in 11 major metropolitan areas. So far, the features Verizon has developed place the carrier “fairly far ahead of AT&T in terms of maximizing the capabilities of 5G,” Antlitz said.

Nevertheless, neither Verizon nor AT&T is a sure bet for launching a commercial 5G network this year.

“Some of this stuff might wind up getting pushed into 2019,” Antlitz said. “There are so many things that could throw a monkey wrench in their timetable. The probability of something doing that is very high.”