
Calibration Attack Drills Down on iPhone, Pixel Users



Originally seen: Threatpost on May 23rd, 2019 by Tara Seals

A new way of tracking mobile users creates a globally unique device fingerprint that browsers and other protections can’t stop.

A proof-of-concept for a new type of privacy attack, dubbed “calibration fingerprinting,” uses data from Apple iPhone sensors to construct a globally unique fingerprint for any given mobile user. Researchers said that this provides an unusually effective means to track people as they browse across the mobile web and move between apps on their phones.

Further, the approach also affects Pixel phones from Google, which run on Android.

A research team from the University of Cambridge in the UK released their findings this week, showing that data gathered from the accelerometer, gyroscope and magnetometer sensors found in the smartphones can be used to generate the calibration fingerprint in less than a second – and that it never changes, even after a factory reset.

The attack also can be launched by any website a person visits via a mobile browser, or any app, without needing explicit confirmation or consent from the target.

In Apple’s case, the issue results from a weakness in iOS 12.1 and earlier, so iPhone users should update to the latest OS version as soon as possible. Google has not yet addressed the problem, according to the researchers.

A device fingerprint allows websites to detect return visits or track users, and in its innocuous form, can be used to protect against identity theft or credit-card fraud; advertisers often also rely on this to build a user profile to serve targeted ads.

Fingerprints are usually built with pretty basic info: The name and version of your browser, screen size, fonts installed and so on. And browsers are increasingly using blocking mechanisms to thwart such efforts in the name of privacy: On Apple iOS for iPhone for instance, the Mobile Safari browser uses Intelligent Tracking Prevention to restrict the use of cookies, prevent access to unique device settings and eliminate cross-domain tracking.

However, on any iOS device running a version below 12.2, including the latest iPhone XS, iPhone XS Max and iPhone XR, it’s possible to get around those protections by taking advantage of the fact that the motion sensors used in modern smartphones rely on microfabrication (MEMS) to emulate the mechanical parts found in traditional sensor devices, according to the paper.

“MEMS sensors are usually less accurate than their optical counterparts due to various types of error,” the team said. “In general, these errors can be categorized as deterministic and random. Sensor calibration is the process of identifying and removing the deterministic errors from the sensor.”

Websites and apps can access the data from sensors without any special permission from the users. In analyzing this freely accessible information, the researchers found that it was possible to infer the per-device factory calibration data which manufacturers embed into the firmware of the smartphone to compensate for these systematic manufacturing errors. That calibration data can then be used as the fingerprint, because despite perceived homogeneity, every Apple iPhone is just a little bit different – even if two devices are from the same manufacturing batch.

“We found that the gyroscope and magnetometer on iOS devices are factory-calibrated and the calibration data differs from device-to-device,” the researchers said. “Extracting the calibration data typically takes less than one second and does not depend on the position or orientation of the device.”
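The general idea can be sketched in a few lines of JavaScript. This is a simplified illustration under strong assumptions, not the researchers’ actual recovery algorithm: it assumes calibrated readings land on a per-device quantization grid whose step can be estimated as the smallest gap between observed values. The `devicemotion` listener shown in the comment is the standard, permissionless way a web page receives sensor data on iOS 12.1 and earlier.

```javascript
// Simplified sketch of the idea behind calibration fingerprinting.
// Assumption: calibrated gyroscope readings are quantized, and the
// quantization step (tied to the per-device calibrated gain) can be
// estimated as the smallest gap between distinct samples.
function estimateQuantizationStep(samples) {
  const unique = [...new Set(samples)].sort((a, b) => a - b);
  let step = Infinity; // stays Infinity if fewer than two distinct samples
  for (let i = 1; i < unique.length; i++) {
    const d = unique[i] - unique[i - 1];
    if (d > 1e-12 && d < step) step = d;
  }
  return step;
}

// In a real page, samples would come from a permissionless listener:
//   window.addEventListener('devicemotion', (e) => {
//     samples.push(e.rotationRate.alpha);
//   });
```

Because the estimated step reflects the device’s factory calibration rather than its current motion, the result is stable across position, orientation and time.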

Creating a globally unique calibration fingerprint requires adding a little more information, however, for instance from traditional fingerprinting sources.

“We demonstrated that our approach can produce globally unique fingerprints for iOS devices from an installed app — around 67 bits of entropy for the iPhone 6S,” they said. “Calibration fingerprints generated by a website are less unique (~42 bits of entropy for the iPhone 6S), but they are orthogonal to existing fingerprinting techniques and together they are likely to form a globally unique fingerprint for iOS devices.”

A longitudinal study also showed that the calibration fingerprint, which the researchers dubbed “SensorID,” doesn’t change over time or vary with conditions.

“We have not observed any change in the SensorID of our test devices in the past half year,” they wrote. “Our dataset includes devices running iOS 9/10/11/12. We have tested compass calibration, factory reset, and updating iOS (up until iOS 12.1); the SensorID always stays the same. We have also tried measuring the sensor data at different locations and under different temperatures; we confirm that these factors do not change the SensorID either.”

In terms of how applicable the SensorID approach is, the research team found that both mainstream browsers (Safari, Chrome, Firefox and Opera) and privacy-enhanced browsers (Brave and Firefox Focus) are vulnerable to the attack, even with the fingerprinting protection mode turned on.

Further, the research found that motion-sensor data is accessed by 2,653 of the Alexa top 100,000 websites, more than 100 of which exfiltrate that data to remote servers.

“This is troublesome since it is likely that the SensorID can be calculated with exfiltrated data, allowing retrospective device fingerprinting,” the researchers wrote.

However, it’s possible to mitigate the calibration fingerprint attack on the vendor side by adding uniformly distributed random noise to the sensor outputs before calibration is applied at the factory level – something Apple did starting with iOS 12.2.

“Alternatively, vendors could round the sensor outputs to the nearest multiple of the nominal gain,” the paper said.
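As a rough sketch of those two vendor-side mitigations (the function names and the noise magnitude are assumptions for illustration, not Apple’s implementation):

```javascript
// Mitigation 1: add uniformly distributed random noise, on the order of
// one quantization step, to the output so the exact calibration grid
// can no longer be recovered from the readings.
function addUniformNoise(value, nominalGain) {
  return value + (Math.random() - 0.5) * nominalGain;
}

// Mitigation 2: round the output to the nearest multiple of the nominal
// (data-sheet) gain, so every device reports on the same grid and the
// per-device calibrated gain is hidden.
function roundToNominalGain(value, nominalGain) {
  return Math.round(value / nominalGain) * nominalGain;
}
```

Either approach removes the device-specific quantization pattern that the fingerprint depends on, at the cost of slightly noisier or coarser sensor output.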

Privacy-focused mobile browsers, meanwhile, can add an option to disable access to motion sensors via JavaScript.

“This could help protect Android devices and iOS devices that no longer receive updates from Apple,” according to the paper.

Although most of the research focused on iPhone, Apple is not the only vendor affected: The team found that the accelerometer of Google Pixel 2 and Pixel 3 can also be fingerprinted by the approach.

That said, the fingerprint has less individual entropy and is unlikely to be globally unique – meaning other kinds of fingerprinting data would also need to be gathered for full device-specific tracking.

Also, the paper noted that other Android devices that are also factory calibrated might be vulnerable but were outside the scope of testing.

While Apple addressed the issue, Google, which was notified in December about the attack vector, is still in the process of “investigating this issue,” according to the paper.

Threatpost has reached out to the internet giant for comment.

Opening this image file grants hackers access to your Android phone


Originally seen on: Zdnet by Charlie Osborne, February 7th, 2019

Be careful if you are sent an image from a suspicious source.

Opening a cute cat meme or innocent landscape photo may seem harmless enough, but if it happens to be in a .PNG format, your Android device could be critically compromised due to a new attack.

In Google’s Android security update for February, the tech giant’s advisory noted a critical vulnerability which exists in the Android operating system’s framework.

All it takes to trigger the bug is for attackers to send a crafted, malicious Portable Network Graphic (.PNG) file to a victim’s device. Should the user open the file, the exploit is triggered.

Remote attackers are then able to execute arbitrary code in the context of a privileged process, according to Google.

Android versions 7.0 to 9.0 are impacted.

The vulnerability was one of three bugs impacting Android Framework — CVE-2019-1986, CVE-2019-1987 and CVE-2019-1988 — and is the most severe security issue in the February update.

There are no current reports of the vulnerability being exploited in the wild. However, given the ease with which the bug can be exploited, users should accept incoming updates to their Android builds as soon as possible.

As vendors utilizing the Android operating system roll out security patches and updates at different rates, Google has declined to reveal the technical details of the exploit to mitigate the risk of attack.

Google’s bulletin also outlined remote code execution flaws impacting the Android library, system files, and Nvidia components. Elevation of privilege and information disclosure security holes have also been resolved.

Source code patches for the .PNG issue, alongside other security problems raised in the bulletin, have also been released to the Android Open Source Project (AOSP) repository.

In January, researchers revealed the existence of a new malvertising group called VeryMal. The scammers specifically target Apple users and bury malicious code in digital images using steganography techniques to redirect users from legitimate websites to malicious domains controlled by the attackers.

Click on this iOS phishing scam and you’ll be connected to “Apple Care”


Scam website launched phone call, connected victims to “Lance Roger at Apple Care.”

Originally seen on: ArsTechnica

India-based tech support scams have taken a new turn, using phishing emails targeting Apple users to push them to a fake Apple website. This phishing attack also comes with a twist—it pops up a system dialog box to start a phone call. The intricacy of the phish and the formatting of the webpage could convince some users that their phone has been “locked for illegal activity” by Apple, luring them into clicking to complete the call.

Scammers are following the money. As more people use mobile devices as their primary or sole way of connecting to the Internet, phishing attacks and other scams have increasingly targeted mobile users. And since so much of people’s lives are tied to mobile devices, they’re particularly attractive targets for scammers and fraudsters.

“People are just more distracted when they’re using their mobile device and trust it more,” said Jeremy Richards, a threat intelligence researcher at the mobile security service provider Lookout. As a result, he said, phishing attacks against mobile devices have a higher likelihood of succeeding.

This particular phish, targeted at email addresses associated with Apple’s iCloud service, appears to be linked to efforts to fool iPhone users into enrolling in rogue mobile device management services, which allow bad actors to push compromised applications to victims’ phones as part of a fraudulent Apple “security service.”

I attempted to bluff my way through a call to the “support” number to collect intelligence on the scam. The person answering the call, who identified himself as “Lance Roger from Apple Care,” became suspicious of me and hung up before I could get too far into the script.

Running down the scam

In a review of spam messages I received this weekend, I found an email with the subject line, “[username], Critical alert for your account ID 7458.” Formatted to look like an official cloud account warning (but easily, by me at least, discernible as a phish), the email warned, “Sign-in attempt was blocked for your account [email address]. Someone just used your password to try to sign in to your profile.” A “Check Activity” button below was linked to a webpage on a compromised site for a men’s salon in southern India.

That page, using obfuscated JavaScript, forwards the victim to another website, which in turn forwards to the site applesecurityrisks.xyz—a fake Apple Support page. JavaScript on that page then uses a programmed “click” event to activate a link that uses the tel:// uniform resource identifier (URI) handler. On an iPhone, this initiates a dialog box to start a phone call; on iPads and other Apple devices, it attempts to launch a FaceTime session.
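The auto-dial trick itself needs very little code. A hypothetical sketch of what such a page might do (the link text and phone number are invented; the scripted `.click()` needs a real DOM, so it is shown as a comment):

```javascript
// Build an anchor whose tel:// href triggers the phone-call dialog on
// iPhone (or a FaceTime attempt on other Apple devices) when clicked.
function buildDialLink(number) {
  return `<a id="dial" href="tel://${number}">Call Apple Support</a>`;
}

// In the scam page, after injecting the link into the document, a
// programmed click fires without any user interaction:
//   document.getElementById('dial').click();
```

The only browser-enforced safeguard is the system confirmation dialog the URI handler pops up, which the surrounding scare messaging is designed to get the victim to accept.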

Meanwhile, an animated dialog box on the screen urges the target to make the call because their phone has been “locked due to illegal activity.” A script on the site parses the user-agent string sent by the browser to determine what type of device is visiting the page:

window.defaultText='Your |%model%| has been locked due to detected illegal activity! Immediately call Apple Support to unlock it!';
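Filling in that `|%model%|` placeholder is straightforward string work. A hypothetical reconstruction (the function name and the user-agent checks are my guesses at what the scam script does, not its actual code):

```javascript
// Substitute the |%model%| placeholder based on the browser's
// user-agent string, so the scare message names the victim's device.
function personalizeAlert(template, userAgent) {
  const model = /iPad/.test(userAgent) ? 'iPad'
              : /iPhone/.test(userAgent) ? 'iPhone'
              : 'device';
  return template.replace('|%model%|', model);
}
```

Naming the victim’s actual device model makes the fake “locked” warning look far more credible than a generic alert would.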

While the site is still active, it is now marked as deceptive by Google and Apple. I passed technical details of the phishing site to an Apple security team member.

The scam is obviously targeted at the same sort of audience as the Windows tech support scams we’ve reported on. But it doesn’t take too much imagination to see how schemes like this could be used to target people at a specific company, customers of a particular bank, or users of a certain cloud platform to perform much more tailored social engineering attacks.