
Calibration Attack Drills Down on iPhone, Pixel Users



Originally seen: Threatpost on May 23rd, 2019 by Tara Seals

A new way of tracking mobile users creates a globally unique device fingerprint that browsers and other protections can’t stop.

A proof-of-concept for a new type of privacy attack, dubbed “calibration fingerprinting,” uses data from Apple iPhone sensors to construct a globally unique fingerprint for any given mobile user. Researchers said that this provides an unusually effective means to track people as they browse across the mobile web and move between apps on their phones.

Further, the approach also affects Pixel phones from Google, which run on Android.

A research team from the University of Cambridge in the UK released their findings this week, showing that data gathered from the accelerometer, gyroscope and magnetometer sensors found in the smartphones can be used to generate the calibration fingerprint in less than a second – and that it never changes, even after a factory reset.

The attack also can be launched by any website a person visits via a mobile browser, or any app, without needing explicit confirmation or consent from the target.

In Apple’s case, the issue results from a weakness in iOS 12.1 and earlier, so iPhone users should update to the latest OS version as soon as possible. Google has not yet addressed the problem, according to the researchers.

A device fingerprint allows websites to detect return visits or track users, and in its innocuous form, can be used to protect against identity theft or credit-card fraud; advertisers often also rely on this to build a user profile to serve targeted ads.

Fingerprints are usually built with pretty basic info: the name and version of your browser, screen size, fonts installed and so on. And browsers are increasingly using blocking mechanisms to thwart such efforts in the name of privacy: On Apple’s iOS, for instance, the Mobile Safari browser uses Intelligent Tracking Prevention to restrict the use of cookies, prevent access to unique device settings and eliminate cross-domain tracking.

However, on any iOS device running a version below 12.2, including the latest iPhone XS, iPhone XS Max and iPhone XR, it’s possible to get around those protections by taking advantage of the fact that motion sensors in modern smartphones use microfabrication (MEMS) to emulate the mechanical parts found in traditional sensor devices, according to the paper.

“MEMS sensors are usually less accurate than their optical counterparts due to various types of error,” the team said. “In general, these errors can be categorized as deterministic and random. Sensor calibration is the process of identifying and removing the deterministic errors from the sensor.”

Websites and apps can access sensor data without any special permission from users. In analyzing this freely accessible information, the researchers found that it was possible to infer the per-device factory calibration data which manufacturers embed into the firmware of the smartphone to compensate for systematic manufacturing errors. That calibration data can then be used as the fingerprint, because despite perceived homogeneity, every Apple iPhone is just a little bit different – even if two devices are from the same manufacturing batch.
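To illustrate how freely this data flows, here is a minimal sketch of the kind of listener a web page could register on a pre-iOS 12.2 browser, where motion events fire without any permission prompt (newer iOS versions gate this behind DeviceMotionEvent.requestPermission()):

```typescript
// Sketch of in-page collection: gather a burst of gyroscope readings from the
// devicemotion event, which (pre-iOS 12.2) required no user permission.
type GyroSample = { x: number; y: number; z: number };

const samples: GyroSample[] = [];

window.addEventListener("devicemotion", (event: DeviceMotionEvent) => {
  const r = event.rotationRate; // gyroscope output, degrees per second
  if (!r || r.alpha === null || r.beta === null || r.gamma === null) return;
  samples.push({ x: r.alpha, y: r.beta, z: r.gamma });
  // Per the paper, less than one second of readings (a few dozen events at a
  // typical 60 Hz delivery rate) is enough to recover the calibration data.
});
```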

“We found that the gyroscope and magnetometer on iOS devices are factory-calibrated and the calibration data differs from device-to-device,” the researchers said. “Extracting the calibration data typically takes less than one second and does not depend on the position or orientation of the device.”
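Roughly speaking, a calibrated reading is the sensor’s raw integer output multiplied by a per-device gain (plus a bias), so the spacing between distinct output values leaks that gain, and its tiny deviation from the nominal value is what makes one device distinguishable from another. The sketch below is not the authors’ algorithm, just an illustration of the idea for a single axis using a crude approximate GCD over the collected samples:

```typescript
// Illustration only (not the paper's exact method): if output ≈ gain * rawInteger,
// then differences between distinct outputs are near-integer multiples of the gain,
// so an approximate GCD of those differences estimates the per-device gain.
function approxGcd(a: number, b: number, tolerance: number): number {
  while (b > tolerance) {
    [a, b] = [b, a % b];
  }
  return a;
}

function estimateGain(values: number[], tolerance = 1e-6): number {
  const unique = Array.from(new Set(values)).sort((a, b) => a - b);
  const diffs = unique
    .slice(1)
    .map((v, i) => v - unique[i]) // spacing between neighbouring output values
    .filter((d) => d > tolerance);
  if (diffs.length === 0) throw new Error("need more (distinct) samples");
  return diffs.reduce((g, d) => approxGcd(g, d, tolerance));
}

// e.g. estimateGain(samples.map((s) => s.x)) for the axis collected above
```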

Creating a globally unique calibration fingerprint can, however, require combining it with a little more information, for instance from traditional fingerprinting sources.

“We demonstrated that our approach can produce globally unique fingerprints for iOS devices from an installed app — around 67 bits of entropy for the iPhone 6S,” they said. “Calibration fingerprints generated by a website are less unique (~42 bits of entropy for the iPhone 6S), but they are orthogonal to existing fingerprinting techniques and together they are likely to form a globally unique fingerprint for iOS devices.”
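To put those numbers in perspective, here is a back-of-the-envelope sketch; the global device count and the 10 bits credited to a classic browser fingerprint are illustrative assumptions, not figures from the paper.

```typescript
// Rough arithmetic: how many bits of entropy are needed to single out one device?
const iosDevicesInUse = 1.4e9;                     // assumed global figure, for illustration
const bitsToBeUnique = Math.log2(iosDevicesInUse); // ≈ 30.4 bits

const sensorIdFromWebsite = 42; // bits reported for an iPhone 6S fingerprinted via a website
const classicBrowserBits = 10;  // hypothetical entropy from user agent, fonts, screen size

// If the two sources are statistically independent ("orthogonal"), their entropies add:
const combinedBits = sensorIdFromWebsite + classicBrowserBits; // 52 bits, well above ~30.4
console.log(`need ≈${bitsToBeUnique.toFixed(1)} bits, have ≈${combinedBits}`);
```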

A longitudinal study also showed that the calibration fingerprint, which the researchers dubbed “SensorID,” doesn’t change over time or vary with conditions.

“We have not observed any change in the SensorID of our test devices in the past half year,” they wrote. “Our dataset includes devices running iOS 9/10/11/12. We have tested compass calibration, factory reset, and updating iOS (up until iOS 12.1); the SensorID always stays the same. We have also tried measuring the sensor data at different locations and under different temperatures; we confirm that these factors do not change the SensorID either.”

In terms of how broadly the SensorID approach applies, the research team found that both mainstream browsers (Safari, Chrome, Firefox and Opera) and privacy-enhanced browsers (Brave and Firefox Focus) are vulnerable to the attack, even with fingerprinting protection turned on.

Further, motion sensor data is accessed by 2,653 of the Alexa top 100,000 websites, the research found, including more than 100 websites exfiltrating motion-sensor data to remote servers.

“This is troublesome since it is likely that the SensorID can be calculated with exfiltrated data, allowing retrospective device fingerprinting,” the researchers wrote.

However, it’s possible to mitigate the calibration fingerprint attack on the vendor side by adding uniformly distributed random noise to the sensor outputs before calibration is applied at the factory level – something Apple did starting with iOS 12.2.

“Alternatively, vendors could round the sensor outputs to the nearest multiple of the nominal gain,” the paper said.
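For concreteness, here is a minimal sketch of what those two mitigations could look like; the calibration model, function names and nominal-gain constant are illustrative assumptions, not Apple’s actual firmware.

```typescript
// Illustrative only; the calibration model and constants are assumptions.
const NOMINAL_GAIN = 0.061; // hypothetical nominal gain (output units per raw LSB)

// Mitigation 1: add uniformly distributed noise of up to one quantization step
// to the raw reading before the per-device calibration is applied, so the exact
// calibration parameters can no longer be recovered from the outputs.
function calibrateWithNoise(raw: number, gain: number, bias: number): number {
  const noise = Math.random() - 0.5; // uniform in [-0.5, 0.5) raw LSBs
  return (raw + noise) * gain + bias;
}

// Mitigation 2: round the calibrated output to the nearest multiple of the
// *nominal* gain, hiding the device-specific deviation from that nominal value.
function calibrateWithRounding(raw: number, gain: number, bias: number): number {
  const calibrated = raw * gain + bias;
  return Math.round(calibrated / NOMINAL_GAIN) * NOMINAL_GAIN;
}
```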

Privacy-focused mobile browsers, meanwhile, could add an option to disable access to motion sensors via JavaScript.

“This could help protect Android devices and iOS devices that no longer receive updates from Apple,” according to the paper.
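Browsers aside, individual sites can already opt their own pages, and any embedded third-party frames, out of motion-sensor access via the standard Permissions-Policy header. Below is a sketch using a hypothetical Node/Express server; browser support for these sensor directives varies.

```typescript
// Site-side hardening sketch (hypothetical Express server): deny motion-sensor
// access to the page and to any embedded third-party frames. Support for these
// Permissions-Policy directives varies by browser.
import express from "express";

const app = express();

app.use((_req, res, next) => {
  res.setHeader(
    "Permissions-Policy",
    "accelerometer=(), gyroscope=(), magnetometer=()"
  );
  next();
});

app.get("/", (_req, res) => res.send("sensor access disabled on this page"));
app.listen(3000);
```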

Although most of the research focused on iPhone, Apple is not the only vendor affected: The team found that the accelerometer of Google Pixel 2 and Pixel 3 can also be fingerprinted by the approach.

That said, the fingerprint has less individual entropy and is unlikely to be globally unique – meaning other kinds of fingerprinting data would also need to be gathered for full device-specific tracking.

Also, the paper noted that other Android devices that are also factory calibrated might be vulnerable but were outside the scope of testing.

While Apple addressed the issue, Google, which was notified in December about the attack vector, is still in the process of “investigating this issue,” according to the paper.

Threatpost has reached out to the internet giant for comment.

Clever Phishing Attack Enlists Google Translate to Spoof Login Page


Originally seen on ThreatPost by Lindsay O’Donnell: February 26th, 2019

A tricky two-stage phishing scam is targeting Facebook and Google credentials using a landing page that hides behind Google’s translate feature.

 

Recently discovered phishing emails scoop up victims’ Facebook and Google credentials and hide their malicious landing page via a novel method – Google Translate.

The phishing campaign uses a two-stage attack to target both Google and Facebook usernames and passwords, according to researchers at Akamai, who posted an analysis on Tuesday. But in a tricky twist, the scam also evades detection by burying its landing page in a Google Translate page – meaning that victims see a legitimate Google domain and are more likely to input their credentials.

“When it comes to phishing, criminals put a lot of effort into making their attacks look legitimate, while putting pressure on their victims to take action,” Larry Cashdollar, with Akamai, said in a Tuesday post. “This is an interesting attack, as it uses Google Translate, and targets multiple accounts in one go.”

Cashdollar said that he first noticed the attack on Jan. 7 when an email notification on his phone informed him that his Google account had been accessed from a new Windows device.

The message, titled “Security Alert,” features an image branded with Google that says “A user has just signed in to your Google Account from a new Windows device. We are sending you this email to verify that it is you.” Then, there’s a “Consult the activity” button below the message.


Interestingly, the message looked much more convincing in its condensed state on his mobile device, rather than on a desktop where the title of the email sender is more apparent, he said.

Upon closer look at the email, Cashdollar found that the “security alert” was sent from “facebook_secur[@]hotmail.com.”

That triggered two suspicions: First, the email came from a Hotmail account, raising red flags – but also, the address had nothing to do with Google, instead referencing Facebook.

“Taking advantage of known brand names is a common phishing trick, and it usually works if the victim isn’t aware or paying attention,” he said. “Criminals conducting phishing attacks want to throw people off their game, so they’ll use fear, curiosity, or even false authority in order to make the victim take an action first, and question the situation later.”

When clicking on the “Consult the activity” button, Cashdollar was brought to a landing page that appeared to be a Google domain, prompting him to sign into his Google account.

However, one thing stuck out about the landing page – it was loading the malicious domain via Google Translate, Google’s service to help users translate webpages from one language to another.


Using Google Translate helps the bad actor hide any malicious attempts in several ways: Most importantly, the victim sees a legitimate Google domain, which “in some cases… will help the criminal bypass endpoint defenses,” said Cashdollar.

Using Google Translate also means the URL bar is filled with random text. Upon further inspection of that text, victims could see the real, malicious domain, “mediacity,” being translated.
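For anyone building detection on top of this, the wrapper is straightforward to unwrap. The sketch below assumes the classic translate.google.com/translate?u=<target> proxy URL format, and the example URL is hypothetical:

```typescript
// Detection sketch: given a link, check whether it is a Google Translate proxy
// wrapper and, if so, surface the real destination being "translated".
// Assumes the classic translate.google.com/translate?u=<target> format.
function unwrapTranslateProxy(link: string): string | null {
  const url = new URL(link);
  const isTranslateProxy =
    url.hostname === "translate.google.com" && url.pathname === "/translate";
  return isTranslateProxy ? url.searchParams.get("u") : null;
}

// Hypothetical example:
// unwrapTranslateProxy("https://translate.google.com/translate?sl=auto&tl=en&u=http://mediacity.example/login")
//   -> "http://mediacity.example/login"
```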

Luckily, “while this method of obfuscation might enjoy some success on mobile devices (the landing page is a near-perfect clone of Google’s older login portal), it fails completely when viewed from a computer,” said Cashdollar.

For those who fail to notice red flags regarding the landing page, their credentials (username and password) are collected – as well as other information including IP address and browser type – and emailed to the attacker.

“We are aware of the phishing attempts and have blocked all sites in question, on multiple levels,” a Google spokesperson told Threatpost. The spokesperson urged users to report them if they encounter a phishing site.

However, the attack didn’t stop there. The attacker then attempts to hit victims twice, by forwarding them to a different landing page that purports to be Facebook’s mobile login portal as part of the attack.

This type of two-stage attack appears to be on the rise as bad actors look to take advantage of victims who have already fallen for the first part of the scam, Cashdollar told Threatpost: “It seems this is becoming more common as the attacker knows they’ve gained your trust and try to steal additional credentials.”

Like the Google page, this Facebook landing page has some red flags. It uses an older version of the Facebook mobile login form, for instance.

“This suggests that the kit is old, and likely part of a widely circulated collection of kits commonly sold or traded on various underground forums,” said Cashdollar.

Despite these mistakes, the two stages of the phishing attack suggest a certain level of sophistication on the part of the attacker.

“It isn’t every day that you see a phishing attack leverage Google Translate as a means of adding legitimacy and obfuscation on a mobile device. But it’s highly uncommon to see such an attack target two brands in the same session,” he said.

Phishing attacks have continued to grow over the past year – and this particular scam is only one example of how bad actors behind the scams are updating their methods to become trickier.


According to a recent Proofpoint report, “State of the Phish,” 83 percent of respondents experienced phishing attacks in 2018 – up 5 percent from 2017. That may not come as a surprise, as in the last year phishing has led to several massive hacks – whether it’s hijacking Spotify users’ accounts or large data breaches like the December San Diego Unified School District breach affecting more than 500,000 students and staff.

Other methods of phishing have increased as well. Up to 49 percent of respondents said they have experienced “voice phishing” (when bad actors use social engineering over the phone to gain access to personal data) or “SMS/text phishing” tactics (when social engineering is used via texts to collect personal data) in 2018. That’s up from the 45 percent of those who experienced these methods in 2017.

Google Bans Cryptocurrency-Related Ads


Originally seen on: Bleepingcomputer.com

Google has decided to follow in Facebook’s footsteps and ban cryptocurrency-related advertising. The ban will enter into effect starting June 2018, the company said today on a help page.

In June 2018, Google will update the Financial services policy to restrict the advertisement of Contracts for Difference, rolling spot forex, and financial spread betting. In addition, ads for the following will no longer be allowed to serve:
‧  Binary options and synonymous products
‧  Cryptocurrencies and related content (including but not limited to initial coin offerings, cryptocurrency exchanges, cryptocurrency wallets, and cryptocurrency trading advice)

The ban will enter into effect across all of Google’s advertising network, including ads shown in search results, on third-party websites, and YouTube.

Some ads will be allowed, but not many

But the ban is not total. Google said that certain entities will be able to advertise a limited set of the banned services, including “cryptocurrencies and related content.”

These advertisers will need to apply for certification with Google. The downside is that the “Google certification process” will only be available for advertisers located in “certain countries.”

Google did not provide a list of countries, but said the advertisers will have to be licensed by the relevant financial services authorities and “comply with relevant legal requirements, including those related to complex speculative financial products.”

Prices for almost all cryptocurrencies fell across the board today after Google’s announcement, and most coins continued to lose value.

 

Scams and phishing sites to blame

While Google did not spell out the reasons it banned cryptocurrency ads, they are likely the same as the ones cited by Facebook: misleading ads being abused to drive traffic to financial scams and phishing sites.

There’s been a surge in malware and phishing campaigns targeting cryptocurrency owners ever since Bitcoin’s price surged in December 2016. Just last month, Cisco Talos and Ukrainian police disrupted a cybercriminal operation that made over $50 million by using Google ads to drive traffic to phishing sites.

Malicious ads for cryptocurrencies

 

A report published by “Big Four” accounting firm Ernst & Young in December 2017 reveals that 10% of all ICO (Initial Coin Offering) funds were lost to hackers and scams, and that cryptocurrency phishing sites made around $1.5 million per month. The company says cryptocurrency hacks and scams are big business, and estimates that crooks made over $2 billion by targeting cryptocoin fans in recent years.

Furthermore, a Bitcoin.com survey revealed that nearly half of 2017’s cryptocurrencies had already failed.

The recent trend of using the overhyped cryptocurrency market and ICOs for financial scams is also the reason why the US Securities and Exchange Commission (SEC) has started investigating and charging people involved in these practices.

This constant abuse of the cryptocurrency theme was the main reason why Facebook banned such ads on its platform, and is, most likely, the reason why Google is getting ready to implement a similar ban in June.