
SMARTPHONE VOTING IS HAPPENING, BUT NO ONE KNOWS IF IT’S SAFE


Originally seen on Wired by Emily Dreyfuss

When news hit this week that West Virginian military members serving abroad will become the first people to vote by phone in a major US election this November, security experts were dismayed. For years, they have warned that all forms of online voting are particularly vulnerable to attacks, and with signs that the midterm elections are already being targeted, they worry this is exactly the wrong time to roll out a new method. Experts who spoke to WIRED doubt that Voatz, the Boston-based startup whose app will run the West Virginia mobile voting, has figured out how to secure online voting when no one else has. At the very least, they are concerned about the lack of transparency.

“From what is available publicly about this app, it’s no different from sending voting materials over the internet,” says Marian Schneider, president of the nonpartisan advocacy group Verified Voting. “So that means that all the built-in vulnerability of doing the voting transactions over the internet is present.”

And there are a lot of vulnerabilities when it comes to voting over the internet. The device a person is using could be compromised by malware. Or their browser could be compromised. In many online voting systems, voters receive a link to an online portal in an email from their election officials—a link that could be spoofed to redirect to a different website. There’s also the risk that someone could impersonate the voter. The servers that online voting systems rely on could themselves be targeted by viruses to tamper with votes or by DDoS attacks to bring down the whole system. Crucially, electronic votes don’t create the paper trail that allows officials to audit elections after the fact, or to serve as a backup if there is in fact tampering.

But the thing is, people want to vote by phone. In a 2016 Consumer Reports survey of 3,649 voting-age Americans, 33 percent of respondents said that they would be more likely to vote if they could do it from an internet-connected device like a smartphone. (Whether it would actually increase voter turnout is unclear; a 2014 report conducted by an independent panel on internet voting in British Columbia concludes that, when all factors are considered, online voting doesn’t actually lead more people to vote.)

Thirty-one states and Washington, DC, already allow certain people, mostly service members abroad, to file absentee ballots online, according to Verified Voting. But in 28 of those states—including Alaska, where any registered voter can vote online—online voters must waive their right to a secret ballot, underscoring another major risk that security experts worry about with online voting: that it can’t protect voter privacy.

“Because of current technological limitations, and the unique challenges of running public elections, it is impossible to maintain separation of voters’ identities from their votes when Internet voting is used,” concludes a 2016 joint report from Common Cause, Verified Voting, and the Electronic Privacy Information Center. That’s true whether those votes were logged by email, fax, or an online portal.

Enter Voatz

Voatz says it’s different. The 12-person startup, which raised $2.2 million in venture capital in January, has worked on dozens of pilot elections, including primaries in two West Virginia counties this May. On a website FAQ, it notes, “There are several important differences between traditional Internet voting and the West Virginia pilot—mainly, security.”

Voatz CEO Nimit Sawhney says the app has two features that make it more secure than other forms of online voting: the biometrics it uses to authenticate a voter and the blockchain ledger where it stores the votes.

The biometrics part occurs when a voter authenticates their identity using a fingerprint scan on their phone. The app works only on certain Android phones and recent iPhones that support fingerprint scanning. Voters must also upload a photo of an official ID—which Sawhney says Voatz verifies by scanning its barcode—and a video selfie, which Voatz will match to the ID using facial-recognition technology. (“You have to move your face and blink your eyes to make sure you are not taking a video of somebody else or taking a picture of a picture,” Sawhney says.) It’s up to election officials to decide whether a voter should have to upload a new selfie or fingerprint scan each time they access the app or just the first time.
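
Taken together, Sawhney is describing a pipeline of identity checks that must all pass before a ballot is issued. The sketch below illustrates only that sequence; every function in it is a hypothetical stub, since Voatz has not published its code or APIs.

// Hedged sketch of the enrollment checks described above. All helpers are
// hypothetical stand-ins, not Voatz's implementation.
async function verifyFingerprint() {
  // Placeholder: a real app would call the platform biometric API
  // (Touch ID / Android fingerprint) and return its result.
  return true;
}

async function scanIdBarcode(idPhoto) {
  // Placeholder: a real system would decode the ID's barcode fields.
  return { idNumber: 'REDACTED', source: idPhoto };
}

async function checkLiveness(selfieVideo) {
  // Placeholder: a real system would check for blinking and head movement
  // to reject photos-of-photos or replayed video.
  return Boolean(selfieVideo);
}

async function matchSelfieToId(selfieVideo, idPhoto) {
  // Placeholder: a real system would run facial recognition between
  // the selfie frames and the ID photo.
  return Boolean(selfieVideo && idPhoto);
}

async function enrollVoter(idPhoto, selfieVideo) {
  if (!(await verifyFingerprint())) throw new Error('fingerprint check failed');
  const idRecord = await scanIdBarcode(idPhoto);
  if (!(await checkLiveness(selfieVideo))) throw new Error('liveness check failed');
  if (!(await matchSelfieToId(selfieVideo, idPhoto))) throw new Error('face does not match ID');
  return { verified: true, idRecord };
}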

“We feel like that extra level of anonymization on the phone and on the network makes it really really hard to reverse-engineer.”

NIMIT SAWHNEY, VOATZ

 

The blockchain comes in after the votes are entered. “The network then verifies it—there’s a whole bunch of checks—then adds it to the blockchain, where it stays in a lockbox until election night,” Sawhney says. Voatz uses a permissioned blockchain, which is run by a specific group of people with granted access, as opposed to a public blockchain like Bitcoin. And in order for election officials to access the votes on election night, they need Voatz to hand-deliver the cryptographic keys to them.
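
The core idea behind such a ledger, an append-only chain in which each record carries the hash of the one before it, can be sketched in a few lines. The sketch below is a generic illustration under assumed names (Ledger, appendVote), not Voatz's implementation; in a permissioned chain, only approved nodes would be allowed to append.

// Minimal sketch of a hash-chained, append-only vote ledger (illustration only).
const crypto = require('crypto');

function sha256(data) {
  return crypto.createHash('sha256').update(data).digest('hex');
}

class Ledger {
  constructor() {
    // In a permissioned chain, only approved nodes may append entries.
    this.blocks = [];
  }

  appendVote(anonymousVoterId, ballot) {
    const prevHash = this.blocks.length
      ? this.blocks[this.blocks.length - 1].hash
      : '0'.repeat(64);
    const payload = JSON.stringify({ anonymousVoterId, ballot, prevHash });
    const block = { anonymousVoterId, ballot, prevHash, hash: sha256(payload) };
    this.blocks.push(block);
    return block;
  }

  verify() {
    // Re-derive every hash; altering any stored entry breaks the chain.
    return this.blocks.every((block, i) => {
      const prevHash = i === 0 ? '0'.repeat(64) : this.blocks[i - 1].hash;
      const payload = JSON.stringify({
        anonymousVoterId: block.anonymousVoterId,
        ballot: block.ballot,
        prevHash,
      });
      return block.prevHash === prevHash && block.hash === sha256(payload);
    });
  }
}

const ledger = new Ledger();
ledger.appendVote('voter-7f3a', { race: 'US Senate', choice: 'Candidate A' });
console.log(ledger.verify()); // true unless a stored entry is altered afterward

A structure like this can only reveal tampering that happens after a record has been appended; it says nothing about whether a vote was altered on the phone before it reached the chain, a limitation experts raise below.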

Sawhney says that election officials print out a copy of each vote once they access the votes, in order to do an audit. He also tells WIRED that in the version of the app that people will use in November, Voatz will add a way for voters to take a screenshot of their vote and have it sent separately to election officials for a secondary audit.

To address concerns about ballot secrecy, Sawhney says Voatz deletes all personal identification data from its servers, assigns each person a unique but anonymous identifier within the system, and employs a mix of network encryption methods. “We feel like that extra level of anonymization on the phone and on the network makes it really really hard to reverse-engineer,” he says.

Experts Are Concerned

Very little information is publicly available about the technical architecture behind the Voatz app. The company says it has done a security audit with three third-party security firms, but the results of that audit are not public. Sawhney says the audit contains proprietary and security-sensitive information that can’t be released to the public. He has invited any security researchers who want to see the audit to come to Boston and view it in Voatz’s secure room after signing an NDA.

This lack of transparency worries people who’ve been studying voting security for a long time. “In over a decade, multiple studies by the top experts in the field have concluded that internet voting cannot be made secure with current technology. VOATZ claims to have done something that is not doable with current technology, but WON’T TELL US HOW,” writes Stanford computer scientist and Verified Voting founder David Dill in an email to WIRED.

Voatz shared one white paper with WIRED, but it lacks the kind of information experts might expect—details on the system architecture, threat tests, how the system responds to specific attacks, verification from third parties. “In my opinion, anybody purporting to have securely and robustly applied blockchain technology to voting should have prepared a detailed analysis of how their system would respond to a long list of known threats that voting systems must respond to, and should have made their analysis public,” Carnegie Mellon computer scientist David Eckhardt wrote in an email.

Ideally, experts say, Voatz would have held a public testing period of its app before deploying it in a live election. Back in 2010, for example, Washington, DC, was developing an open-source system for online voting and invited the public to try to hack the system in a mock trial. Researchers from the University of Michigan were able to compromise the election server in 48 hours and change all the vote tallies, according to their report afterward. They also found evidence of foreign operatives already in the DC election server. This kind of testing is now considered best practice for any online voting implementation, according to Eckhardt. Voatz’s trials, by contrast, have taken place in real primaries.

“West Virginia is handing over its votes to a mystery box.”

DAVID DILL, STANFORD UNIVERSITY

 

Voatz’s use of blockchain does not inspire confidence among security experts either, who mostly dismissed it as marketing. When asked for his thoughts on Voatz’s blockchain technology, University of Michigan computer scientist Alex Halderman, who was part of the group that threat-tested the DC voting portal in 2010, sent WIRED a recent XKCD cartoon about voting software. In the last panel, a stick figure with a microphone tells two software engineers, “They say they’ve fixed it with something called ‘blockchain.’” The engineers’ response? “Aaaaa!!!” “Whatever they’ve sold you, don’t touch it.” “Bury it in the desert.” “Wear gloves.”

“Voting from an app on a mobile phone is as bad an idea as voting online from a computer,” says Avi Rubin, technical director of the Information Security Institute at Johns Hopkins, who has studied electronic voting systems since 1997. “The fact that someone is throwing around the blockchain buzzword does nothing to make this more secure. This is as bad an idea as there is.”

Blockchain has its own limitations, and it’s far from a perfect security solution for something like voting. First of all, information can be manipulated before it enters the chain. “In fact, there is an entire industry in viruses to manipulate cryptocurrency transactions before they enter the blockchain, and there is nothing to prevent the use of similar viruses to change the vote,” says Poorvi Vora, a computer scientist and election security expert at George Washington University.

She adds that if the blockchain is a permissioned version, as Voatz’s is, “It is possible for those maintaining the blockchain to collude to change the data, as well as to introduce denial of service type attacks.”

Click on this iOS phishing scam and you’ll be connected to “Apple Care”


Scam website launched phone call, connected victims to “Lance Roger at Apple Care.”

Originally seen on ArsTechnica

India-based tech support scams have taken a new turn, using phishing emails targeting Apple users to push them to a fake Apple website. This phishing attack also comes with a twist—it pops up a system dialog box to start a phone call. The intricacy of the phish and the formatting of the webpage could convince some users that their phone has been “locked for illegal activity” by Apple, luring them into clicking to complete the call.

Scammers are following the money. As more people use mobile devices as their primary or sole way of connecting to the Internet, phishing attacks and other scams have increasingly targeted mobile users. And since so much of people’s lives are tied to mobile devices, they’re particularly attractive targets for scammers and fraudsters.

“People are just more distracted when they’re using their mobile device and trust it more,” said Jeremy Richards, a threat intelligence researcher at the mobile security service provider Lookout. As a result, he said, phishing attacks against mobile devices have a higher likelihood of succeeding.

This particular phish, targeted at email addresses associated with Apple’s iCloud service, appears to be linked to efforts to fool iPhone users into letting attackers enroll them in rogue mobile device management services, which would let bad actors push compromised applications to victims’ phones as part of a fraudulent Apple “security service.”

I attempted to bluff my way through a call to the “support” number to collect intelligence on the scam. The person answering the call, who identified himself as “Lance Roger from Apple Care,” became suspicious of me and hung up before I could get too far into the script.

Running down the scam

In a review of spam messages I received this weekend, I found an email with the subject line, “[username], Critical alert for your account ID 7458.” Formatted to look like an official cloud account warning (though easily discernible as a phish, at least to me), the email warned, “Sign-in attempt was blocked for your account [email address]. Someone just used your password to try to sign in to your profile.” A “Check Activity” button below was linked to a webpage on a compromised site for a men’s salon in southern India.

That page, using obfuscated JavaScript, forwards the victim to another website, which in turn forwards to the site applesecurityrisks.xyz—a fake Apple Support page. JavaScript on that page then uses a programmed “click” event to activate a link that uses the tel:// uniform resource identifier (URI) handler. On an iPhone, this initiates a dialog box to start a phone call; on iPads and other Apple devices, it attempts to launch a FaceTime session.
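
The tel:// trick itself is simple to reproduce. The snippet below is a minimal sketch of the technique described above, with a placeholder number; it is not the scam page's actual code.

// Minimal sketch: a scripted "click" on a tel: link pops up a call dialog
// on an iPhone (and attempts FaceTime on iPads and Macs).
const callLink = document.createElement('a');
callLink.href = 'tel://1-800-555-0199';   // placeholder, not the scam's number
document.body.appendChild(callLink);
callLink.click();                         // Safari on iOS shows a call confirmation prompt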

Meanwhile, an animated dialog box on the screen urges the target to make the call because their phone has been “locked due to illegal activity.” A script on the site reads the “user agent” string sent by the browser to determine what type of device is visiting the page:

window.defaultText='Your |%model%| has been locked due to detected illegal activity! Immediately call Apple Support to unlock it!';
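
A script along the following lines could then fill in the |%model%| placeholder; the detection rules and element ID here are assumptions for illustration, not the scam site's actual code.

// Hedged sketch of how the |%model%| placeholder might be filled from the
// browser's user-agent string.
function detectModel() {
  const ua = navigator.userAgent;
  if (/iPhone/.test(ua)) return 'iPhone';
  if (/iPad/.test(ua)) return 'iPad';
  if (/Macintosh/.test(ua)) return 'Mac';
  return 'Apple device';
}

// 'alert-text' is a hypothetical element ID for the fake warning dialog.
document.getElementById('alert-text').textContent =
  window.defaultText.replace('|%model%|', detectModel());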

While the site is still active, it is now marked as deceptive by Google and Apple. I passed technical details of the phishing site to an Apple security team member.

The scam is obviously targeted at the same sort of audience as the Windows tech support scams we’ve reported on. But it doesn’t take too much imagination to see how schemes like this could be used to target people at a specific company, customers of a particular bank, or users of a certain cloud platform to perform much more tailored social engineering attacks.