Returning to the Workplace — Cybersecurity Concerns Post-COVID-19


Originally seen on May 18th, 2020 by Peter Adams

As states begin to lift stay-at-home orders, many offices are re-opening their doors. They are re-establishing their operations while weighing the recall of essential staff to the office against the future of working from home (WFH).

Office life will find a new normal, but that reality will require flexible and strategic leaders.

Reinventing Office Space

Forget business as it was. Social distancing is here to stay and will force a reinvention of the office space. Cubicles will become more like desk hotels. Employees need more room to work and higher walls to keep everyone safe. Similarly, shared spaces, such as bathrooms and breakrooms, will require a redesign.

  • Will people take a number or make an appointment for the breakroom?
  • What about bathrooms or hallways? When staff only have 5 feet of space, how can they maintain the recommended 6 feet of social distance?
  • How will elevators be kept clean and safe?

These are big questions, and they require answers. Equally important, though, is the larger issue of cybersecurity concerns in the age of COVID-19.

A New Normal in Office Technology

Specific technologies have already started to become obsolete. The desk phone is finally dead in many industries. The demands on mobility increased, and that caused other tools to become ubiquitous. The needs of the COVID-19 world pushed many companies to route calls to mobile phones or adopt softphone technology, where a computer or smartphone can function as the primary communication device. At the same time, virtual meetings and teleconferences became the way people connect.

Together, those technologies introduced new efficiencies. Cameras, headsets, and microphones will become the new standard in business operations. Still, that reality presents a new concern: how secure is your conversation?

WFH and Compromised Security

With the nearly instant shift from Work-From-Office (WFO) staff to WFH staff, IT departments in all businesses did whatever had to be done. Security was secondary to enabling the workforce and business. In many companies, IT departments compromised security to get their employees up and running.

Productivity is priority number one. Security is 1.1. Everything else is secondary.

Those security compromises must be addressed in a way that doesn't close off the WFH employee but enables them to move between WFO and WFH models. Further, the security shortcuts taken in the initial scramble must now be remediated in earnest.

Smart Security Measures

Rethink how your services, applications, and systems are accessed from insecure networks (e.g., home networks). While it is unlikely that an organization can take responsibility for individual home networks, the need for a strong security posture still stands. A more enduring approach is to design your systems to support access from various networks. This requires strategic thinking and a sound fundamental understanding of business technology.

For instance, how are you going to protect the corporate data that people downloaded to their home computers after they return to the office? The data is still there.

Your company needs to implement the right security measures before making additional staff moves.

Start with the Basics

Updating all operating systems is a simple and effective place to begin. Operating systems are frequently outdated, and outdated systems create vulnerabilities. In April 2020, Microsoft released 113 security updates for Windows 10. Most of these vulnerabilities also affect Windows 7, but Windows 7 is no longer supported and will never receive the patches. Given that, according to Netmarketshare, 26% of the computers in the world still run Windows 7, and most of those are in people's homes, there are now 113 new ways that someone could compromise those Windows 7 systems.

Adopt a Password Policy

Traditional passwords are outdated. The length of your password drives more security than its complexity. We recommend that users pick a 16-character passphrase as the new minimum. A passphrase like “my dog has fleas” is exactly 16 characters, would currently take over a thousand years to crack, and is easy to remember.

Change this passphrase once every six months to maintain effective security practices. Passphrases and their respective policies must be implemented for every account, even the executive staff.
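The policy above amounts to a single length check, favoring length over forced complexity. A minimal sketch in Python, using the 16-character minimum from this post (the function name is ours, not from any specific product):

```python
# Sketch of the passphrase policy described above: a 16-character
# minimum, with no complexity requirements. Illustrative only.

MIN_LENGTH = 16

def passphrase_ok(passphrase: str) -> bool:
    """Accept any passphrase of at least MIN_LENGTH characters."""
    return len(passphrase) >= MIN_LENGTH

# "my dog has fleas" is exactly 16 characters, spaces included.
assert passphrase_ok("my dog has fleas")
assert not passphrase_ok("P@ssw0rd!")  # complex, but far too short
```

Note that the short password is rejected despite containing symbols and digits: under a length-first policy, complexity alone does not compensate for a short string.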

Adopt Multi-Factor Authentication

Multi-factor authentication (MFA), or two-factor authentication (2FA), leverages an existing user device, like a smartphone or hardware token, to supplement a known quantity, like a passphrase. Once MFA/2FA is established, recognized devices require only the passphrase. Unrecognized devices trigger an additional authentication step, such as sending a one-time code to a registered device.

Taking Action

Not everyone will be returning to the office. While increased productivity and lower costs will incentivize some organizations, others will still need to support dual office and remote environments. I expect many roles will never come back to the office.

Companies of all sizes will need to prepare for a new office landscape after COVID-19, and implementing new cybersecurity measures to support both WFO and WFH should be the first place they start. If you have questions about how to manage a secure WFO and WFH environment, reach out to your Aldrich Technology Advisor today.

The Benefits of Multi-Factor Authentication – A Definitive Guide


Originally seen on May 1st, 2019

There is no question cybercrime is on the rise; over 1.76 billion user records were leaked in January 2019 alone. Even worse, a recent Gallup study revealed more Americans are now afraid of cybercrime than of violent crime.

It’s important to understand that cybercriminals are just as sophisticated and innovative as modern IT security solutions. Often working in teams, hackers have a number of tools and resources at their disposal to access confidential data, some of which help them easily defeat traditional data security controls.


The Purpose of Multifactor Authentication

Multifactor Authentication (or MFA) has become a critical, preventative security measure for businesses and organizations of all sizes, and for any individual who uses a smart device in daily life. It offers an added layer of security that complements how passwords are used to protect private data, making it more difficult for potential hackers to obtain personal data or breach company networks.

To explain it simply, an authentication factor is a credential used to verify the identity of a person, entity, or system. When multifactor authentication is in place, more than one credential is required before access to private systems or data is granted.

Incidents such as the Facebook security breach in 2018, which exposed the personal information of over 50 million users, have forced companies to add a layer of security to their platforms. Tech giants including Twitter and Google have since adopted MFA to protect their users, and their data.


Commonly Utilized Authentication Factors

When it comes to identifying individual users, a combination of three types of authentication factors is traditionally used:

  • Knowledge Factor – This is information that is known only to the user – for example, a series of security questions, PIN codes, or unique usernames and passwords
  • Possession Factor – This refers to something that a user owns – for example, a smart card, a smartphone, or an OTP (one-time passcode)
  • Inherence Factor – This refers to something that is exclusive to an individual user – for example, fingerprints, facial biometrics, voice controlled locks, or eye scans – any biometric element that can prove the user’s identity.

Typically, multifactor authentication combines at least two of the factors mentioned above – and in some cases, all three can be combined for added security.


Advantages of Multifactor Authentication for Businesses


Enhancing Compliance and Mitigating Legal Risks

Apart from data encryption, state and federal governments have also made it mandatory for certain businesses to implement multi-factor authentication into standard operating procedures at the end-user level.

For example, businesses whose employees work with PII (Personally Identifiable Information), Social Security numbers, or financial information are bound by state and federal statutes to integrate multi-factor authentication into their security protocols. In these cases, MFA is required to meet mandatory compliance standards.


Making the Login Process Less Daunting

Many non-regulated businesses resist MFA implementations, fearing a more complex login process for employees and customers.

However, this extra layer of security enables organizations to redefine and reimagine their login processes on the road to enhanced security.

Setting Security Expectations

Identifying security requirements and expectations at your organization is an important part of any MFA implementation. For example, your industry, business model, applicable compliance regulations (if any), and the type of data you capture, utilize, and store to conduct normal business operations are all important considerations. An MFA implementation is an opportunity for every organization to identify and classify common business scenarios based on risk level and determine when MFA login is required.

Based on a combination of these factors, organizations might decide that MFA is only required in certain high-risk scenarios: when accessing particular applications or databases, when employees log in remotely or offsite, or when accessing internal systems for the first time from a new device.

MFA can also be used to limit where your information can be accessed from. If your employees are out in the field and use their own devices for work, your data is at a higher risk of theft, particularly when employees connect through unsecured external Wi-Fi networks.

MFA can be used to restrict user access based on their location. This means that if a user tries to access company data from an off-site location, you can easily verify whether or not they are actually an employee by requiring biometric authentication.
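The scenario-based approach described above can be expressed as a small policy function. This is a hypothetical sketch, not any vendor's configuration; the application names and rules are illustrative:

```python
# Hypothetical sketch of a risk-based MFA policy: challenge on
# unrecognized devices, off-site access, or sensitive applications.

SENSITIVE_APPS = {"payroll", "customer-database"}  # illustrative names

def mfa_required(device_known: bool, on_site: bool, app: str) -> bool:
    """Decide whether this login attempt must complete an MFA challenge."""
    if not device_known:          # first login from a new device
        return True
    if not on_site:               # remote or off-site access
        return True
    return app in SENSITIVE_APPS  # high-risk applications always challenge

# A known device, on site, opening a low-risk app skips the challenge;
# the same user connecting remotely is challenged.
assert mfa_required(device_known=True, on_site=True, app="wiki") is False
assert mfa_required(device_known=True, on_site=False, app="wiki") is True
```

Real products evaluate richer signals (geolocation, network reputation, time of day), but the structure is the same: classify the attempt by risk, then decide whether a second factor is needed.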

Single-Sign-On Solutions

Organizations considering MFA often decide to implement more sophisticated logins as well, such as single sign-on (SSO), which is not only secure but makes signing in to multiple systems easy using one set of login credentials.

Single sign-on authenticates the person accessing the information via MFA. Once the user is confirmed as authorized, they are automatically granted access to the other systems associated with their user profile. This means they can use multiple applications without needing to log in to each one separately.
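Mechanically, SSO works by having a central identity provider issue a short-lived signed token after the MFA challenge succeeds; each downstream application then verifies the token's signature instead of re-prompting for credentials. The following is a deliberately simplified stand-in for real protocols like SAML or OpenID Connect, with a made-up shared key:

```python
import base64
import hashlib
import hmac
import json
import time

# Conceptual sketch of an SSO session token: the identity provider
# signs the claims after MFA succeeds; applications verify the
# signature and expiry. A minimal HMAC scheme standing in for
# SAML/OIDC; the key is a placeholder, not a real secret.

SECRET = b"shared-idp-secret"

def issue_token(user, ttl=3600):
    """Sign a payload naming the user and an expiry timestamp."""
    payload = json.dumps({"user": user, "exp": time.time() + ttl}).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload).decode() + "." + sig

def verify_token(token):
    """Return the user name if the signature and expiry check out, else None."""
    body, sig = token.rsplit(".", 1)
    payload = base64.urlsafe_b64decode(body)
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, sig):
        return None                      # tampered or foreign token
    claims = json.loads(payload)
    return claims["user"] if claims["exp"] > time.time() else None

assert verify_token(issue_token("alice")) == "alice"
assert verify_token(issue_token("bob", ttl=-1)) is None  # expired
```

Because every application trusts the same issuer, one successful MFA login grants access everywhere, which is exactly the convenience SSO promises.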

Many people now believe that passwords are dead, and for good reason. Aside from the obvious risks of writing down login credentials or sharing them with unauthorized users, managing different, complex passwords for every application and device means employees need to remember all of them, which is not exactly an easy job. This is exactly why corporate help desks are bogged down with password-reset requests, and why people have a hard time following best practices for regularly resetting passwords in apps that don't require it. When an organization selects an SSO solution that features biometric authentication, it's an opportunity to eliminate employee passwords completely.

For these reasons, an SSO solution is very practical, especially since the most challenging part of successfully implementing MFA is simplifying the login process.


MFA Is a Vital Aspect of Effective Cybersecurity

As cybercrimes continue to increase, organizations are beginning to realize the full scope of the threats they now face. Modern cybercriminals don't just target big corporations: 31% of businesses with fewer than 250 employees have been targeted by cybercrime.

It is also important to understand that cybercriminals aren’t just stealing critical data. Often, they aim to corrupt your data, or destroy it entirely. This is often carried out by installing difficult-to-detect malicious software (malware) that disrupts business and services, and spreads fear and propaganda.

As a result, the market for multifactor authentication is expected to reach $12.51 billion in the next 4 years.


A Great Step Towards Enhancing Mobile Engagement

Like it or not, we are in the middle of a digital transformation that's not slowing down, and we are in it for the long haul. (If you're part of the vast majority that can't go anywhere without their smartphone, we're willing to bet that you like it.) As part of all this, we have collectively become used to having access to all the resources and information we want and need: on the go, from anywhere in the world, any time we want it. This is the height of digital convenience, and something that has brought about many positive changes in business and in society. It also continues to introduce new challenges in terms of data security. MFA offers a streamlined method of ensuring user authentication, allowing you to ensure security with greater certainty without sacrificing ease of access.

As Coronavirus forces millions to work remotely, the US economy may have reached a ‘tipping point’ in favor of working from home


Originally Seen: CNBC on March 23rd, 2020 by Lindsey Jacobson

  • Companies are enabling remote work to keep business running while helping employees follow social distancing guidelines.
  • A typical company saves about $11,000 per half-time telecommuter per year, according to Global Workplace Analytics.
  • As companies adapt to their remote work structures, the coronavirus pandemic is having a lasting impact on how work is conducted.

With the U.S. government declaring a state of emergency due to the coronavirus, companies are enabling work-from-home structures to keep business running and help employees follow social distancing guidelines. However, working remotely has been on the rise for a while.

“The coronavirus is going to be a tipping point. We plodded along at about 10% growth a year for the last 10 years, but I foresee that this is going to really accelerate the trend,” Kate Lister, president of Global Workplace Analytics, told CNBC.

Gallup’s State of the American Workplace 2017 study found that 43% of employees work remotely with some frequency. Research indicates that in a five-day workweek, working remotely for two to three days is the most productive. That gives the employee two to three days of meetings, collaboration and interaction, with the opportunity to just focus on the work for the other half of the week.

Higher-Income Workers Have More Work-at-Home Flexibility
Robert Kent | Lifesize | Getty Images

Remote work seems like a logical precaution for many companies that employ people in the digital economy. However, not all Americans have access to the internet at home, and many work in industries that require in-person work.

According to the Pew Research Center, roughly three-quarters of American adults have broadband internet service at home. However, the study found that racial minorities, older adults, rural residents and people with lower levels of education and income are less likely to have broadband service at home. In addition, 1 in 5 American adults access the internet only through their smartphone and do not have traditional broadband access.

Full-time employees are four times more likely to have remote work options than part-time employees. A typical remote worker is college-educated, at least 45 years old and earns an annual salary of $58,000 while working for a company with more than 100 employees, according to Global Workplace Analytics.

New York, California and other states have enacted strict policies for people to remain at home during the coronavirus pandemic, which could change the future of work.

“I don’t think we’ll go back to the same way we used to operate,” Jennifer Christie, chief HR officer at Twitter, told CNBC. “I really don’t.”

What not to do when implementing remote: don’t replicate the in-office experience remotely


Originally seen: Gitlab on March 23rd, 2020

Due to global issues concerning Coronavirus (COVID-19), rising rents in concentrated urban areas, and the ongoing battle amongst organizations for recruiting and retaining top talent, there has been a noted shift in appetite for working remotely. Companies which were previously against remote work are suddenly considering remote, or implementing remote, with varying degrees of intentionality.

The reality is that almost every company is already a remote company. If you have more than one office, operate a company across more than one floor in a building, or conduct work while traveling, you are a remote company. It behooves all of these firms to adopt remote-first practices, even if some interactions occur in a shared physical space.

On this page, we’re detailing what not to do when transitioning to remote, or moving towards remote.

Is this advice any good?

GitLab is the world’s largest all-remote company. We are 100% remote, with no company-owned offices anywhere on the planet. We have over 1,200 team members in more than 65 countries. The primary contributor to this article (Darren Murph, GitLab’s Head of Remote) has over 14 years of experience working in and reporting on colocated companies, hybrid-remote companies, and all-remote companies of various scale.

Just as it is valid to ask if GitLab’s product is any good, we want to be transparent about our expertise in the field of remote work.

Do not assume that there are no resources available yet


GitLab has created a comprehensive guide to working well remotely, covering popular topics such as:

  1. Transitioning to remote
  2. Forcing functions to work remote-first
  3. Hybrid-remote pitfalls to avoid
  4. Meetings
  5. Management
  6. Scaling
  7. Informal communication
  8. Building culture
  9. Combating burnout, isolation, and anxiety
  10. Embracing asynchronous workflows
  11. Remote workspaces
  12. Getting started in a remote role

The pages within, just like the entire GitLab handbook, are publicly accessible. Please consider studying these guides, implementing them, and contributing your learnings to make them better.

Do not replicate the in-office/colocated experience, remotely

It is vital to recognize and appreciate this point: an organization should not attempt to merely replicate the in-office/colocated experience, remotely.

Remote work is not traditional work which is simply conducted in a home office instead of a company office. There is a natural inclination for those who have not personally experienced remote work to assume that the core (or only) difference between in-office work and remote work is location (in-office vs. out-of-office). This is inaccurate, and if not recognized, can be damaging to the entire practice of working remotely.

The principles of remote work are different. The approach to conducting work is different. Just as multi-level office buildings required elevators and phones to be functional as workplaces, teams working remotely should embrace tools (GitLab, Figma, etc.) that enable asynchronous communication and should reconsider traditional thoughts on items such as meetings and informal communication.

Do not transfer all in-person meetings to virtual

Remote work isn’t something you do as a reaction to an event — it is an intentional approach to work that creates greater efficiency, more geographically and culturally diverse teams, and heightened transparency.

What is happening en masse related to Coronavirus (COVID-19) is largely a temporary work-from-home phenomenon, where organizations are not putting remote work ideals into place, as they expect to eventually require their team members to resume commuting into an office.

Merely transferring planned office meetings to virtual meetings misses an opportunity to answer a fundamental question: is there a better way to work than to have a meeting in the first place?

Do not assume that everyone has access to an optimal workspace


While long-term remote workers have had years to tweak and iterate on their home offices, those thrust suddenly into working from anywhere may be ill-prepared. Organizations should not expect team members to be masters of office design and ergonomics. Too, what works best for one person will look different than what works for another.

If transitioning to remote, organizations should empower team members to spend company money as if it is their own when constructing a home office. Consider reimbursing expenses related to coworking spaces and external offices, as some team members will prefer to work outside of their homes.

Do this, not that

Some may find it useful to see examples of comparisons between colocated norms, and the most closely correlated remote recommendation. You will notice that many suggestions link back to asynchronous workflows, transparency, and working handbook-first, which are cornerstones to doing remote well.

Note that none of these suggestions is exclusive to remote. Even for companies which intend to maintain offices or transition to a hybrid-remote model, implementing remote-first techniques ensures that all employees are viewed as first-class citizens and helps companies avoid the five dysfunctions of a team.

Do not assume that remote happens overnight


For companies who move into an office building, it’s unlikely that everything works perfectly on the first day. Signage may be missing, security gates may be erratic, elevators may be stuck, etc. Adapting to a workplace takes time, and polish comes with iteration.

The same is true when embracing remote work. Particularly for companies which were established with colocated norms, it is vital for leadership to recognize that the remote transition is a process, not a binary switch to be flipped. Leaders are responsible for embracing iteration, being open about what is and is not working, and messaging this to all employees.

Remote isn’t a structure that merely works or doesn’t work. Remote is a way of working that requires intentional and perpetual care and evaluation — just as you’d expect in an office environment. Working well remotely (or in-office, for that matter) is not something that is ever done or accomplished. There are always new tools to consider, new workflows to integrate, and new expertise to ingest.

Too, what works for a small remote team may not work for a remote team consisting of thousands of team members. All of this is equally true for colocated companies, though it tends to be less amenable to Band-aid (temporary) solutions in a remote environment.

Do not assume that remote management is drastically different

In truth, managing a remote company is much like managing any company. It comes down to trust, communication, and company-wide support of shared goals, all of which aid in avoiding dysfunction.

Remote forces you to do the things you should be doing way earlier and better. It forces discipline that sustains culture and efficiency at scale, particularly in areas which are easily deprioritized in small colocated companies.

It’s important not to assume that team members understand good remote work practices. GitLab managers are expected to coach their reports to utilize asynchronous communication, be handbook-first, design an optimal workspace, and understand the importance of self-learning/self-service.

Leaders should ensure that new remote hires read a getting started guide, and make themselves available to answer questions throughout one’s journey with the company.

Do not assume your existing values can remain static

To operate well as a remote enterprise, your values must be in support of this way of working. GitLab’s collection of values and sub-values contribute to a thriving all-remote environment. Consider studying the nuances of these values and adjusting or adding to your company’s existing values. Values that were established to support colocated norms may not apply to remote, particularly those which obstruct transparency.

Don’t be quick to brush values off as understood, either. For example, collaboration in a colocated space is routinely demonstrated by gathering people in a shared physical space in search of consensus. Collaboration in a remote setting is demonstrated by empowering the greatest number of people to contribute insights asynchronously, while enabling the DRI (directly responsible individual) to make decisions without explanation.

Contribute your lessons

GitLab believes that all-remote is the future of work, and remote companies have a shared responsibility to show the way for other organizations who are embracing it. If you or your company has an experience that would benefit the greater world, consider creating a merge request and adding a contribution to this page.


FBI says there’s no evidence Chinese hackers used Equifax data, but consumers can’t be complacent



Originally seen on CNBC on Feb 10th, 2020 by Megan Leonhardt

The Justice Department announced Monday that it’s indicting four members of the Chinese military for the 2017 Equifax data hack, which exposed the personal information of 147 million Americans.

The department’s painstaking investigation also found there’s no evidence the data stolen has been used “at this time,” FBI Deputy Director David Bowdich said during a press conference Monday.

Yet Bowdich urged consumers to remain vigilant when it comes to protecting their information. “As American citizens, we cannot be complacent about protecting our sensitive, personal data,” he said.

The Equifax data breach, first announced in September 2017, is one of the largest in history, with 147 million consumers affected, according to the Federal Trade Commission. Hackers were able to get access to a multitude of consumers’ private information, including names, Social Security numbers, dates of birth, credit card numbers and even driver’s license numbers.

During the investigation into the breach, Equifax admitted the company was informed in March 2017 that hackers could exploit a vulnerability in its system, but it failed to install the necessary patches.

Last summer, Equifax agreed to pay $700 million to settle federal and state investigations into how it handled the massive data breach. As part of the settlement, individual consumers were able to claim up to $20,000 for any losses or fraud caused by the breach, or to receive free credit-monitoring services. Those who already had credit monitoring in place could instead submit a claim for a cash payment of up to $125.

The settlement received final approval last month. If you’re still unsure if your data was part of the Equifax breach, you can enter your name and the last 4 digits of your Social Security number in a search here.

The best ways to protect your information

Although none of the stolen Equifax data has been detected yet, that doesn’t mean that it will never surface, cyber-security expert Joseph Steinberg tells CNBC Make It.

That’s especially true since much of the information that was stolen in the Equifax breach, including Social Security numbers, does not change with time. In fact, this type of data can become more valuable over time, aging like a fine wine, Steinberg says. “If the Chinese use the data a decade from now, few people will even be thinking about the Equifax breach.”

That said, Steinberg says the Chinese government is probably not stealing data in order to steal money, and identity theft is probably not its primary reason either. “The data might have tremendous value in terms of recruiting spies and other military-type purposes,” he says, adding that “the FBI would not have a clue if the data were used as such.”

To protect your data, Bowdich recommends Americans avoid clicking on links or opening attachments in emails, especially when you don’t know the sender.

Emails are a particularly common way for fraudsters to gain access to your credit card information or identity. Hackers send what’s called a phishing email. “Email is the number-one way cyber crime of all forms happens. If a bad guy can get you to click on a link in an email, he can do all manner of bad things to your online life,” says Dave Baggett, co-founder and CEO of anti-phishing start-up Inky.

Americans should also use two-factor authentication, which generally requires users not only to enter a password but also to confirm their identity through their phone or a code texted or emailed to them.

Last, people should check their credit report on a “fairly regular” basis, Bowdich said. Unlike a simple credit score, your full credit report provides a comprehensive look at your credit history and activity. You can get a free copy of your report once a year from each of the three major credit bureaus: Equifax, Experian, and TransUnion.

“They should make sure their data and their information is secure,” Bowdich said.

Yahoo Breach Payout: How To Claim Up To $25,000 Before The Deadline



Originally seen on Forbes on Feb 11th, 2020 by Kate O’Flaherty

The Yahoo breach is known as one of the worst of all time, partly because of its size. When the firm disclosed in 2016 that it had been hacked twice, it emerged that all of Yahoo’s 3 billion users were affected. Worse still, hackers had stolen highly sensitive information including names, security questions and answers, and passwords.

It’s only fair, then, that those impacted are given compensation of some kind. Last year, Yahoo said it would pay up to $25,000 to each person affected by the breach, with $100 or free credit monitoring available to most users. It is part of a $117.5 million breach settlement for 194 million people.

The higher $25,000 is available if you can prove the financial damage you suffered due to the Yahoo hack. You are eligible for the $100 payout if you can prove you already have credit monitoring in place.

Last week, you might have received an email telling you more about the Yahoo payout. The deadline of July 20 this year is getting closer, so what better time to apply for your compensation? Here’s what you need to do.

How to apply for a Yahoo breach compensation payout

If you are based in the U.S. and had a Yahoo account between January 1, 2012 and December 31, 2016, you are eligible to make a claim. The first thing you need to do is visit Yahoo’s settlement website, where you can see whether you qualify for credit monitoring or the $100 cash payout. The cash payout might even go higher: if too few people apply in time, the payment could rise to a more enticing $358.80 per claim.

However, you do need to prove that you have credit monitoring in place in order to qualify for the cash.

You might also be eligible for a payout of up to $25,000 in out-of-pocket losses. According to the Yahoo settlement website, this includes “lost time, that you believe you suffered or are suffering because of the data breaches.”

The settlement site explains that you can receive payment for up to fifteen hours of time at an hourly rate of $25.00 per hour or unpaid time off work at your actual hourly rate, whichever is greater. If your lost time is not documented, you can receive payment for up to five hours at that same rate, the site says.
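The lost-time formula above is easy to get wrong, so here is the arithmetic as a small sketch. The function name and structure are ours; the numbers ($25/hour baseline, 15-hour documented cap, 5-hour undocumented cap, actual rate if greater) come from the settlement terms as described:

```python
# Sketch of the settlement's lost-time payout formula as described
# above. Illustrative only; the settlement site governs actual claims.

def lost_time_claim(hours, documented, actual_rate=0.0):
    """Hours are capped at 15 (documented) or 5 (undocumented),
    paid at the greater of $25/hour or your actual unpaid-time-off rate."""
    rate = max(25.0, actual_rate)
    cap = 15 if documented else 5
    return min(hours, cap) * rate

# 20 documented hours are capped at 15, paid at $25/hour.
assert lost_time_claim(20, documented=True) == 375.0
# 3 undocumented hours at the default rate.
assert lost_time_claim(3, documented=False) == 75.0
# A documented claim at a $40/hour unpaid-time-off rate uses that rate.
assert lost_time_claim(10, documented=True, actual_rate=40.0) == 400.0
```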

Once you have worked out whether you are eligible, and how much you can apply for, you can file your claim via a form on the Yahoo settlement website. You will need to supply all the relevant documents.

Breach settlement payouts increase

The Yahoo and Equifax breaches are considered among the worst hacks of all time, and both firms have been the subject of class action lawsuits resulting in payouts. The initial Equifax claim deadline, which saw breach victims apply for up to $20,000, has just passed.

There is no doubt that companies such as these needed to take better care of customers’ data, and paying some kind of compensation is, quite frankly, the least they can do.

Calibration Attack Drills Down on iPhone, Pixel Users

Posted on Updated on

Originally seen: Threatpost on May 23rd, 2019 by Tara Seals

A new way of tracking mobile users creates a globally unique device fingerprint that browsers and other protections can’t stop.

A proof-of-concept for a new type of privacy attack, dubbed “calibration fingerprinting,” uses data from Apple iPhone sensors to construct a globally unique fingerprint for any given mobile user. Researchers said that this provides an unusually effective means to track people as they browse across the mobile web and move between apps on their phones.

Further, the approach also affects Pixel phones from Google, which run on Android.

A research team from the University of Cambridge in the UK released their findings this week, showing that data gathered from the accelerometer, gyroscope and magnetometer sensors found in the smartphones can be used to generate the calibration fingerprint in less than a second – and that it never changes, even after a factory reset.

The attack also can be launched by any website a person visits via a mobile browser, or any app, without needing explicit confirmation or consent from the target.

In Apple’s case, the issue results from a weakness in iOS 12.1 and earlier, so iPhone users should update to the latest OS version as soon as possible. Google has not yet addressed the problem, according to the researchers.

A device fingerprint allows websites to detect return visits or track users, and in its innocuous form, can be used to protect against identity theft or credit-card fraud; advertisers often also rely on this to build a user profile to serve targeted ads.

Fingerprints are usually built from pretty basic info: the name and version of your browser, screen size, installed fonts and so on. And browsers are increasingly using blocking mechanisms to thwart such efforts in the name of privacy: on Apple iOS for iPhone, for instance, the Mobile Safari browser uses Intelligent Tracking Prevention to restrict the use of cookies, prevent access to unique device settings and eliminate cross-domain tracking.

However, on any iOS device running a version below 12.2, including the latest iPhone XS, iPhone XS Max and iPhone XR, it’s possible to get around those protections by taking advantage of the fact that motion sensors in modern smartphones are MEMS (micro-electromechanical systems) components, which use microfabrication to emulate the mechanical parts found in traditional sensor devices, according to the paper.

“MEMS sensors are usually less accurate than their optical counterparts due to various types of error,” the team said. “In general, these errors can be categorized as deterministic and random. Sensor calibration is the process of identifying and removing the deterministic errors from the sensor.”

Websites and apps can access sensor data without any special permission from users. In analyzing this freely accessible information, the researchers found that it was possible to infer the per-device factory calibration data that manufacturers embed in the smartphone’s firmware to compensate for these systematic manufacturing errors. That calibration data can then be used as the fingerprint, because despite perceived homogeneity, every Apple iPhone is just a little bit different, even if two devices come from the same manufacturing batch.

“We found that the gyroscope and magnetometer on iOS devices are factory-calibrated and the calibration data differs from device-to-device,” the researchers said. “Extracting the calibration data typically takes less than one second and does not depend on the position or orientation of the device.”

Creating a globally unique calibration fingerprint does, however, require adding a little more information, for instance from traditional fingerprinting sources.

“We demonstrated that our approach can produce globally unique fingerprints for iOS devices from an installed app — around 67 bits of entropy for the iPhone 6S,” they said. “Calibration fingerprints generated by a website are less unique (~42 bits of entropy for the iPhone 6S), but they are orthogonal to existing fingerprinting techniques and together they are likely to form a globally unique fingerprint for iOS devices.”
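As a rough illustration of what those entropy figures mean: the SensorID bit counts below are from the article, but the 20-bit figure for traditional browser fingerprinting, and the simplifying assumption that independent sources contribute additive bits, are ours:

```python
# Back-of-the-envelope view of fingerprint entropy. Each bit of entropy
# doubles the number of distinguishable devices, and independent
# (orthogonal) fingerprint sources add their bits together -- a
# simplified model, not a claim from the paper.

app_bits = 67       # SensorID measured from an installed app (iPhone 6S)
web_bits = 42       # SensorID measured from a website (iPhone 6S)
browser_bits = 20   # assumed entropy from traditional browser fingerprinting

print(f"Web SensorID alone distinguishes ~{2**web_bits:.2e} devices")
combined = web_bits + browser_bits   # orthogonal sources add entropy
print(f"Combined with browser data: {combined} bits, ~{2**combined:.2e} devices")
```

Even the weaker 42-bit web fingerprint distinguishes on the order of trillions of devices, which is why pairing it with ordinary browser fingerprinting is likely to be globally unique.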

A longitudinal study also showed that the calibration fingerprint, which the researchers dubbed “SensorID,” doesn’t change over time or vary with conditions.

“We have not observed any change in the SensorID of our test devices in the past half year,” they wrote. “Our dataset includes devices running iOS 9/10/11/12. We have tested compass calibration, factory reset, and updating iOS (up until iOS 12.1); the SensorID always stays the same. We have also tried measuring the sensor data at different locations and under different temperatures; we confirm that these factors do not change the SensorID either.”

In terms of how applicable the SensorID approach is, the research team found that both mainstream browsers (Safari, Chrome, Firefox and Opera) and privacy-enhanced browsers (Brave and Firefox Focus) are vulnerable to the attack, even with the fingerprinting protection mode turned on.

Further, motion sensor data is accessed by 2,653 of the Alexa top 100,000 websites, the research found, including more than 100 websites exfiltrating motion-sensor data to remote servers.

“This is troublesome since it is likely that the SensorID can be calculated with exfiltrated data, allowing retrospective device fingerprinting,” the researchers wrote.

However, it’s possible to mitigate the calibration fingerprint attack on the vendor side by adding uniformly distributed random noise to the sensor outputs before calibration is applied at the factory level – something Apple did starting with iOS 12.2.

“Alternatively, vendors could round the sensor outputs to the nearest multiple of the nominal gain,” the paper said.
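That rounding countermeasure can be sketched in a few lines; the nominal gain value and sensor readings below are hypothetical, not real device data:

```python
# Sketch of the rounding mitigation: quantize sensor output to the
# nearest multiple of the nominal gain, so the fine-grained calibration
# residue that forms the SensorID is no longer observable. All values
# here are illustrative.

def round_to_gain(sample, nominal_gain):
    """Snap a sensor sample to the nearest multiple of the nominal gain."""
    return round(sample / nominal_gain) * nominal_gain

nominal_gain = 0.0625                        # hypothetical gyroscope gain
raw = [0.06249871, 0.12500322, 0.18750115]   # hypothetical calibrated outputs

quantized = [round_to_gain(s, nominal_gain) for s in raw]
print(quantized)   # per-device calibration offsets collapse to clean multiples
```

After quantization, two devices with slightly different calibration data report identical values, removing the signal the fingerprint depends on.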

Privacy-focused mobile browsers, meanwhile, can add an option to disable access to motion sensors via JavaScript.

“This could help protect Android devices and iOS devices that no longer receive updates from Apple,” according to the paper.

Although most of the research focused on iPhone, Apple is not the only vendor affected: The team found that the accelerometer of Google Pixel 2 and Pixel 3 can also be fingerprinted by the approach.

That said, the fingerprint has less individual entropy and is unlikely to be globally unique – meaning other kinds of fingerprinting data would also need to be gathered for full device-specific tracking.

Also, the paper noted that other Android devices that are also factory calibrated might be vulnerable but were outside the scope of testing.

While Apple addressed the issue, Google, which was notified in December about the attack vector, is still in the process of “investigating this issue,” according to the paper.

Threatpost has reached out to the internet giant for comment.

Phishing targeting SaaS and webmail services increased to 36% of all phishing attacks

Posted on Updated on

Originally seen: Helpnetsecurity on May 20th, 2019

Users of Software-as-a-Service (SaaS) and webmail services are being targeted with increasing frequency, according to the APWG Q1 2019 Phishing Activity Trends Report.

SaaS webmail phishing increased

The category became the biggest target in Q1, accounting for 36 percent of all phishing attacks, for the first time eclipsing the payment-services category which suffered 27 percent of attacks recorded in the quarter.

Online SaaS applications have become fundamental business tools, since they are convenient to use and cost-effective. SaaS services include sales management, customer relationship management (CRM), human resources, billing and other office applications and collaboration tools.

“Phishers are interested in stealing logins to SaaS sites because they yield financial data and also personnel data, which can be leveraged for spear-phishing,” said Greg Aaron, APWG Senior Research Fellow.

Stefanie Ellis, AntiFraud Product & Marketing Manager at MarkMonitor said: “The total number of confirmed phishing sites increased in early 2019, with the biggest jump in March.”

The total number of phishing sites detected in 1Q of 2019 was 180,768. That was up notably from the 138,328 seen in the fourth quarter of 2018, and from the 151,014 seen in the third quarter of 2018.

Payment services and financial institutions continued to suffer high numbers of phishing attacks. But attacks against cloud storage and file hosting sites continued to drop, decreasing from 11.3 percent of all attacks in the first quarter of 2018 to just 2 percent in the first quarter of 2019.

Meanwhile, cybercriminals deployed HTTPS-protected phishing websites in record numbers, according to PhishLabs, posting a record high of nearly 60 percent of detected phishing websites in 1Q 2019 employing this data encryption protocol.

Phishers turn this security utility against users, leveraging the HTTPS protocol’s padlock icon, which appears in the browser address bar, to assure users that the website itself is trustworthy.

“In Q1 2019, 58 percent of phishing sites were using SSL certificates, a significant increase from the prior quarter where 46 percent were using certificates,” said John LaCour, CTO of PhishLabs.

“There are two reasons we see more. Attackers can easily create free DV (Domain Validated) certificates, and more web sites are using SSL in general. More web sites are using SSL because browsers warn users when SSL is not used. And most phishing is hosted on hacked, legitimate sites.”

The Nasty List Phishing Scam is Sweeping Through Instagram

Posted on

Originally seen on April 13, 2019: Bleepingcomputer by Lawrence Abrams

A new phishing scam called “The Nasty List” is sweeping through Instagram, targeting victims’ login credentials. If a user falls victim, the hackers use the compromised account to further promote the phishing scam.

The Nasty List scam is being spread through hacked accounts that send messages to their followers stating that they were spotted on a so-called “Nasty List”. These messages state something like “OMG your actually on here, @TheNastyList_34, your number is 15! its really messed up.”

Messages being sent from hacked accounts

According to screenshots shared with BleepingComputer, the scammers attempt to send these messages to all followers of a hacked account.

If a recipient visits the listed profile, it will be named something like “The Nasty”, “Nasty List”, or “YOUR ON HERE!!”. The profiles include a description similar to “People are really putting all of us on here, I’m already in 37th position, if your reading this you must be on it too.” or “WOW you are really on here, ranked 100! this is horrible, CANT WAIT TO REVEAL THE TOP 10!” as shown below.

Example Nasty List Scam Profiles

These profile descriptions also include a link that supposedly lets you see this Nasty List and why you are on it. For example, the profiles above use the URL nastylist-instatop50[.]me, which, when visited, displays what appears to be a very legitimate-looking Instagram login page.

Fake Instagram Login Page

While the above page looks real, it is important to pay attention to the URL at the top of the window, as indicated by the red arrow in the image above. As you can see, this login page is actually located at nastylist-instatop50[.]me, which is obviously not a legitimate Instagram site.

To avoid falling for an Instagram phishing scam like the Nasty List, never enter your login credentials on a page whose URL does not belong to the legitimate Instagram website.
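That URL check can be expressed as a short sketch; the helper name is illustrative, and the scam domain is the one from the article:

```python
from urllib.parse import urlparse

# Sketch of the URL check described above: a login page is only
# trustworthy if its hostname is the legitimate domain or a subdomain
# of it. Anything else, however convincing the page looks, is phishing.

def is_legit_login_page(url, legit_domain="instagram.com"):
    host = urlparse(url).hostname or ""
    return host == legit_domain or host.endswith("." + legit_domain)

print(is_legit_login_page("https://www.instagram.com/accounts/login/"))  # True
print(is_legit_login_page("https://nastylist-instatop50.me/login"))      # False
```

Comparing the full hostname (rather than just searching for “instagram” in the URL) matters, because phishers routinely embed the brand name in look-alike domains.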

What to do if you were hacked by this scam?

If you have been hacked by the “Nasty List” phishing scam and you still have access to your account, the first thing you should do is verify that your account is using the correct phone number and email address.

You can do this by going to your profile and selecting Edit Profile. Then scroll to the bottom to view your email address and phone number. If it’s not correct, try to change it to the correct information.

Once you have the correct email address and phone number listed, change your password by following these instructions.

Once you have changed your password, all devices currently logged into your account will be logged off. You can then log back in to regain control of your account.

Facebook says it ‘unintentionally uploaded’ 1.5 million people’s email contacts without their consent

Posted on

Originally seen on April 17, 2019: Business Insider by Rob Price

Facebook harvested the email contacts of 1.5 million users without their knowledge or consent when they opened their accounts.

Since May 2016, the social-networking company has collected the contact lists of 1.5 million users new to the social network, Business Insider can reveal. The Silicon Valley company said the contact data was “unintentionally uploaded to Facebook,” and it is now deleting them.

The revelation comes after pseudonymous security researcher e-sushi noticed that Facebook was asking some users to enter their email passwords when they signed up for new accounts to verify their identities, a move widely condemned by security experts. Business Insider then discovered that if you entered your email password, a message popped up saying it was “importing” your contacts without asking for permission first.

At the time, it wasn’t clear what was happening — but on Wednesday, Facebook disclosed to Business Insider that 1.5 million people’s contacts were collected this way and fed into Facebook’s systems, where they were used to improve Facebook’s ad targeting, build Facebook’s web of social connections, and recommend friends to add.

A Facebook spokesperson said before May 2016, it offered an option to verify a user’s account using their email password and voluntarily upload their contacts at the same time. However, they said, the company changed the feature, and the text informing users that their contacts would be uploaded was deleted — but the underlying functionality was not.

Facebook didn’t access the content of users’ emails, the spokesperson added. But users’ contacts can still be highly sensitive data, revealing who people communicate with and connect to.

While 1.5 million people’s contact books were directly harvested by Facebook, the total number of people whose contact information was improperly obtained by Facebook may well be in the dozens or even hundreds of millions, as people sometimes have hundreds of contacts stored on their email accounts. The spokesperson could not provide a figure for the total number of contacts obtained this way.

Users weren’t given any warning before their contact data was grabbed

The screenshot below shows the password entry page users saw upon sign up. After they entered their password and clicked the blue “connect” button, Facebook would begin harvesting users’ email contact data without asking for permission.

Facebook’s email password prompt (Screenshot: Business Insider)

After clicking the blue “connect” button, a dialog box (screenshot below) popped up saying “importing contacts.” There was no way to opt out, cancel the process, or interrupt it midway through.

Facebook’s “importing contacts” dialog (Screenshot: Rob Price)

Business Insider discovered this was happening by signing up for Facebook with a fake account before Facebook discontinued the password verification feature. In our test, after the authentication loading screen finished, a new box popped up saying it didn’t find any contacts, and then took us to the home screen of the social network.

A user might have been able to infer from this that their contacts were being accessed — but there was no way to stop it happening, or advance notice ahead of time.

Facebook’s contact import result (Screenshot: Business Insider)

From one crisis to another

The incident is the latest privacy misstep from the beleaguered technology giant, which has lurched from scandal to scandal over the past two years.

Since the Cambridge Analytica scandal in early 2018, when it emerged that the political firm had illicitly harvested tens of millions of Facebook users’ data, the company’s approach to handling users’ data has come under intense scrutiny. More recently, in March 2019, the company disclosed that it was inadvertently storing hundreds of millions of users’ account passwords in plaintext, contrary to security best practices.

Facebook now plans to notify the 1.5 million users affected over the coming days and delete their contacts from the company’s systems.

“Last month we stopped offering email password verification as an option for people verifying their account when signing up for Facebook for the first time. When we looked into the steps people were going through to verify their accounts we found that in some cases people’s email contacts were also unintentionally uploaded to Facebook when they created their account,” the spokesperson said in a statement.

“We estimate that up to 1.5 million people’s email contacts may have been uploaded. These contacts were not shared with anyone and we’re deleting them. We’ve fixed the underlying issue and are notifying people whose contacts were imported. People can also review and manage the contacts they share with Facebook in their settings.”