Is Amazon Alexa Safe? Cybersecurity Researchers Uncover Serious Privacy Issues

Posted on Updated on

By Jocelyn Grzeszczak on 8/13/20

A cybersecurity firm has uncovered serious privacy concerns in Amazon’s popular “Alexa” device, leading to questions about its safety.

Check Point, the California- and Israel-based technology company, published a report Thursday detailing “vulnerabilities found on Amazon’s Alexa,” including the potential for a hacker to access a user’s voice history and personal information, as well as to silently install or remove skills on the user’s account.

“In effect, these exploits could have allowed an attacker to remove/install skills on the targeted victim’s Alexa account, access their voice history and acquire personal information through skill interaction when the user invokes the installed skill,” according to the report. “Successful exploitation would have required just one click on an Amazon link that has been specially crafted by the attacker.”

Amazon’s Alexa line is powered by artificial intelligence (AI) technology, and the conglomerate had sold more than 200 million Alexa devices by the end of 2019, CNET reported. The Alexa essentially functions as a virtual assistant to its user, able to take voice commands, play music, set alarms, and offer weather or news reports.

Developers are continually working on new programs to make the devices even more user-friendly. Just a few weeks ago, for instance, Amazon announced Alexa Conversations was moving into its beta phase, and would now be able to provide an AI-driven element to voice interactions, making conversations flow more naturally.


Amazon Alexa
Amazon highlights how its Alexa digital assistant can be integrated into various smart home devices at its exhibit at the Consumer Electronics Show in Las Vegas, Nevada, January 11, 2019. Cybersecurity firm Check Point uncovered serious privacy concerns in the Alexa device in a report published August 13.

In its report, Check Point described how an attacker could hack into a user’s Amazon account to compromise their Alexa device, including a breakdown of the code needed to carry out such an action. In one example of how an attack could occur, the user clicks on a malicious link provided by the hacker, allowing the attacker to inject code into the user’s account.

Check Point also detailed how an attacker could get the device’s entire voice history, which could expose banking information, home addresses or phone numbers, as all interactions with the device are recorded.

Virtual assistants provide relatively easy targets for attackers wishing to steal sensitive information or disrupt a user’s smart home device, according to the report. Check Point’s research found a weak spot in Amazon’s security technology, the report stated.

“What we do know is that Alexa had a significant period of time where it was vulnerable to hackers,” Check Point spokesman Ekram Ahmed told Fox News. “Up until Amazon patched, it’s possible that personal and sensitive information was extracted by hackers via Alexa. Check Point does not know the answer to whether that occurred yet or not, or to the degree to which that happened.”

In an emailed statement to Newsweek, an Amazon spokesperson wrote that security of its devices is a top priority for the company.

“We appreciate the work of independent researchers like Check Point who bring potential issues to us. We fixed this issue soon after it was brought to our attention, and we continue to further strengthen our systems,” according to the statement. “We are not aware of any cases of this vulnerability being used against our customers or of any customer information being exposed.”

To ensure Alexa devices are secure, Check Point recommends that users avoid unfamiliar apps, think twice before sharing information with a smart speaker and conduct research on any downloaded apps, a company spokesperson wrote in an email to Newsweek.

Update (08/13/20, 11:52 a.m.): This article has been updated to include responses from Amazon and Check Point.

Budget request emphasizes cyber, network security efforts


By Derek B. Johnson on 2/10/2020

The Trump administration’s proposed budget for fiscal year 2021 would spend $18.8 billion on cybersecurity programs across the federal government, with approximately $9 billion dedicated to civilian agencies for network security, protecting critical infrastructure, boosting the cybersecurity workforce and other priorities.

The overall cybersecurity funding at the Department of Homeland Security is listed at $2.6 billion. That includes $1.1 billion for DHS and its component, the Cybersecurity and Infrastructure Security Agency, to defend government networks and critical infrastructure from cyber threats, including for tools like EINSTEIN and Continuous Diagnostics and Mitigation. According to the Office of Management and Budget, the funding would increase the number of DHS-led network risk assessments from 1,800 to 6,500 and allow for more state and local governments to utilize the department’s services.

The administration has also put a heavy emphasis on bolstering the government’s cybersecurity workforce, releasing an executive order and strategic plan last year. The budget includes funding for DHS’ Cyber Talent Management System, a personnel system designed to bring hundreds of new cybersecurity professionals into the federal workforce under special hiring rules, as well as a CISA-managed cybersecurity workforce initiative and an interagency rotational program that temporarily details cyber personnel to other agencies to gain more holistic experience.

It also proposes to transfer the U.S. Secret Service, which investigates a number of cyber-enabled financial crimes, from DHS to the Department of Treasury.

The Department of Energy would get $665 million for cybersecurity, including $185 million for the Office of Cybersecurity, Energy Security, and Emergency Response (CESER), part of which would go toward funding early research and development of methods to better protect the energy supply chain. At the same time, the plan would eliminate a number of grant programs and research organizations, such as the Advanced Research Projects Agency-Energy.

“The private sector has the primary role in taking risks to finance the deployment of commercially viable projects and Government’s best use of taxpayer funding is in earlier stage R&D,” the budget states.

It would also invest in a number of emerging technologies, setting aside $5 million to stand up Energy’s new Artificial Intelligence Technology Office, along with an additional $125 million for AI and machine learning research. Other research funding includes $475 million for the Office of Science supercomputing research and $237 million for quantum computing research.

On securing the supply chain, the budget would set aside $35 million for the Department of Treasury to implement the Foreign Investment Risk Review Modernization Act passed in 2018, which created a new layer of review by the Committee on Foreign Investment in the United States for foreign investments in U.S. companies that produce critical technologies. The administration is also implementing the Secure Technologies Act, which created a new Federal Acquisition Supply Chain Security Council charged with developing procurement regulations to prevent U.S. agencies from buying compromised computer parts, components and software.

Here’s what the Twitter hack tells us about potential security risks of working from home


By Clare Duffy on 7/27/20

New York (CNN Business) The Twitter hack that compromised the accounts of Barack Obama, Kanye West and other figures earlier this month was one of the more prominent cybersecurity breaches in recent memory — and it was all the more dramatic as it played out live on the platform while users watched.

It was the first major breach reported since March, when many companies rapidly transitioned to remote working because of coronavirus.
For Twitter (TWTR), the hack was certainly not a good look. CEO Jack Dorsey apologized for it on the company’s earnings call last week, saying: “Last week was a really tough week for all of us at Twitter, and we feel terrible about the security incident.”
For other companies, the hack could serve as a reminder that even at a moment when there is much else to worry about (like the economic recession and ongoing pandemic), cybersecurity threats are still an issue. It may be more true now than usual — experts say that having many people working from home presents unique security risks, especially given that many companies made the transition practically overnight.
It’s not clear whether remote working policies at Twitter, which has said it will allow some employees to continue working from home “forever” if they choose, had anything to do with the hack. But it’s something other companies should be aware of.
“The way (the transition to remote working) happened, instantly, there was no warning, and all of a sudden people were just told, ‘you’re not going back to work tomorrow,'” said Anu Bourgeois, an associate professor of computer science at Georgia State University. “Everybody became vulnerable at that point.”

Security risks from remote working

Only about 29% of workers had the option to work from home from 2017 to 2018, according to the most recent data available from the Bureau of Labor Statistics.
When coronavirus hit the United States, employers had to scramble to get a huge percentage of the country’s workforce to transition to remote working for the first time, a massive task that may have involved corner-cutting when it came to security.
There are a number of ways security could have slipped during the transition. In the hurry to keep employees safe while maintaining their workflow, companies might have handed out laptops that lacked the proper security software, or asked employees to use their own personal devices for work, Bourgeois said.
That issue was likely heightened for employees and families who can’t afford multiple devices and suddenly found themselves working from home while kids attended school remotely.
“They’re having to juggle different people using that device,” Bourgeois said. “Whereas at work you’re just one person, your kids may be having to use the device you use for work for their school or entertainment. You have that vulnerability of different people on your machine.”
Companies that were accustomed to having employees work only out of the office likely also had to develop new “access controls.” Whereas workers may have only been able to access their company’s servers and data from inside the office, they now may have to sign into a virtual private network (VPN) or other portal to securely access the information needed to do their jobs.
Deploying proper cybersecurity protocols for a remote workforce, “especially for a large scale company, is going to be really time consuming and difficult to do,” said Bourgeois.
She added that even with existing security software, companies could run into issues. Some security systems track employee habits — such as the normal days, times and duration of time that they typically access company systems — to identify potential hackers. But such systems may be confused by people’s changing work habits during the pandemic, and therefore could be less likely to catch breaches.
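The habit-based detection Bourgeois describes can be sketched very simply: learn each user's typical sign-in hour from history, then flag logins that deviate too far from it. The sketch below is purely illustrative; the function names and the two-standard-deviation threshold are hypothetical, not any vendor's actual system.

```python
from statistics import mean, stdev

def build_profile(login_hours):
    """Learn a user's typical sign-in hour from historical logins."""
    return mean(login_hours), stdev(login_hours)

def is_anomalous(hour, profile, threshold=2.0):
    """Flag a sign-in more than `threshold` standard deviations from the norm."""
    mu, sigma = profile
    return abs(hour - mu) > threshold * sigma

# Office-era history: sign-ins clustered around 9 a.m.
profile = build_profile([8, 9, 9, 10, 9, 8, 10, 9])

print(is_anomalous(9, profile))   # a normal morning sign-in is not flagged
print(is_anomalous(23, profile))  # a late-night sign-in stands out
# Once a whole workforce shifts to irregular home-office hours, a profile
# trained on office habits starts mislabeling legitimate logins: exactly
# the confusion described above.
```

A real system tracks far more signals (device, location, session duration), but the failure mode is the same: the baseline assumes habits stay stable.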

What we know about the Twitter hack

It’s unclear whether the Twitter hack had anything to do with remote working policies the company put in place in response to the pandemic.
Former Twitter employees examining the incident acknowledged that it’s a possibility, but there’s no evidence that Twitter relaxed its security to accommodate working from home. Twitter declined to comment on its remote work policies.
Twitter said the breach was the result of a coordinated “social engineering” attack that targeted workers who had administrative privileges, with the aim of taking control of the accounts.
Experts say social engineering may also be easier when people are working from home, where they may be distracted or let their guard down.
“You have people scrambling, in a different environment, and that mindset is not the same when you’re working from home versus the office,” Bourgeois said. “So many people are juggling their kids and are distracted and may be trying to quickly get through whatever task they need to get through. (They) may not be as sensitive to looking for these social engineering tactics, like phishing emails or phone calls.”
Some have also warned that hackers may try to exploit people’s fear of coronavirus in an attempt to carry out hacks or phishing attempts.
“As the world’s anxiety regarding coronavirus continues to escalate, the likelihood that otherwise more cautious digital citizens will click on a suspicious link is much higher,” the Electronic Frontier Foundation wrote in a March blog post.
The EFF cautioned people to look out for suspicious messages promising information or offers related to coronavirus, especially ones that sound too good to be true, like an offer to submit personal information in exchange for a free coronavirus vaccine.
For companies looking to avoid being the next target of an attack — in addition to implementing antivirus software and two-factor authentication — “the number one thing is education,” according to Bourgeois.
“Unless your employees are well versed in all of these different types of attacks and what to be aware of, it doesn’t matter what else you do, that person is vulnerable. Educating the workforce is key,” Bourgeois said.
–CNN’s Brian Fung contributed to this report.

Akamai CSO: 5G is a whole new cyber-security nightmare


By Adam Janofsky on Feb 7, 2020

Andy Ellis, chief security officer at Akamai, says businesses are struggling to protect themselves against connected devices. With 5G, the problem is only going to get worse.

From smart aquariums to remotely controlled HVAC systems, the proliferation of internet-connected devices with lackluster security controls presents a constant challenge for cybersecurity professionals.

Andy Ellis, Akamai’s chief security officer, says 5G is only going to make it worse.

“5G enables more devices to be online at a time where we don’t really have a plan to secure them in the future,” Ellis told Protocol. “It’s basically the creation of a debt to service the future. We buy this world full of connected devices, and the mortgage is that at some point we have to secure them before they cause more problems for us.”


Ellis has spent the last two decades in cybersecurity roles at Akamai, which operates one of the largest content-delivery networks in the world. He got his start in cyber as an information warfare specialist in the U.S. Air Force, which he joined after earning a computer science degree at MIT.

He talked with Protocol last month about 5G, connected devices, and what cybersecurity professionals should be focused on in 2020.

This conversation has been edited for length and clarity.

How will the proliferation of connected devices affect cybersecurity?

When it comes to connected devices, we’re at a fascinating touchpoint. Everything is becoming connected: your garage door, the fitness tracker on your wrist, the thermos you drink coffee out of. These items used to be bespoke. They had custom-made electronics and were designed to do one thing: With the garage door, you would press a button and it would transmit a signal to open or close. But they’re not bespoke devices anymore. They’re computers that can talk on the internet, and that fundamentally changes things. It creates a dramatically different level of complexity.

If you have a connected garage door, you access it through an app on your phone, which sends a signal into the cloud, talks to someone else’s server, transmits it down to another computer running inside your garage that can open and close the door. In the past, you could maybe spoof frequencies and get a garage door to open, or you could trigger a manual release and open it from the outside — those were the vulnerabilities you had to deal with.

But now you have to worry about what malware might be running on your phone, how is the phone authenticating itself on this cloud-based server, how is the server protected, how are the passwords secured, can people on the internet get access to the computer in your garage. There are many more vulnerabilities in this system than in a traditional garage door, and the devices can also be misused in other ways. With botnets, hackers compromise cameras and other connected devices by entering default passwords — like username: admin, password: admin — and use that network to harm someone else. The compromised devices can all try to access a target at once, flooding it with traffic until it becomes inaccessible.

Is the problem getting better or worse?

The problem is absolutely growing. There are billions of connected devices. We’ve probably already passed the point where connected devices have outnumbered handheld computing devices like laptops, tablets and phones.

We need to be concerned about 5G and the growth in IoT devices. The big promise with 5G — and news stories suggest we’re not quite there with this promise — is that the capacity for more devices in a given location is much higher than it has been with 2G, 3G and 4G. In the past, if you tried to connect 35 devices to your home network, they would stop working properly. With 5G you can have that, and we’re going to see an explosion in the number of IoT devices because of that.

You mentioned a lot of consumer devices, like garage doors and fitness trackers. Is this also a business problem?

I was talking earlier about a garage in a house, but parking garages rely on a similar system. There’s a transponder in my car, the system reads it, it queries a server on the internet, sees that I’m still employed, and opens the gate so I can get into the company garage. In my office I have lighting systems, thermostats, video conferencing — all these connected devices in offices don’t look like computers and aren’t treated like computers, but that’s what they are. And a common difference between consumer-grade and commercial-grade devices is whether or not you’re building them into your system. In commercial buildings, a lot of these systems are installed by someone else, and you have to coexist with them until the building is torn down.

What industries are most affected by vulnerabilities in connected devices?

Pick any sector and I’ll tell you how they are deeply at risk. In the medical sector, hospitals are now filled with connected devices. In fact, human bodies are starting to be full of connected devices. There, you have a special risk where human life is on the line if a device is compromised.

If you talk about agriculture, more and more connected devices are used for farming — imagine the damage that could be done if an adversary was able to target machines and adjust the fertilizer recipe so that instead of 1 part in 10 of a particular ingredient it’s 1 part in 3, and now you’re burning whatever you’re trying to grow on an industrial scale. In the satellite industry, you have some really interesting problems because you can’t service the devices at all.

There has been research into attacks on pacemakers and insulin pumps, where you can cause them to use up their batteries or medicine. What if you performed that kind of attack on a satellite, where you cause it to burn its thrusters or crash? Kevin Fu, at the University of Michigan, is a fantastic researcher in this area. In every industry, you have a case like that where you don’t do anything fancy to the device, but you get it to do its function more or less frequently until something like the battery dies. That’s a kind of threat that many people don’t think about. Pacemakers are designed so that when you walk into your doctor’s office, they can run diagnostics to check things like the battery life and how it’s operating. For manufacturers that didn’t secure that interface, a hacker could theoretically sit next to them, continuously ask for the data, and the device’s lifespan is shortened from years to months. These are the interesting problems people need to think about.

How are cybersecurity professionals at health care organizations handling these kinds of risks?

From talking to hospital CISOs, a lot of them struggle with connected devices. A challenge is that the device may be completely out of date and horribly vulnerable, but it’s high-revenue for them. Or there simply might not be an update; the manufacturer might not support the device anymore and they want you to buy the next $3 million device even though yours works well and is used 24/7.

So in many cases, the CISOs have to functionally disconnect a lot of their devices. They create an enclave just for the device so it can operate but not talk to anything else on your network, because it’s not safe. I feel that hospital CISOs have one of the more challenging jobs in my industry. They’re more of a landlord than an enterprise. Maybe their physicians don’t actually work for them. They come in, do their procedures and expect the devices to work. They need to have electronic records, so there has to be this interchange: Someone goes in, gets an X-ray, and other physicians need to see it so you can’t completely disconnect the X-ray machine. They also have to deal with celebrity customers who have valuable data; a lot of people would pay big money for that information.

What’s the worst-case scenario of an attack on a connected device?

The worst-case scenario is going to vary by organization. When you think about IoT, the question is what is the prevalence in my organization and how exposed are we. We have room-booking devices on our conference rooms, and one researcher bought a bunch of them on eBay and took them apart, and his discovery led us to pull the devices because we couldn’t secure them in a reasonable way. People didn’t really like the system anyway, so it wasn’t the end of the world. But you have to look at where you have certain devices, ask if they have credentials on your network, and figure out the worst thing that could happen. To most companies today, IoT is a distraction. You need to pay some attention to it, but it’s not your biggest worry. The data breach worry is probably much larger.

But for some organizations, the worst-case scenario for IoT devices is life safety, but not in the way that some people might think about it. Imagine if someone could mess with the traffic lights in New York City, for example. The likelihood that someone could kill someone directly with that attack is pretty low. Make an intersection all green, and people could possibly have a couple accidents but pretty soon no one is driving through the affected intersections. But indirectly what you’ve done is now New York doesn’t have streets. What happens when people need ambulances? We certainly saw that with the NotPetya cyberattacks in 2017. It took down dispatch networks in the U.K. The scheduling software was down, surgeries were postponed. How many people are indirectly killed or had their quality of life degraded? We don’t have good numbers for those, but in a complex system, incidents where you lose critical infrastructure have a huge impact.


What’s the biggest challenge with securing IoT devices?

The real challenge is the upgrade cycles of these devices. If you had an iPhone 1, upgrades really sucked. You had to plug it into iTunes and manually download the new configuration, back up your phone because you didn’t know if it was going to work, install the new operating system and pray. For the new iPhones, it’s totally different. You go to bed and wake up in the morning and your iPhone says: “By the way, a new iOS is installed, have a nice day.” The change from the old model to the new model required serious hardware changes. There have been security protocol changes that would make today’s process impossible on the iPhone 1. Apple was willing to say that when they update the iOS, they won’t support hardware that’s several generations old; it’s past its shelf life, get rid of it. Basically, the iPhone — as expensive as it is — needs to be treated as a disposable technology.

The challenge we have on most devices is they’re not treated as disposable. You buy a thermostat and you don’t have a long-term relationship with whoever you bought it from. It’s lower quality than your iPhone, but you attach it to your wall, and it runs for 10 or 20 years until it dies. So my main worry about connected devices isn’t about the pace we deploy them, it’s about the pace we update them, which is approximately zero. Everything deployed that doesn’t have a path to secure itself is probably never going to get secure until the building is torn down. That’s our biggest challenge for the next couple decades.

What should you do if the thermostat manufacturer goes out of business shortly after you buy the device?

You have to toss the device in the trash. That’s what you have to do. At the corporate level, we have the staff and infrastructure to take that challenge on. It’s a different issue for houses of worship, small enterprises, nonprofits — they don’t really have the bandwidth to worry about that kind of problem. They keep going forward, and that’s not necessarily the wrong thing in many cases. You have to ask if the benefit is worth the risk you’re taking. If you’re running a synagogue in America today, you might want surveillance cameras outside. If the risk is someone else seeing what’s on the cameras, or having the cameras get used for DDoS attacks, that’s probably still worth being able to detect and record acts of antisemitism. It’s a trade-off, and at some point, you have to pick.

Is there a way to keep vulnerable devices off of shelves?

You could try to ban systems, but when you look at a lot of the innovation, you often see someone come up with an idea, and by the time they bring their device to market, there’s like 150 knockoffs and they’re built as cheaply as possible and often by a shell brand. They build one device and never make another. It makes it really hard when a lot of manufacturers are not in the U.S. The challenge is the consumers aren’t differentiating on the quality and security of software. If you’re buying a security camera system, you’re probably most concerned with things like resolution of the cameras, whether they work at night and outdoors, and how much storage you have. Maybe you say you want to manage them from an iPhone or web browser. The consumer isn’t incentivizing the manufacturer to secure that system. Our hopes rest on a larger brand saying we’ll provide you devices with all of this and security baked in, because we have our brand and we want to maintain a long-term relationship with you. But that’s a really big branch to hang our hopes on.

Do you think security standards for connected devices could help solve this problem?

Standards do exist. But it’s really hard to find a standard that’s comprehensive enough to be useful without being so comprehensive that it’s inordinately painful. We see standards like the PCI security standard for systems that deal with payment card information. There’s FedRAMP for federal government systems. Many of these are very cumbersome and overweight and are designed for environments that don’t change regularly. I don’t see a near-term path to a good IoT security standard that will be genuinely meaningful.

Returning to the Workplace — Cybersecurity Concerns Post-COVID-19


By Peter Adams on May 18, 2020

As states begin to lift stay-at-home orders, many offices are re-opening their doors. They are re-establishing their operations while balancing the recall of essential staff to the office against evaluating the future of working from home (WFH).

Office life will find a new normal, but that reality will require flexible and strategic leaders.

Reinventing Office Space

Forget business as it was. Social distancing is here to stay and will force a reinvention of the office space. Cubicles will become more like desk hotels. Employees need more room to work and higher walls to keep everyone safe. Similarly, shared spaces, such as bathrooms and breakrooms, will require a redesign.

  • Will people take a number or make an appointment for the breakroom?
  • What about bathrooms or hallways? When staff only have 5 feet of space, how can they maintain the recommended 6 feet of social distance?
  • How will elevators be kept clean and safe?

These are big questions, and they require answers. Equally important, though, is the larger issue of cybersecurity concerns in the age of COVID-19.

A New Normal in Office Technology

Specific technologies have already started to become obsolete. The desk phone is finally dead in many industries. The demands on mobility increased, and that caused other tools to become ubiquitous. The needs of the COVID-19 world pushed many companies to route calls to mobile phones or adopt softphone technology, where a computer or smartphone can function as the primary communication device. At the same time, virtual meetings and teleconferences became the way people connect.

Together, those technologies introduced new efficiencies. Cameras, headsets, and microphones will become the new standard in business operations. Still, that reality presents a new concern: how secure is your conversation?

WFH and Compromised Security

With the nearly instant shift from Work-From-Office (WFO) staff to WFH staff, IT departments in all businesses did whatever had to be done. Security was secondary to enabling the workforce and business. In many companies, IT departments compromised security to get their employees up and running.

Productivity is priority number one. Security is 1.1. Everything else is secondary.

Those security compromises must be addressed in a way that doesn’t cut off the WFH employee but enables a shift to a combined WFO and WFH model. Further, the security shortcuts taken in the initial rush must now be revisited in earnest.

Smart Security Measures

Rethink how your services, applications, and systems are accessed from insecure networks (e.g., home networks). While it is unlikely that an organization can take responsibility for individual home networks, the need for a strong security posture still stands. A more enduring approach is to design your systems to support access from various networks. This requires strategic thinking and a sound fundamental understanding of business technology.

For instance, how are you going to protect the corporate data that people downloaded to their home computers after they return to the office? The data is still there.

Your company needs to implement the right security measures before making additional staff moves.

Start with the Basics

Updating all operating systems is an effective and simple place to begin. Outdated systems create vulnerabilities. In April 2020, Microsoft released 113 security updates for Windows 10. Most of the underlying flaws also exist in Windows 7, but Windows 7 is no longer supported, so fixes for it will never arrive. Given that 26% of the computers in the world still run Windows 7, most of them in people’s homes according to Netmarketshare, there are now up to 113 new ways that someone could compromise those Windows 7 systems.

Adopt a Password Policy

Traditional passwords are outdated. The length of your password drives more security than its complexity. We recommend a 16-character passphrase as the new minimum. A passphrase like “my dog has fleas” is 16 characters, would currently take over a thousand years to crack, and is easy to remember.

Change this passphrase once every six months to maintain effective security practices. Passphrases and their respective policies must be implemented for every account, even for executive staff.

Adopt Multi-Factor Authentication

Multi-factor authentication (MFA) or two-factor authentication (2FA) leverages a device the user already owns, like a smartphone or hardware token, to supplement something the user knows, like a passphrase. Once a device is established through MFA/2FA, logins from it require only the passphrase. Unrecognized devices trigger an additional authentication step, such as a code sent to a registered device.
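
One common way the registered device proves itself is a time-based one-time password (TOTP, RFC 6238), the algorithm behind most authenticator apps. A minimal sketch of the core computation using only the Python standard library (illustrative, not production code):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, interval=30, digits=6, now=None):
    """RFC 6238 time-based one-time password from a base32 shared secret."""
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if now is None else now) // interval)
    msg = struct.pack(">Q", counter)              # 8-byte big-endian time counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890", time 59s -> "94287082"
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, digits=8, now=59))  # 94287082
```

Because both the server and the phone derive the code from the same shared secret and the current time, no message needs to travel to the device; the code simply expires every 30 seconds.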

Taking Action

Not everyone will be returning to the office. While increased productivity and lower costs will incentivize some organizations, others will still need to support dual office and remote environments. I expect many roles will never come back to the office.

Companies of all sizes will need to prepare for a new office landscape after COVID-19, and implementing new cybersecurity measures that support both WFO and WFH is the place to start. If you have questions about how to manage a secure WFO and WFH environment, reach out to your Aldrich Technology Advisor today.

The Benefits of Multifactor Authentication – A Definitive Guide

Posted on Updated on

Originally seen: on May 1st, 2019

There is no question cybercrime is on the rise; over 1.76 billion user records were leaked in January 2019 alone. Even worse, a recent Gallup study revealed more Americans are now afraid of cybercrime than violent offences.

It’s important to understand that cybercriminals are just as sophisticated and innovative as modern IT security solutions. Often working in teams, hackers have a number of tools and resources at their disposal to access confidential data, some of which help them easily defeat traditional data security controls.


The Purpose of Multifactor Authentication

Multifactor Authentication (or MFA) has become a critical, preventative security measure for businesses and organizations of all sizes, as well as for any individual who uses a smart device in daily life. It offers an added layer of security that complements how passwords are used to protect private data, thereby making it more difficult for potential hackers to obtain personal data or breach company networks.

To explain it simply, an authentication factor is a credential used to verify the identity of a person, entity or system.  When multifactor authentication is in place,  more than one credential is required prior to granting access to private systems or data.

Incidents such as the Facebook security breach in 2018, which exposed the personal information of over 50 million users, have forced companies to add a layer of security to their platforms. Tech giants including Twitter and Google have since adopted MFA to protect their users, and their data.


Commonly Utilized Authentication Factors

When it comes to identifying individual users, three types of authentication factors are traditionally used in combination:

  • Knowledge Factor – This is information that is known only to the user – for example, a series of security questions, PIN codes, or unique usernames and passwords
  • Possession Factor – This refers to something that a user owns – for example, a smart card, a smartphone, or an OTP (one-time passcode)
  • Inherence Factor – This refers to something that is exclusive to an individual user – for example, fingerprints, facial biometrics, voice controlled locks, or eye scans – any biometric element that can prove the user’s identity.

Typically, multifactor authentication combines at least two of the factors mentioned above – and in some cases, all three can be combined for added security.


Advantages of Multifactor Authentication for Businesses


Enhancing Compliance and Mitigating Legal Risks

Apart from data encryption, state and federal governments have also made it mandatory for certain businesses to implement multi-factor authentication into standard operating procedures at the end-user level.

For example, businesses whose employees work with PII (Personally Identifiable Information), Social Security numbers, or financial information are bound by state and federal statutes to integrate multi-factor authentication into their security protocols. MFA is required to meet these mandatory compliance standards.


Making the Login Process Less Daunting

Many non-regulated businesses resist MFA implementations,  fearing  a more complex login process for employees and customers.

However, this extra layer of security enables organizations to redefine and reimagine their login processes on the road to enhanced security.

Setting Security Expectations

Identifying security requirements and expectations at your organization is an important part of any MFA implementation.  For example, your industry, business model,  applicable compliance regulations (if any) and the type of data you capture, utilize, and store to conduct normal business operations are important considerations.  An MFA implementation is an opportunity for every organization to identify and classify common business scenarios  based on risk level and determine when MFA login is required.

Based on a combination of factors, organizations might decide that MFA is only required in certain high-risk scenarios: when accessing certain applications or databases, when employees log in remotely or offsite, or when accessing internal systems for the first time from a new device.

MFA can also be used to limit where users can access your information from. If your employees are out in the field and use their own devices for work, your data is at a higher risk of theft, particularly when employees connect over external Wi-Fi networks that are not secure.

MFA can be used to restrict user access based on their location. This means that if a user tries to access company data from an off-site location, you can easily verify whether or not they are actually an employee by requiring biometric authentication.
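
In practice, a location- and device-aware policy like this reduces to a step-up decision made at each login. The sketch below is hypothetical; the network ranges, function name, and rules are illustrative, not taken from any particular product:

```python
from ipaddress import ip_address, ip_network

# Illustrative corporate ranges; a real deployment would load these from config.
TRUSTED_NETWORKS = [ip_network("10.0.0.0/8"), ip_network("192.168.50.0/24")]

def requires_step_up(source_ip, device_known, high_risk_app):
    """Return True when a login should face an extra MFA challenge."""
    on_trusted_net = any(ip_address(source_ip) in net for net in TRUSTED_NETWORKS)
    # Off-site logins, unrecognized devices, and sensitive apps all trigger MFA.
    return (not on_trusted_net) or (not device_known) or high_risk_app

print(requires_step_up("10.1.2.3", device_known=True, high_risk_app=False))    # False
print(requires_step_up("203.0.113.9", device_known=True, high_risk_app=False)) # True
```

The same check can gate access by geography or application sensitivity simply by adding conditions to the return expression.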

Single-Sign-On Solutions

Organizations that are considering MFA often decide to implement a more sophisticated login: single sign-on (SSO), which is not only secure but also makes signing in to multiple systems easy, using one set of login credentials.

Single sign-on authenticates the user via MFA. Once the user is confirmed as authorized to access the content, they are automatically granted access to the other systems associated with their user profile. This means they have access to multiple applications without needing to log in to each one separately.
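
Under the hood, most SSO schemes boil down to the identity provider issuing a short-lived signed token after the MFA login, which each connected application can verify without re-authenticating the user. A minimal sketch of that idea using HMAC-signed tokens (illustrative only; real deployments use standards such as SAML or OpenID Connect):

```python
import base64
import hashlib
import hmac
import json
import time

SSO_KEY = b"demo-signing-key"  # illustrative; held by the identity provider

def issue_token(user, ttl=3600, now=None):
    """Issued once, after the MFA login succeeds."""
    now = int(time.time() if now is None else now)
    payload = json.dumps({"sub": user, "exp": now + ttl}).encode()
    sig = hmac.new(SSO_KEY, payload, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload).decode() + "." + sig

def verify_token(token, now=None):
    """Each connected app checks the signature and expiry; no re-login needed."""
    payload_b64, sig = token.rsplit(".", 1)
    payload = base64.urlsafe_b64decode(payload_b64)
    expected = hmac.new(SSO_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None                       # tampered or foreign token
    claims = json.loads(payload)
    now = time.time() if now is None else now
    return claims["sub"] if claims["exp"] > now else None

token = issue_token("alice", now=1_000)
print(verify_token(token, now=2_000))    # alice
print(verify_token(token, now=10_000))   # None (expired)
```

The key design point is that applications trust the identity provider’s signature rather than holding passwords themselves, which is what makes eliminating per-application passwords possible.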

Many people now believe that passwords are dead – and for good reason. Aside from the obvious risk factors involved with writing down login credentials or sharing them with unauthorized users, managing different and complex passwords for all your applications and devices means employees need to remember all of them – not exactly an easy job. This is exactly why corporate help desks are bogged down with password reset requests – and why people have a hard time following best practices for frequently resetting their passwords in apps that don’t require it. When an organization selects an SSO solution that features biometric authentication, it’s an opportunity to eliminate employee passwords completely.

For these reasons, an SSO solution is very practical – especially since the most challenging part of successfully implementing MFA is simplifying the login process.


MFA Is a Vital Aspect of Effective Cybersecurity

As cybercrimes continue to increase, organizations are beginning to realize the full scope of the threats they now face. Modern cybercriminals don’t just target big corporations: 31% of businesses with fewer than 250 employees have been targets of cybercrime.

It is also important to understand that cybercriminals aren’t just stealing critical data. Often, they aim to corrupt your data, or destroy it entirely. This is often carried out by installing difficult-to-detect malicious software (malware) that disrupts business and services, and spreads fear and propaganda.

As a result, the market for multifactor authentication is expected to reach $12.51 billion in the next 4 years.


A Great Step Towards Enhancing Mobile Engagement

Like it or not, we are in the middle of a digital transformation that’s not slowing down, and we are in it for the long haul. (If you’re part of the vast majority that can’t go anywhere without their smartphone, we’re willing to bet that you like it.) As part of all this, we have collectively become used to having access to all the resources and information we want and need – on the go, from anywhere in the world, any time we want it. This is the height of digital convenience, and it has brought about many positive changes in the world of business and in society. It also continues to introduce new challenges in terms of data security. MFA offers a streamlined method of ensuring user authentication, allowing you to ensure security with greater certainty without sacrificing ease of access.

Microsoft: Russians targeted conservative think tanks, U.S. Senate

Posted on

Originally Seen: on August 21, 2018 by Sean Lyngaas

The Russian intelligence office that breached the Democratic National Committee in 2016 has spoofed websites associated with the U.S. Senate and conservative think tanks in a further attempt to sow discord, according to new research from Microsoft.

The tech giant last week executed a court order and shut down six internet domains set up by the Kremlin-linked hacking group known as Fancy Bear or APT 28, Microsoft President Brad Smith said.

“We have now used this approach 12 times in two years to shut down 84 fake websites associated with this group,” Smith wrote in a blog post. “We’re concerned that these and other attempts pose security threats to a broadening array of groups connected with both American political parties in the run-up to the 2018 elections.”

The domains were constructed to look like they belonged to the Hudson Institute and International Republican Institute, but were in fact phishing websites meant to steal credentials.

The two think tanks are conservative, yet count many critics of U.S. President Donald Trump and Russian President Vladimir Putin among their members. The International Republican Institute lists Sen. John McCain, R-Ariz., and former Republican presidential candidate Mitt Romney as board members. Both think tanks also have programs that promote democracy and good governance worldwide.

There is no evidence that the domains had been used to carry out successful cyberattacks, according to Microsoft. The company says it continues to work with both think tanks and the U.S. Senate to guard against any further attacks.

The attacks come as more and more instances of cyberattacks directed at the 2018 midterm elections come to light. Last month, Russian intelligence targeted Sen. Claire McCaskill, a critic of Moscow and a red-state Democrat who faces a tough reelection bid in Missouri. Additionally, a number of election websites have been hit with DDoS attempts during their primary elections.

“We are concerned by the continued activity targeting these and other sites and directed toward elected officials, politicians, political groups and think tanks across the political spectrum in the United States,” Microsoft’s blog post read. “Taken together, this pattern mirrors the type of activity we saw prior to the 2016 election in the United States and the 2017 election in France.”

Smith also announced that Microsoft was providing cybersecurity protection for candidates, campaigns and political institutions that use Office 365 at no additional cost.

Greg Otto contributed to this story. 

Click on this iOS phishing scam and you’ll be connected to “Apple Care”

Posted on Updated on

Scam website launched phone call, connected victims to “Lance Roger at Apple Care.”

Originally seen on ArsTechnica

India-based tech support scams have taken a new turn, using phishing emails that target Apple users and push them to a fake Apple website. This phishing attack also comes with a twist: it pops up a system dialog box to start a phone call. The intricacy of the phish and the formatting of the webpage could convince some users that their phone has been “locked for illegal activity” by Apple, luring them into clicking to complete the call.

Scammers are following the money. As more people use mobile devices as their primary or sole way of connecting to the Internet, phishing attacks and other scams have increasingly targeted mobile users. And since so much of people’s lives are tied to mobile devices, they’re particularly attractive targets for scammers and fraudsters.

“People are just more distracted when they’re using their mobile device and trust it more,” said Jeremy Richards, a threat intelligence researcher at the mobile security service provider Lookout. As a result, he said, phishing attacks against mobile devices have a higher likelihood of succeeding.

This particular phish, targeted at email addresses associated with Apple’s iCloud service, appears to be linked to efforts to fool iPhone users into allowing attackers to enroll them into rogue mobile device management services that allow bad actors to push compromised applications to the victim’s phones as part of a fraudulent Apple “security service.”

I attempted to bluff my way through a call to the “support” number to collect intelligence on the scam. The person answering the call, who identified himself as “Lance Roger from Apple Care,” became suspicious of me and hung up before I could get too far into the script.

Running down the scam

In a review of spam messages I’ve received this weekend, I found an email with the subject line, “[username], Critical alert for your account ID 7458.” Formatted to look like an official cloud account warning (but easily discernible, by me at least, as a phish), the email warned, “Sign-in attempt was blocked for your account [email address]. Someone just used your password to try to sign in to your profile.” A “Check Activity” button below was linked to a webpage on a compromised site for a men’s salon in southern India.

That page, using obfuscated JavaScript, forwards the victim to another website, which in turn forwards to a fake Apple Support page. JavaScript on that page then uses a programmed “click” event to activate a link that uses the tel:// uniform resource identifier (URI) handler. On an iPhone, this initiates a dialog box to start a phone call; on iPads and other Apple devices, it attempts to launch a FaceTime session.

Meanwhile, an animated dialog box on the screen urges the target to make the call because their phone has been “locked due to illegal activity.” A script on the site reads the user-agent string sent by the browser to determine what type of device is visiting the page:

window.defaultText='Your |%model%| has been locked due to detected illegal activity! Immediately call Apple Support to unlock it!';

While the site is still active, it is now marked as deceptive by Google and Apple. I passed technical details of the phishing site to an Apple security team member.

The scam is obviously targeted at the same sort of audience as the Windows tech support scams we’ve reported on. But it doesn’t take too much imagination to see how schemes like this could be used to target people at a specific company, customers of a particular bank, or users of a certain cloud platform to perform much more tailored social engineering attacks.

HP keylogger: How did it get there and how can it be removed?

Posted on Updated on

Originally seen: October 2017 TechTarget.

A keylogging flaw found its way into dozens of Hewlett Packard laptops. Nick Lewis explains how the HP keylogger works and what can be done about it.

More than two dozen models of Hewlett Packard laptops were found to contain a keylogger that recorded keystrokes into a log file. HP released patches to remove the keylogger and the log files. How did the HP keylogger vulnerability get embedded in the laptops? And is there anything organizations can do to test new endpoint devices?

When it comes to security, having high expectations of security vendors and large vendors with deep pockets is reasonable, given that customers usually pay a premium believing the vendors will devote significant resources to securing their products. Unfortunately, as with most security teams, vendors often don’t have enough resources or organizational fortitude to ensure security is incorporated into all of their software development.

But even the most secure software development can enable security issues to slip through the cracks. When you add in an outsourced hardware or software development team, it’s even easier for something to go unnoticed.

So while vendors might talk a good talk when it comes to security, monitoring them to ensure they uphold their end of your agreement is absolutely necessary.

One case where a vulnerability apparently escaped notice was uncovered when researchers at Modzero AG, an information security company based in Winterthur, Switzerland, found that a bug had been introduced into HP laptops by a third-party driver installed by default.

The vulnerability was discovered in the Conexant HD Audio Driver package, where the driver monitors for certain keystrokes used to mute or unmute audio. The keylogging functionality, complete with the ability to write all keystrokes to a log file, was probably introduced to help the developers debug the driver.

We can hope that the HP keylogger vulnerability was left in inadvertently when the drivers were released to customers. Modzero found metadata indicating the HP keylogger capability was present in HP computers since December 2015, if not earlier.

It’s difficult to know whether static or dynamic code analysis tools could have detected this vulnerability. However, given the resources available to HP in 2015, including a line of business related to application and code security, as well as the expectations of their customers, it might be reasonable to assume HP could have incorporated these tools into their software development practices. However, the transfer of all of HP’s information security businesses to a new entity, Hewlett Packard Enterprise, began in November 2015, and was completed in September 2017, when Micro Focus merged with HPE.

It’s possible that Modzero found the HP keylogger vulnerability while evaluating a potential new endpoint for an enterprise customer. They could have been monitoring for open files, or looking for which processes had the files open to determine what the process was doing. They could have been profiling the individual processes running by default on the system to see which binaries to investigate for vulnerabilities. They could even have been monitoring to see if any processes were monitoring keystrokes.

Enterprises can take these steps on their own or rely on third parties to monitor their vendors. Many enterprises will install their own image on an endpoint before deploying it on their network — the known good images used for developing specific images for target hardware could have their unique aspects analyzed with a dynamic or runtime application security tool to determine if any common vulnerabilities are present.


Posted on Updated on

Originally Seen: March 12, 2018 on Wired.

Distributed denial-of-service attacks, in which hackers use a targeted hose of junk traffic to overwhelm a service or take a server offline, have been a digital menace for decades. But in just the last 18 months, the public picture of DDoS defense has evolved rapidly. In fall 2016, a rash of then-unprecedented attacks caused internet outages and other service disruptions at a series of internet infrastructure and telecom companies around the world. Those attacks walloped their victims with floods of malicious data measured at up to 1.2 Tbps. And they gave the impression that massive, “volumetric” DDoS attacks can be nearly impossible to defend against.

The past couple of weeks have presented a very different view of the situation, though. On March 1, Akamai defended developer platform GitHub against a 1.3 Tbps attack. And early last week, a DDoS campaign against an unidentified service in the United States topped out at a staggering 1.7 Tbps, according to the network security firm Arbor Networks. That means that, for the first time, the web sits squarely in the “terabit attack era,” as Arbor Networks put it. And yet, the internet hasn’t collapsed.

One might even get the impression from recent high-profile successes that DDoS is a solved problem. Unfortunately, network defenders and internet infrastructure experts emphasize that despite the positive outcomes, DDoS continues to pose a serious threat. And sheer volume isn’t the only danger. Ultimately, anything that causes disruption and affects service availability by diverting a digital system’s resources or overloading its capacity can be seen as a DDoS attack. Under that conceptual umbrella, attackers can generate a diverse array of lethal campaigns.

“DDoS will never be over as a threat, sadly,” says Roland Dobbins, a principal engineer at Arbor Networks. “We see thousands of DDoS attacks per day—millions per year. There are major concerns.”

Getting Clever

One example of a creative interpretation of a DDoS is the attack Netflix researchers tried out against the streaming service itself in 2016. It works by targeting Netflix’s application programming interface with carefully tailored requests. These queries are built to start a cascade within the middle and backend application layers the streaming service is built on—demanding more and more system resources as they echo through the infrastructure. That type of DDoS only requires attackers to send out a small amount of malicious data, so mounting the offensive would be cheap and efficient, but clever execution could cause internal disruptions or a total meltdown.

“What creates the nightmare situations are the smaller attacks that overwork applications, firewalls, and load balancers,” says Barrett Lyon, head of research and development at Neustar Security Solutions. “The big attacks are sensational, but it’s the well-crafted connection floods that have the most success.”

These types of attacks target specific protocols or defenses as a way of efficiently undermining broader services. Overwhelming the server that manages firewall connections, for example, can allow attackers to access a private network. Similarly, deluging a system’s load balancers—devices that manage a network’s computing resources to improve speed and efficiency—can cause backups and overloads. These types of attacks are “as common as breathing,” as Dobbins puts it, because they take advantage of small disruptions that can have a big impact on an organization’s defenses.

Similarly, an attacker looking to disrupt connectivity on the internet in general can target the exposed protocols that coordinate and manage data flow around the web, rather than trying to take on more robust components.

That’s what happened last fall to Dyn, an internet infrastructure company that offers Domain Name System services (essentially the address book routing structure of the internet). By DDoSing Dyn and destabilizing the company’s DNS servers, attackers caused outages by disrupting the mechanism browsers use to look up websites. “The most frequently attacked targets for denial of service are web servers and DNS servers,” says Dan Massey, chief scientist at the DNS security firm Secure64, who formerly worked on DDoS defense research at the Department of Homeland Security. “But there are also so many variations on and so many components of denial of service attacks. There’s no such thing as one-size-fits-all defense.”

Memcached and Beyond

The type of DDoS attack hackers have been using recently to mount enormous attacks is somewhat similar. Known as memcached DDoS, these attacks take advantage of unprotected network management servers that aren’t meant to be exposed on the internet. And they capitalize on the fact that they can send a tiny customized packet to a memcached server, and elicit a much larger response in return. So a hacker can query thousands of vulnerable memcached servers multiple times per second each, and direct the much larger responses toward a target.
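
The economics of amplification come down to simple arithmetic: the attacker’s outbound bandwidth is multiplied by the amplification factor of the reflecting servers. The numbers below are purely illustrative (reported memcached amplification factors ranged into the tens of thousands):

```python
def reflected_bps(request_bytes, amplification, requests_per_sec, servers):
    """Bits per second arriving at the victim from reflected responses."""
    response_bytes_per_sec = request_bytes * amplification * requests_per_sec * servers
    return response_bytes_per_sec * 8

# 15-byte queries, 10,000x amplification, 100 req/s to each of 1,000 servers
bps = reflected_bps(15, 10_000, 100, 1_000)
print(f"{bps / 1e12:.2f} Tbps")  # 0.12 Tbps from ~12 Mbps of attacker traffic
```

The asymmetry is the whole point: the attacker spends roughly residential-broadband bandwidth while the victim absorbs a flood four orders of magnitude larger.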

This approach is easier and cheaper for attackers than generating the traffic needed for large-scale volumetric attacks using a botnet—the platforms typically used to power DDoS assaults. The memorable 2016 attacks were famously driven by the so-called “Mirai” botnet. Mirai infected 600,000 unassuming Internet of Things products, like webcams and routers, with malware that hackers could use to control the devices and coordinate them to produce massive attacks. And though attackers continued to refine and advance the malware—and still use Mirai-variant botnets in attacks to this day—it was difficult to maintain the power of the original attacks as more hackers jockeyed for control of the infected device population, and it splintered into numerous smaller botnets.


Building and maintaining botnets, while effective, requires resources and effort, whereas exploiting memcached servers is easy and almost free. But the tradeoff for attackers is that memcached DDoS is more straightforward to defend against if security and infrastructure firms have enough bandwidth. So far, the high-profile memcached targets have all been defended by services with adequate resources. In the wake of the 2016 attacks, foreseeing that volumetric assaults would likely continue to grow, defenders seriously expanded their available capacity.

As an added twist, DDoS attacks have also increasingly incorporated ransom requests as part of hackers’ strategies. This has especially been the case with memcached DDoS. “It’s an attack of opportunity,” says Chad Seaman, a senior engineer on the security intelligence response team at Akamai. “Why not try and extort and maybe trick someone into paying it?”

The DDoS defense and internet infrastructure industries have made significant progress on DDoS mitigation, partly through increased collaboration and information-sharing. But with so much going on, the crucial point is that DDoS defense is still an active challenge for defenders every day.

“When sites continue to work, it doesn’t mean it’s easy or the problem is gone,” Neustar’s Lyon says. “It’s been a long week.”