In a recent letter to insurers, the New York State Department of Financial Services (“NYDFS”) acknowledged the key role cyber insurance plays in managing and reducing cyber risk – while also warning insurers that they could be writing policies that have the “perverse effect of increasing cyber risk.” If a cyber insurance policy does not incentivize the insured to maintain a robust cyber security program, the insurer can end up bearing excessive risk when the customer leans on the policy as their business continuity plan.
You may be wondering “What does this have to do with my business? I don’t do any business in NY state.” However, your insurer might be subject to the NYDFS cybersecurity regulation (23 NYCRR 500) and, if so, likely received this letter.
According to NYDFS, every cyber insurer should have a formal strategy that incentivizes their insureds – through more appropriately priced plans – to “create a financial incentive to fill [cybersecurity] gaps to reduce premiums.” Below is our take on five of the key practices outlined in the NYDFS letter that have potential implications for insureds.
Manage and Eliminate Exposure to Silent Cyber Insurance Risk. Up to now, many organizations have leveraged clauses in standard policies to cover ransomware attacks, such as those covering general liability, theft, malpractice and errors. NYDFS advises that “insurers should eliminate silent risk by making clear in any policy that could be subject to a cyber claim whether that policy provides or excludes coverage for cyber-related losses.” When you next renew your policy, read the fine print carefully to determine if there are any exclusions for cyber-related losses – even if you have a standalone cyber insurance policy. An insurer that was left ‘holding the bag’ for covering a ransomware attack under a policy that wasn’t priced to cover cyber losses is incentivized to update that policy language at the soonest opportunity.
Evaluate Systemic Risk. Here, insurers are being advised to “stress test” their coverage to ensure they would remain solvent while covering potentially “catastrophic” cyber events impacting multiple insureds. If you are a cloud or managed services provider and/or are part of other organizations’ supply chains, you should expect to receive more scrutiny from your insurer on the strength of your cyber security program.
Rigorously Measure Insured Risk. No surprises here, unless you haven’t been filling out detailed questionnaires about your cyber security program. Expect more scrutiny of your program, and possibly the involvement of auditors to validate your claims. Check your insurance policy to see if investing in a certification program – such as ISO 27001 or HITRUST – might improve your policy premium.
Educate Insureds and Insurance Providers. This practice states that “insurers should also incentivize the adoption of better cybersecurity measures by pricing policies based on the effectiveness of each insured’s cybersecurity program.” Take advantage of any educational opportunities your provider offers on cybersecurity best practices and improvements. They might be trying to tell you how you can lower risk – and your rates.
Require Notice to Law Enforcement. While this is a best practice, NYDFS is recommending this be more formally required in the policy language. Involving law enforcement is important when responding to cyber incidents, especially when it comes to investigating the incident and attempting to recover funds. Make sure you involve legal counsel and have a plan for engaging law enforcement in the event of a breach.
Even if your insurer hasn’t received this guidance, they are certainly aware that cyber risk, and the cost of underwriting cyber insurance, continue to increase. With the cyber insurance market estimated to exceed $20 billion by 2025, and the risk that intermediaries – including insurers – can be liable for ransom payments made to entities sanctioned by the Office of Foreign Assets Control, business leaders should expect that their insurers will be more closely scrutinizing their cyber security plans and controls. Rebuilding encrypted systems and restoring from backup, as opposed to paying ransoms, will need to be the first plan of action.
If your organization is still struggling with the decision whether to invest more in IT security and architecture improvements or continue to rely on insurance as your cyber security plan, the guidance in the NYDFS Cyber Insurance Risk Framework merits a closer look.
While cyber insurance can be essential to helping your organization recover from a data breach, it should not take the place of a strong cyber security program. At minimum your cyber security program should include a Cyber Security Plan, Business Continuity and Disaster Recovery Plan and an Incident Response Plan. These plans should be tested, reviewed and updated at least annually, preferably in conjunction with a penetration test and vulnerability assessment from a qualified third party.
The massive hack into government systems through a software contractor would have remained unknown by the public if not for one company’s decision to be transparent about a breach of its systems, Microsoft President Brad Smith told lawmakers at a hearing Tuesday.
Smith’s testimony highlights how cybersecurity incidents can potentially go undisclosed.
He planned to tell lawmakers that private sector companies should be required to be transparent about significant breaches of their systems.
“The fact that we are here today, discussing this attack, dissecting what went wrong, and identifying ways to mitigate future risk, is occurring only because my fellow witness, Kevin Mandia, and his colleagues at FireEye, chose to be open and transparent about what they found in their own systems, and to invite us at Microsoft to work with them to investigate the attack,” Smith told the Senate Select Committee on Intelligence, according to his prepared remarks.
“Without this transparency, we would likely still be unaware of this campaign. In some respect, this is one of the most powerful lessons for all of us. Without this type of transparency, we will fall short in strengthening cybersecurity.”
Smith’s testimony highlights how many cybersecurity incidents can go undisclosed. Smith told lawmakers that private sector companies should be required to be transparent about significant breaches of their systems. He compared the “patchwork” of disclosure requirements in the U.S. to more consistent obligations in places like the European Union.
FireEye disclosed in a regulatory filing in December that it had been hacked by what it believed to be a state-sponsored actor who mainly sought information related to its government customers. The company said the attack was unusually advanced, employing “a novel combination of techniques not witnessed by us or our partners in the past.”
Soon after, Reuters reported that hackers possibly linked to Russia accessed email systems at the U.S. Commerce and Treasury departments through SolarWinds software updates. The Defense Department, State Department and Department of Homeland Security were also affected, The New York Times later reported. Reuters reported, citing sources, that the SolarWinds attack was related to the FireEye incident.
A few days later, Reuters reported that Microsoft was also hacked. U.S. agencies later shared that Russian actors were likely the source of the attack. Smith said in his written testimony that Microsoft does not dispute that assessment, though he noted that “Microsoft is not able to make a definitive attribution based on the data we have seen.”
Smith told Congress that Microsoft notified 60 customers, mainly in the U.S., that they were compromised in connection with the attack. But he warned lawmakers that there are certainly more victims that have yet to be identified. A White House cybersecurity advisor estimated last week that nine government agencies and roughly 100 private companies were affected by the attack. Smith told Congress that Microsoft also identified government and private sector victims outside the U.S.
Smith proposed that in addition to requiring more disclosures from private companies, government should provide “faster and more comprehensive sharing” with the security community.
“A private sector disclosure obligation will foster greater visibility, which can in turn strengthen a national coordination strategy with the private sector which can increase responsiveness and agility,” Smith said in his written remarks. “The government is in a unique position to facilitate a more comprehensive view and appropriate exchange of indicators of compromise and material facts about an incident.”
But Mandia, FireEye’s CEO, told CNBC’s Eamon Javers in an interview ahead of the hearing Tuesday that disclosure is “a damn complex issue.”
“The reason it’s a complex issue is because of all the liabilities companies face when they go public about a disclosure,” Mandia said. “They have shareholder lawsuits, they have lots of considerations of business impact. You also don’t want to unnecessarily create a lot of fear, uncertainty and doubt.”
Intelligence Committee Chairman Mark Warner, D-Va., said in his opening remarks Tuesday that it may be worth considering greater disclosure requirements, even if it means creating liability protection for companies that follow those disclosure obligations.
Photo caption: In an alert Wednesday, Oct. 28, 2020, the FBI and other federal agencies warned that cybercriminals are unleashing a wave of data-scrambling extortion attempts against the U.S. healthcare system that could lock up hospital information systems just as nationwide cases of COVID-19 are spiking. (AP Photo/Jose Luis Magana, File)
BOSTON (AP) — Federal agencies warned that cybercriminals could unleash a wave of data-scrambling extortion attempts against the U.S. health care system, an effort that, if successful, could paralyze hospital information systems just as nationwide cases of COVID-19 are spiking.
In a joint alert Wednesday, the FBI and two federal agencies said they had credible information of “an increased and imminent cybercrime threat” to U.S. hospitals and health care providers. The alert said malicious groups are targeting the sector with attacks aiming for “data theft and disruption of healthcare services.”
The impact of the expected attack wave, however, is difficult to assess.
It involves a particular strain of ransomware, which scrambles a target’s data into gibberish until they pay up. Previous such attacks on health care facilities have impeded care and, in one case in Germany, led to the death of a patient. But such consequences are still rare.
The federal warning itself could help stave off the worst consequences, either by leading hospitals to take additional precautions or by expanding efforts to knock down the systems cybercriminals use to launch such attacks.
The offensive coincides with the U.S. presidential election, although there is no immediate indication the cybercriminals involved are motivated by anything but profit. The federal alert was co-authored by the Department of Homeland Security and the Department of Health and Human Services.
Independent security experts say the ransomware, called Ryuk, has already hit at least five U.S. hospitals this week and could potentially affect hundreds more. Four health care institutions have been publicly reported hit by ransomware so far this week: three belonging to the St. Lawrence Health System in upstate New York, plus the Sky Lakes Medical Center in Klamath Falls, Oregon.
Sky Lakes said in an online statement that it had no evidence patient information was compromised and that emergency and urgent care “remain available.” The St. Lawrence system said Thursday that no patient or employee data appeared to have been accessed or compromised. Matthew Denner, the emergency services director for St. Lawrence County, told the Adirondack Daily Enterprise that the hospital owner instructed the county to divert ambulances from two of the affected hospitals for a few hours Tuesday, when the attack occurred. Neither Denner nor the company replied to requests for comment on that report.
Alex Holden, CEO of Hold Security, which has been closely tracking Ryuk for more than a year, said the attack wave could be unprecedented in magnitude for the U.S. In a statement, Charles Carmakal, chief technical officer of the security firm Mandiant, called the cyberthreat the “most significant” the country has ever seen.
The U.S. has seen a plague of ransomware over the past 18 months or so, with major cities from Baltimore to Atlanta hit and local governments and schools walloped especially hard.
In September, a ransomware attack hobbled all 250 U.S. facilities of the hospital chain Universal Health Services, forcing doctors and nurses to rely on paper and pencil for record-keeping and slowing lab work. Employees described chaotic conditions impeding patient care, including mounting emergency room waits and the failure of wireless vital-signs monitoring equipment.
Holden said the Russian-speaking group behind recent attacks was demanding ransoms well above $10 million per target and that criminals involved on the dark web were discussing plans to try to infect more than 400 hospitals, clinics and other medical facilities.
While no one has proven suspected ties between the Russian government and gangs that use the Trickbot platform that distributes Ryuk and other malware, Holden said he has “no doubt that the Russian government is aware of this operation.” Microsoft has been engaged since early October in trying to knock Trickbot offline.
Dmitri Alperovitch, co-founder and former chief technical officer of the cybersecurity firm CrowdStrike, said there are “certainly a lot of connections between Russian cyber criminals and the state,” with Kremlin-employed hackers sometimes moonlighting as cyber criminals.
Increasingly, ransomware criminals are stealing data from their targets before encrypting networks, using it for extortion. They often sow the malware weeks before activating it, waiting for moments when they believe they can extract the highest payments, said Brett Callow, an analyst at the cybersecurity firm Emsisoft.
A total of 59 U.S. health care providers or systems have been impacted by ransomware in 2020, disrupting patient care at up to 510 facilities, Callow said.
Hospitals and clinics have been rapidly expanding data collection and adding internet-enabled medical devices, many of which are poorly secured. Hospital administrators, meanwhile, have been slow to update software, encrypt data, train staff in cyber hygiene and recruit security specialists, leaving them vulnerable to cyber-attacks.
And as hospitals respond to the coronavirus crisis, privacy and security protocols fall by the wayside, leaving patients open to identity theft, said Larry Ponemon, a data security expert. “The bad guys smell the problem.”
Associated Press writers Michael Hill in Albany, N.Y., and Marion Renault in New York City contributed to this report.
Over a quarter of organisations that fall victim to ransomware attacks opt to pay the ransom, feeling they have no option but to give in to the demands of cyber criminals – and the average ransom amount is now over $1 million.
A CrowdStrike study based on responses from thousands of information security professionals and IT decision makers across the globe found that 27 percent said their organisation had paid the ransom after their network got encrypted with ransomware.
However, paying the bitcoin ransom not only encourages ransomware gangs to continue their campaigns by proving them profitable – it also carries no guarantee that the hackers will actually restore the network in full.
But infecting networks with ransomware is proving to be highly lucrative for cyber criminals, with figures in the report suggesting the average ransom amount paid per attack is $1.1 million.
In addition to the cost of paying the ransom, it’s also likely that an organisation which comes under a ransomware attack will lose revenue because of lost operations during downtime, making falling victim to these campaigns a costly endeavour.
However, falling foul of a ransomware attack does serve as a wake-up call for the majority of victims: over three-quarters of respondents to the survey say that in the wake of a successful ransomware attack, their organisation upgraded its security software and infrastructure in order to reduce the risk of future attacks, while two-thirds made changes to their security staff with the same purpose in mind.
It’s unclear why almost a quarter of those who fall victim to ransomware attacks don’t plan to make any changes to their cybersecurity plans, but by leaving things unchanged, they’re likely putting themselves at risk from falling victim to future attacks.
“In a remote working situation the attack surface has increased many times and security cannot be [a] secondary business priority,” said Zeki Turedi, Chief Technology Officer for EMEA at CrowdStrike.
To avoid falling victim to ransomware attacks, it’s recommended that organisations ensure that systems are updated with the latest security patches, something which can prevent cyber criminals taking advantage of known vulnerabilities to deliver ransomware.
It’s also recommended that two-factor authentication is deployed throughout the organisation, so that in the event of criminal hackers breaching the perimeter, it’s harder for them to move laterally around the network and compromise more of it with ransomware or any other form of malware.
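Two-factor authentication in practice usually means time-based one-time passwords (TOTP, defined in RFC 6238), the mechanism behind most authenticator apps. As a rough illustration of why a stolen password alone is no longer enough, here is a minimal TOTP generator in plain Python; the secret used below is the published RFC 6238 test value, not a real credential.

```python
import base64
import hmac
import struct

def totp(secret_b32: str, at_time: int, digits: int = 6, step: int = 30) -> str:
    """Generate an RFC 6238 time-based one-time password (HMAC-SHA1)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = at_time // step                      # 30-second time window
    msg = struct.pack(">Q", counter)               # 8-byte big-endian counter
    digest = hmac.new(key, msg, "sha1").digest()
    offset = digest[-1] & 0x0F                     # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test secret: ASCII "12345678901234567890" encoded in base32.
secret = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"
print(totp(secret, at_time=59))  # → 287082 (matches the RFC 6238 test vector)
```

Because both the server and the user's device derive the same short-lived code from a shared secret and the clock, an attacker who phishes the password still cannot log in without the second factor.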
Like any other IT environment, there are potential cyber-risks to the International Space Station (ISS), though the station is quite literally like no environment on Earth.
In a session on August 9 at the Aerospace Village within the DEFCON virtual security conference, former NASA astronaut Pamela Melroy outlined the cybersecurity lessons learned from human spaceflight and what still remains a risk. Melroy flew on two space shuttle missions during her tenure at NASA and visited ISS. Hurtling high above the Earth, ISS is loaded full of computing systems designed to control the station, conduct experiments and communicate with the ground.
“Space is incredibly important in our daily lives,” Melroy said.
She noted that GPS, weather tracking and communications are reliant on space-based technology. In Melroy’s view, the space industry has had somewhat of a complacent attitude about satellite security, because physical access was basically impossible once the satellite was launched.
“Now we know that our key infrastructure is at risk on the ground as it is in space, from both physical and cyber-threats,” Melroy stated.
The Real Threats to Space Today
Attacks against space-based infrastructure including satellites are not theoretical either.
Melroy noted that the simplest type of attack is a Denial of Service (DoS) which is essentially a signal jamming activity. She added that it already happens now, sometimes inadvertently, that a space-based signal is blocked. There is also a more limited risk that a data transmission could be intercepted and manipulated by an attacker.
What isn’t particularly likely, though, is some kind of attack where an adversary attempts to direct one satellite to hit another. That said, Melroy said that there could be a risk from misconfiguring a control system that would trigger a satellite to overheat or shut down.
How the ISS Secures its Network
During her presentation, Melroy outlined the many different steps that NASA and its international partners have taken to help secure the IT systems on-board ISS.
The entire network by which NASA controllers at Mission Control communicate with ISS is a private network, operated by NASA. Melroy emphasized that the control does not go over the open internet at any point.
There is also a very rigorous verification system for any commands and data communications that are sent from the ground to ISS. Melroy noted that the primary idea behind the verification is not necessarily about malicious hacking, but rather about limiting the risk of a ground controller sending a bad command to space.
“There’s a very rigorous certification process required for controllers in the International Space Station Mission Control Center (MCC) to allow them to send commands to the space station,” she explained. “In addition there are screening protocols both before a message ever leaves MCC going up to the ISS and once it’s on board ISS, to check and make sure that the command will not inadvertently do some damage to the station.”
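NASA's actual screening rules are not public, but the general pattern Melroy describes (a short allowlist of certified commands plus an integrity check before anything is forwarded) can be sketched in a few lines of Python. Everything here is hypothetical: the command names, parameter bounds and shared key are invented for illustration.

```python
import hashlib
import hmac

# Hypothetical allowlist of certified command codes and their parameter bounds;
# the real ISS screening rules are not public.
CERTIFIED_COMMANDS = {
    "SET_SOLAR_ARRAY_ANGLE": {"min": 0, "max": 360},
    "START_EXPERIMENT": {"min": 1, "max": 64},
}

SHARED_KEY = b"ground-segment-demo-key"  # illustrative only

def sign(command: str, value: int) -> str:
    """Sending side: attach an integrity tag so the uplink can be verified."""
    payload = f"{command}:{value}".encode()
    return hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()

def screen(command: str, value: int, tag: str) -> bool:
    """Receiving side: reject anything unknown, out of bounds, or tampered with."""
    rule = CERTIFIED_COMMANDS.get(command)
    if rule is None or not (rule["min"] <= value <= rule["max"]):
        return False
    expected = hmac.new(SHARED_KEY, f"{command}:{value}".encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

tag = sign("SET_SOLAR_ARRAY_ANGLE", 90)
print(screen("SET_SOLAR_ARRAY_ANGLE", 90, tag))   # → True: certified and intact
print(screen("SET_SOLAR_ARRAY_ANGLE", 400, tag))  # → False: out of bounds
```

The point of the double check mirrors Melroy's description: an unknown command, a value outside certified limits, or a message altered in transit is rejected before it can do any harm, whether the cause is malice or a ground controller's mistake.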
Using Twitter in Space
ISS also makes use of a highly distributed architecture such that different sets of systems and networks are isolated from one another.
For station operations, Melroy said that astronauts make use of technology known as Portable Computer Systems (PCS) which are essentially remote terminals to send commands to the station’s primary computing units.
There is also a local area network on the station with support computers used for limited internet access including email and social media like Twitter. While the local ISS network has internet access, it is not directly connected to the public internet.
Melroy explained that there is a proxy computer inside the firewall at the Johnson Space Center, in Houston, Texas, that is connected with ISS. As such, the space station support computers talk to the proxy computer, which then goes out onto the public internet.
“Now of course, just like any computer, it’s still subject potentially to malware,” Melroy said. “However, the most important thing is that the station support computers in no way shape or form are networked to the actual commanding of the station, they’re completely separate systems and they don’t talk to each other.”
Areas of Concern for Spaceflight Security
While ISS has multiple layers of security, Melroy commented that there are still some areas of concern for spaceflight and space cybersecurity.
For satellites, she noted that the uplink and downlink to most satellites is encrypted, though the data on-board the satellite often is not. Additionally, she expressed concern about ground-based control systems for satellites. Melroy explained that satellite ground systems have the same cybersecurity risks as any enterprise IT system.
“The most serious problem I think we have in space is complacency, many people in space think that their systems are not vulnerable to cyber-attacks,” Melroy said. “We are going to have to figure out how to insert cybersecurity and an awareness of that into the values and the culture of aerospace, all the way from the beginning in design and through to operations.”
A cybersecurity firm has uncovered serious privacy concerns in Amazon’s popular “Alexa” device, leading to questions about its safety.
Check Point, the California- and Israel-based technology company, published a report Thursday detailing “vulnerabilities found on Amazon’s Alexa,” including a hacker’s access to the user’s voice history and personal information, as well as the ability to silently install or remove skills on the user’s account.
“In effect, these exploits could have allowed an attacker to remove/install skills on the targeted victim’s Alexa account, access their voice history and acquire personal information through skill interaction when the user invokes the installed skill,” according to the report. “Successful exploitation would have required just one click on an Amazon link that has been specially crafted by the attacker.”
Amazon’s Alexa line is powered by artificial intelligence (AI) technology, and the conglomerate had sold more than 200 million Alexa devices by the end of 2019, CNET reported. Alexa essentially functions as a virtual assistant to its user, able to take voice commands, play music, set alarms, and offer weather or news reports.
Developers are continually working on new programs to make the devices even more user-friendly. Just a few weeks ago, for instance, Amazon announced Alexa Conversations was moving into its beta phase, and would now be able to provide an AI-driven element to voice interactions, making conversations flow more naturally.
In its report, Check Point described how an attacker could hack into a user’s Amazon account to compromise their Alexa device, including a breakdown of the code needed to carry out such an action. In one example of how an attack could occur, the user would click on a malicious link provided by the hacker, allowing them to inject their code into the user’s account.
Check Point also detailed how an attacker could get the device’s entire voice history, which could expose banking information, home addresses or phone numbers, as all interactions with the device are recorded.
Virtual assistants provide relatively easy targets for attackers wishing to steal sensitive information or disrupt a user’s smart home device, according to the report. Check Point’s research found a weak spot in Amazon’s security technology, the report stated.
“What we do know is that Alexa had a significant period of time where it was vulnerable to hackers,” Check Point spokesman Ekram Ahmed told Fox News. “Up until Amazon patched, it’s possible that personal and sensitive information was extracted by hackers via Alexa. Check Point does not know the answer to whether that occurred yet or not, or to the degree to which that happened.”
The technology company reported its findings to Amazon in June 2020, and Amazon “subsequently fixed the issue,” according to Check Point.
In an emailed statement to Newsweek, an Amazon spokesperson wrote that security of its devices is a top priority for the company.
“We appreciate the work of independent researchers like Check Point who bring potential issues to us. We fixed this issue soon after it was brought to our attention, and we continue to further strengthen our systems,” according to the statement. “We are not aware of any cases of this vulnerability being used against our customers or of any customer information being exposed.”
To ensure Alexa devices are secure, Check Point recommends that users avoid unfamiliar apps, think twice before sharing information with a smart speaker and conduct research on any downloaded apps, a company spokesperson wrote in an email to Newsweek.
Update (08/13/20, 11:52 a.m.): This article has been updated to include responses from Amazon and Check Point.
As seen on: fcw.com by Derek B. Johnson on 2/10/2020
The Trump administration’s proposed budget for fiscal year 2021 would spend $18.8 billion on cybersecurity programs across the federal government, with approximately $9 billion dedicated to civilian agencies for network security, protecting critical infrastructure, boosting the cybersecurity workforce and other priorities.
The overall cybersecurity funding at the Department of Homeland Security is listed at $2.6 billion. That includes $1.1 billion for DHS and its component, the Cybersecurity and Infrastructure Security Agency, to defend government networks and critical infrastructure from cyber threats, including for tools like EINSTEIN and Continuous Diagnostics and Mitigation. According to the Office of Management and Budget, the funding would increase the number of DHS-led network risk assessments from 1,800 to 6,500 and allow for more state and local governments to utilize the department’s services.
The administration has also put a heavy emphasis on bolstering the government’s cybersecurity workforce, releasing an executive order and strategic plan last year. The budget includes funding for DHS’ Cyber Talent Management System, a personnel system designed to bring hundreds of new cybersecurity professionals into the federal workforce under special hiring rules, as well as a CISA-managed cybersecurity workforce initiative and an interagency rotational program that temporarily details cyber personnel to other agencies to gain more holistic experience.
It also proposes to transfer the U.S. Secret Service, which investigates a number of cyber-enabled financial crimes, from DHS to the Department of Treasury.
The Department of Energy would get $665 million for cybersecurity, including $185 million for the Office of Cybersecurity, Energy Security, and Emergency Response (CESER), part of which would go toward funding early research and development of methods to better protect the energy supply chain. At the same time, the plan would eliminate a number of grant programs and research organizations, such as the Advanced Research Projects Agency-Energy.
“The private sector has the primary role in taking risks to finance the deployment of commercially viable projects and Government’s best use of taxpayer funding is in earlier stage R&D,” the budget states.
It would also invest in a number of emerging technologies, setting aside $5 million to stand up Energy’s new Artificial Intelligence Technology Office, along with an additional $125 million for AI and machine learning research. Other research funding includes $475 million for the Office of Science supercomputing research and $237 million for quantum computing research.
On securing the supply chain, the budget would set aside $35 million for the Department of Treasury to implement the Foreign Investment Risk Review Modernization Act passed in 2018, which created a new layer of review by the Committee on Foreign Investment in the United States for foreign investments in U.S. companies that produce critical technologies. The administration is also implementing the SECURE Technology Act, which created a new Federal Acquisition Supply Chain Security Council charged with developing procurement regulations to prevent U.S. agencies from buying compromised computer parts, components and software.
For Twitter (TWTR), the hack was certainly not a good look. CEO Jack Dorsey apologized for it on the company’s earnings call last week, saying: “Last week was a really tough week for all of us at Twitter, and we feel terrible about the security incident.”
For other companies, the hack could serve as a reminder that even at a moment when there is much else to worry about (like the economic recession and ongoing pandemic), cybersecurity threats are still an issue. That may be truer now than usual — experts say that having many people working from home presents unique security risks, especially given that many companies made the transition practically overnight.
“The way (the transition to remote working) happened, instantly, there was no warning, and all of a sudden people were just told, ‘you’re not going back to work tomorrow,'” said Anu Bourgeois, an associate professor of computer science at Georgia State University. “Everybody became vulnerable at that point.”
When coronavirus hit the United States, employers had to scramble to get a huge percentage of the country’s workforce to transition to remote working for the first time, a massive task that may have involved corner-cutting when it came to security.
There are a number of ways companies could have cut corners during the transition. In the hurry to keep employees safe while maintaining their workflow, companies might have handed out laptops not equipped with the proper security software or asked employees to use their own personal devices for work, Bourgeois said.
That issue was likely heightened for employees and families who can’t afford multiple devices and suddenly found themselves working from home while kids attended school remotely.
“They’re having to juggle different people using that device,” Bourgeois said. “Whereas at work you’re just one person, your kids may be having to use the device you use for work for their school or entertainment. You have that vulnerability of different people on your machine.”
Companies that were accustomed to having employees work only out of the office likely also had to develop new “access controls.” Whereas workers may have only been able to access their company’s servers and data from inside the office, they now may have to sign into a virtual private network (VPN) or other portal to securely access the information needed to do their jobs.
Deploying proper cybersecurity protocols for a remote workforce, “especially for a large scale company, is going to be really time consuming and difficult to do,” said Bourgeois.
She added that even with existing security software, companies could run into issues. Some security systems track employee habits — such as the normal days, times and duration of time that they typically access company systems — to identify potential hackers. But such systems may be confused by people’s changing work habits during the pandemic, and therefore could be less likely to catch breaches.
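The kind of baseline-and-deviation check described here can be sketched in a few lines. This is a toy illustration with a made-up user profile, not any vendor's actual detection logic:

```python
from datetime import datetime

# Hypothetical baseline of an employee's typical access pattern,
# as might be learned from historical logs: workdays and an hour range.
BASELINE = {
    "alice": {"days": {0, 1, 2, 3, 4}, "hours": range(8, 19)},  # Mon-Fri, 8am-6pm
}

def is_anomalous(user: str, ts: datetime) -> bool:
    """Flag a login that falls outside the user's learned habits."""
    profile = BASELINE.get(user)
    if profile is None:
        return True  # no baseline for this user: treat as anomalous
    return ts.weekday() not in profile["days"] or ts.hour not in profile["hours"]

# A Tuesday-morning login looks normal; a 2am Sunday login is flagged.
print(is_anomalous("alice", datetime(2020, 7, 28, 10, 30)))  # False
print(is_anomalous("alice", datetime(2020, 7, 26, 2, 0)))    # True
```

A pandemic-era shift to odd hours and home schedules would make a baseline like this stale, which is exactly why such systems can miss real breaches while flagging legitimate work.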
What we know about the Twitter hack
It’s unclear whether the Twitter hack had anything to do with remote working policies the company put in place in response to the pandemic.
Former Twitter employees examining the incident acknowledged that it’s a possibility, but there’s no evidence that Twitter relaxed its security to accommodate working from home. Twitter declined to comment on its remote work policies.
Twitter said the breach was the result of a coordinated “social engineering” attack that targeted workers who had administrative privileges, with the aim of taking control of the accounts.
Experts say social engineering may also be easier when people are working from home, where they may be distracted or let their guard down.
“You have people scrambling, in a different environment, and that mindset is not the same when you’re working from home versus the office,” Bourgeois said. “So many people are juggling their kids and are distracted and may be trying to quickly get through whatever task they need to get through. (They) may not be as sensitive to looking for these social engineering tactics, like phishing emails or phone calls.”
Some have also warned that hackers may try to exploit people’s fear of coronavirus in an attempt to carry out hacks or phishing attempts.
“As the world’s anxiety regarding coronavirus continues to escalate, the likelihood that otherwise more cautious digital citizens will click on a suspicious link is much higher,” the Electronic Frontier Foundation wrote in a March blog post.
The EFF cautioned people to look out for suspicious messages promising information or offers related to coronavirus, especially ones that sound too good to be true, like an offer to submit personal information in exchange for a free coronavirus vaccine.
For companies looking to avoid being the next target of an attack — in addition to implementing antivirus software and two-factor authentication — “the number one thing is education,” according to Bourgeois.
“Unless your employees are well versed in all of these different types of attacks and what to be aware of, it doesn’t matter what else you do, that person is vulnerable. Educating the workforce is key,” Bourgeois said.
Misconceptions about this cybersecurity model abound. Here’s what’s true and what’s not.
It may be getting a lot of renewed attention lately, but zero trust is not a new concept. Security professionals have been promoting it for almost 20 years. Yet there remains confusion about what exactly it is and how it works.
At its core, zero trust means what the term implies: It is the end of implicit trust, where people or systems were trusted simply because of where they were — on campus, on private wireless, on a VPN, in the data center and so on. Instead, the zero-trust model says to trust no one and require everyone and everything to be controlled, authenticated and authorized.
Fallacy: Zero Trust Means Endless Logins For Users
Zero-trust models definitely require that users be authenticated every time they do anything. But that doesn’t have to be done with a login page and password. Instead, single sign-on systems — integrated with browsers, client operating systems and VPN tools — are used to reduce the number of login steps visible to users. Users are still being authenticated and authorized many times, but it’s happening behind the scenes without bothering users.
If done incorrectly, zero trust is a fast track to user dissatisfaction. But a well-planned zero-trust deployment, combined with an identity and access management program, both increases the quality of the user authentication (by shifting from passwords to something stronger, such as multifactor or digital certificates) and the granularity of the controls that the security team has to grant or restrict access.
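The “many authentications behind the scenes” pattern can be illustrated with a signed, short-lived token: the user logs in once, and every subsequent request is silently re-verified. This is a simplified sketch with a hypothetical signing key, not a production SSO implementation:

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"sso-signing-key"  # hypothetical shared secret, for illustration only

def issue_token(user: str, ttl: int = 3600) -> str:
    """The one visible login issues a signed, short-lived token."""
    payload = base64.urlsafe_b64encode(
        json.dumps({"sub": user, "exp": int(time.time()) + ttl}).encode()
    )
    sig = base64.urlsafe_b64encode(hmac.new(SECRET, payload, hashlib.sha256).digest())
    return (payload + b"." + sig).decode()

def check_token(token: str) -> bool:
    """Every subsequent request re-authenticates without bothering the user."""
    try:
        payload_b64, sig_b64 = token.encode().split(b".")
    except ValueError:
        return False  # malformed token
    expected = base64.urlsafe_b64encode(
        hmac.new(SECRET, payload_b64, hashlib.sha256).digest()
    )
    if not hmac.compare_digest(sig_b64, expected):
        return False  # tampered or forged token
    claims = json.loads(base64.urlsafe_b64decode(payload_b64))
    return claims["exp"] > time.time()  # still within its lifetime

token = issue_token("alice")
print(check_token(token))                  # True: valid and unexpired
print(check_token(token[:-4] + "AAAA"))    # False: tampering breaks the signature
```

Real single sign-on systems layer standards such as SAML or OpenID Connect on top of this basic idea, but the principle is the same: authentication happens on every request, invisibly.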
Fact: The Cloud Simplifies Zero-Trust Transitions
Zero trust requires that you rethink the connections between everyone and everything, including systems sitting next to each other in a data center. You can definitely build a zero-trust security model in an existing on-premises data center, if your network and application teams can cooperate.
However, many IT groups find it very challenging to add security barriers in place of the network free-for-all inside an office building or an existing data center. When applications are forklifted out of the data center and moved to the cloud, it presents a natural opportunity to put in the security barriers that zero trust requires. For forward-looking IT groups, a cloud deployment is the ideal time to start deploying a zero-trust model at both the network and the application layers.
Fact: Zero Trust Makes A VPN Unnecessary
With zero trust, all user-to-server communication channels should be controlled, authenticated and authorized. (The same goes for server-to-server communications as well.) In the 1990s, the standard tool to do this was an IPSec VPN, and that tool still has its place in the IT manager’s toolbox to solve problems with legacy applications or very small or specialized user communities.
But the zero-trust idea of control, authentication and authorization doesn’t really overlay perfectly with typical IPSec VPN implementations, because they typically have weak controls, broad-based authentication and no authorization model at all.
Instead, application-specific encryption provides protection against eavesdropping or man-in-the-middle attacks, while also delivering a strong authentication model. Of course, you can always layer that on top of a VPN connection — and many IT leaders may choose to do that during a transition period or to accommodate legacy applications. But over the long term, the combination of application-specific authentication and encryption along with a move of many applications to cloud hosting services spells the end of VPNs for general purpose access to corporate networks.
Fallacy: Zero Trust Is a User-Focused Security Initiative
Zero trust is not just about users. It’s about not trusting anyone or anything just because of where they are. What this means is that users who are on corporate Wi-Fi shouldn’t be trusted any more than users who are connecting from their home offices.
In the early days of networked computing, security professionals rallied around the expression “a crunchy shell around a soft, chewy center” to describe network security. Firewalls provided the crunch in the form of access controls: traffic crossing the perimeter faced strong checks, but everything inside the firewalls was implicitly trusted.
Zero trust sweeps away this idea. Instead, every server, every network access point and every application should have its own crunchy shell that provides the services of access control, typically coupled with authentication and authorization.
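That per-resource “crunchy shell” can be sketched as a check every request must clear, regardless of where it originated. The policy table, principals, and resource names here are illustrative, not any product’s API:

```python
# Illustrative access policy: (principal, resource) -> allowed actions.
# In zero trust, network location never appears in this decision.
POLICY = {
    ("alice", "payroll-db"): {"read"},
    ("backup-svc", "payroll-db"): {"read", "snapshot"},
}

def authorize(principal: str, authenticated: bool, resource: str, action: str) -> bool:
    """Authenticate and authorize every request, wherever it came from."""
    if not authenticated:
        return False  # identity first: an unverified caller gets nothing
    return action in POLICY.get((principal, resource), set())

print(authorize("alice", True, "payroll-db", "read"))    # True: explicit grant
print(authorize("alice", True, "payroll-db", "delete"))  # False: not in her grants
print(authorize("mallory", True, "payroll-db", "read"))  # False: no policy entry
```

The key design point is the default: absent an explicit grant, access is denied, which is the opposite of the old implicitly trusted interior.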
Fallacy: Zero Trust Is Just Another Buzzword Designed to Sell Security Products
Zero trust isn’t a marketing ploy. Companies around the globe are being hit hard with data breaches and break-ins. Post-mortems around most of these security incidents come to a simple conclusion: We trusted someone or something that we shouldn’t have, and that’s how the breach occurred.
In the data center, not every server joined to a Windows domain is equally well managed and protected — but when the weakest server becomes an entry point for cybercriminals, the nature of the trust relationship in the data center makes it easy for attackers to move laterally to other systems, escalating privileges and access as they go.
The same is true for end users. Just because an end user’s PC is connected to the network in your headquarters doesn’t mean the user can be trusted to connect to every bit of network and server infrastructure on the corporate campus.
Getting rid of this overly generous model of trust in corporate networks dramatically reduces the risk of data breach and system compromise. That’s no buzz — it’s a better way to design and run an organization’s applications and infrastructure.
Andy Ellis, Akamai’s chief security officer, says 5G is only going to make it worse.
“5G enables more devices to be online at a time where we don’t really have a plan to secure them in the future,” Ellis told Protocol. “It’s basically the creation of a debt to service the future. We buy this world full of connected devices, and the mortgage is that at some point we have to secure them before they cause more problems for us.”
Ellis has spent the last two decades in cybersecurity roles at Akamai, which operates one of the largest content-delivery networks in the world. He got his start in cyber as an information warfare specialist in the U.S. Air Force, which he joined after earning a computer science degree at MIT.
He talked with Protocol last month about 5G, connected devices, and what cybersecurity professionals should be focused on in 2020.
This conversation has been edited for length and clarity.
How will the proliferation of connected devices affect cybersecurity?
When it comes to connected devices, we’re at a fascinating touchpoint. Everything is becoming connected: your garage door, the fitness tracker on your wrist, the thermos you drink coffee out of. These items used to be bespoke. They had custom-made electronics and were designed to do one thing: With the garage door, you would press a button and it would transmit a signal to open or close. But they’re not bespoke devices anymore. They’re computers that can talk on the internet, and that fundamentally changes things. It creates a dramatically different level of complexity.
If you have a connected garage door, you access it through an app on your phone, which sends a signal into the cloud, talks to someone else’s server, transmits it down to another computer running inside your garage that can open and close the door. In the past, you could maybe spoof frequencies and get a garage door to open, or you could trigger a manual release and open it from the outside — those were the vulnerabilities you had to deal with.
But now you have to worry about what malware might be running on your phone, how is the phone authenticating itself on this cloud-based server, how is the server protected, how are the passwords secured, can people on the internet get access to the computer in your garage. There are many more vulnerabilities in this system than in a traditional garage door, and the devices can also be misused in other ways. With botnets, hackers compromise cameras and other connected devices by entering default passwords — like username: admin, password: admin — and use that network to harm someone else. The compromised devices can all try to access a target at once, flooding it with traffic until it becomes inaccessible.
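The default-credential weakness described here is straightforward to audit for. This hypothetical sketch flags inventory entries still using factory logins; the device names, credential list, and plaintext inventory are invented for illustration:

```python
# Well-known factory defaults of the kind Mirai-style botnets try first.
DEFAULT_CREDS = {("admin", "admin"), ("root", "root"), ("admin", "1234")}

# Toy device inventory (a real audit would probe devices, not store
# plaintext passwords in a list like this).
devices = [
    {"host": "cam-lobby", "user": "admin", "password": "admin"},
    {"host": "cam-dock", "user": "admin", "password": "x9!vR2#p"},
]

def flag_default_creds(inventory):
    """Return hosts whose login is still on the well-known default list."""
    return [d["host"] for d in inventory
            if (d["user"], d["password"]) in DEFAULT_CREDS]

print(flag_default_creds(devices))  # ['cam-lobby']
```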
Is the problem getting better or worse?
The problem is absolutely growing. There are billions of connected devices. We’ve probably already passed the point where connected devices have outnumbered handheld computing devices like laptops, tablets and phones.
We need to be concerned about 5G and the growth in IoT devices. The big promise with 5G — and news stories suggest we’re not quite there with this promise — is that the capacity for more devices in a given location is much higher than it has been with 2G, 3G and 4G. In the past, if you tried to connect 35 devices to your home network, they would stop working properly. With 5G you can have that many and more, and we’re going to see an explosion in the number of IoT devices because of that.
You mentioned a lot of consumer devices, like garage doors and fitness trackers. Is this also a business problem?
I was talking earlier about a garage in a house, but parking garages rely on a similar system. There’s a transponder in my car, the system reads it, it queries a server on the internet, sees that I’m still employed, and opens the gate so I can get into the company garage. In my office I have lighting systems, thermostats, video conferencing — all these connected devices in offices don’t look like computers and aren’t treated like computers, but that’s what they are. And a common difference between consumer-grade and commercial-grade devices is whether or not you’re building them into your system. In commercial buildings, a lot of these systems are installed by someone else, and you have to coexist with them until the building is torn down.
What industries are most affected by vulnerabilities in connected devices?
Pick any sector and I’ll tell you how they are deeply at risk. In the medical sector, hospitals are now filled with connected devices. In fact, human bodies are starting to be full of connected devices. There, you have a special risk where human life is on the line if a device is compromised.
If you talk about agriculture, more and more connected devices are used for farming — imagine the damage that could be done if an adversary was able to target machines and adjust the fertilizer recipe so that instead of 1 part in 10 of a particular ingredient it’s 1 part in 3, and now you’re burning whatever you’re trying to grow on an industrial scale. In the satellite industry, you have some really interesting problems because you can’t service the devices at all.
There has been research into attacks on pacemakers and insulin pumps, where you can cause them to use up their batteries or medicine. What if you performed that kind of attack on a satellite, where you cause it to burn its thrusters or crash? Kevin Fu, at the University of Michigan, is a fantastic researcher in this area. In every industry, you have a case like that where you don’t do anything fancy to the device, but you get it to do its function more or less frequently until something like the battery dies. That’s a kind of threat that many people don’t think about. Pacemakers are designed so that when you walk into your doctor’s office, they can run diagnostics to check things like the battery life and how it’s operating. For manufacturers that didn’t secure that interface, a hacker could theoretically sit next to them, continuously ask for the data, and the device’s lifespan is shortened from years to months. These are the interesting problems people need to think about.
How are cybersecurity professionals at health care organizations handling these kinds of risks?
From talking to hospital CISOs, a lot of them struggle with connected devices. A challenge is that the device may be completely out of date and horribly vulnerable, but it’s high-revenue for them. Or there simply might not be an update; the manufacturer might not support the device anymore and they want you to buy the next $3 million device even though yours works well and is used 24/7.
So in many cases, the CISOs have to functionally disconnect a lot of their devices. They create an enclave just for the device so it can operate but not talk to anything else on your network, because it’s not safe. I feel that hospital CISOs have one of the more challenging jobs in my industry. They’re more of a landlord than an enterprise. Maybe their physicians don’t actually work for them. They come in, do their procedures and expect the devices to work. They need to have electronic records, so there has to be this interchange: Someone goes in, gets an X-ray, and other physicians need to see it so you can’t completely disconnect the X-ray machine. They also have to deal with celebrity customers who have valuable data; a lot of people would pay big money for that information.
What’s the worst-case scenario of an attack on a connected device?
The worst-case scenario is going to vary by organization. When you think about IoT, the question is what is the prevalence in my organization and how exposed are we. We have room-booking devices in our conference rooms, and one researcher bought a bunch of them on eBay and took them apart, and his discovery led us to pull the devices because we couldn’t secure them in a reasonable way. People didn’t really like the system anyway, so it wasn’t the end of the world. But you have to look at where you have certain devices, ask if they have credentials on your network, and figure out the worst thing that could happen. To most companies today, IoT is a distraction. You need to pay some attention to it, but it’s not your biggest worry. The data breach worry is probably much larger.
But for some organizations, the worst-case scenario for IoT devices is life safety, but not in the way that some people might think about it. Imagine if someone could mess with the traffic lights in New York City, for example. The likelihood that someone could kill someone directly with that attack is pretty low. Make an intersection all green, and people could possibly have a couple accidents but pretty soon no one is driving through the affected intersections. But indirectly what you’ve done is now New York doesn’t have streets. What happens when people need ambulances? We certainly saw that with the NotPetya cyberattacks in 2017. It took down dispatch networks in the U.K. The scheduling software was down, surgeries were postponed. How many people are indirectly killed or had their quality of life degraded? We don’t have good numbers for those, but in a complex system, incidents where you lose critical infrastructure have a huge impact.
What’s the biggest challenge with securing IoT devices?
The real challenge is the upgrade cycles of these devices. If you had an iPhone 1, upgrades really sucked. You had to plug it into iTunes and manually download the new configuration, back up your phone because you didn’t know if it was going to work, install the new operating system and pray. For the new iPhones, it’s totally different. You go to bed and wake up in the morning and your iPhone says: “By the way, a new iOS is installed, have a nice day.” The change from the old model to the new model required serious hardware changes. There have been security protocol changes that would make today’s process impossible on the iPhone 1. Apple was willing to say that when they update the iOS, they won’t support hardware that’s several generations old; it’s past its shelf life, get rid of it. Basically, the iPhone — as expensive as it is — needs to be treated as a disposable technology.
The challenge we have on most devices is they’re not treated as disposable. You buy a thermostat and you don’t have a long-term relationship with whoever you bought it from. It’s lower quality than your iPhone, but you attach it to your wall, and it runs for 10 or 20 years until it dies. So my main worry about connected devices isn’t about the pace we deploy them, it’s about the pace we update them, which is approximately zero. Everything deployed that doesn’t have a path to secure itself is probably never going to get secure until the building is torn down. That’s our biggest challenge for the next couple decades.
What should you do if the thermostat manufacturer goes out of business shortly after you buy the device?
You have to toss the device in the trash. That’s what you have to do. At the corporate level, we have the staff and infrastructure to take that challenge on. It’s a different issue for houses of worship, small enterprises, nonprofits — they don’t really have the bandwidth to worry about that kind of problem. They keep going forward, and that’s not necessarily the wrong thing in many cases. You have to ask if the benefit is worth the risk you’re taking. If you’re running a synagogue in America today, you might want surveillance cameras outside. If the risk is someone else seeing what’s on the cameras, or having the cameras get used for DDoS attacks, that’s probably still worth being able to detect and record acts of antisemitism. It’s a trade-off, and at some point, you have to pick.
Is there a way to keep vulnerable devices off of shelves?
You could try to ban systems, but when you look at a lot of the innovation, you often see someone come up with an idea, and by the time they bring their device to market, there’s like 150 knockoffs and they’re built as cheaply as possible and often by a shell brand. They build one device and never make another. It makes it really hard when a lot of manufacturers are not in the U.S. The challenge is the consumers aren’t differentiating on the quality and security of software. If you’re buying a security camera system, you’re probably most concerned with things like resolution of the cameras, whether they work at night and outdoors, and how much storage you have. Maybe you say you want to manage them from an iPhone or web browser. The consumer isn’t incentivizing the manufacturer to secure that system. Our hopes rest on a larger brand saying we’ll provide you devices with all of this and security baked in, because we have our brand and we want to maintain a long-term relationship with you. But that’s a really big branch to hang our hopes on.
Do you think security standards for connected devices could help solve this problem?
Standards do exist. But it’s really hard to find a standard that’s comprehensive enough to be useful without being so comprehensive that it’s inordinately painful. We see standards like the PCI security standard for systems that deal with payment card information. There’s FedRAMP for federal government systems. Many of these are very cumbersome and overweight and are designed for environments that don’t change regularly. I don’t see a near-term path for a good IoT security standard that will be genuinely meaningful.