Like any other IT environment, the International Space Station (ISS) faces potential cyber-risks, though the station is quite literally like no environment on Earth.
In a session on August 9 at the Aerospace Village within the DEFCON virtual security conference, former NASA astronaut Pamela Melroy outlined the cybersecurity lessons learned from human spaceflight and what remains at risk. Melroy flew on two space shuttle missions during her tenure at NASA and visited ISS. Hurtling high above the Earth, ISS is loaded full of computing systems designed to control the station, conduct experiments and communicate with the ground.
“Space is incredibly important in our daily lives,” Melroy said.
She noted that GPS, weather tracking and communications are reliant on space-based technology. In Melroy’s view, the space industry has had somewhat of a complacent attitude about satellite security, because physical access was basically impossible once the satellite was launched.
“Now we know that our key infrastructure is as much at risk on the ground as it is in space, from both physical and cyber-threats,” Melroy stated.
The Real Threats to Space Today
Attacks against space-based infrastructure, including satellites, are not theoretical either.
Melroy noted that the simplest type of attack is a Denial of Service (DoS), which is essentially a signal-jamming activity. She added that it already happens now, sometimes inadvertently, that a space-based signal is blocked. There is also a more limited risk that a data transmission could be intercepted and manipulated by an attacker.
What isn’t particularly likely though is some kind of attack where an adversary attempts to direct one satellite to hit another. That said, Melroy noted that there could be a risk from misconfiguring a control system that would trigger a satellite to overheat or shut down.
How the ISS Secures its Network
During her presentation, Melroy outlined the many different steps that NASA and its international partners have taken to help secure the IT systems on-board ISS.
The entire network by which NASA controllers at Mission Control communicate with ISS is a private network, operated by NASA. Melroy emphasized that the control does not go over the open internet at any point.
There is also a very rigorous verification system for any commands and data communications that are sent from the ground to ISS. Melroy noted that the primary idea behind the verification is not necessarily about malicious hacking, but rather about limiting the risk of a ground controller sending a bad command to space.
“There’s a very rigorous certification process required for controllers in the International Space Station Mission Control Center (MCC) to allow them to send commands to the space station,” she explained. “In addition there are screening protocols both before a message ever leaves MCC going up to the ISS and once it’s on board ISS, to check and make sure that the command will not inadvertently do some damage to the station.”
Using Twitter in Space
ISS also makes use of a highly distributed architecture such that different sets of systems and networks are isolated from one another.
For station operations, Melroy said that astronauts make use of technology known as Portable Computer Systems (PCS) which are essentially remote terminals to send commands to the station’s primary computing units.
There is also a local area network on the station with support computers used for limited internet access including email and social media like Twitter. While the local ISS network has internet access, it is not directly connected to the public internet.
Melroy explained that there is a proxy computer inside the firewall at the Johnson Space Center, in Houston, Texas, that is connected with ISS. As such, the space station support computers talk to the proxy computer, which then goes out onto the public internet.
“Now of course, just like any computer, it’s still subject potentially to malware,” Melroy said. “However, the most important thing is that the station support computers in no way shape or form are networked to the actual commanding of the station, they’re completely separate systems and they don’t talk to each other.”
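The isolation Melroy describes can be sketched in a few lines: a support computer never dials the public internet directly, but hands every request to a designated ground-side proxy that fetches on its behalf. The proxy address below is a made-up placeholder; the real host at the Johnson Space Center is not public.

```python
import urllib.request

# Placeholder address for the ground-side proxy; the actual Johnson
# Space Center host and port are not public information.
GROUND_PROXY = "http://proxy.ground.example:8080"


def build_proxied_opener() -> urllib.request.OpenerDirector:
    """Build an opener that routes every request through the ground proxy.

    The client never opens a direct connection to the public internet;
    the proxy makes the outbound request and relays the response back.
    """
    handler = urllib.request.ProxyHandler(
        {"http": GROUND_PROXY, "https": GROUND_PROXY}
    )
    return urllib.request.build_opener(handler)
```

Installing such an opener (via `urllib.request.install_opener`) forces all traffic through the single controlled hop, mirroring the station-to-ground arrangement Melroy describes.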
Areas of Concern for Spaceflight Security
While ISS has multiple layers of security, Melroy commented that there are still some areas of concern for spaceflight and space cybersecurity.
For satellites, she noted that the uplink and downlink to most satellites is encrypted, though the data on-board the satellite often is not. Additionally, she expressed concern about ground-based control systems for satellites. Melroy explained that satellite ground systems have the same cybersecurity risks as any enterprise IT system.
“The most serious problem I think we have in space is complacency, many people in space think that their systems are not vulnerable to cyber-attacks,” Melroy said. “We are going to have to figure out how to insert cybersecurity and an awareness of that into the values and the culture of aerospace, all the way from the beginning in design and through to operations.”
A cybersecurity firm has uncovered serious privacy concerns in Amazon’s popular “Alexa” device, leading to questions about its safety.
Check Point, the California- and Israel-based technology company, published a report Thursday detailing “vulnerabilities found on Amazon’s Alexa,” including a hacker’s access to the user’s voice history and personal information, as well as the ability to silently install or remove skills on the user’s account.
“In effect, these exploits could have allowed an attacker to remove/install skills on the targeted victim’s Alexa account, access their voice history and acquire personal information through skill interaction when the user invokes the installed skill,” according to the report. “Successful exploitation would have required just one click on an Amazon link that has been specially crafted by the attacker.”
Amazon’s Alexa line is powered by artificial intelligence (AI) technology, and the conglomerate had sold more than 200 million Alexa devices by the end of 2019, CNET reported. The Alexa essentially functions as a virtual assistant to its user, able to take voice commands, play music, set alarms, and offer weather or news reports.
Developers are continually working on new programs to make the devices even more user-friendly. Just a few weeks ago, for instance, Amazon announced Alexa Conversations was moving into its beta phase, and would now be able to provide an AI-driven element to voice interactions, making conversations flow more naturally.
In its report, Check Point described how an attacker could hack into a user’s Amazon account to compromise their Alexa device, including a breakdown of the code needed to carry out such an action. In one example of how an attack could occur, the user would click on a malicious link provided by the hacker, allowing them to inject their code into the user’s account.
Check Point also detailed how an attacker could get the device’s entire voice history, which could expose banking information, home addresses or phone numbers, as all interactions with the device are recorded.
Virtual assistants provide relatively easy targets for attackers wishing to steal sensitive information or disrupt a user’s smart home device, according to the report. Check Point’s research found a weak spot in Amazon’s security technology, the report stated.
“What we do know is that Alexa had a significant period of time where it was vulnerable to hackers,” Check Point spokesman Ekram Ahmed told Fox News. “Up until Amazon patched, it’s possible that personal and sensitive information was extracted by hackers via Alexa. Check Point does not know the answer to whether that occurred yet or not, or to the degree to which that happened.”
The technology company reported its findings to Amazon in June 2020, and Amazon “subsequently fixed the issue,” according to Check Point.
In an emailed statement to Newsweek, an Amazon spokesperson wrote that security of its devices is a top priority for the company.
“We appreciate the work of independent researchers like Check Point who bring potential issues to us. We fixed this issue soon after it was brought to our attention, and we continue to further strengthen our systems,” according to the statement. “We are not aware of any cases of this vulnerability being used against our customers or of any customer information being exposed.”
To ensure Alexa devices are secure, Check Point recommends that users avoid unfamiliar apps, think twice before sharing information with a smart speaker and conduct research on any downloaded apps, a company spokesperson wrote in an email to Newsweek.
Update (08/13/20, 11:52 a.m.): This article has been updated to include responses from Amazon and Check Point.
As seen on: fcw.com by Derek B. Johnson on 2/10/2020
The Trump administration’s proposed budget for fiscal year 2021 would spend $18.8 billion on cybersecurity programs across the federal government, with approximately $9 billion dedicated to civilian agencies for network security, protecting critical infrastructure, boosting the cybersecurity workforce and other priorities.
The overall cybersecurity funding at the Department of Homeland Security is listed at $2.6 billion. That includes $1.1 billion for DHS and its component, the Cybersecurity and Infrastructure Security Agency, to defend government networks and critical infrastructure from cyber threats, including for tools like EINSTEIN and Continuous Diagnostics and Mitigation. According to the Office of Management and Budget, the funding would increase the number of DHS-led network risk assessments from 1,800 to 6,500 and allow for more state and local governments to utilize the department’s services.
The administration has also put a heavy emphasis on bolstering the government’s cybersecurity workforce, releasing an executive order and strategic plan last year. The budget includes funding for DHS’ Cyber Talent Management System, a personnel system designed to bring hundreds of new cybersecurity professionals into the federal workforce under special hiring rules, as well as a CISA-managed cybersecurity workforce initiative and an interagency rotational program that temporarily details cyber personnel to other agencies to gain more holistic experience.
It also proposes to transfer the U.S. Secret Service, which investigates a number of cyber-enabled financial crimes, from DHS to the Department of Treasury.
The Department of Energy would get $665 million for cybersecurity, including $185 million for the Office of Cybersecurity, Energy Security, and Emergency Response (CESER), part of which would go towards funding early research and development of methods to better protect the energy supply chain. At the same time, the plan would eliminate a number of grant programs and research organizations, such as the Advanced Research Projects Agency-Energy.
“The private sector has the primary role in taking risks to finance the deployment of commercially viable projects and Government’s best use of taxpayer funding is in earlier stage R&D,” the budget states.
It would also invest in a number of emerging technologies, setting aside $5 million to stand up Energy’s new Artificial Intelligence Technology Office, along with an additional $125 million for AI and machine learning research. Other research funding includes $475 million for the Office of Science supercomputing research and $237 million for quantum computing research.
On securing the supply chain, the budget would set aside $35 million for the Department of Treasury to implement the Foreign Investment Risk Review Modernization Act passed in 2018, which created a new layer of review by the Committee on Foreign Investment in the United States for foreign investments in U.S. companies that produce critical technologies. The administration is also implementing the Secure Technologies Act, which created a new Federal Acquisition Supply Chain Security Council charged with developing procurement regulations to prevent U.S. agencies from buying compromised computer parts, components and software.
For Twitter (TWTR), the hack was certainly not a good look. CEO Jack Dorsey apologized for it on the company’s earnings call last week, saying: “Last week was a really tough week for all of us at Twitter, and we feel terrible about the security incident.”
For other companies, the hack could serve as a reminder that even at a moment when there is much else to worry about (like the economic recession and ongoing pandemic), cybersecurity threats are still an issue. That may be even more true now than usual: experts say that having many people working from home presents unique security risks, especially given that many companies made the transition practically overnight.
“The way (the transition to remote working) happened, instantly, there was no warning, and all of a sudden people were just told, ‘you’re not going back to work tomorrow,'” said Anu Bourgeois, an associate professor of computer science at Georgia State University. “Everybody became vulnerable at that point.”
When coronavirus hit the United States, employers had to scramble to get a huge percentage of the country’s workforce to transition to remote working for the first time, a massive task that may have involved corner-cutting when it came to security.
There are a number of ways companies could have gone wrong during the transition. In the hurry to keep employees safe but still maintain their workflow, companies might have given out laptops not equipped with the proper security software or asked employees to use their own personal devices for work, Bourgeois said.
That issue was likely heightened for employees and families who can’t afford multiple devices and suddenly found themselves working from home while kids attended school remotely.
“They’re having to juggle different people using that device,” Bourgeois said. “Whereas at work you’re just one person, your kids may be having to use the device you use for work for their school or entertainment. You have that vulnerability of different people on your machine.”
Companies that were accustomed to having employees work only out of the office likely also had to develop new “access controls.” Whereas workers may have only been able to access their company’s servers and data from inside the office, they now may have to sign into a virtual private network (VPN) or other portal to securely access the information needed to do their jobs.
Deploying proper cybersecurity protocols for a remote workforce, “especially for a large scale company, is going to be really time consuming and difficult to do,” said Bourgeois.
She added that even with existing security software, companies could run into issues. Some security systems track employee habits — such as the normal days, times and duration of time that they typically access company systems — to identify potential hackers. But such systems may be confused by people’s changing work habits during the pandemic, and therefore could be less likely to catch breaches.
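A toy version of the habit-tracking described above: model each user's typical login hour and flag large deviations. This simple z-score check is an illustration only; production systems model many more features (days, duration, location), and the point in the text is exactly that a sudden shift to home-working hours can confuse such baselines.

```python
from statistics import mean, stdev


def is_anomalous(login_hours: list[int], new_hour: int,
                 z_threshold: float = 2.0) -> bool:
    """Flag a login whose hour deviates strongly from the user's history.

    Toy z-score rule on the hour-of-day of past logins. A workforce that
    abruptly changes its schedule can trip this rule constantly, or force
    a retrained baseline that lets real intrusions slip through.
    """
    if len(login_hours) < 2:
        return False  # not enough history to judge
    mu, sigma = mean(login_hours), stdev(login_hours)
    if sigma == 0:
        return new_hour != mu
    return abs(new_hour - mu) / sigma > z_threshold
```

For a user who always logs in around 9-10 a.m., a 3 a.m. login is flagged while another 9 a.m. login is not.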
What we know about the Twitter hack
It’s unclear whether the Twitter hack had anything to do with remote working policies the company put in place in response to the pandemic.
Former Twitter employees examining the incident acknowledged that it’s a possibility, but there’s no evidence that Twitter relaxed its security to accommodate working from home. Twitter declined to comment on its remote work policies.
Twitter said the breach was the result of a coordinated “social engineering” attack that targeted workers who had administrative privileges, with the aim of taking control of the accounts.
Experts say social engineering may also be easier when people are working from home, where they may be distracted or let their guard down.
“You have people scrambling, in a different environment, and that mindset is not the same when you’re working from home versus the office,” Bourgeois said. “So many people are juggling their kids and are distracted and may be trying to quickly get through whatever task they need to get through. (They) may not be as sensitive to looking for these social engineering tactics, like phishing emails or phone calls.”
Some have also warned that hackers may try to exploit people’s fear of coronavirus in an attempt to carry out hacks or phishing attempts.
“As the world’s anxiety regarding coronavirus continues to escalate, the likelihood that otherwise more cautious digital citizens will click on a suspicious link is much higher,” the Electronic Frontier Foundation wrote in a March blog post.
The EFF cautioned people to look out for suspicious messages promising information or offers related to coronavirus, especially ones that sound too good to be true, like an offer to submit personal information in exchange for a free coronavirus vaccine.
For companies looking to avoid being the next target of an attack — in addition to implementing antivirus software and two-factor authentication — “the number one thing is education,” according to Bourgeois.
“Unless your employees are well versed in all of these different types of attacks and what to be aware of, it doesn’t matter what else you do, that person is vulnerable. Educating the workforce is key,” Bourgeois said.
Misconceptions about this cybersecurity model abound. Here’s what’s true and what’s not.
It may be getting a lot of renewed attention lately, but zero trust is not a new concept. Security professionals have been promoting it for almost 20 years. Yet there remains confusion about what exactly it is and how it works.
At its core, zero trust means what the term implies: It is the end of implicit trust, where people or systems were trusted simply because of where they were — on campus, on private wireless, on a VPN, in the data center and so on. Instead, the zero-trust model says to trust no one and require everyone and everything to be controlled, authenticated and authorized.
Fallacy: Zero Trust Means Endless Logins For Users
Zero-trust models definitely require that users be authenticated every time they do anything. But that doesn’t have to be done with a login page and password. Instead, single sign-on systems — integrated with browsers, client operating systems and VPN tools — are used to reduce the number of login steps visible to users. Users are still being authenticated and authorized many times, but it’s happening behind the scenes without bothering users.
If done incorrectly, zero trust is a fast track to user dissatisfaction. But a well-planned zero-trust deployment, combined with an identity and access management program, both increases the quality of the user authentication (by shifting from passwords to something stronger, such as multifactor or digital certificates) and the granularity of the controls that the security team has to grant or restrict access.
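What "authenticated many times behind the scenes" can look like in practice is a short-lived signed token, minted once at the user's visible login and silently verified on every subsequent request. The sketch below uses an HMAC-signed token with an invented key and lifetime; real deployments use a managed secret and a standard format such as JWT.

```python
import base64
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"demo-key"  # illustrative only; use a managed secret in practice
TOKEN_TTL = 300            # seconds before the SSO layer silently re-issues


def issue_token(user: str) -> str:
    """Issued by the SSO system after the user's one visible login."""
    payload = base64.urlsafe_b64encode(
        json.dumps({"sub": user, "exp": time.time() + TOKEN_TTL}).encode()
    ).decode()
    sig = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}.{sig}"


def verify_token(token: str) -> bool:
    """Run on every request: authenticate regardless of network location."""
    try:
        payload, sig = token.rsplit(".", 1)
    except ValueError:
        return False
    expected = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    claims = json.loads(base64.urlsafe_b64decode(payload))
    return claims["exp"] > time.time()
```

The user types a password once; every API call afterwards carries and re-proves the token, so authorization checks happen constantly without any visible login page.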
Fact: The Cloud Simplifies Zero-Trust Transitions
Zero trust requires that you rethink the connections between everyone and everything, including systems sitting next to each other in a data center. You can definitely build a zero-trust security model in an existing on-premises data center, if your network and application teams can cooperate.
However, many IT groups find that adding security barriers to replace a network free-for-all inside an office building or an existing data center is very challenging. When applications are forklifted out of the data center and moved to the cloud, it presents a natural opportunity to put in the security barriers that zero trust requires. For forward-looking IT groups, a cloud deployment is the ideal time to start deploying a zero-trust model at both the network and the application layers.
Fact: Zero Trust Makes A VPN Unnecessary
With zero trust, all user-to-server communication channels should be controlled, authenticated and authorized. (The same goes for server-to-server communications as well.) In the 1990s, the standard tool to do this was an IPSec VPN, and that tool still has its place in the IT manager’s toolbox to solve problems with legacy applications or very small or specialized user communities.
But the zero-trust idea of control, authentication and authorization doesn’t map neatly onto typical IPSec VPN implementations, because they typically have weak controls, broad-based authentication and no authorization model at all.
Instead, application-specific encryption provides protection against eavesdropping or man-in-the-middle attacks, while also delivering a strong authentication model. Of course, you can always layer that on top of a VPN connection — and many IT leaders may choose to do that during a transition period or to accommodate legacy applications. But over the long term, the combination of application-specific authentication and encryption along with a move of many applications to cloud hosting services spells the end of VPNs for general purpose access to corporate networks.
Fallacy: Zero Trust Is a User-Focused Security Initiative
Zero trust is not just about users. It’s about not trusting anyone or anything just because of where they are. What this means is that users who are on corporate Wi-Fi shouldn’t be trusted any more than users who are connecting from their home offices.
In the early days of networked computing, security professionals rallied around the expression “a crunchy shell around a soft, chewy center” to describe network security. Firewalls provided the crunch in the form of access controls: anything entering from outside faced strong access controls, but everything inside the firewalls was implicitly trusted.
Zero trust sweeps away this idea. Instead, every server, every network access point and every application should have its own crunchy shell that provides the services of access control, typically coupled with authentication and authorization.
Fallacy: Zero Trust Is Just Another Buzzword Designed to Sell Security Products
Zero trust isn’t a marketing ploy. Companies around the globe are being hit hard with data breaches and break-ins. Post-mortems around most of these security incidents come to a simple conclusion: We trusted someone or something that we shouldn’t have, and that’s how the breach occurred.
In the data center, not every server joined to a Windows domain is equally well managed and protected — but when the weakest server becomes an entry point for cybercriminals, the nature of the trust relationship in the data center makes it easy for attackers to move laterally to other systems, escalating privileges and access as they go.
The same is true for end users. Just because an end user’s PC is connected to the network in your headquarters doesn’t mean the user can be trusted to connect to every bit of network and server infrastructure on the corporate campus.
Getting rid of this overly generous model of trust in corporate networks dramatically reduces the risk of data breach and system compromise. That’s no buzz — it’s a better way to design and run an organization’s applications and infrastructure.
Andy Ellis, Akamai’s chief security officer, says 5G is only going to make it worse.
“5G enables more devices to be online at a time where we don’t really have a plan to secure them in the future,” Ellis told Protocol. “It’s basically the creation of a debt to service the future. We buy this world full of connected devices, and the mortgage is that at some point we have to secure them before they cause more problems for us.”
Ellis has spent the last two decades in cybersecurity roles at Akamai, which operates one of the largest content-delivery networks in the world. He got his start in cyber as an information warfare specialist in the U.S. Air Force, which he joined after earning a computer science degree at MIT.
He talked with Protocol last month about 5G, connected devices, and what cybersecurity professionals should be focused on in 2020.
This conversation has been edited for length and clarity.
How will the proliferation of connected devices affect cybersecurity?
When it comes to connected devices, we’re at a fascinating touchpoint. Everything is becoming connected: your garage door, the fitness tracker on your wrist, the thermos you drink coffee out of. These items used to be bespoke. They had custom-made electronics and were designed to do one thing: With the garage door, you would press a button and it would transmit a signal to open or close. But they’re not bespoke devices anymore. They’re computers that can talk on the internet, and that fundamentally changes things. It creates a dramatically different level of complexity.
If you have a connected garage door, you access it through an app on your phone, which sends a signal into the cloud, talks to someone else’s server, transmits it down to another computer running inside your garage that can open and close the door. In the past, you could maybe spoof frequencies and get a garage door to open, or you could trigger a manual release and open it from the outside — those were the vulnerabilities you had to deal with.
But now you have to worry about what malware might be running on your phone, how is the phone authenticating itself on this cloud-based server, how is the server protected, how are the passwords secured, can people on the internet get access to the computer in your garage. There are many more vulnerabilities in this system than in a traditional garage door, and the devices can also be misused in other ways. With botnets, hackers compromise cameras and other connected devices by entering default passwords — like username: admin, password: admin — and use that network to harm someone else. The compromised devices can all try to access a target at once, flooding it with traffic until it becomes inaccessible.
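The default-password weakness botnets exploit can also be turned around defensively: audit your own devices against the same factory-default pairs an attacker would try first. The credential list and callback below are illustrative; the `try_login` function would be a real login attempt (HTTP, Telnet) against a device you own.

```python
# Factory-default credential pairs commonly probed by IoT botnets.
# This short list is illustrative, not exhaustive.
DEFAULT_CREDS = [
    ("admin", "admin"),
    ("root", "root"),
    ("admin", "1234"),
    ("user", "user"),
]


def audit_device(try_login) -> list[tuple[str, str]]:
    """Return every default credential pair the device still accepts.

    `try_login` is a caller-supplied callback (username, password) -> bool,
    e.g. an authenticated HTTP request against the device's admin page.
    A non-empty result means the device is botnet bait.
    """
    return [(u, p) for (u, p) in DEFAULT_CREDS if try_login(u, p)]
```

A device that accepts any pair on the list should have its credentials changed before it ever faces the internet.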
Is the problem getting better or worse?
The problem is absolutely growing. There are billions of connected devices. We’ve probably already passed the point where connected devices have outnumbered handheld computing devices like laptops, tablets and phones.
We need to be concerned about 5G and the growth in IoT devices. The big promise with 5G — and news stories suggest we’re not quite there with this promise — is that the capacity for more devices in a given location is much higher than it has been with 2G, 3G and 4G. In the past, if you tried to connect 35 devices to your home network, they would stop working properly. With 5G, you can connect that many and more, and we’re going to see an explosion in the number of IoT devices because of that.
You mentioned a lot of consumer devices, like garage doors and fitness trackers. Is this also a business problem?
I was talking earlier about a garage in a house, but parking garages rely on a similar system. There’s a transponder in my car, the system reads it, it queries a server on the internet, sees that I’m still employed, and opens the gate so I can get into the company garage. In my office I have lighting systems, thermostats, video conferencing — all these connected devices in offices don’t look like computers and aren’t treated like computers, but that’s what they are. And a common difference between consumer-grade and commercial-grade devices is whether or not you’re building them into your system. In commercial buildings, a lot of these systems are installed by someone else, and you have to coexist with them until the building is torn down.
What industries are most affected by vulnerabilities in connected devices?
Pick any sector and I’ll tell you how they are deeply at risk. In the medical sector, hospitals are now filled with connected devices. In fact, human bodies are starting to be full of connected devices. There, you have a special risk where human life is on the line if a device is compromised.
If you talk about agriculture, more and more connected devices are used for farming — imagine the damage that could be done if an adversary was able to target machines and adjust the fertilizer recipe so that instead of 1 part in 10 of a particular ingredient it’s 1 part in 3, and now you’re burning whatever you’re trying to grow on an industrial scale. In the satellite industry, you have some really interesting problems because you can’t service the devices at all.
There has been research into attacks on pacemakers and insulin pumps, where you can cause them to use up their batteries or medicine. What if you performed that kind of attack on a satellite, where you cause it to burn its thrusters or crash? Kevin Fu, at the University of Michigan, is a fantastic researcher in this area. In every industry, you have a case like that where you don’t do anything fancy to the device, but you get it to do its function more or less frequently until something like the battery dies. That’s a kind of threat that many people don’t think about. Pacemakers are designed so that when you walk into your doctor’s office, they can run diagnostics to check things like the battery life and how it’s operating. For manufacturers that didn’t secure that interface, a hacker could theoretically sit next to them, continuously ask for the data, and the device’s lifespan is shortened from years to months. These are the interesting problems people need to think about.
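The battery-drain attack Ellis describes comes down to arithmetic: every extra diagnostic query adds to the device's average current draw. The sketch below is a toy model with invented numbers, not data for any real pacemaker.

```python
def lifespan_days(battery_mah: float, idle_ma: float, query_ma: float,
                  queries_per_day: float, seconds_per_query: float) -> float:
    """Toy battery model: capacity divided by average daily draw.

    Average draw = constant idle current plus the higher query current,
    duty-cycled by how long the device spends answering queries each day.
    """
    query_hours_per_day = queries_per_day * seconds_per_query / 3600
    daily_mah = idle_ma * 24 + query_ma * query_hours_per_day
    return battery_mah / daily_mah


# Hypothetical implant: 1000 mAh battery, 10 µA idle draw,
# 5 mA while answering a diagnostic query.
normal = lifespan_days(1000, 0.01, 5, queries_per_day=1, seconds_per_query=10)
attacked = lifespan_days(1000, 0.01, 5, queries_per_day=86_400, seconds_per_query=1)
```

Under these invented numbers, an attacker who keeps the diagnostic interface busy around the clock collapses a lifetime measured in years to one measured in days, without ever tampering with the device's function.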
How are cybersecurity professionals at health care organizations handling these kinds of risks?
From talking to hospital CISOs, a lot of them struggle with connected devices. A challenge is that the device may be completely out of date and horribly vulnerable, but it’s high-revenue for them. Or there simply might not be an update; the manufacturer might not support the device anymore and they want you to buy the next $3 million device even though yours works well and is used 24/7.
So in many cases, the CISOs have to functionally disconnect a lot of their devices. They create an enclave just for the device so it can operate but not talk to anything else on your network, because it’s not safe. I feel that hospital CISOs have one of the more challenging jobs in my industry. They’re more of a landlord than an enterprise. Maybe their physicians don’t actually work for them. They come in, do their procedures and expect the devices to work. They need to have electronic records, so there has to be this interchange: Someone goes in, gets an X-ray, and other physicians need to see it so you can’t completely disconnect the X-ray machine. They also have to deal with celebrity customers who have valuable data; a lot of people would pay big money for that information.
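The enclave approach can be modeled as an explicit allow-list: each network segment may reach only the destinations its policy names, and everything else is denied by default. The segment names below are invented for illustration; a real deployment enforces this with VLANs and firewall rules rather than a Python dict.

```python
# Minimal model of enclave segmentation. Each segment lists the only
# destinations it is permitted to reach; absence means denial.
POLICY: dict[str, set[str]] = {
    "xray-enclave": {"records-gateway"},   # the X-ray machine may only publish images
    "records-gateway": {"ehr-server"},     # gateway forwards images to the records system
    "office-lan": {"ehr-server", "internet"},
}


def allowed(src: str, dst: str) -> bool:
    """A connection is permitted only if the policy explicitly lists it."""
    return dst in POLICY.get(src, set())
```

The vulnerable X-ray machine can still hand images to the records system, but it can never reach the internet, and nothing on the internet can reach it, which is the interchange-without-exposure trade-off described above.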
What’s the worst-case scenario of an attack on a connected device?
The worst-case scenario is going to vary by organization. When you think about IoT, the question is what is the prevalence in my organization and how exposed are we. We have room-booking devices on our conference rooms, and one researcher bought a bunch of them on eBay and took them apart, and his discovery led us to pull the devices because we couldn’t secure them in a reasonable way. People didn’t really like the system anyway, so it wasn’t the end of the world. But you have to look at where you have certain devices, ask if they have credentials on your network, and figure out the worst thing that could happen. To most companies today, IoT is a distraction. You need to pay some attention to it, but it’s not your biggest worry. The data breach worry is probably much larger.
But for some organizations, the worst-case scenario for IoT devices is life safety, but not in the way that some people might think about it. Imagine if someone could mess with the traffic lights in New York City, for example. The likelihood that someone could kill someone directly with that attack is pretty low. Make an intersection all green, and people could possibly have a couple of accidents, but pretty soon no one is driving through the affected intersections. But indirectly what you’ve done is now New York doesn’t have streets. What happens when people need ambulances? We certainly saw that with the NotPetya cyberattacks in 2017. It took down dispatch networks in the U.K. The scheduling software was down, surgeries were postponed. How many people are indirectly killed or had their quality of life degraded? We don’t have good numbers for those, but in a complex system, incidents where you lose critical infrastructure have a huge impact.
What’s the biggest challenge with securing IoT devices?
The real challenge is the upgrade cycles of these devices. If you had an iPhone 1, upgrades really sucked. You had to plug it into iTunes and manually download the new configuration, back up your phone because you didn’t know if it was going to work, install the new operating system and pray. For the new iPhones, it’s totally different. You go to bed and wake up in the morning and your iPhone says: “By the way, a new iOS is installed, have a nice day.” The change from the old model to the new model required serious hardware changes. There have been security protocol changes that would make today’s process impossible on the iPhone 1. Apple was willing to say that when they update the iOS, they won’t support hardware that’s several generations old; it’s past its shelf life, get rid of it. Basically, the iPhone — as expensive as it is — needs to be treated as a disposable technology.
The challenge we have on most devices is they’re not treated as disposable. You buy a thermostat and you don’t have a long-term relationship with whoever you bought it from. It’s lower quality than your iPhone, but you attach it to your wall, and it runs for 10 or 20 years until it dies. So my main worry about connected devices isn’t about the pace we deploy them, it’s about the pace we update them, which is approximately zero. Everything deployed that doesn’t have a path to secure itself is probably never going to get secure until the building is torn down. That’s our biggest challenge for the next couple decades.
What should you do if the thermostat manufacturer goes out of business shortly after you buy the device?
You have to toss the device in the trash. That’s what you have to do. At the corporate level, we have the staff and infrastructure to take that challenge on. It’s a different issue for houses of worship, small enterprises, nonprofits — they don’t really have the bandwidth to worry about that kind of problem. They keep going forward, and that’s not necessarily the wrong thing in many cases. You have to ask if the benefit is worth the risk you’re taking. If you’re running a synagogue in America today, you might want surveillance cameras outside. If the risk is someone else seeing what’s on the cameras, or having the cameras get used for DDoS attacks, that’s probably still worth being able to detect and record acts of antisemitism. It’s a trade-off, and at some point, you have to pick.
Is there a way to keep vulnerable devices off of shelves?
You could try to ban systems, but when you look at a lot of the innovation, you often see someone come up with an idea, and by the time they bring their device to market, there’s like 150 knockoffs and they’re built as cheaply as possible and often by a shell brand. They build one device and never make another. It makes it really hard when a lot of manufacturers are not in the U.S. The challenge is the consumers aren’t differentiating on the quality and security of software. If you’re buying a security camera system, you’re probably most concerned with things like resolution of the cameras, whether they work at night and outdoors, and how much storage you have. Maybe you say you want to manage them from an iPhone or web browser. The consumer isn’t incentivizing the manufacturer to secure that system. Our hopes rest on a larger brand saying we’ll provide you devices with all of this and security baked in, because we have our brand and we want to maintain a long-term relationship with you. But that’s a really big branch to hang our hopes on.
Do you think security standards for connected devices could help solve this problem?
Standards do exist. But it’s really hard to find a standard that’s comprehensive enough to be useful without being so comprehensive that it’s inordinately painful. We see standards like the PCI security standard for systems that deal with payment card information. There’s FedRAMP for federal government systems. Many of these are very cumbersome and overweight, and are designed for environments that don’t change regularly. I don’t see a near-term path for a good IoT security standard that will be genuinely meaningful.
As states begin to lift stay-at-home orders, many offices are re-opening their doors. They are re-establishing their operations, balancing the recall of essential staff to the office with evaluating the future of working from home (WFH).
Office life will find a new normal, but that reality will require flexible and strategic leaders.
Reinventing Office Space
Forget business as it was. Social distancing is here to stay and will force a reinvention of the office space. Cubicles will become more like desk hotels. Employees need more room to work and higher walls to keep everyone safe. Similarly, shared spaces, such as bathrooms and breakrooms, will require a redesign.
Will people take a number or make an appointment for the breakroom?
What about bathrooms or hallways? When staff only have 5 feet of space, how can they maintain the recommended 6 feet of social distance?
How will elevators be kept clean and safe?
These are big questions, and they require answers. Equally important, though, is the larger issue of cybersecurity concerns in the age of COVID-19.
A New Normal in Office Technology
Specific technologies have already started to become obsolete. The desk phone is finally dead in many industries. The demands on mobility increased, and that caused other tools to become ubiquitous. The needs of the COVID-19 world pushed many companies to route calls to mobile phones or adopt softphone technology, where a computer or smartphone can function as the primary communication device. At the same time, virtual meetings and teleconferences became the way people connect.
Together, those technologies introduced new efficiencies. Cameras, headsets, and microphones will become the new standard in business operations. Still, that reality presents a new concern: how secure is your conversation?
WFH and Compromised Security
With the nearly instant shift from Work-From-Office (WFO) staff to WFH staff, IT departments in all businesses did whatever had to be done. Security was secondary to enabling the workforce and business. In many companies, IT departments compromised security to get their employees up and running.
Productivity is priority number one. Security is 1.1. Everything else is secondary.
Those initial security compromises must now be addressed in earnest, and in a way that doesn’t close off WFH employees but enables them to shift between WFO and WFH models.
Smart Security Measures
Rethink how your services, applications, and systems are accessed from insecure networks (e.g., home networks). While it is unlikely that an organization can take responsibility for individual home networks, the need for a strong security posture still stands. A more enduring approach is to design your systems to support access from various networks. This requires strategic thinking and a sound fundamental understanding of business technology.
For instance, how are you going to protect the corporate data that people downloaded to their home computers after they return to the office? The data is still there.
Your company needs to implement the right security measures before making additional staff moves.
Start with the Basics
Updating all operating systems is an effective and simple place to begin. Operating systems are frequently outdated and create vulnerabilities. In April 2020, Microsoft released fixes for 113 security vulnerabilities in Windows 10. Many of these vulnerabilities also exist in Windows 7, but Windows 7 is no longer supported and will not receive patches. Given that 26% of the computers in the world still run Windows 7 (and most of those are in people’s homes, according to Netmarketshare), there are now 113 new ways that someone could compromise those Windows 7 systems.
Adopt a Password Policy
Traditional passwords are outdated. The number of characters in your password drives more security than complexity does. We recommend that users pick a 16-character passphrase as the new minimum. A passphrase like “my dog has fleas” is 16 characters, would currently take over a thousand years to crack, and is easy to remember.
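The arithmetic behind that claim is easy to sketch. The following is a rough back-of-the-envelope calculation (the guess rate of 10^12 attempts per second is an assumption for illustration, not a measured figure) showing why a long, simple passphrase outlasts a short, complex password:

```python
def brute_force_years(length: int, alphabet_size: int,
                      guesses_per_second: float = 1e12) -> float:
    """Worst-case time to exhaust the keyspace for a password of the
    given length over the given alphabet, at a fixed guess rate."""
    keyspace = alphabet_size ** length
    seconds = keyspace / guesses_per_second
    return seconds / (60 * 60 * 24 * 365)

# A 16-character passphrase of lowercase letters and spaces beats an
# 8-character "complex" password drawn from ~94 printable characters:
long_simple = brute_force_years(16, 26)   # over a thousand years
short_complex = brute_force_years(8, 94)  # well under a year
```

Under these assumptions the 16-character passphrase holds out for over a thousand years while the shorter complex password falls in hours, which is the point the recommendation rests on.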
Change this passphrase once every six months to maintain effective security practices. Passphrases and their respective policies must be implemented for every account, even the executive staff.
Adopt Multi-Factor Authentication
Multi-factor authentication (MFA) or two-factor authentication (2FA) leverages an existing user device, like a smartphone or hardware token, in addition to something the user knows, like a passphrase. Once MFA/2FA is established, logging in from a recognized device only requires the passphrase. Unrecognized devices trigger an additional authentication step, such as sending a code to a registered device.
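The codes generated on or sent to a registered device are typically standard HOTP/TOTP values (RFC 4226 and RFC 6238), the same scheme common authenticator apps use. A minimal sketch using only the Python standard library, with key storage, rate limiting, and clock-drift windows omitted:

```python
import hashlib
import hmac
import struct
import time

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time passcode: HMAC-SHA1 over a counter,
    dynamically truncated to a short decimal code."""
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def totp(key: bytes, step: int = 30) -> str:
    """RFC 6238 time-based variant: the counter is the current
    30-second window, so codes expire automatically."""
    return hotp(key, int(time.time()) // step)
```

Because the device and the server derive the code independently from a shared secret, nothing sensitive travels over the network at login time beyond the short-lived code itself.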
Not everyone will be returning to the office. While increased productivity and lower costs will incentivize some organizations, others will still need to support dual office and remote environments. I expect many roles will never come back to the office.
Companies of all sizes will need to prepare for a new office landscape after COVID-19, and implementing new cybersecurity measures to support both WFO and WFH should be the first place they start. If you have questions about how to manage a secure WFO and WFH environment, reach out to your Aldrich Technology Advisor today.
There is no question cybercrime is on the rise; over 1.76 billion user records were leaked in January 2019 alone. Even worse, a recent Gallup study revealed more Americans are now afraid of cybercrime than of violent offenses.
It’s important to understand that cybercriminals are just as sophisticated and innovative as modern IT security solutions. Often working in teams, hackers have a number of tools and resources at their disposal to access confidential data, some of which help them easily defeat traditional data security controls.
The Purpose of Multifactor Authentication
Multifactor Authentication (or MFA) has become a critical, preventative security measure for businesses and organizations of all sizes, and for any individual who uses a smart device in their daily life. It offers an added layer of security that complements how passwords are used to protect private data, thereby making it more difficult for potential hackers to obtain personal data or breach company networks.
To explain it simply, an authentication factor is a credential used to verify the identity of a person, entity or system. When multifactor authentication is in place, more than one credential is required prior to granting access to private systems or data.
Incidents such as the Facebook security breach in 2018, which exposed the personal information of over 50 million users, have forced companies to add a layer of security to their platforms. Tech giants including Twitter and Google have since adopted MFA to protect their users, and their data.
Commonly Utilized Authentication Factors
When it comes to identifying individual users, a combination of three authentication factors are traditionally used:
Knowledge Factor – This is information that is known only to the user – for example, a series of security questions, PIN codes, or unique usernames and passwords
Possession Factor – This refers to something that a user owns – for example, a smart card, a smartphone, or an OTP (one-time passcode)
Inherence Factor – This refers to something that is exclusive to an individual user – for example, fingerprints, facial biometrics, voice controlled locks, or eye scans – any biometric element that can prove the user’s identity.
Typically, multifactor authentication combines at least two of the factors mentioned above – and in some cases, all three can be combined for added security.
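One way to make the distinction concrete: two credentials from the same category are not multi-factor (a password plus a PIN is still single-factor). A small sketch of that check (the credential-to-category mapping here is illustrative, not a real API):

```python
# Map each credential type to its authentication-factor category.
FACTOR_CATEGORY = {
    "password":    "knowledge",
    "pin":         "knowledge",
    "otp":         "possession",
    "smart_card":  "possession",
    "fingerprint": "inherence",
    "face_scan":   "inherence",
}

def is_multifactor(presented: list[str]) -> bool:
    """True only when the presented credentials span at least two of the
    three factor categories (knowledge, possession, inherence)."""
    categories = {FACTOR_CATEGORY[c] for c in presented}
    return len(categories) >= 2
```

So `["password", "otp"]` qualifies as multi-factor, while `["password", "pin"]` does not, because both of the latter are knowledge factors.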
Advantages of Multifactor Authentication for Businesses
Enhancing Compliance and Mitigating Legal Risks
Beyond requiring data encryption, state and federal governments have also made it mandatory for certain businesses to implement multi-factor authentication in standard operating procedures at the end-user level.
For example, businesses whose employees work with PII (Personally Identifiable Information), Social Security numbers, or financial information are bound by state and federal statutes to integrate multi-factor authentication into their security protocols. In these cases, MFA is required to meet mandatory compliance standards.
Making the Login Process Less Daunting
Many non-regulated businesses resist MFA implementations, fearing a more complex login process for employees and customers.
However, this extra layer of security enables organizations to redefine and reimagine their login processes on the road to enhanced security.
Setting Security Expectations
Identifying security requirements and expectations at your organization is an important part of any MFA implementation. For example, your industry, business model, applicable compliance regulations (if any) and the type of data you capture, utilize, and store to conduct normal business operations are important considerations. An MFA implementation is an opportunity for every organization to identify and classify common business scenarios based on risk level and determine when MFA login is required.
Based on a combination of these factors, organizations may decide that MFA is only required in certain high-risk scenarios: when accessing particular applications or databases, when employees log in remotely or offsite, or when accessing internal systems for the first time from a new device.
MFA can also be used to set a limit on where a user can access your information from. If your employees are out in the field and use their own devices for work, your data is at a higher risk of theft, particularly when employees connect from external Wi-Fi networks that are not secure.
MFA can be used to restrict user access based on their location. This means that if a user tries to access company data from an off-site location, you can easily verify whether or not they are actually an employee by requiring biometric authentication.
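A common way to implement that location restriction is a network allow-list: logins from trusted corporate or VPN ranges proceed normally, while anything else triggers a stronger factor. A minimal sketch (the network ranges below are documentation-only example addresses, stand-ins for a real corporate allow-list):

```python
import ipaddress

# Hypothetical trusted ranges: corporate offices and the company VPN.
TRUSTED_NETWORKS = [
    ipaddress.ip_network("203.0.113.0/24"),   # example: office egress
    ipaddress.ip_network("198.51.100.0/24"),  # example: VPN pool
]

def requires_step_up_auth(client_ip: str) -> bool:
    """Return True when a login from this address should be challenged
    with an extra factor (e.g. biometric), i.e. it is off-network."""
    addr = ipaddress.ip_address(client_ip)
    return not any(addr in net for net in TRUSTED_NETWORKS)
```

In practice this check sits in front of the MFA policy engine: on-network logins keep the standard flow, off-network ones get the additional verification step described above.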
Organizations that are considering MFA often decide to implement more sophisticated logins, for example single sign-on (SSO), which is not only secure but makes signing in to multiple systems easy using one set of login credentials.
Single sign-on authenticates the person accessing the information via MFA. Once it is confirmed that a user is authorized, they are automatically granted access to the other systems associated with their user profile. This means they have access to multiple applications without needing to log in to each one separately.
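A minimal sketch of how that flow can work with a signed token: the SSO provider verifies the user once (via MFA), then issues a token listing the applications on the user's profile, and each downstream application checks the token's signature and expiry instead of prompting for credentials again. The key, claim names, and application list here are illustrative; real deployments use standards such as SAML or OpenID Connect.

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"shared-signing-key"  # hypothetical key held by the SSO provider

def issue_token(user: str, apps: list[str], ttl: int = 3600) -> str:
    """After one successful MFA login, issue a single signed token
    covering every application tied to the user's profile."""
    payload = json.dumps({"user": user, "apps": apps,
                          "exp": int(time.time()) + ttl}).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload).decode() + "." + sig

def app_accepts(token: str, app: str) -> bool:
    """Each application verifies the signature and expiry locally,
    so the user never re-enters credentials."""
    body, sig = token.rsplit(".", 1)
    payload = base64.urlsafe_b64decode(body)
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, sig):
        return False
    claims = json.loads(payload)
    return app in claims["apps"] and claims["exp"] > time.time()
```

The design point is that only the SSO provider ever sees the passphrase and second factor; the applications merely trust the provider's signature.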
Many people now believe that passwords are dead, and for good reason. Aside from the obvious risk factors involved with writing down login credentials or sharing them with unauthorized users, managing different and complex passwords for all your applications and devices means employees need to remember all of them, which is not exactly an easy job. This is exactly why corporate help desks are bogged down with password reset requests, and why people have a hard time following best practices for frequently resetting their passwords in apps that don’t require it. When an organization selects an SSO solution that features biometric authentication, it’s an opportunity to eliminate employee passwords completely.
For these reasons, an SSO-type solution is very practical, especially since the most challenging part of successfully implementing MFA is simplifying the login process.
MFA Is a Vital Aspect of Effective Cybersecurity
As cybercrime continues to increase, organizations are beginning to realize the full scope of the threats they now face. Modern cybercriminals don’t just target big corporations: 31% of businesses with fewer than 250 employees have been targets of cybercrime.
It is also important to understand that cybercriminals aren’t just stealing critical data. Often, they aim to corrupt your data, or destroy it entirely. This is often carried out by installing difficult-to-detect malicious software (malware) that disrupts business and services, and spreads fear and propaganda.
As a result, the market for multifactor authentication is expected to reach $12.51 billion in the next 4 years.
A Great Step Towards Enhancing Mobile Engagement
Like it or not, we are in the middle of a digital transformation that’s not slowing down, and we are in it for the long haul. (If you’re part of the vast majority that can’t go anywhere without their smartphone, we’re willing to bet that you like it.) As part of all this, we have collectively become used to having access to all the resources and information we want and need: on the go, from anywhere in the world, any time we want it. This is the height of digital convenience, and it has brought about many positive changes in the world of business and in society. It also continues to introduce new challenges in terms of data security. MFA offers a streamlined method of ensuring user authentication, allowing you to ensure security with greater certainty without sacrificing ease of access.
Originally seen on CNBC, March 23, 2020, by Lindsey Jacobson
Companies are enabling remote work to keep business running while helping employees follow social distancing guidelines.
A typical company saves about $11,000 per half-time telecommuter per year, according to Global Workplace Analytics.
As companies adapt to their remote work structures, the coronavirus pandemic is having a lasting impact on how work is conducted.
With the U.S. government declaring a state of emergency due to the coronavirus, companies are enabling work-from-home structures to keep business running and help employees follow social distancing guidelines. However, working remotely has been on the rise for a while.
“The coronavirus is going to be a tipping point. We plodded along at about 10% growth a year for the last 10 years, but I foresee that this is going to really accelerate the trend,” Kate Lister, president of Global Workplace Analytics, told CNBC.
Gallup’s State of the American Workplace 2017 study found that 43% of employees work remotely with some frequency. Research indicates that in a five-day workweek, working remotely for two to three days is the most productive. That gives the employee two to three days of meetings, collaboration and interaction, with the opportunity to just focus on the work for the other half of the week.
Remote work seems like a logical precaution for many companies that employ people in the digital economy. However, not all Americans have access to the internet at home, and many work in industries that require in-person work.
According to the Pew Research Center, roughly three-quarters of American adults have broadband internet service at home. However, the study found that racial minorities, older adults, rural residents and people with lower levels of education and income are less likely to have broadband service at home. In addition, 1 in 5 American adults access the internet only through their smartphone and do not have traditional broadband access.
Full-time employees are four times more likely to have remote work options than part-time employees. A typical remote worker is college-educated, at least 45 years old and earns an annual salary of $58,000 while working for a company with more than 100 employees, according to Global Workplace Analytics.
New York, California and other states have enacted strict policies for people to remain at home during the coronavirus pandemic, which could change the future of work.
“I don’t think we’ll go back to the same way we used to operate,” Jennifer Christie, chief HR officer at Twitter, told CNBC. “I really don’t.”
Due to global concerns over Coronavirus (COVID-19), rising rents in concentrated urban areas, and the ongoing battle among organizations to recruit and retain top talent, there has been a noted shift in appetite for working remotely. Companies that were previously against remote work are suddenly considering or implementing it, with varying degrees of intentionality.
The reality is that almost every company is already a remote company. If you have more than one office, operate a company across more than one floor in a building, or conduct work while traveling, you are a remote company. It behooves all of these firms to adopt remote-first practices, even if some interactions occur in a shared physical space.
On this page, we’re detailing what not to do when transitioning to remote, or moving towards remote.
Is this advice any good?
GitLab is the world’s largest all-remote company. We are 100% remote, with no company-owned offices anywhere on the planet. We have over 1,200 team members in more than 65 countries. The primary contributor to this article (Darren Murph, GitLab’s Head of Remote) has over 14 years of experience working in and reporting on colocated companies, hybrid-remote companies, and all-remote companies of various scale.
The pages within, just like the entire GitLab handbook, are publicly accessible. Please consider studying these guides, implementing them, and contributing your learnings to make them better.
Do not replicate the in-office/colocated experience, remotely
It is vital to recognize and appreciate this point: an organization should not attempt to merely replicate the in-office/colocated experience, remotely.
Remote work is not traditional work which is simply conducted in a home office instead of a company office. There is a natural inclination for those who have not personally experienced remote work to assume that the core (or only) difference between in-office work and remote work is location (in-office vs. out-of-office). This is inaccurate, and if not recognized, can be damaging to the entire practice of working remotely.
The principles of remote work are different. The approach to conducting work is different. Just as multi-level office buildings required elevators and phones to be functional as workplaces, teams working remotely should embrace tools (GitLab, Figma, etc.) that enable asynchronous communication and should reconsider traditional thoughts on items such as meetings and informal communication.
What is happening en masse related to Coronavirus (COVID-19) is largely a temporary work-from-home phenomenon, where organizations are not putting remote work ideals into place, as they expect to eventually require their team members to resume commuting into an office.
Do not assume that everyone has access to an optimal workspace
While long-term remote workers have had years to tweak and iterate on their home office, those who are thrust into working from anywhere may be ill-prepared. Organizations should not expect team members to be masters of office design and ergonomics. Too, what works best for one person will look different from what works best for another.
Some may find it useful to see examples of comparisons between colocated norms, and the most closely correlated remote recommendation. You will notice that many suggestions link back to asynchronous workflows, transparency, and working handbook-first, which are cornerstones to doing remote well.
Note that none of these suggestions is exclusive to remote. Even for companies that intend to maintain offices or transition to a hybrid-remote model, implementing remote-first techniques ensures that all employees are viewed as first-class citizens and helps companies avoid the five dysfunctions of a team.
For companies who move into an office building, it’s unlikely that everything works perfectly on the first day. Signage may be missing, security gates may be erratic, elevators may be stuck, etc. Adapting to a workplace takes time, and polish comes with iteration.
The same is true when embracing remote work. Particularly for companies which were established with colocated norms, it is vital for leadership to recognize that the remote transition is a process, not a binary switch to be flipped. Leaders are responsible for embracing iteration, being open about what is and is not working, and messaging this to all employees.
Remote isn’t a structure that merely works or doesn’t work. Remote is a way of working that requires intentional and perpetual care and evaluation — just as you’d expect in an office environment. Working well remotely (or in-office, for that matter) is not something that is ever done or accomplished. There are always new tools to consider, new workflows to integrate, and new expertise to ingest.
Too, what works for a small remote team may not work for a remote team consisting of thousands of team members. All of this is equally true for colocated companies, though it tends to be less amenable to Band-aid (temporary) solutions in a remote environment.
Do not assume that remote management is drastically different
Remote forces you to do the things you should be doing way earlier and better. It forces discipline that sustains culture and efficiency at scale, particularly in areas which are easily deprioritized in small colocated companies.
Leaders should ensure that new remote hires read a getting started guide, and make themselves available to answer questions throughout one’s journey with the company.
Do not assume your existing values can remain static
To operate well as a remote enterprise, your values must be in support of this way of working. GitLab’s collection of values and sub-values contribute to a thriving all-remote environment. Consider studying the nuances of these values and adjusting or adding to your company’s existing values. Values that were established to support colocated norms may not apply to remote, particularly those which obstruct transparency.
Don’t be quick to brush values off as understood, either. For example, collaboration in a colocated space is routinely demonstrated by gathering people in a shared physical space in search of consensus. Collaboration in a remote setting is demonstrated by empowering the greatest number of people to contribute insights asynchronously while enabling the DRI (directly responsible individual) to make decisions without explanation.
Contribute your lessons
GitLab believes that all-remote is the future of work, and remote companies have a shared responsibility to show the way for other organizations who are embracing it. If you or your company has an experience that would benefit the greater world, consider creating a merge request and adding a contribution to this page.