Month: April 2018

Watson’s Law: IBM preaches data stewardship as A.I. advances


At IBM’s Think conference, executives discussed the importance of protecting and managing data as artificial intelligence offerings like Watson grow and touch more information.

Originally seen on: TechTarget

LAS VEGAS — IBM leadership as well as some of its top customers stressed the importance of data stewardship and trust this week amid growing concerns about how technology and social media companies are using consumers’ data.

At the IBM Think conference Tuesday, IBM chairman, CEO and president Ginni Rometty set the tone during her opening keynote about the potential for advancements in artificial intelligence (AI). Rometty discussed how AI-enabled services will enhance how businesses attain, analyze and use data, citing “Watson’s Law,” which predicts an exponential shift as AI and data increasingly converge, much as Moore’s Law predicted continued exponential growth in microprocessor power.

But with Watson’s Law, Rometty also warned of the potential for data to be misused and exposed. “I can’t have that conversation [about AI and data] without also covering this,” she said. “It will be the greatest opportunity of our time, but it has the potential to be the greatest issue of our time.”

Rometty said “data trust and responsibility” will become even more crucial if AI advances as predicted by Watson’s Law and more companies compile massive amounts of data on users and customers. “All of us have to act, not just tech companies,” she said. “In the end I think we’ll all be judged not just by how we use data…but if we’re a data steward.”

Rometty was joined on stage by IBM customers who echoed the importance of proper data usage and protection in the context of Watson’s Law. Dave McKay, president and CEO of the Royal Bank of Canada, which uses IBM’s cognitive and A.I. offerings, said companies need to be clear about what customer data is being used and how it’s being used. “You have to be clear and say, ‘We think we create value if you share this information, but we’ll keep it within these walls,’” McKay said. “But it has to be relevant and it has to create value for the customer at the end of the day.”

McKay also acknowledged that enterprises in general haven’t fostered a great deal of confidence in their ability to be good data stewards. “We’re stressed right now in that trust factor,” he said. “And I think institutions that truly live that trust will prosper in the future.”

Lowell McAdam, chairman and CEO of Verizon Communications, which also uses several Watson-driven services, told the audience that once a company loses the trust of its customers, “you’re never going to get it back.”

“We’ve made very clear pledges to our customers that we will not use your data in any way that we haven’t made clear to you and that gives you the opportunity to opt in,” McAdam said. “We’ve seen the things that are going on in [Silicon] Valley now with companies and how they’re using data. We don’t ever want to be in that position.”

While the telecom giant did find itself in a similar position in 2013 after Edward Snowden revealed that the company was delivering phone records to the National Security Agency under a secret court order, Rometty and McAdam were referencing non-government related abuses of data. And while none of the speakers during IBM Think’s keynote sessions mentioned Facebook by name, it’s clear the social media giant’s recent controversy regarding exposed user data had cast a shadow over the proceedings.

Protecting data in the age of Watson’s Law

At IBM Think, Big Blue executives discussed the different ways the company keeps its ever-growing “data lake” from being misused or tapped by unauthorized third parties. During her keynote, Rometty emphasized the company’s data principles for “the era of data and A.I.,” which state that A.I.’s purpose is to augment human intelligence rather than replace it; that IBM’s business model is “not about distributing data or in fact monetizing it”; and that IBM applies “advanced security” to protect that data, whether through pervasive encryption or future projects such as quantum-resistant encryption.

Dinesh Nirmal, vice president of IBM Analytics Development, told SearchSecurity the company has taken several steps in recent years to better monitor and regulate data as the company’s A.I. technology has grown. Those efforts included the hiring of Inderpal Bhandari in 2015 as IBM’s first global chief data officer and the examination of how IBM handled the data of its own employees internally in order to develop more effective policies.

“We looked at our own data lake, which has information from hundreds of thousands of employees over the course of 100-plus years,” Nirmal said. “The strategy for protecting data lakes is partially policy-based – restricting how data is used, who can access it and things like that – but policies don’t matter unless there is technology behind them to enforce the policies, and there is.”

IBM said it’s also taken a careful approach with API access. For example, IBM executives were asked during a press conference if the recent Facebook data exposure, which saw a third party extract unauthorized data via an API provided to it by Facebook, had caused Big Blue to rethink how it allows customers to access its A.I.-driven data. David Kenny, senior vice president of IBM Watson and Cloud Platforms, said it hasn’t changed IBM’s approach because Watson’s APIs provide access from applications to the A.I. service and not to the underlying data. Customers, he said, will possess their own data within their environments, but they are unable to reach into the A.I. services and touch other data.

 

CLOUD Act stirs tension between privacy advocates and big tech


Privacy advocates criticize Congress for passing the CLOUD Act as part of the omnibus spending bill, while big tech companies have expressed support for the controversial legislation.

Congress has come under fire for passing the controversial CLOUD Act as part of the omnibus spending bill rather than through the regular legislative process of review and open hearings.

The aim of the Clarifying Lawful Overseas Use of Data Act (CLOUD Act) is to allow the president to enter into agreements with foreign governments to facilitate information and data sharing across international borders in law enforcement investigations. The CLOUD Act has been controversial because, although it would put in place a standardized process for sharing communications data across borders, critics say it lacks oversight and infringes on user privacy.

The CLOUD Act (H.R. 4943) has had broad support from big tech companies. Apple, Facebook, Google, Microsoft and Oath all signed a letter in February urging Congress to pass the CLOUD Act, because they saw it as a way to “create a concrete path for the U.S. government to enter into modern bilateral agreements with other nations that better protect customers.”

“The legislation would require baseline privacy, human rights and rule of law standards in order for a country to enter into an agreement. That will ensure customers and data holders are protected by their own laws and that those laws are meaningful. The legislation would further allow law enforcement to investigate cross-border crime and terrorism in a way that avoids international legal conflicts,” the companies wrote in the letter. “The CLOUD Act encourages diplomatic dialogue, but also gives the technology sector two distinct statutory rights to protect consumers and resolve conflicts of law if they do arise. The legislation provides mechanisms to notify foreign governments when a legal request implicates their residents, and to initiate a direct legal challenge when necessary.”

However, privacy advocates — including the American Civil Liberties Union (ACLU), Electronic Frontier Foundation (EFF) and Amnesty International — have widely panned the CLOUD Act for lacking oversight and due process.

The EFF and ACLU have charged that the CLOUD Act is unconstitutional, that it would allow foreign governments to wiretap U.S. citizens without complying with U.S. law, and that it would give the president the power to enter into agreements without congressional approval.

Jennifer Daskal, associate professor of law at American University Washington College of Law, and Peter Swire, chair of law and ethics at the Georgia Tech Scheller College of Business, said this view of the CLOUD Act was not accurate. Daskal and Swire wrote on the Lawfare blog that the CLOUD Act would make the process of sharing data on non-U.S. citizens easier and would allow the “U.S. government to review what foreign governments do with data once it is turned over.”

“If the foreign government wants to request the data of a U.S. citizen or resident, it still needs to employ the Mutual Legal Assistance (MLA) system. The bill sets forth a long list of privacy and human rights criteria as to the contours of those requests,” Daskal and Swire wrote. They added that governments frustrated by the cumbersome MLA system will otherwise seek to sidestep it, and the likely result would be data localization laws forcing data to be stored within those countries.

No debate allowed

Privacy advocates were angered Thursday because Congress sidestepped the potential for debate on these issues by passing the CLOUD Act as a rider on page 2,212 of the 2,232-page omnibus spending bill — similar to what Congress did with the controversial Cybersecurity Information Sharing Act in 2015. The House of Representatives approved the spending bill with a 256-167 vote, and the Senate passed the bill with a 65-32 vote. President Donald Trump has already promised to sign the bill into law.

Sens. Ron Wyden (D-Ore.) and Rand Paul (R-Ky.) both spoke out against including the CLOUD Act in the omnibus bill prior to the vote and expressed unhappiness with the process following the vote on Thursday.

“Tucked away in the omnibus spending bill is a provision that allows Trump, and any future president, to share Americans’ private emails and other information with countries he personally likes,” Wyden wrote in a public statement. “It is legislative malpractice that Congress, without a minute of Senate debate, is rushing through the CLOUD Act on this must-pass spending bill.”

Paul expressed a similar concern on Twitter: “But guess what? Congress can’t vote to reject the CLOUD Act, because it just got stuck onto the Omnibus, with no prior legislative action or review.” (@RandPaul, https://twitter.com/rachelbovard/status/976644303870156800?s=21)

Too Many Organizations Don’t Have a Plan to Respond to Incidents


Originally seen on: Ponemon

When a cyberattack occurs, most organizations are unprepared and do not have a consistent incident response plan.

That’s the major takeaway from our third annual “Cyber Resilient Organization” study, conducted by the Ponemon Institute. The study revealed that 77 percent of respondents said their organizations still lack a formal cybersecurity incident response plan (CSIRP) that is applied consistently across the organization, a figure largely unchanged from the previous year’s study.


Incident Response Preparedness Lags Despite Growing Confidence in Cyber Resilience

Despite this, organizations reported feeling much more cyber-resilient than they did last year. Seventy-two percent said as much, which is a notable increase from just over half of respondents who said they felt more cyber-resilient the previous year.

Digging deeper into the data, however, that feeling may not be accurate. The following findings from the Ponemon study paint a different picture:

  • Fifty-seven percent of respondents said the time to resolve an incident has increased.
  • Only 29 percent reported having the ideal staffing level.
  • Just 31 percent reported having the proper budget for cyber resilience.
  • Lack of investment in important tools such as artificial intelligence (AI) and machine learning was ranked as the biggest barrier to cyber resilience.

Investing in Incident Response to Improve Cyber Resilience

It’s imperative that organizations address these challenges in 2018. Cyberattacks can carry large associated costs, as WannaCry and NotPetya demonstrated, and the General Data Protection Regulation (GDPR) is quickly approaching. Not only do organizations lack a consistent incident response plan — a GDPR requirement — but most reported low levels of confidence in complying with GDPR.

Figure (Security Intelligence/Ponemon study): Reasons for improved cyber resilience.

Based on the findings of the Ponemon report, organizations can improve their cyber resilience by arming employees with the most modern tools available to aid their work, such as AI and machine learning. Implementing a strategy that orchestrates human intelligence with these tools can help organizations create effective incident response plans.

To learn more about the full results of the Ponemon report, download “The Third Annual Study on the Cyber Resilient Organization” and sign up for our March 27 webinar: “Growing Your Organization’s Cyber Resilience in 2018.”


Phishing


Originally seen on: TechTarget, by Margaret Rouse

Phishing is a form of fraud in which an attacker masquerades as a reputable entity or person in email or other communication channels. The attacker uses phishing emails to distribute malicious links or attachments that can perform a variety of functions, including the extraction of login credentials or account information from victims.

Phishing is popular with cybercriminals because it is far easier to trick someone into clicking a malicious link in a seemingly legitimate phishing email than it is to break through a computer’s defenses.

How phishing works

Phishing attacks typically rely on social engineering techniques applied to email or other electronic communication methods, including direct messages sent over social networks, SMS text messages and other instant messaging modes.

Phishers may use social engineering techniques along with public sources of information, including social networks like LinkedIn, Facebook and Twitter, to gather background information about the victim’s personal and work history, interests and activities.

Pre-phishing attack reconnaissance can uncover names, job titles and email addresses of potential victims, as well as information about their colleagues and the names of key employees in their organizations. This information can then be used to craft a believable email. Targeted attacks, including those carried out by advanced persistent threat (APT) groups, typically begin with a phishing email containing a malicious link or attachment.

Figure: Beware suspicious emails phishing for sensitive information.

Although many phishing emails are poorly written and clearly fake, cybercriminal groups increasingly use the same techniques professional marketers use to identify the most effective types of messages — the phishing hooks that get the highest open or click-through rate and the Facebook posts that generate the most likes. Phishing campaigns are often built around major events, holidays and anniversaries, or take advantage of breaking news stories, both true and fictitious.

Typically, a victim receives a message that appears to have been sent by a known contact or organization. The attack is carried out either through a malicious file attachment that contains phishing software, or through links connecting to malicious websites. In either case, the objective is to install malware on the user’s device or direct the victim to a malicious website set up to trick them into divulging personal and financial information, such as passwords, account IDs or credit card details.

Successful phishing messages, usually represented as being from a well-known company, are difficult to distinguish from authentic messages: a phishing email can include corporate logos and other identifying graphics and data collected from the company being misrepresented. Malicious links within phishing messages are usually also designed to make it appear as though they go to the spoofed organization. Subdomains and misspelled URLs (typosquatting) are common tricks, as are other link manipulation techniques.

Types of phishing

As defenders continue to educate their users in phishing defense and deploy anti-phishing strategies, cybercriminals continue to hone their skills at existing phishing attacks and roll out new types of phishing scams. Some of the more common types of phishing attacks include the following:

Spear phishing attacks are directed at specific individuals or companies, usually using information specific to the victim that has been gathered to make the message appear more authentic. Spear phishing emails might include references to coworkers or executives at the victim’s organization, as well as the use of the victim’s name, location or other personal information.

Whaling attacks are a type of spear phishing attack that specifically targets senior executives within an organization, often with the objective of stealing large sums. Those preparing a spear phishing campaign research their victims in detail to create a more genuine message, as using information relevant or specific to a target increases the chances of the attack being successful.

A typical whaling attack targets an employee with the ability to authorize payments, with the phishing message appearing to be a command from an executive to authorize a large payment to a vendor when, in fact, the payment would be made to the attackers.

Pharming is a type of phishing that depends on DNS cache poisoning to redirect users from a legitimate site to a fraudulent one, tricking them into entering their login credentials on the fraudulent site.

Clone phishing attacks use previously delivered but legitimate emails that contain either a link or an attachment. Attackers make a copy — or clone — of the legitimate email, replacing one or more links or attached files with malicious links or malware attachments. Because the message appears to be a duplicate of the original, legitimate email, victims can often be tricked into clicking the malicious link or opening the malicious attachment.

This technique is often used by attackers who have taken control of another victim’s system. In this case, the attackers leverage their control of one system to pivot within an organization using email messages from a trusted sender known to the victims.

Phishers sometimes use the evil twin Wi-Fi attack by standing up a Wi-Fi access point and advertising it with a deceptive name that is similar to a legitimate access point. When victims connect to the evil twin Wi-Fi network, the attackers gain access to all the transmissions sent to or from victim devices, including user IDs and passwords. Attackers can also use this vector to target victim devices with their own fraudulent prompts for system credentials that appear to originate from legitimate systems.

Voice phishing, also known as vishing, is a form of phishing that occurs over voice communications media, including voice over IP (VoIP) or POTS (plain old telephone service). A typical vishing scam uses speech synthesis software to leave voicemails purporting to notify the victim of suspicious activity in a bank or credit account, and urges the victim to call a phone number controlled by the attacker to verify their identity, thus compromising the victim’s account credentials.

Another mobile device-oriented phishing attack, SMS phishing — also sometimes called SMishing or SMShing — uses text messaging to convince victims to disclose account credentials or to install malware.

Phishing techniques

Phishing attacks depend on more than simply sending an email to victims and hoping that they click on a malicious link or open a malicious attachment. Some phishing scams use JavaScript to place a picture of a legitimate URL over a browser’s address bar. The URL revealed by hovering over an embedded link can also be changed by using JavaScript.

For most phishing attacks, whether carried out by email or some other medium, the objective is to get the victim to follow a link that appears to go to a legitimate web resource, but that actually takes the victim to a malicious web resource.

Phishing campaigns generally rely on one or more link manipulation techniques, which go by many different names, to trick victims into clicking. Link manipulation is also often referred to as URL hiding; it is present in many common types of phishing and is used in different ways depending on the attacker and the target.

The simplest approach to link manipulation is to create a malicious URL that is displayed as if it were linking to a legitimate site or webpage, but to have the actual link point to a malicious web resource. Users knowledgeable enough to hover over the link to see where it goes can avoid accessing malicious pages.
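
As a rough, hypothetical illustration of that check (not from the original article), the sketch below scans HTML for anchors whose visible text names one host while the underlying link points to another; the class, helper and example link are all made up for demonstration:

    # Hypothetical sketch: flag anchors whose visible text looks like one
    # host while the underlying href points to a different one.
    from html.parser import HTMLParser
    from urllib.parse import urlparse

    def strip_www(host):
        return host[4:] if host and host.startswith("www.") else host

    class LinkAuditor(HTMLParser):
        def __init__(self):
            super().__init__()
            self._href = None
            self._text = []
            self.suspicious = []  # (visible text, actual href) pairs

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                self._href = dict(attrs).get("href", "")
                self._text = []

        def handle_data(self, data):
            if self._href is not None:
                self._text.append(data)

        def handle_endtag(self, tag):
            if tag == "a" and self._href is not None:
                text = "".join(self._text).strip()
                # Only compare when the visible text itself looks like a URL.
                if "." in text and " " not in text:
                    shown = urlparse(text if "//" in text else "//" + text).hostname
                    actual = urlparse(self._href).hostname
                    if shown and actual and strip_www(shown) != strip_www(actual):
                        self.suspicious.append((text, self._href))
                self._href = None

    auditor = LinkAuditor()
    auditor.feed('<a href="http://evil.example.net/login">www.mybank.com</a>')
    print(auditor.suspicious)  # [('www.mybank.com', 'http://evil.example.net/login')]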

Another phishing tactic is to use link shortening services like Bitly to hide the link destination. Victims have no way of knowing whether the shortened URLs point to legitimate web resources or to malicious resources.
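
One hedged way to inspect such links (not part of the original definition) is to expand them by following their redirects with a HEAD request, which avoids downloading the page but still contacts the server, and then looking at where they finally land; the short link in the comment is invented:

    # Hypothetical sketch: follow a shortened link's redirects and report
    # the final destination URL.
    import urllib.request

    def expand(short_url, timeout=5):
        req = urllib.request.Request(short_url, method="HEAD")
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.geturl()  # urllib follows redirects; geturl() is the final URL

    # print(expand("https://bit.ly/3xample"))  # would print the link's real destination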

Homograph spoofing depends on URLs created with characters chosen to read exactly like a trusted domain. For example, attackers may register domains that use character sets that display closely enough to resemble established, well-known domains. Early examples of homograph spoofing include the use of the numerals 0 or 1 to replace the letters O or l.

For example, attackers might attempt to spoof the microsoft.com domain with m!crosoft.com, replacing the letter i with an exclamation mark. Malicious domains may also replace Latin characters with Cyrillic, Greek or other character sets that display similarly.
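
A hedged sketch of how such lookalikes can be caught: internationalized domain names are transmitted in an ASCII “punycode” form, so a domain containing non-Latin characters encodes to an “xn--” label that a filter can flag. The Cyrillic example and function name below are purely illustrative:

    # Hypothetical sketch: a lookalike domain containing non-ASCII characters
    # encodes to an "xn--" (punycode) label, which a filter can flag.
    import unicodedata

    def audit_domain(domain):
        ascii_form = domain.encode("idna").decode("ascii")  # IDNA/punycode form
        scripts = {unicodedata.name(ch).split()[0] for ch in domain if ch.isalpha()}
        return {
            "domain": domain,
            "punycode": ascii_form,
            "contains_non_ascii": "xn--" in ascii_form,
            "scripts": scripts,
        }

    # The first letter below is Cyrillic U+0430, not the Latin letter "a".
    print(audit_domain("аpple.com"))  # punycode form contains "xn--", flagging the lookalike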

One way attackers bypass phishing defenses is through the use of filter evasion techniques. For example, most phishing defenses scan emails for particular phrases or terms common in phishing emails — but by rendering all or part of the message as a graphical image, attackers can sometimes deliver their phishing emails.
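
As a rough, assumption-laden sketch of one countermeasure (not from the original article), a filter can flag messages that carry images but almost no machine-readable text, since text rendered as a picture defeats phrase matching; the function name and threshold are invented:

    # Hypothetical sketch: flag messages whose parts include images but very
    # little extractable text (a common sign of image-based filter evasion).
    import re
    from email import message_from_string

    def looks_image_only(raw_message, min_text_chars=200):
        msg = message_from_string(raw_message)
        text_len, image_count = 0, 0
        for part in msg.walk():
            ctype = part.get_content_type()
            if ctype == "text/html":
                html = part.get_payload(decode=True).decode(errors="replace")
                image_count += len(re.findall(r"<img\b", html, re.I))
                text_len += len(re.sub(r"<[^>]+>", " ", html).strip())
            elif ctype == "text/plain":
                text_len += len(part.get_payload(decode=True).decode(errors="replace").strip())
            elif ctype.startswith("image/"):
                image_count += 1
        return image_count > 0 and text_len < min_text_chars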

Another phishing tactic relies on a covert redirect, where an open redirect vulnerability fails to check that a redirected URL is pointing to a trusted resource. In that case, the redirected URL is an intermediate, malicious page which solicits authentication information from the victim before forwarding the victim’s browser to the legitimate site.
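
The usual mitigation on the website’s side is to validate redirect targets before honoring them; a minimal sketch, assuming a simple allowlist (the hostnames and helper name are illustrative, not any particular framework’s API):

    # Hypothetical sketch: honor only relative paths or explicitly trusted
    # hosts as redirect targets, so the parameter can't send users elsewhere.
    from urllib.parse import urlparse

    ALLOWED_REDIRECT_HOSTS = {"www.example.com", "login.example.com"}

    def safe_redirect_target(target, default="/"):
        parsed = urlparse(target)
        if not parsed.scheme and not parsed.netloc:
            return target  # relative path on our own site
        if parsed.scheme in ("http", "https") and parsed.hostname in ALLOWED_REDIRECT_HOSTS:
            return target  # absolute URL, but to a host we trust
        return default     # anything else falls back to the home page

    print(safe_redirect_target("/account"))                        # /account
    print(safe_redirect_target("https://evil.example.net/login"))  # /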

How to prevent phishing

Phishing defense begins with educating users to identify phishing messages, but there are other tactics that can cut down on successful attacks.

A gateway email filter can trap many mass-targeted phishing emails and reduce the number of phishing emails that reach users’ inboxes.

Enterprise mail servers should make use of at least one email authentication standard to verify inbound email. These include the Sender Policy Framework (SPF) protocol, which can help reduce unsolicited email (spam); the DomainKeys Identified Mail (DKIM) protocol, which lets receiving servers verify that a message was cryptographically signed by the sending domain; and the Domain-based Message Authentication, Reporting and Conformance (DMARC) protocol, which builds on SPF and DKIM and provides a framework for using those protocols to block unsolicited email — including phishing email — more effectively.
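
A domain’s SPF and DMARC policies are published as DNS TXT records, so checking what a sending domain declares is straightforward. A minimal sketch, assuming the third-party dnspython package is installed; the domain is illustrative, and DKIM is omitted because its selector names vary per sender:

    # Hypothetical sketch (requires the third-party dnspython package):
    # fetch a domain's published SPF and DMARC policies from DNS TXT records.
    import dns.resolver

    def txt_records(name):
        try:
            answers = dns.resolver.resolve(name, "TXT")
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            return []
        return [b"".join(r.strings).decode() for r in answers]

    def mail_auth_policies(domain):
        return {
            "spf": [r for r in txt_records(domain) if r.startswith("v=spf1")],
            "dmarc": [r for r in txt_records("_dmarc." + domain) if r.startswith("v=DMARC1")],
        }

    print(mail_auth_policies("example.com"))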

A web security gateway can also provide another layer of defense by preventing users from reaching the target of a malicious link. They work by checking requested URLs against a constantly updated database of sites suspected of distributing malware.
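
In practice that check reduces to comparing the requested host against the blocklist before the request is allowed through; a small sketch with made-up hostnames (real gateways pull the list from a constantly updated feed):

    # Hypothetical sketch: block requests to hosts (or their subdomains) that
    # appear on a malware/phishing blocklist.
    from urllib.parse import urlparse

    KNOWN_BAD_HOSTS = {"malware-download.example.net", "phish-landing.example.org"}

    def allow_request(url):
        host = (urlparse(url).hostname or "").lower()
        return not any(host == bad or host.endswith("." + bad) for bad in KNOWN_BAD_HOSTS)

    print(allow_request("http://phish-landing.example.org/login"))  # False
    print(allow_request("https://www.example.com/"))                # True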

There are several resources on the internet that provide help in combating phishing. The Anti-Phishing Working Group Inc. and the federal government’s OnGuardOnline.gov website both provide advice on how to spot, avoid and report phishing attacks. Interactive security awareness training aids, such as Wombat Security Technologies’ Anti-Phishing Training Suite or PhishMe, can help teach employees how to avoid phishing traps, while sites like FraudWatch International and MillerSmiles publish the latest phishing email subject lines circulating on the internet.

How phishing got its name

The history of the term phishing is not entirely clear.

One common explanation for the term is that phishing is a homophone of fishing, and is so named because phishing scams use lures to catch unsuspecting victims, or fish.

Another explanation for the origin of phishing comes from a string — <>< — which is often found in AOL chat logs because those characters were a common HTML tag found in chat transcripts. Because it occurred so frequently in those logs, AOL admins could not productively search for it as a marker of potentially improper activity. Black hat hackers, the story goes, would replace any reference to illegal activity — including credit card or account credentials theft — with the string, which eventually gave the activity its name because the characters appear to be a simple rendering of a fish.