Cyber Security Intelligence


August Newsletter #4 2014

Google search ‘predicts’ the next Financial Crisis

Predicting the rises and falls of the stock market may have become a whole lot easier: A new study suggests that publicly available data from Google Trends, a tool that tracks terms people plug into the search engine, can be used to forecast changes in stock prices.

The study found that Google users tend to increase their searches for certain keywords in the weeks preceding a fall in the stock market.

Researchers at Boston University and the University of Warwick, in the United Kingdom, grouped the popularly searched keywords on Google into topics. They then used Google Trends to compare the search volume for these topics between 2004 and 2012 to fluctuations in the Standard & Poor's 500 Index (S&P 500), the stock market index for the 500 largest U.S.-based companies.

They found that, historically, Google users search more for topics related to business and politics in the weeks preceding a fall in the stock market. Searches related to other topics, such as music or weather, weren't found to have any significant connection to changes in stock prices.

Previously, the researchers had looked at how the volume of Google searches for finance-related terms "debt" or "bank," for instance, might be related to fluctuations in the stock market. They found that an increase in the volume of these types of searches could be used to predict a fall in stock prices.

In their new study, the researchers took a broader look at what people might be searching for in the weeks before a downward turn in the market. The researchers analyzed 100 topics, to see which ones correlated with changes in stock prices. They found that only business and political topics had any significant correlation to the market.
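The flavour of this analysis can be sketched in a few lines of code; the weekly figures below are made up for illustration, not the study's actual data or method:

```python
# Hypothetical weekly series: topic search volume and index closing level.
search_volume = [50, 52, 60, 75, 80, 70, 65, 90, 95, 85]
index_close = [1500, 1510, 1495, 1460, 1420, 1430, 1445, 1400, 1360, 1370]

# Week-over-week change in search volume, aligned with the index's
# return over the FOLLOWING week.
volume_change = [search_volume[i] - search_volume[i - 1]
                 for i in range(1, len(search_volume) - 1)]
next_week_return = [(index_close[i + 1] - index_close[i]) / index_close[i]
                    for i in range(1, len(index_close) - 1)]

# Plain Pearson correlation, computed by hand to stay dependency-free.
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# A negative value means rising searches tend to precede falling prices,
# the pattern the study reports for business- and politics-related topics.
corr = pearson(volume_change, next_week_return)
print(f"correlation: {corr:+.2f}")
```

The researchers' actual procedure was more involved (grouping keywords into topics and testing many topics against the S&P 500 over 2004–2012), but the core idea is this kind of lead–lag correlation.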

"Increases in searches relating to both politics and business could be a sign of concern about the state of the economy, which may lead to decreased confidence in the value of stocks, resulting in transactions at lower prices," said Suzy Moat, an assistant professor of behavioral science at Warwick Business School and co-author of the study.

Financial crises, such as the one that affected markets worldwide in 2007 and 2008, can arise in part from the interplay of decisions made by many individuals. But to understand this interplay, or "collective decision-making," it's helpful for researchers to first examine the information that drives decision-making.

The new study was published July 28 in the journal Proceedings of the National Academy of Sciences.

Gartner: ‘Cognizant Computing’ to Become a Force in Consumer IT

“Cognizant computing,” the next phase of the personal cloud movement, will become one of the strongest forces in consumer-focused IT, according to a new report from Gartner Inc. It will have an immense impact on mobile devices and apps, wearables, networking, services and cloud providers, the firm says.

Cognizant computing is a consumer experience in which data associated with individuals is used to develop services and activities according to simple rules, Gartner says. The services include alarms, bill payments, managing and monitoring health and fitness, and context-specific ads. Cognizant systems will provide services across multiple devices.

The practical application of cognizant computing helps business-to-consumer companies acquire deep insights into consumers' preferences and daily lives, Gartner says, and this will help create better, more-personalized services and offers.

"Cognizant computing is transforming personal clouds into highly intelligent collections of mobile apps and services," Jessica Ekholm, research director at Gartner, said in a statement. "Business-to-consumer providers must adapt their strategies to exploit this change to generate new revenue, find new ways to differentiate themselves and foster loyalty via mobile apps."

The firm predicts that cognizant computing will put the importance of applications, services and cloud at the forefront, making it one of the most important components of any customer retention strategy for B2C companies over the coming five years.

"Cognizant computing is already beginning to take shape via many mobile apps, smartphones and wearable devices that collect and sync information about users, their whereabouts and their social graph," Ekholm said. "Over the next two to five years, the Internet of Things and big data will converge with analytics. Hence, more data will make systems smarter."

Cyber War: The Next Threat to National Security and What to Do About It

Richard A. Clarke warned America once before about the havoc terrorism would wreak on our national security — and he was right. Now he warns us of another threat, silent but equally dangerous. Cyber War is a powerful book about technology, government, and military strategy, about criminals, spies, soldiers, and hackers. This is the first book about the war of the future — cyber war — and a convincing argument that we may already be in peril of losing it.

Cyber War goes behind the “geek talk” of hackers and computer scientists to explain clearly and convincingly what cyber war is, how cyber weapons work, and how vulnerable we are as a nation and as individuals to the vast and looming web of cyber criminals. From the first cyber crisis meeting in the White House a decade ago to the boardrooms of Silicon Valley and the electrical tunnels under Manhattan, Clarke and coauthor Robert K. Knake trace the rise of the cyber age and profile the unlikely characters and places at the epicenter of the battlefield. They recount the foreign cyber spies who hacked into the office of the Secretary of Defense, the control systems for U.S. electric power grids, and the plans to protect America’s latest fighter aircraft.

Economically and militarily, Clarke and Knake argue, what we’ve already lost in the new millennium’s cyber battles is tantamount to the Soviet and Chinese theft of our nuclear bomb secrets in the 1940s and 1950s. The possibilities of what we stand to lose in an all-out cyber war — our individual and national security among them — are just as chilling. Powerful and convincing, Cyber War begins the critical debate about the next great threat to national security.

Want to Be the Next Intelligence Whistleblower?

Imagine a CIA agent who witnessed behavior that violated the Constitution, the law, and core human-rights protections, like torturing a prisoner. What would we have her do? Government officials say that there are internal channels in place to protect whistleblowers and that intelligence employees with security clearances have a moral obligation to refrain from airing complaints publicly via the press.

In contrast, whistleblowers like Daniel Ellsberg, Chelsea Manning and Edward Snowden—as well as journalistic entities like the Washington Post, The Guardian, and the New York Times—believe that questionable behavior by intelligence agencies should sometimes be exposed, even when classified, partly because internal whistleblower channels are demonstrably inadequate.

Reasonable people can disagree about whether a particular leak advances the public interest. There is always a moral obligation to keep some secrets (e.g., the names of undercover agents in North Korea). But if official channels afford little protection to those who report serious wrongdoing, or fail to remedy egregiously unlawful behavior, the case for working within the system rather than going to the press falls apart.

As Hilary Bok, writing as Hilzoy, pointed out in 2008, while defending a Bush-era whistleblower:
It is generally better for all concerned if whistle-blowers operate within the system, and it is dangerous when people freelance. But there’s one big exception to this rule: when the system has itself been corrupted. When you’re operating within a system in which whistle-blowers’ concerns are not addressed—where the likelihood that any complaint you make within the system will be addressed is near zero, while the likelihood that you will be targeted for reprisals is high—then no sane person who is motivated by a desire to have his or her concern addressed will work within that system. That means that if … you want whistleblowers to work within the system, you need to ensure that that system actually works.

Today, there is no credible argument that internal channels offer adequate protection to whistleblowers or remedy most serious misdeeds. U.S. officials claim otherwise. They know that no American system of official secrets can be legitimate if it serves to hide behavior that violates the Constitution, the law, or the inalienable rights articulated in the Declaration of Independence. 

That they defend the status quo without being laughed out of public life is a testament to public ignorance. Most Americans haven’t read the stories of Jesselyn Radack, Thomas Drake, or John Kiriakou; they’re unaware of the Espionage Act’s sordid history and its unprecedented use by the Obama administration; they don’t realize the scale of law-breaking under President Bush, or that President Obama’s failure to prosecute an official torture program actually violates the law; and they’re informed by a press that treats officials who get caught lying and misleading (e.g., James Clapper and Keith Alexander) as if they’re credible.

Still, every month, more evidence of the national-security state’s legitimacy problem comes to light. McClatchy Newspapers reports on the latest illustration that whistleblowers have woefully inadequate protection under current policy and practice:

The CIA obtained a confidential email to Congress about alleged whistleblower retaliation related to the Senate’s classified report on the agency’s harsh interrogation program, triggering fears that the CIA has been intercepting the communications of officials who handle whistleblower cases. The CIA got hold of the legally protected email and other unspecified communications between whistleblower officials and lawmakers this spring, people familiar with the matter told McClatchy. It’s unclear how the agency obtained the material.

At the time, the CIA was embroiled in a furious behind-the-scenes battle with the Senate Intelligence Committee over the panel’s investigation of the agency’s interrogation program, including accusations that the CIA illegally monitored computers used in the five-year probe. The CIA has denied the charges. The email controversy points to holes in the intelligence community’s whistleblower protection systems and raises fresh questions about the extent to which intelligence agencies can elude congressional oversight. The email related to allegations that the agency’s inspector general, David Buckley, failed to properly investigate CIA retaliation against an agency official who cooperated in the committee’s probe, said the knowledgeable people, who asked not to be further identified because of the sensitivity of the matter.

Today’s CIA employees have watched colleagues who tortured get away with their crimes. They’ve watched Kiriakou go to jail after objecting to torture. Now, in the unlikely event that they weren’t previously aware of it, they’ve been put on notice that even if they engage in whistleblowing through internal channels, during a Senate investigation into past illegal behavior by the CIA, the protections theoretically owed them are little more than an illusion. Some in Congress have expressed understandable concern. Director of National Intelligence Clapper responded in a letter stating, “the Inspector General of the Intelligence Community … is currently examining the potential for internal controls that would ensure whistleblower-related communications remain confidential.”

In other words, there aren’t adequate safeguards right now. This is partly because not enough legislators care about, or even know enough to understand, the problem. And it is partly because the problem starts right at the top, with Obama and his predecessor. As Marcy Wheeler persuasively argues, the CIA gains significant leverage over the executive branch every time the two break the law together.

Wheeler adds, “This is, I imagine, how Presidential Findings are supposed to work: by implicating both parties in outright crimes, it builds mutual complicity. And Obama’s claimed opposition to torture doesn’t offer him an out, because within days of his inauguration, CIA was killing civilians in Presidentially authorized drone strikes that clearly violate international law.” Obama is similarly implicated in spying that violates the Fourth Amendment. When the president himself endorses illegal behavior, when there is no penalty for blatantly lying to Congress about that behavior, how can internal channels prompt reform? 

The public airing of classified information over national-security state objections has been indispensable in bygone instances like the Pentagon Papers, the Church Committee report (back when Congress was doing its job), and the heroic burglary of the COINTELPRO documents. I believe history will judge Manning and Snowden as wrongly persecuted patriots like Ellsberg. The notion that they should’ve raised their concerns internally won’t be taken seriously, because a dispassionate look at the evidence points to a single conclusion: The United States neither adequately protects whistleblowers nor keeps law-breaking national-security agencies accountable through internal channels. The next time a leak occurs, the national-security state’s defenders should blame themselves for failing to bring about a system that can adequately police itself. If their historical and recent track record weren’t so dismal they’d have a better case to make.

The Deep Web and TOR

The Deep Web and Tor have gained a lot of attention since the Edward Snowden revelations and the closure of the notorious narcotics trading site The Silk Road.

The general term “Deep Web” refers to any part of the Internet not indexed by standard search engines; Google in fact indexes less than 1 percent of what is online. Not all of the Deep Web is narcotics sales and assassination squads (think of Facebook databases and university libraries), but it is the illicit part that gets the most attention.

Tor (previously the acronym for The Onion Router) is an incredible piece of software that gives your computer access to an encrypted proxy-server network, allowing you to stay anonymous online. There are other encrypted Deep Web networks like i2p, but Tor is by far the largest.
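The "onion" idea behind the name can be illustrated with a toy sketch: the client wraps a message in one layer of encryption per relay, and each relay peels off exactly one layer. The XOR cipher below is purely a stand-in; real Tor negotiates per-hop keys via Diffie-Hellman and uses proper symmetric ciphers over TLS.

```python
# Toy XOR "cipher" stands in for real encryption; illustration only.
def xor(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

relay_keys = [b"entry-key", b"middle-key", b"exit-key"]  # one key per hop

# The client wraps the message innermost-layer-first, so the entry
# relay's layer sits on the outside and is peeled off first.
message = b"hello, hidden service"
onion = message
for key in reversed(relay_keys):
    onion = xor(onion, key)

# Each relay peels exactly one layer. No single relay learns both who
# is sending and what is sent to whom; only the exit sees plaintext.
for key in relay_keys:
    onion = xor(onion, key)

assert onion == message
```

The design consequence is the one that matters for anonymity: the entry relay knows your address but not your destination, the exit knows the destination but not you.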

Tor has a fascinating history: it was created by the US Naval Research Laboratory, which quickly realised that to make it workable and truly anonymous it had to release the source code to the world. This allowed independent cryptographic experts to determine there were no back doors, and gave journalists and terrorists alike an incredibly useful tool in the process. Subsequently the capability was developed to host encrypted servers and websites hidden inside the Tor network, making them extremely difficult to close down.

Around 80 percent of the funding for Tor comes from the US government. This is surprising, considering Tor is the NSA's biggest headache and the agency has been working to crack it for some time.

This clever technology, however, is let down by the stupid, fleshy bit-of-human at the other end. Although the technology allows complete anonymity, it is surprising how often users are identifiable on the Deep Web. The alleged owner of the Silk Road, Ross Ulbricht, knows this all too well as he now contemplates the foreseeable future in jail.

There has been much debate about whether the Deep Web and Tor are a good or a bad thing; I would argue that they are both and neither. Just as a car can be used for good or bad purposes, it is the human using the technology who actually breaks the law.

I am not an Internet libertarian who believes in complete freedom to do anything online. I think the Deep Web should be policed, and when someone is breaking the law they should be tried in court and convicted if guilty.

When I say policed, I want to be specific. This is not the NSA and GCHQ programme of monitoring anything and everything, capturing vast swathes of pointless data; you don’t find a needle in a haystack by adding more hay. What I mean by policing is targeted, directed operations against individuals breaking the law, in order to secure a conviction in court. You can’t blame the bad guys for using the best tools at their disposal; criminals move with technology, and so should the police. The fact is we are very far from policing the Deep Web in any meaningful way.

The Deep Web is called the “Internet Wild West”, but only because there are no police officers there. Rather than trying to blame a clever piece of technology (which does a lot of good keeping journalists and human rights campaigners safe), the police should recognise that it is their own failings that make it the “wild west”, and allocate the funding and tools needed to properly investigate the Deep Web.

Max Vetter

More Denial-of-Service (DDoS) Attacks, and They're Getting Bigger

Distributed denial of service attacks have grown larger in scale, more sophisticated and harder to detect, according to three large technology vendors that have recently published analyses of attacks.

DDoS attacks, malicious streams of traffic that can take down a website and cause reputational and other damage to a company, were big news during the fall of 2012. A group called Izz ad-Din al-Qassam Cyber Fighters carried out a series of these attacks on U.S. banks' websites. Such exploits have not made the news much lately because few have been successful enough to bring down a bank's website for a noteworthy length of time.

This is partly because banks have invested in better DDoS mitigation technology and services, observers say. Another factor is that banks are being targeted less frequently, now accounting for only about 10% of incidents. Gaming, technology and media companies have become more popular targets.

But attacks are still being launched against banks and other companies, and with greater force than ever, according to large information security providers such as Prolexic (which is now owned by Akamai), Verizon and Verisign. The three companies recently issued reports that shed light on the changing nature of DDoS attacks.

Close to 90% of the DDoS attacks conducted during the first half of 2014 were volumetric attacks, according to Rod Soto, senior security researcher at Akamai PLXsert. In other words, they sent high amounts of traffic to a website to overwhelm it and the company's network, so the site wouldn't work and the company couldn't serve its customers. (Eighty of the top 100 U.S. banks use his company's service, Soto said.)

One pattern Soto has observed in DDoS attacks on financial institutions is that they usually start at 9:00 a.m. EST and finish about 5:00 p.m. EST.

"Why? Because this will cause the most disruption possible and the media will pick it up," Soto said. "The effects of a successful DDoS campaign are amplified by the use and manipulation of the media." (By contrast, attacks on casinos tend to occur in the late afternoon or at night, Soto said; they are likely carried out by rival casinos that want to keep customers away from competitors.)

Typically, customers of a bank under attack will complain over social media that they can't access their bank's website.

"Attackers will purposely watch social media for signals that their target is failing," Soto said. "Then they will try to underline that. They will retweet it. Once the media picks up on that, it amplifies the perception of the actual attack."

Most DDoS attacks are measured by the number of gigabits of data hurled at a target each second (Gbps). In the first quarter, Verisign observed an 83% increase in DDoS attack size over the previous quarter and a 6% increase from a year earlier, to 3.92 Gbps. The Akamai/Prolexic study found peak bandwidth bombardment of 7.76 Gbps in the second quarter, down from 9.7 Gbps in the first quarter but close to double the average peak in the second quarter of 2013, 4.5 Gbps.
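A rough sanity check on these figures, assuming (as these vendor reports conventionally do) that the unit is gigabits per second:

```python
# Back-of-envelope checks on the reported figures, all in Gbps.
verisign_q1_avg = 3.92               # after an 83% quarter-on-quarter rise
implied_prev_quarter = verisign_q1_avg / 1.83
print(f"implied previous-quarter average: {implied_prev_quarter:.2f} Gbps")

akamai_q2_peak = 7.76
akamai_q2_2013_peak = 4.5            # average peak a year earlier
print(f"year-on-year peak growth: {akamai_q2_peak / akamai_q2_2013_peak:.2f}x")

# Converting line rate to data moved: 1 gigabit = 1e9 bits = 1/8 GB.
bytes_per_second = akamai_q2_peak * 1e9 / 8
print(f"{akamai_q2_peak} Gbps is roughly {bytes_per_second / 1e9:.2f} GB of data per second")
```

The 7.76/4.5 ratio of about 1.7x is what the report rounds to "close to double".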

"Over the last five years, attacks have increased in size, not only in the size of the packets but also the packets per second," said Christopher Porter, managing principal of the Verizon Cyber Intelligence Center.

At the same time, the duration of the typical attack has shortened, researchers say. According to the Akamai/Prolexic report, the average attack lasts 17 hours.

One reason for the increased size of these attacks is the use of "amplification" techniques.
In an amplification attack, an attacker sends multiple servers a communication that appears to come from the victim's IP address, and the response back is larger, sometimes thousands of times larger, than the original message.
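The arithmetic behind amplification is simple but devastating. The request and response sizes below are illustrative only; real amplification factors vary widely by protocol and server configuration (open DNS resolvers are commonly cited at a few tens of times, NTP "monlist" far higher):

```python
# Illustrative (request_bytes, response_bytes) pairs; not measured values.
protocols = {
    "DNS (ANY query)": (64, 3_000),
    "NTP (monlist)":   (8, 4_000),
}

for name, (request_bytes, response_bytes) in protocols.items():
    factor = response_bytes / request_bytes
    print(f"{name}: ~{factor:.0f}x amplification")

# Because the tiny requests carry the victim's spoofed source address,
# the attacker's own bandwidth is multiplied by the amplification factor.
ntp_factor = 4_000 / 8
print(f"1 Gbps of spoofed queries -> ~{ntp_factor:.0f} Gbps arriving at the victim")
```

This is why reflection attacks favour UDP services: the server answers whatever address is stamped on the packet, and the attacker never has to carry the response traffic itself.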

"It causes all sorts of havoc, especially as it converges down to the intended victim," Porter said. "It is coming from several different service providers, and as it gets closer to the intended victim, the sizes of those attacks get to be large. So there's usually a lot of collateral damage in those types of attacks. It's not just the victim that gets hit, because the closest gateway router to them may have 100 customers sitting on it, and that whole router could get overwhelmed."

These types of attacks are not new. But researchers found more of them in the first half of this year than in previous years. Attackers also recently began manipulating Network Time Protocol servers, which are used to synchronize computers in a network, where previously they mainly used domain name servers.

"If you do that for a lot of open NTP servers out there, you can create some havoc," Porter said. Some organizations have begun scanning for open NTP servers and working with the servers' owners to change their configurations so they're not vulnerable to this type of attack.

DDoS perpetrators have also taken advantage of cloud-based services, such as the WordPress content management system, to improve the effectiveness of their attacks. Bloggers who use such services aren’t always conscientious about upgrading their software. Attackers know this and take advantage by infecting the users’ computers and making them part of their botnets — networks of compromised computers used to launch attacks.

And attackers continue to incorporate more powerful computers in their botnets.
"Several years ago, DDoS attacks were mostly botnets on desktops, and those desktops had limited bandwidth because they were using DSL lines or limited cable modems," Porter said. But a new breed of game-changing "brobots" harness compromised web servers sitting in data centers that have massive amounts of bandwidth and computing power.

As consumers get higher bandwidth Internet service at home, the potency of botnets using home computers will increase, Porter said.

DDoS attackers show increasingly adaptive behavior. They continuously monitor the effectiveness of their attacks while underway, and then change attack techniques to work around applied mitigation strategies.

"The attackers research the possible defenses the target has, and based on that they will craft their payload," Soto said.

Attackers have resources and skills that are not available to low-level criminals, he said. "There are indications, at least during the very large campaigns against financial institutions, that nation-states are behind them."

In the DDoS underground, the state-backed hackers who took down many banks' websites in the fall of 2012 during Operation Ababil were considered a great success, Soto said. This was largely because of the media attention they received, as well as the fact that they were able to announce a target and then take it down as promised. Other states have tried to imitate this behavior, Soto said.

"If you look at the Department of Homeland Security's 16 critical infrastructure sectors, financial services is one of them," Soto pointed out. "If you're able to exert enough damage, where people cannot do their operations, that's pretty bad, that could have a crippling effect and cause panic as well."

And the brobot used in the Al Qassam Cyber Fighters' Operation Ababil, once thought to be defeated, "has been surreptitiously maintained, in some cases by changing the names and locations of attack files on the hosts," the Akamai/Prolexic report stated. It's been used in two attacks this year.

Cloud Computing: How It Works And Why Businesses Are Into It

The Internet proves to be more than just a vast reservoir of information – yet again – by offering a solution that makes business operations more efficient than ever.

Enter cloud computing, commonly described as a type of computing that relies on sharing computing resources rather than having local servers or personal devices handle applications.

Simply put, it is the use of the Internet – instead of local hard drives – for managing files and running programs to be used in carrying out daily business tasks. Since time, money and people are the most important resources a company could possess, cloud computing offers a way to streamline processes and help achieve goals in the most cost-efficient manner.

This is obviously good for any business that has ample resources to acquire such a technology. But are there any other benefits that can make cloud computing really worth it?

Cloud computing usually comes with a provider, which means an outsourced firm would be taking care of the technology’s operations. This means less maintenance on the part of the client, since they don’t need to worry about hardware and bandwidth management. With that responsibility out of the way, the business would also reduce its spending on its IT since it would only pay for the services received.

Of course, this also means lower training expenses, since there will be no need for extra personnel. Tasks can start running right away, without waiting for new people to get the hang of their responsibilities. Cloud computing providers are equipped with the right people and expertise to handle the job.

Then there’s the most significant gift of cloud computing: accessibility. Since all files and programs are stored in a dedicated cloud, accessibility is practically limitless regardless of location.

On the technical side, businesses would also benefit from the unlimited storage capacity that cloud-computing services (typically) offer. Aside from that, they would also enjoy the ability to integrate multiple business applications according to their preferences and operational needs. And if data are compromised, backup and recovery services are available as part of the deal.

Perhaps the most important benefit of cloud computing is that it boosts efficiency to unprecedented heights. Business functions are carried out with fewer people involved, less time consumed and fewer worries on the manager’s plate.

Six universities accredited by GCHQ to train the next James Bond generation of cyber spies

Government spy agency GCHQ is accrediting six universities to train the next generation of cyber spooks to combat rising levels of online crime.

The six accredited courses include ‘ethical hacking’, on offer at the University of Lancaster, where students attempt to break into systems to learn how to defend them.

And Napier University in Edinburgh has created a mock online bank, which students can hack into.

Francis Maude, minister for the Cabinet Office, which implements the national cyber security programme along with the Office of Cyber Security, officially announced the certification during a visit to GCHQ in Gloucestershire.

‘Cyber security is a crucial part of this government’s long-term plan for the British economy,’ he said.

‘Through the excellent work of GCHQ, in partnership with other government departments, the private sector and academia, we are able to counter threats and ensure together we are stronger and more aware.’

Mark Hughes, the president of BT’s security team, said there was a ‘skills gap’ for cyber security know-how in the UK and welcomed the arrival of GCHQ’s first accredited courses.

He said: ‘At BT we are acutely aware of the impact of the UK cyber skills gap and recruiting the right people with the right knowledge and skills is a big deal for us. As a leading Internet service provider we want to employ the very best.’

In a bid to make the UK one of the safest places in the world to do online business, the government is ploughing money into the courses.

To qualify for funding from the Engineering and Physical Sciences Research Council, universities must prove they are conducting world-class research.

The University of Oxford’s master’s degree in software and systems security, Edinburgh Napier University’s MSc in advanced security and digital forensics, the University of Lancaster’s master’s in cyber security and Royal Holloway, University of London’s MSc in information security were all accredited by GCHQ.

A further two, Cranfield University’s master in cyber defence and the University of Surrey’s MSc in information security, have been granted provisional certification.

A spokesperson from GCHQ said it marked a significant step in the development of the UK’s knowledge, skills and capability in all fields of cyber security.

Chris Ensor, deputy director for the National Technical Authority for Information Assurance at GCHQ said: ‘As the National Technical Authority for Information Assurance, GCHQ recognises the critical role academia plays in developing the UK's skill and knowledge base.

‘I'd like to congratulate the universities which have been recognised as offering a Master’s degree which covers the broad range of subjects that underpin a good understanding of Cyber Security.’

Malaysian Boeing 777: Cybercriminals Capitalize

It’s not the first time, and unfortunately, it won’t be the last time. Cybercriminals have once again exploited a tragic situation in order to expand their reach in malware distribution. 

News stories with a high level of public interest are most vulnerable for this type of activity. We’ve seen it before: the Boston Marathon bombing, the Texas fertilizer plants explosion and (much happier news) the birth of the royal baby. 

This time, a well-crafted spam email claiming to deliver a news story about last week’s Malaysian Airlines tragedy in Ukraine is circulating. The emails use image content pulled directly from CNN’s website to create a look and feel convincing enough to deceive internet users into clicking. Rather than delivering news content, however, the URLs point directly to payloads for the Dyre banking Trojan.

The dangerous new Dyre banking Trojan uses man-in-the-browser tactics to steal private information from victims as they conduct their normal activities online. The stolen data is submitted directly to servers operated by the criminals, and the malware can mount a man-in-the-middle attack to decrypt private traffic, making it easier to steal.

A couple of interesting quirks in this malware’s command-and-control infrastructure can be found in its choice of server names. The server responds to HTTP communications saying its name is “Stalin”, while stolen data is submitted to a URL that includes the directory path /poroshenko/, assumed to be named after the current Ukrainian president, Petro Poroshenko.

The full website is currently under development and will be available during 2014.