Cyber Security Intelligence


September Newsletter #2 2014

Want to Be the Next Intelligence Whistleblower?

Imagine a CIA agent who witnessed behavior that violated the Constitution, the law, and core human-rights protections, like torturing a prisoner. What would we have her do? Government officials say that there are internal channels in place to protect whistleblowers and that intelligence employees with security clearances have a moral obligation to refrain from airing complaints publicly via the press.

In contrast, whistleblowers like Daniel Ellsberg, Chelsea Manning and Edward Snowden—as well as journalistic entities like the Washington Post, The Guardian, and the New York Times—believe that questionable behavior by intelligence agencies should sometimes be exposed, even when classified, partly because internal whistleblower channels are demonstrably inadequate.

Reasonable people can disagree about whether a particular leak advances the public interest. There is always a moral obligation to keep some secrets (e.g., the names of undercover agents in North Korea). But if official channels afford little protection to those who report serious wrongdoing, or fail to remedy egregiously unlawful behavior, the case for working within the system rather than going to the press falls apart.

As Hilary Bok, writing as Hilzoy, pointed out in 2008, while defending a Bush-era whistleblower:
It is generally better for all concerned if whistle-blowers operate within the system, and it is dangerous when people freelance. But there’s one big exception to this rule: when the system has itself been corrupted. When you’re operating within a system in which whistle-blowers’ concerns are not addressed—where the likelihood that any complaint you make within the system will be addressed is near zero, while the likelihood that you will be targeted for reprisals is high—then no sane person who is motivated by a desire to have his or her concern addressed will work within that system. That means that if … you want whistleblowers to work within the system, you need to ensure that that system actually works.

Today, there is no credible argument that internal channels offer adequate protection to whistleblowers or remedy most serious misdeeds. U.S. officials claim otherwise. They know that no American system of official secrets can be legitimate if it serves to hide behavior that violates the Constitution, the law, or the inalienable rights articulated in the Declaration of Independence.

That they defend the status quo without being laughed out of public life is a testament to public ignorance. Most Americans haven’t read the stories of Jesselyn Radack, Thomas Drake, or John Kiriakou; they’re unaware of the Espionage Act’s sordid history and its unprecedented use by the Obama administration; they don’t realize the scale of law-breaking under President Bush, or that President Obama’s failure to prosecute an official torture program actually violates the law; and they’re informed by a press that treats officials who get caught lying and misleading (e.g., James Clapper and Keith Alexander) as if they’re credible.

Still, every month, more evidence of the national-security state’s legitimacy problem comes to light. McClatchy Newspapers reports on the latest illustration that whistleblowers have woefully inadequate protection under current policy and practice:

The CIA obtained a confidential email to Congress about alleged whistleblower retaliation related to the Senate’s classified report on the agency’s harsh interrogation program, triggering fears that the CIA has been intercepting the communications of officials who handle whistleblower cases. The CIA got hold of the legally protected email and other unspecified communications between whistleblower officials and lawmakers this spring, people familiar with the matter told McClatchy. It’s unclear how the agency obtained the material.

At the time, the CIA was embroiled in a furious behind-the-scenes battle with the Senate Intelligence Committee over the panel’s investigation of the agency’s interrogation program, including accusations that the CIA illegally monitored computers used in the five-year probe. The CIA has denied the charges. The email controversy points to holes in the intelligence community’s whistleblower protection systems and raises fresh questions about the extent to which intelligence agencies can elude congressional oversight. The email related to allegations that the agency’s inspector general, David Buckley, failed to properly investigate CIA retaliation against an agency official who cooperated in the committee’s probe, said the knowledgeable people, who asked not to be further identified because of the sensitivity of the matter.

Today’s CIA employees have watched colleagues in the agency who tortured get away with their crimes. They’ve watched Kiriakou go to jail after objecting to torture. Now, in the unlikely event that they weren’t previously aware of it, they’ve been put on notice that if they engage in whistleblowing through internal channels, during the course of a Senate investigation into past illegal behavior by the CIA, even then the protections theoretically owed them are little more than an illusion. Some in Congress have expressed understandable concern. Director of National Intelligence Clapper responded in a letter stating, “the Inspector General of the Intelligence Community … is currently examining the potential for internal controls that would ensure whistleblower-related communications remain confidential.”

In other words, there aren’t adequate safeguards right now. This is partly because not enough legislators care about, or even know enough to understand, the problem. And it is partly because the problem starts right at the top, with Obama and his predecessor. As Marcy Wheeler persuasively argues, the CIA gains significant leverage over the executive branch every time they break the law together:

-Torture was authorized by a Presidential Finding—a fact Obama has already gone to extraordinary lengths to hide

-CIA has implied that its actions were sanctioned by that Finding, not by the shoddy OLC memos or even the limits placed in those memos, and so the only measure of their legality is President Bush’s (and the Presidency’s generally) continued approval of them

-CIA helped the (Obama) White House withhold from the Senate documents implicating the White House

Wheeler adds, “This is, I imagine, how Presidential Findings are supposed to work: by implicating both parties in outright crimes, it builds mutual complicity. And Obama’s claimed opposition to torture doesn’t offer him an out, because within days of his inauguration, CIA was killing civilians in Presidentially authorized drone strikes that clearly violate international law.” Obama is similarly implicated in spying that violates the Fourth Amendment. When the president himself endorses illegal behavior, when there is no penalty for blatantly lying to Congress about that behavior, how can internal channels prompt reform?

The public airing of classified information over national-security state objections has been indispensable in bygone instances like the Pentagon Papers, the Church Committee report (back when Congress was doing its job), and the heroic burglary of the COINTELPRO documents. I believe history will judge Manning and Snowden as wrongly persecuted patriots like Ellsberg. The notion that they should’ve raised their concerns internally won’t be taken seriously, because a dispassionate look at the evidence points to a single conclusion: The United States neither adequately protects whistleblowers nor keeps law-breaking national-security agencies accountable through internal channels. The next time a leak occurs, the national-security state’s defenders should blame themselves for failing to bring about a system that can adequately police itself. If their historical and recent track record weren’t so dismal they’d have a better case to make.

http://www.defenseone.com/politics/2014/07/want-be-next-intelligence

Malaysian Boeing 777: Cybercriminals Capitalize

It’s not the first time, and unfortunately, it won’t be the last time. Cybercriminals have once again exploited a tragic situation in order to expand their reach in malware distribution.
News stories with a high level of public interest are the most vulnerable to this type of activity. We’ve seen it before: the Boston Marathon bombing, the Texas fertilizer plant explosion and (much happier news) the birth of the royal baby.

This time, a well-crafted spam email claiming to deliver a news story about last week’s Malaysian Airlines tragedy in Ukraine is circulating. The spam email uses image content pulled directly from CNN’s website to create a look and feel convincing enough to deceive internet users into clicking. Rather than delivering news content, the simple URLs point directly to malware payloads that install the Dyre banking Trojan.

The dangerous new Dyre banking Trojan uses man-in-the-browser tactics to steal private information from victims as they conduct their normal activities online. The stolen data is submitted directly to servers operated by the criminals, and the Trojan can mount a man-in-the-middle attack to decrypt private data in transit, making it easy to steal.

A couple of interesting quirks about this malware’s command-and-control infrastructure can be found in its choice of server names. The server responds to HTTP communications saying its name is “Stalin”, while stolen data is submitted to a URL including the directory path /poroshenko/, presumably named after the current Ukrainian president, Petro Poroshenko.

http://blog.malcovery.com/blog/malaysian-boeing-777

NSA and GCHQ agents 'leak Tor bugs'

Spies from both countries have been working on finding flaws in Tor, a popular way of anonymously accessing "hidden" sites.

But the team behind Tor says other spies are tipping them off, allowing them to quickly fix any vulnerabilities. The agencies declined to comment.

The allegations were made in an interview given to the BBC by Andrew Lewman, who is responsible for all the Tor Project's operations.

He said leaks had come from both the UK Government Communications Headquarters (GCHQ) and the US National Security Agency (NSA).

By fixing these flaws, the project can protect users' anonymity, he said.

"There are plenty of people in both organisations who can anonymously leak data to us to say - maybe you should look here, maybe you should look at this to fix this," he said. "And they have."

Mr. Lewman is part of a team of software engineers responsible for the Tor Browser - software designed to make it impossible to trace users' Internet activity. The programs involved also offer access to otherwise hard-to-reach websites, some of which are used for illegal purposes.

The dark web, as it is known, has been used by paedophiles to share child abuse imagery, while online drug marketplaces are also hosted on the hidden sites.

The Tor Browser is designed to allow people to use the Internet anonymously. The Edward Snowden leaks have indicated that the NSA has tried to spy on Tor activity.

A spokesman for GCHQ said: "It is long-standing policy that we do not comment on intelligence matters. Furthermore, all of GCHQ's work is carried out in accordance with a strict legal and policy framework, which ensures that our activities are authorised, necessary and proportionate."

The BBC understands, however, that GCHQ does attempt to monitor a range of anonymisation services to identify and track down suspects involved in the online sexual exploitation of children, among other crimes.

A security expert who has done consultancy work for GCHQ said he was amazed by Mr. Lewman's allegation, but added that it was not "beyond the bounds of possibility". "It's not surprising that agencies all over the world will be looking for weaknesses in Tor," said Alan Woodward. "But the fact that people might then be leaking that to the Tor Project so that it can undo it would be really very serious. So if that is happening, then those organisations are going to take this very seriously."

Tor was originally designed by the US Naval Research Laboratory, and continues to receive funding from the US State Department. Tor is used by the military, by activists, businesses and others to keep communications confidential and to aid free speech.

But it has also been used to organise the sale of illegal drugs, host malware, run money laundering services, and traffic images of child abuse and other illegal pornography.

http://www.bbc.co.uk/news/technology-28886462

Opinion: The Deep Web and TOR
By Max Vetter

The Deep Web and Tor have gained a lot of attention since the Edward Snowden revelations and the closure of the notorious narcotics trading site the Silk Road. And BBC2 has just broadcast an almost hour-long Horizon documentary about Tor and the Silk Road called Inside the Dark Web (link below).

The general term “Deep Web” refers to any part of the Internet not indexed by standard search engines. Google in fact indexes less than 1 percent of what is online. Not all of the Deep Web is narcotics sales and assassination squads; think of Facebook databases and university libraries. But it is the illicit part that gets the most attention.

Tor (previously the acronym for The Onion Router) is an incredible piece of software that gives your computer access to an encrypted proxy-server network, allowing you to stay anonymous online. There are other encrypted Deep Web networks like i2p, but Tor is by far the largest.
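Tor's real protocol uses telescoping circuits, per-hop TLS and public-key cryptography, but the core layered-encryption idea behind onion routing can be sketched loosely. In this toy illustration the XOR "cipher" is a stand-in for real cryptography (it is not secure and not anything Tor actually uses), and the three relay keys are invented for the example:

```python
import base64
import itertools

def toy_encrypt(key: bytes, data: bytes) -> bytes:
    # Toy XOR "cipher" standing in for real asymmetric encryption -- NOT secure.
    xored = bytes(b ^ k for b, k in zip(data, itertools.cycle(key)))
    return base64.b64encode(xored)

def toy_decrypt(key: bytes, data: bytes) -> bytes:
    xored = base64.b64decode(data)
    return bytes(b ^ k for b, k in zip(xored, itertools.cycle(key)))

# Each relay in the circuit has its own key; the client wraps the message
# in one layer per relay.
relay_keys = [b"entry-key", b"middle-key", b"exit-key"]

def wrap(message: bytes, keys) -> bytes:
    # Encrypt for the exit node first, then middle, then entry,
    # so each relay can strip exactly one layer in path order.
    for key in reversed(keys):
        message = toy_encrypt(key, message)
    return message

def unwrap(onion: bytes, keys) -> bytes:
    # Each relay peels its own layer; no single relay sees both the
    # sender and the plaintext destination.
    for key in keys:
        onion = toy_decrypt(key, onion)
    return onion

onion = wrap(b"GET /hidden-service HTTP/1.1", relay_keys)
assert unwrap(onion, relay_keys) == b"GET /hidden-service HTTP/1.1"
```

The point of the layering is that the entry relay can remove only its own layer, leaving ciphertext it cannot read, which is why traffic analysis rather than decryption is the usual attack on Tor.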

Tor has a fascinating history: it was created by the US Naval Research Laboratory, which quickly realised that to make it workable and truly anonymous it had to release the source code to the world. This allowed independent cryptographic experts to determine there were no back doors, and gave journalists and terrorists an incredibly useful tool in the process. Subsequently the capability was developed to host encrypted servers and websites hidden in the Tor network, making them extremely difficult to close down.

Some 80 percent of the funding for Tor comes from the US Government. Surprising, considering Tor is the NSA’s biggest headache and the agency has been working to crack it for some time.

This clever technology, however, is let down by the stupid, fleshy bit-of-human at the other end. Although the technology allows complete anonymity, it is always surprising how often users are identifiable on the Deep Web. The alleged owner of the Silk Road, Ross Ulbricht, knows this all too well as he now contemplates the foreseeable future in jail.

There has been much debate about whether the Deep Web and Tor are a good or a bad thing; I would argue that they are both and neither. Just as a car can be used for good and bad purposes, it is the human using it who actually breaks the law.

I am not an Internet libertarian who believes in complete freedom to do anything online. I think the Deep Web should be policed, and when someone is breaking the law they should be tried in court and convicted if guilty.

When I say policed, I want to be specific. This is not the NSA and GCHQ program monitoring anything and everything, capturing vast swathes of pointless data; you don’t find a needle in a haystack by adding more hay. What I mean by policing is targeted, directed operations against individuals breaking the law in order to secure a conviction in court. You can’t blame the bad guys for using the best tools at their disposal; criminals move with technology, and so should the police. The fact is we are very far from policing the Deep Web in any meaningful way.

The Deep Web has been called the “Internet Wild West”, but only because there are no police officers there. Rather than trying to blame a clever piece of technology (which does a lot of good keeping journalists and human rights campaigners safe), the police should recognise that it is their failings that make it the “wild west” and allocate the funding and tools needed to properly investigate the Deep Web.

http://maxrvetter.com/2014/07/27/the-deep-web-and-tor/

BBC2 Horizon - Inside the Dark Web
http://www.bbc.co.uk/iplayer/episode/b04grp09/horizon-20142015-4

Hacked Celebrity iCloud Accounts – Private Pictures!

By now, you have probably heard about the digital exposure, so to speak, of nude photos of as many as 100 celebrities, taken from their Apple iCloud backups and posted to the "b" forum on 4Chan. Over the last day, an alleged perpetrator has been exposed by redditors, although the man has declared his innocence. The mainstream media have leapt on the story and have gotten reactions from affected celebrities including Oscar winner Jennifer Lawrence and model Kate Upton.

An interesting aspect of information security is how periodically it collides with other industries and subcultures. With more information than ever being stored and shared online and on connected devices, hacking stories are frequent and make mainstream news. This was the case yesterday as dozens of celebrities fell victim to hackers who leaked hundreds of private photographs and videos stolen from web-based storage services.

The summary of the story is that a number of personal and private nude images from high profile celebrities started appearing on online image boards and forums – most notably on anon-ib, 4chan and reddit.

The first pictures were posted nearly a week ago, but didn’t get much attention since they were being ransomed (censored previews being shared in the hope somebody would purchase them). It was only after a number of intermediaries purchased the images and posted complete nudes in public forums that the story exploded.

At least a dozen celebrities were affected by the photo dumps, with over 400 individual images and videos. A list of celebrity names published anonymously, and serving as something akin to a sales brochure, suggests that over 100 have had their personal data compromised.

Observations about the data:

1. Data from some accounts were consistent with a device backup. Given the circumstances, these were likely devices backed up to iCloud:

a. The file naming scheme of many files is consistent with the scheme used in iPhone backups
b. Video is not copied into iCloud via photo stream, but it is copied when backing up to iCloud
c. Some folder structures appeared to resemble third party application folders, and included Thumbs.db files as well as a redundant series of folder content.
d. Some media reports indicate some leaked photos had been deleted by the user; however, a photo deleted from the photo stream could still persist in an earlier device backup.

2. Some data hints at being harvested repeatedly over a period of time, for example the data mentioned in 1c.

3. Many of the files contained a “Microsoft Windows Photo Viewer” exif section, indicating they were organized with this application on 8/17/2014. The photo viewer application added these tags to the images.

4. Due to the diversity of the data’s file structure, exif information, and other metadata, it is believed that the data may have been individually collected by multiple parties, and consolidated / released by a single leaker.

Observations about the breach:

1. Apple has publicly stated that their iCloud systems were not compromised, but has indicated that usernames and passwords were attacked.

2. The iBrute tool was publicly released the night of the leaks, and reportedly addressed by Apple by early morning. The tool took advantage of a weakness in Apple’s FindMyiPhone APIs, which allowed it to avoid being rate limited while brute forcing passwords.

3. Because iPhone backups are not normally accessible by logging into iCloud, and based on chatter on 4chan, it is reasonably theorized that the attacker used a (possibly pirated) commercial forensics tool by Elcomsoft to scrape each victim’s iCloud backup and other data from their accounts, after deducing their credentials with iBrute.

4. The currently prevailing theory is that the accounts were first compromised by taking advantage of a rate limiting vulnerability in Apple’s servers to brute force with iBrute, then scraped with Elcomsoft Password Breaker, possibly repeatedly over a period of time.

Observations about Apple’s password policies:

1. In their press release, Apple publicly encouraged users to enable 2FA (2-factor authentication) to protect their accounts.

2. Apple’s knowledge base indicates that 2FA would not have applied to this case, because it only kicks in if a user attempts to make changes to an account, or purchase content.

3. Apple presently does not send an SMS or email challenge code when restoring a device, or accessing iCloud content from a previously unknown network location. (iCloud device restore is also the technique used to scrape iCloud backup data)

4. It is theorized that other APIs may exist in Apple’s infrastructure that may not adequately rate limit authentication requests, and it is advised (by the author) that everyone change their iCloud passwords to exceed Apple’s minimum enforced requirements, and review their personal data retention policies for iCloud.

5. Apple does not explain the details / caveats / implications of photo stream or iCloud backups to the consumer when set up, and the features are enabled by default, without any user notification that their data will be copied off the device to remote storage.

Conclusions

Weak passwords were likely the ones exploited in this series of compromises; however, Apple could have managed their security policies in such a way that this breach could have been avoided or greatly reduced. Ensuring that proper rate limiting and account lockout was being enforced on all APIs would have dramatically reduced the possibility of successful brute force attacks. By deploying a better version of two-factor authentication, a challenge could have rendered this attack unsuccessful (for example, sending an SMS or email with a secondary authentication code when a device is restored from the cloud, or if iCloud is accessed from a previously unseen network). Apple might also consider better educating users about the risks involved in use of photo stream and iCloud backups, and avoid having them turned on by default and without notification. Victims may not have even been aware their content was ever sent to iCloud, or still remained in it.
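The rate-limiting-plus-lockout control described above can be sketched in a few lines. This is a generic illustration with arbitrary thresholds chosen for the example, not Apple's (or any vendor's) actual policy:

```python
import time
from collections import defaultdict

# Illustrative per-account rate limiting with lockout -- the kind of
# control the author argues every authentication API should enforce.
MAX_ATTEMPTS = 5          # failures allowed per window (example value)
WINDOW_SECONDS = 300      # sliding window length (example value)
LOCKOUT_SECONDS = 900     # lockout once the threshold is hit (example value)

failures = defaultdict(list)   # account -> timestamps of recent failures
locked_until = {}              # account -> time the lockout expires

def attempt_login(account: str, password_ok: bool, now: float) -> str:
    if locked_until.get(account, 0) > now:
        return "locked"
    # Drop failures that have aged out of the sliding window.
    failures[account] = [t for t in failures[account] if now - t < WINDOW_SECONDS]
    if password_ok:
        failures[account].clear()
        return "ok"
    failures[account].append(now)
    if len(failures[account]) >= MAX_ATTEMPTS:
        locked_until[account] = now + LOCKOUT_SECONDS
        return "locked"
    return "denied"

# A brute-force run is cut off after MAX_ATTEMPTS guesses:
t = time.time()
results = [attempt_login("victim", False, t + i) for i in range(6)]
# results: four "denied", then "locked" for every further attempt
```

A tool like iBrute only works against endpoints that skip this kind of check; enforcing it uniformly across every API that accepts credentials is what closes the hole.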

http://www.wired.co.uk/news/archive/2014-09/02/j-law-cloud-security

http://www.zdziarski.com/blog/?p=3783

https://www.nikcub.com/posts/notes-on-the-celebrity-data-theft/

Anonymous Criticised For Declaring 'Full Scale Cyber War' Against Isis

An Anonymous-affiliated Twitter account has declared a "full-scale cyber war" against the Islamic State (formerly known as Isis) with the aim of tracking down members of the group throughout the world who "continue to use Twitter for propaganda".

The new Operation Ice ISIS campaign was launched recently with a Twitter account declaring: "Welcome to Operation Ice #ISIS, where #Anonymous will do it's part in combating #ISIS's influence in social media and shut them down."

However as soon as the campaign was launched other members of the Anonymous collective voiced concerns that starting such a campaign could put members of Anonymous based in Syria and Iraq in danger.

Another influential Anonymous account - @YourAnonCentral - was among those strongly criticising the account, claiming that "people who have been friendly in the past or supportive of Anon can be targeted for it."

While the account has retweeted a picture of what looks like an IS Twitter account being hacked by Anonymous, the account in question is in fact a parody account. The account also posted a video claiming to show a member of IS taking part in the Ice Bucket Challenge, but this has also been shown to be a fake. It is unclear who is behind the account, though some suspected that it was Commander X (aka Christopher Doyon), a well-known member of the online movement who last year claimed to have quit Anonymous. However this link has been denied by other accounts previously associated with Doyon.

The US government has been engaging in this type of campaign for some time, but last week established a new Facebook page with a mission to "expose the facts about terrorists and their propaganda. Don't be misled by those who break up families and destroy their true heritage."

https://uk.news.yahoo.com/anonymous-criticised-declaring-full-scale-cyber-war

Is cyberwar coming?

Not every battle takes place over rugged terrain, on the open sea or even in the air. These days, you'll find some of the fiercest fighting going on between computer networks. Rather than using bullets and bombs, the warriors in these confrontations use bits and bytes. But don't think that digital weaponry doesn't result in real world consequences. Nothing could be further from the truth.

Think about all the services and systems that we depend upon to keep society running smoothly. Most of them run on computer networks. Even if the network administrators segregate their computers from the rest of the Internet, they could be vulnerable to a cyber attack.

Cyber warfare is a serious concern. Unlike traditional warfare, which requires massive amounts of resources such as personnel, weapons and equipment, cyber warfare only needs someone with the right knowledge and computer equipment to wreak havoc. The enemy could be anywhere -- even within the victim nation's own borders. A powerful attack might only require half a dozen hackers using standard laptop computers.

Another frightening aspect of cyber warfare is that a cyber attack can come as part of a coordinated assault on a nation or it could just be a malicious hacker's idea of a joke. By the time a target figures out the nature of the attack, it may be too late. No matter what the motive, cyber attacks can cause billions of dollars in damages. And many nations are woefully unprepared to deal with cyber attacks.

Some people recognized the inherently dangerous nature of the Internet fairly early on. In 1997, the Department of Defense commissioned an experiment codenamed Eligible Receiver. While most of the details regarding Eligible Receiver remain classified, the main purpose of the exercise was to see if a group of hackers using readily available computers and software could infiltrate the Pentagon's computer systems. The results were sobering -- according to John Hamre, the deputy secretary of defense at the time, it took three days before anyone at the Pentagon became aware that the computer systems were under attack.

In fact, it seems that a real adversary managed to do just that only a year later. In an attack that the U.S. government called Moonlight Maze, someone managed to penetrate multiple computer systems at the Pentagon, NASA and other facilities and access classified information. U.S. officials discovered the probing attacks by accident in 2000, after they had gone unnoticed for two years. The pilfered data included strategic maps, troop assignments and positions and other sensitive info. Government agents were able to trace the attacks back to Russia, but it's impossible to say if that was their true origin.

The United States isn't always on the defensive in cyber warfare. The U.S. has used cyber warfare strategies against Iraq and Afghanistan. During the Kosovo war, the U.S. used computer-based attacks to compromise the Serbian air defense systems. The attacks distorted the images the systems generated, giving Serbian forces incorrect information during the air campaign. Security agents are also working to infiltrate terrorist cells and monitor them remotely.

One major attack strategy is the Pearl Harbor attack, named after the surprise attack on the naval base at Pearl Harbor, Hawaii, in 1941. This kind of attack involves a massive cyber assault on major computer systems. Hackers would first infiltrate these systems and then sabotage them. They might shut down part or all of a nation's power grid or attack water and fuel lines.

Pearl Harbor attacks can be frightening all on their own, but some security experts worry that enemies could coordinate a cyber attack with a physical assault. Imagine your city's power supply winking out in an instant, and within moments you hear the sound of explosions going off in the distance. Such an attack could not only cause a lot of damage, it would be a powerful psychological tactic. Some experts worry that terrorist organizations like Al Qaeda are working on plans that follow this strategy.

Because cyber warfare is so different from traditional warfare, you can't rely on the same rules you'd use in a physical conflict. With the right techniques, a hacker can make an attack practically untraceable. It's not hard for a skilled hacker to create an entire army of zombie computers -- machines infected with a program that allows the hacker to control the computer remotely. A person owning one of these infected computers might not be aware of the intrusion at all. If a computer system comes under attack from an army of zombie computers, it might not be possible to find the hacker ultimately responsible.

Security experts like Richard Clarke, former cyber security advisor to the United States, say that part of the responsibility falls on software companies. He has said that software companies often rush products to market without putting them through a rigorous quality control phase. In particular, he criticized Microsoft for its practices. Since then, Microsoft claims it spends more time and resources making sure its products have strong security features.

Another thing to consider is that private companies own most of the Internet's infrastructure. Unless the government implements regulations, it's up to these private companies to ensure the safety of their networks. Even experts like Richard Clarke have said that regulation is not the right decision -- he argues that it inhibits innovation and lowers the bar for security across all industries.

While it might not be obvious to us in our everyday life, there's no doubt that cyber warfare is going on right now between nations and factions around the world. So is cyberwar coming? It may already be underway.

http://computer.howstuffworks.com/cyberwar.htm

http://cyberwar.einnews.com/article/215158306/zS1Fq10_jwuF78IW

http://www.amazon.com/exec/obidos/tg/detail/-/0061962236/lockergnome

IBM uses Watson as part of new cloud service

IBM has announced that it's making its artificially intelligent computer system, Watson, available to researchers as a cloud service.

Scientists from universities, pharmaceutical companies and commercial research centers have been using Watson, which was built to understand human language, to analyze and test hypotheses in their data, along with data held in millions of scientific papers available in public databases.

Early adopters have been trying out the cloud service, but it's officially available today, according to Rob Merkel, vice president of IBM's Watson Healthcare Group. Merkel declined to talk about the cost of the service.

Watson gained mainstream fame early in 2011 when the supercomputer went up against Jeopardy champions in a special episode of the question-and-answer game show.

In the man-vs-machine dustup, Watson trounced its human opponents. The machine may have faltered in a few categories, but was faster to the buzzer and more knowledgeable than its challengers, who had won many games against knowledgeable opponents in regular matchups.

At the time, Watson was touted by some analysts as one of the biggest computing advancements in the past several decades.

What makes it stand apart from other supercomputers is not just its ability to make calculations. Watson was designed to essentially converse with humans, answering verbal questions and even beginning to understand colloquialisms and jokes.

Merkel said that natural-language ability puts Watson in a good position for scientific research. For instance, a scientist could have Watson digitally ingest as much information about a topic as possible, say, research papers, proprietary information and licensed information.

Then the scientist could ask the supercomputer to find all the drugs that had been repurposed for a particular use in the past five years. Or the scientist could ask Watson to go through that information and find all of the known drugs with certain characteristics.
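Watson's real interface is natural language over millions of documents, but the kind of query described above boils down to filtering a structured corpus. The sketch below is a purely hypothetical illustration: the field names, drug names and data are invented, and it is not IBM's API.

```python
# Toy corpus: each record stands in for facts Watson might extract
# from ingested papers. All names and fields here are made up.
drugs = [
    {"name": "drug_a", "repurposed_year": 2012, "new_use": "oncology",
     "properties": {"oral", "small-molecule"}},
    {"name": "drug_b", "repurposed_year": 2006, "new_use": "oncology",
     "properties": {"injectable"}},
    {"name": "drug_c", "repurposed_year": 2013, "new_use": "cardiology",
     "properties": {"oral"}},
]

def repurposed_for(corpus, use, since_year):
    """Drugs repurposed for `use` in or after `since_year`."""
    return [d["name"] for d in corpus
            if d["new_use"] == use and d["repurposed_year"] >= since_year]

def with_characteristics(corpus, wanted):
    """Drugs whose known properties include every item in `wanted`."""
    return [d["name"] for d in corpus if wanted <= d["properties"]]
```

The value a system like Watson adds is upstream of these two functions: turning free-text papers into structured facts, and turning a spoken question into a query like `repurposed_for(drugs, "oncology", 2009)`.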

"It's about understanding human language, scientific language and images," said Merkel. "It could be used anywhere where huge bodies of information need to analyzed."

Patrick Moorhead, an analyst with Moor Insights & Strategy, said it's a good idea for universities or commercial research houses to use Watson as a cloud service.

"Going with software-as-a-service lets you try it out, lowering the risk of bringing infrastructure in-house before you test it," he explained. "This is a good way for universities and pharmaceutical companies to kick the tires and see if Watson does bring value and if it works at scale. Meaning if they would do a lot of work on it, possibly they would bring it in house."
That, according to Moorhead, would be a great thing for IBM, which would like to sell more Watson-like systems.

Dan Olds, an analyst with The Gabriel Consulting Group, said using Watson as part of a cloud service actually is a great idea for IBM.

"Delivering Watson as a service is a much better way to get potential clients to give the technology a try," he said. "It's an easy first step that will allow customers to see if Watson is for them, without having to shell out a whole bunch of money for what is essentially a supercomputer."

http://www.computerworld.com/article/2599412/software-as-a-service/

Google search ‘predicts’ the next Financial Crisis

Predicting the rises and falls of the stock market may have become a whole lot easier: A new study suggests that publicly available data from Google Trends, a tool that tracks terms people plug into the search engine, can be used to forecast changes in stock prices.
The study found that Google users tend to increase their searches for certain keywords in the weeks preceding a fall in the stock market.

Researchers at Boston University and the University of Warwick, in the United Kingdom, grouped the popularly searched keywords on Google into topics. They then used Google Trends to compare the search volume for these topics between 2004 and 2012 to fluctuations in the Standard & Poor's 500 Index (S&P 500), the stock market index for the 500 largest U.S.-based companies.

They found that, historically, Google users search more for topics related to business and politics in the weeks preceding a fall in the stock market. Searches related to other topics, such as music or weather, weren't found to have any significant connection to changes in stock prices.

Previously, the researchers had looked at how the volume of Google searches for finance-related terms "debt" or "bank," for instance, might be related to fluctuations in the stock market. They found that an increase in the volume of these types of searches could be used to predict a fall in stock prices.

In their new study, the researchers took a broader look at what people might be searching for in the weeks before a downward turn in the market. The researchers analyzed 100 topics, to see which ones correlated with changes in stock prices. They found that only business and political topics had any significant correlation to the market.
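The core of the method described above, comparing a rise in search volume for a topic against the index's move in the following week, can be sketched in a few lines. This is a toy illustration using made-up weekly numbers, not the study's actual data, topics or code:

```python
def signal_and_return(volumes, closes, window=3):
    """Pair each week's search-volume change (volume minus its trailing
    `window`-week average) with the index's return over the next week."""
    pairs = []
    for i in range(window, len(closes) - 1):
        delta = volumes[i] - sum(volumes[i - window:i]) / window
        next_ret = (closes[i + 1] - closes[i]) / closes[i]
        pairs.append((delta, next_ret))
    return pairs

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented weekly search volumes for one topic and weekly index closes.
volumes = [50, 52, 51, 60, 70, 65, 80, 75]
closes = [100, 101, 102, 100, 97, 95, 92, 93]

pairs = signal_and_return(volumes, closes)
r = pearson([d for d, _ in pairs], [ret for _, ret in pairs])
```

A strongly negative `r` across many topics and years is the pattern the researchers report for business- and politics-related searches: search interest rises, then prices fall.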

"Increases in searches relating to both politics and business could be a sign of concern about the state of the economy, which may lead to decreased confidence in the value of stocks, resulting in transactions at lower prices," said Suzie Moat, an assistant professor of behavioral science at Warwick Business School and co-author of the study.

Financial crises, such as the one that affected markets worldwide in 2007 and 2008, can arise in part from the interplay of decisions made by many individuals. But to understand this interplay, or "collective decision-making," it's helpful for researchers to first examine the information that drives decision-making.

The new study was published July 28 in the journal Proceedings of the National Academy of Sciences.

http://www.cbsnews.com/news/could-google-searches-predict

Six universities accredited by GCHQ to train the next James Bond generation of cyber spies

Government spy agency GCHQ is accrediting six universities to train the next generation of cyber spooks to combat rising levels of online crime.

The six accredited courses include ‘ethical hacking’, on offer at the University of Lancaster, where students attempt to break into systems to learn how to defend them.

And Napier University in Edinburgh has created a mock online bank, which students can hack into.

Francis Maude, minister for the Cabinet Office, which implements the national cyber security programme along with the Office of Cyber Security, officially announced the certification during a visit to GCHQ in Gloucestershire.

‘Cyber security is a crucial part of this government’s long-term plan for the British economy,’ he said.

‘Through the excellent work of GCHQ, in partnership with other government departments, the private sector and academia, we are able to counter threats and ensure together we are stronger and more aware.’

Mark Hughes, the president of BT’s security team, said there was a ‘skills gap’ for cyber security know-how in the UK and welcomed the arrival of GCHQ’s first accredited courses.
He said: ‘At BT we are acutely aware of the impact of the UK cyber skills gap and recruiting the right people with the right knowledge and skills is a big deal for us. As a leading Internet service provider we want to employ the very best.’

In a bid to make the UK one of the safest places in the world to do online business, the government is ploughing money into the courses.

To qualify for funding from the Engineering and Physical Sciences Research Council, universities must prove they are conducting world-class research.

The University of Oxford master’s degree in software and security systems, Edinburgh Napier University’s MSc in advanced security and digital forensics, the University of Lancaster’s master in cyber security and Royal Holloway University of London’s MSc in information security were all accredited by GCHQ.

A further two, Cranfield University’s master in cyber defence and the University of Surrey’s MSc in information security, have been granted provisional certification.

A spokesperson from GCHQ said it marked a significant step in the development of the UK’s knowledge, skills and capability in all fields of cyber security.

Chris Ensor, deputy director for the National Technical Authority for Information Assurance at GCHQ, said: ‘As the National Technical Authority for Information Assurance, GCHQ recognises the critical role academia plays in developing the UK's skill and knowledge base.’

http://www.dailymail.co.uk/news/article-2713972/A-Masters-James-Bond

The full web site is currently under development and will be available during 2014