Cyber Security Intelligence


March Newsletter #4 2015

IT Workers are Prime Target of Spies says Snowden

Spies are increasingly targeting IT staff to gain access to key elements of internet infrastructure and sensitive databases, NSA contractor-turned whistleblower Edward Snowden has warned.

"It's not that they are looking for terrorists, it's not that they are looking for bad guys, it's that they are looking for people with access to infrastructure. They are looking for service providers, they are looking for systems administrators, they're looking for engineers," he said, speaking at the CeBIT technology show in Germany via a video link from Russia.

He added: "They are looking for the people who are in this room right now: you will be the target. Not because you are a terrorist, not because you are suspected of any criminal wrongdoing, but because you have access to systems, you have access to infrastructure, you have access to the private records, people's private lives. These are the things that they want. It is important for us to come together and prevent that from happening."

Snowden isn't the only one to warn that IT staff can be the target of spies, although mostly the finger is being pointed at foreign intelligence agencies. For example, the UK's MI5 security service warned last year that IT workers have been recruited to help overseas spies gain sensitive personnel information, steal corporate or national secrets and even upload malware to compromise the network. IT staff have also been warned to beware of 'honey pot' sex stings.

Snowden said the best way to protect privacy was through technology, because that remains a constant across geographical or political boundaries. "That means end-to-end encryption; we have to protect communications while they are in transit, we have to improve the security of the endpoints and make this transparent to users," he said.

When we look back at 2013 a decade from now, the one technology story that's likely to have the biggest long-term impact is the Edward Snowden revelations.

While there were major password breaches at Adobe, Evernote, and Twitter, nothing rocked the IT world more than the 200,000 classified documents that Snowden leaked to the press, uncovering the NSA's startling digital surveillance programs, which reach more broadly across the Internet than even many of the most extreme conspiracy theorists had feared.

While the U.S. government defends the program as court-supervised and a powerful tool that has thwarted terrorist attacks and protected citizens, there's no doubt that the Snowden revelations have had a chilling effect on the technology world.

Here are the three biggest impacts:

1. Organizations are re-thinking how to effectively encrypt their most sensitive data.
2. International organizations are looking at ways to do less business with U.S. companies, since the NSA reportedly has direct backdoors into many of them.
3. The brakes are being put on cloud computing by some organizations, as they consider whether they want their data so easily accessible to surveillance agencies.

Despite Snowden leaks, Internet use is largely unchanged

Thanks to the Snowden documents, first covered by The Guardian and The Washington Post, the world learned about the Prism program, which allegedly gave the NSA access to communications from nine tech companies, including Yahoo and Google. At the time, it was also revealed that the NSA systematically collected the telephone records of millions of U.S. customers of Verizon Communications.

Since then, some Internet users worried about protecting their privacy have made basic changes to their online activities, like adjusting their privacy settings or deleting rogue apps. But most people have carried on as usual, uninterested in using encryption or identity-cloaking browsers like Tor, according to a Pew Research Center study. Roughly a third of respondents didn't even know what Tor is.

The Pew survey, conducted online between this past November and January, is the research center's first look at how people have changed their online behaviors to avoid government surveillance. In a related study late last year, Pew researchers found that the majority of Americans felt they had lost control of their personal data.

Some people have taken action. Roughly 30 percent of respondents said they had taken at least one step to hide or shield information from the government, according to the study's findings, which were based on a survey of 475 American adults. For example, among those who said they were aware of government surveillance programs, 17 percent said they have changed their privacy settings on social media. Thirteen percent said they have uninstalled certain apps since the Snowden leaks. Twenty-five percent said they now use more complex passwords.

Some 15 percent said they now use social media less often, while 11 percent have refrained from using certain terms in search engines they thought would trigger scrutiny.

But many more responded by saying they have not made wholesale changes to their online activities, or said they were not aware of other tools for more comprehensive online privacy. For example, among those who said they were aware of government surveillance programs, 40 percent said they have not used or considered using anonymity software like Tor.

Nearly half said they have not used email encryption technology like PGP (Pretty Good Privacy), which scrambles people's messages either en route or while at rest on company servers. Nearly a third said they did not know that technology existed. Over the past couple of years, more messaging apps like WhatsApp have baked encryption into their products, while others like Google and now Yahoo have released source code for encrypted messaging.
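To make "scrambles people's messages" concrete, here is a toy symmetric stream cipher in Python. It is emphatically not PGP, which layers public-key encryption and signatures on top of a symmetric cipher; this demo-only construction just illustrates that, with a shared key, a message is unreadable in transit yet exactly recoverable by the recipient. All names and the key-derivation scheme are invented for the sketch.

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Derive a pseudo-random keystream by hashing the key with a counter
    (a simplified counter-mode construction; demo only, not vetted crypto)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_crypt(key: bytes, data: bytes) -> bytes:
    """XOR the data with the keystream; applying it twice decrypts."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

ciphertext = xor_crypt(b"shared-secret", b"meet at noon")
plaintext = xor_crypt(b"shared-secret", ciphertext)
```

Real PGP avoids the pre-shared key: the sender encrypts a random session key with the recipient's public key, so only the matching private key can unlock the message.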

Fewer than half of respondents said they have used or considered using a search engine that doesn't keep logs of users' search history. (DuckDuckGo, for example, is a privacy-oriented search engine that does save searches, but not people's IP addresses or other unique identifiers.)

More than 40 percent said they have not used or were not aware of browser plugins like Privacy Badger for blocking tracking.

Overall, the majority of respondents said it would be difficult to find tools or implement strategies to help them be more private. The findings show that activists and companies making privacy-oriented products still have much to do in educating consumers about the strongest ways to secure their digital communications.

Or, the results may show that some people just don't care.

ZDNet



Internet of Lousy Things

The world of ubiquitous connected devices is almost here, and it's so eagerly anticipated that its arrival seems inevitable. Anticipation, however, doesn't necessarily mean we are going to have a good time with the Internet of Things. Every "paradigm shift" of such global scale brings trouble unless the appropriate preparations have been made, and with IoT that doesn't seem to be the case. As Alex Drozhzhin wrote on the Kaspersky Daily blog: "There is a flood of appliances which could be connected – and some are connected – without a second thought as to whether or not it's necessary. Most people barely give a second thought that a hack of a smart-connected appliance could be dangerous and a lot more threatening than a simple PC hack."

In other words, more and more appliances of various kinds arrive – home electronics, health care devices, even car washes – equipped with Internet-enabled smart control systems, and they're remotely hackable.

The situation is pretty clear (or, rather, pretty clearly bad) with home appliances: check out the already-famous report by David Jacoby on how easily he managed to hack his own smart home. What about the business angle? The implications are serious and can get ugly.

Here's one scenario: a coffee machine serving a meeting room where highly confidential information is shared. It's fine if this is just a "dumb" device, operated with buttons and tumblers, and all it can do is grind the coffee beans, add boiling water and sugar, and fill the cups. But let's imagine it is "smart", i.e. WiFi-enabled and voice controlled. "Voice controlled" means it has a built-in microphone. "WiFi-enabled" means it is a) connected to a local corporate network, b) able to receive and, most likely, send data, and c) remotely hackable if there are flaws in the firmware and the network isn't well protected. Given all this, could such a smart coffee machine end up a cyber-espionage device one day? It is absolutely possible, unless the firmware writers apply "draconian" measures to prevent it.

In fact, every "smart" appliance that can receive data input "in the background" – smart TVs, and any other device with cameras and microphones – can be used for spying (and such incidents have already happened). Recent APTs routinely use notebook cameras to take pictures of the environment without users' knowledge or consent. One could object that those are computers, not smart devices, but any smart appliance is effectively a full-blown computer, with the same capabilities and the same lack of security as its "common" brethren. Remember the spamming fridge?

In the post linked above we wrote about yet another scenario: attackers remotely disable a climate control system at a facility with strict temperature control rules (thus blinding IR security cameras, for instance) or switch off – again, remotely – the alarm system in an office building or bank. Then armed men in ski masks come in.

Every interconnected system is only as secure and reliable as its weakest point, and every new smart device added to a network is a potential entry point for people with malicious intent. That risk is amplified because users of "smart" devices often neglect to check the settings, leaving the defaults in place (a blatant violation of cybersecurity basics). It's like leaving the keys to a super-secure bank vault under the rug at the bank's door.

Vendors of smart appliances are clearly interested in adding functionality (and thus adding value) to their devices. They may be "smart", they may be convenient to use, and just cool to have. But are they secure enough? Not necessarily.

"In general, the problem is that those who develop home appliances and make them connected face realities of a brand new world they know nothing about. They ultimately find themselves in a situation similar to that of an experienced basketball player sitting through a chess match with a real grand master," Drozhzhin wrote. Users may also be clueless about the hidden threats smart devices can pose: for them, a fancy voice-controlled coffee machine is still a coffee machine, not a ready-to-settle "nest" for cyberspies.

And this means that developers of home and business-oriented smart appliances must take a harder look at how secure (or, for now, insecure) their firmware is, while the businesses that deploy such devices on their networks should keep them in check, in "presumption of guilt" mode.


Metadata Will Kill Your Privacy

The UK government inquiry into whether it conducts mass surveillance and the legality of such an effort has recommended tighter controls on access to communications metadata.

The inquiry finds that mass surveillance capabilities exist in the UK, but are used appropriately. The inquiry also rejects use of the term "metadata", which it feels is not helpful because it is too vague. Instead the UK prefers the term "Content-Derived Information" because it is felt a more nuanced approach to the collection of data about communications is required.

The report offers a four-level classification of the data that can be gleaned from details of an individual's electronic communications. The report goes on to say that Communications Data Plus "would encompass details of web domains visited or the locational tracking information in a smartphone" and to make the following observation about how it should be handled: "However, there are legitimate concerns that certain categories of Communications Data – what we have called 'Communications Data Plus' – have the potential to reveal details about a person's private life (i.e. their habits, preferences and lifestyle) that are more intrusive. This category of information requires greater safeguards than the basic 'who, when and where' of a communication."

The report says it has no problem with UK intelligence agencies collecting communications data through intercepts and does not recommend tighter controls on its collection and use. The call for more safeguards on Communications Data Plus is therefore notable in the Australian context, as the antipodean communications data collection proposal requires no warrant for access.

The UK report also says local legislation should therefore define three levels of metadata, under the following definitions:

Communications Data should be restricted to basic information about a communication, rather than data, which would reveal a person's habits, preferences or lifestyle choices. This should be limited to basic information such as identifiers (email address, telephone number, username, IP address), dates, times, approximate location, and subscriber information.

Communications Data Plus would include a more detailed class of information, which could reveal private information about a person's habits, preferences or lifestyle choices, such as websites visited. Such data is more intrusive and therefore should attract greater safeguards.

Content-Derived Information would include all information which the Agencies are able to generate from a communication by analysing or processing the content. This would continue to be treated as content in the legislation.
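A crude way to see why the report wants tiered safeguards is to treat the three definitions as a lookup table. This is a sketch only: the field names below are hypothetical examples chosen to match the report's descriptions, not terms from any legislation.

```python
# Three tiers from the UK report, as a lookup table.
# Field names are illustrative placeholders, not statutory terms.
TIERS = {
    "Communications Data": {
        "email_address", "phone_number", "username", "ip_address",
        "timestamp", "approximate_location", "subscriber_info",
    },
    "Communications Data Plus": {
        "web_domain_visited", "smartphone_location_track",
    },
    "Content-Derived Information": {
        "topics_extracted_from_message_body", "inferred_sentiment",
    },
}

def tier_of(field: str) -> str:
    """Return the safeguard tier a data field falls into."""
    for tier, fields in TIERS.items():
        if field in fields:
            return tier
    raise KeyError(f"unclassified field: {field}")
```

Under the report's logic, a request for `web_domain_visited` should clear a higher bar than one for `timestamp`, and anything in the third tier is handled as content.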

It's hard to see its suggestions on a finer classification of metadata being adopted, if only because the call for "greater safeguards" is vague and hard to act on.

The Register

Beware of the Militarization of CyberScape

In recent months, numerous hacking campaigns have been uncovered by security firms. In many cases, they have been attributed to state-sponsored hackers.

Groups of hackers belonging to cyber units of several governments used sophisticated malicious code and hacking platforms to compromise computer networks worldwide. Private companies, government entities, critical infrastructure and citizens are all potential targets.

The overall activity of government entities in cyberspace is generally described as the "militarization of cyberspace." Governments are investing significant resources to improve their cyber capabilities, creating 'cyber armies' to defend against attacks from cyberspace.

The debate about cyber weapons intensified after the discovery of the Stuxnet malware in 2010. Stuxnet was used by western entities to interfere with the Iranian nuclear program by sabotaging the centrifuges at the Natanz nuclear plant. A few months after the detection of Stuxnet, other malware was discovered - Flame and Duqu are two other high-profile cyber espionage tools that were used by state-sponsored actors.

Even when state-sponsored malware is discovered by security firms, the vulnerabilities it exploits remain targets for attackers long afterwards, causing serious damage to unpatched systems. Consider Stuxnet: its code exploited a Windows Shell flaw in Microsoft Windows XP systems, catalogued as CVE-2010-2568 and patched four years ago. Unfortunately, the vulnerability is still being used in cyberattacks targeting millions of computers worldwide.

Malware researchers at Kaspersky Lab discovered that between November 2013 and June 2014, the same Windows Shell vulnerability was exploited 50 million times in attacks against nearly 19 million machines all over the world.

In late 2013 Kaspersky Lab's Global Research & Analysis Team started a new investigation after several attacks hit the computer networks of various diplomatic service agencies. The attacks were part of a large-scale cyber-espionage operation dubbed "Red October," inspired by the famous novel and movie "The Hunt For Red October". The campaign acquired sensitive information from diplomatic, governmental and scientific research organizations in many countries, spanning Eastern Europe, the former USSR and Central Asia.

The malware and control infrastructure used in the attacks was highly sophisticated, which may indicate government involvement.

In March 2014 researchers at BAE Systems Applied Intelligence unearthed a cyber espionage campaign codenamed "Snake" that targeted governments and military networks. "Snake" had remained undetected for at least eight years.

Many other campaigns have been attributed to state-sponsored hackers. These are typically characterized by the nature of the targets, the level of sophistication and the duration of the attacks, which often take years to discover.

The U.S., Israel, Russia and China are considered the most advanced countries in cyberspace, with experts able to develop malware that can hit foreign networks and exfiltrate data covertly. They can also manage hacking campaigns that compromise their opponents' infrastructures.

In many cases governments run operations concurrently with conventional attacks. Covert cyberattacks, for example, were blamed on Russia during its 2008 war with Georgia. The finger of suspicion was also pointed at Moscow over cyber offensives during the recent crisis in the Crimean peninsula.

European governments are also investing in malware development. The malicious code R2D2 (also known as "0zapftis" or "Bundestrojaner") is an example of efforts by German police and customs officials to spy on users and exfiltrate data from their PCs.

In March Mikko Hyppönen, chief research officer of security specialist F-Secure told the TrustyCon conference in San Francisco that almost every government is making an effort to improve its cyber capabilities.

Most of the hacking campaigns conducted by governments make use of highly sophisticated malware to compromise their targets - in many cases the code is designed to exploit zero-day vulnerabilities in the target's infrastructure.

This malware, however, could easily go out of control. In another scenario, a "threat actor" could reverse engineer the source code and spread it "in the wild." Cyber criminals, cyber terrorists and state sponsored hackers could enhance the malware and hit targets in an unpredictable way, making it difficult to identify the attack's source.

The availability of government-built malware is also having a significant impact on the criminal underground - the main customers for zero-day exploits and malware coding services are governments. Some security experts, for example, believe that two different Ukraine-based malware factories were behind Stuxnet's coding, acting like "sub-contractors" for the U.S. and Israeli Governments.

Some experts have argued that computer security companies may not prevent the spread of government-built malware in exchange for government favors.

The suspicion that security firms have "whitelisted" state-sponsored malware is certainly disconcerting - a policy like this would represent a serious menace to the overall Internet community. It also opens the door to a scary scenario in which a cyber weapon could run out of control.

Similar to nuclear armaments, the use of state-sponsored malware needs to be regulated by a legal framework and accepted on a global scale, establishing the rules of engagement.

Be aware, however - we are all nodes of a global network, and whoever controls this network will control the world. Governments will continue to focus their research on the development of new cyber weapons, including sophisticated malware that in the wrong hands could be a dangerous menace.

Fox News

Europe's Data Privacy Laws Irk US Tech Companies

Europe is closer to approving new data-privacy legislation that threatens to raise tensions with US technology firms.

The European Union body representing member countries on Friday reached a tentative agreement on a controversial power-sharing mechanism between national privacy watchdogs that had been holding up the legislation amid furious corporate lobbying.

Supporters say the new privacy rules are necessary to update and harmonize a patchwork of national laws that date to the 1990s, at the dawn of the World Wide Web. But tech executives say restrictions on how they could use data for advertising purposes could force them to stop offering some free services. Tech firms and some governments also say the pan-European board could increase, rather than reduce, the regulatory burden, punishing smaller firms.



Cyber War Exercise in Central London

Forty-two amateur cyber defenders gathered on the HMS Belfast in London this week to take part in a cyber terrorist attack simulation run by the Cyber Security Challenge UK.

The competition, known as the Masterclass and developed by a group of cyber experts led by BT, is now in its fifth year and aims to plug the skills shortage currently affecting both governments and UK businesses. The competition essentially invites participants to put their skills to the test and experience a dramatized version of events faced by regular cybercrime fighting professionals. It also allows sponsors of the competition such as BT, Lockheed Martin, and Airbus, to hover on the sidelines and cherry pick the next cybercrime busting whizz kids.

In 2014, the competition took place in an underground bunker of the Churchill War Rooms, with prizes worth £100,000 going toward educational and career advancement opportunities.

This year, organizers aimed to stoke interest among both the public and would-be cyber defenders by upping the dramatic narrative of the competition. Aboard the HMS Belfast, cyber defenders competed to regain control of the naval guns system, taken over by fictitious cyber terrorist network, the Flag Day Associates.

"I wanted to design a realistic challenge that used the kind of computer systems and networks that cyber defenders have to defend in real life," Robert Partridge, Head of BT Security Academy, told WIRED UK. "But I also wanted to make it exciting and put some Hollywood into it as well," says Partridge, while noting that he wanted to "de-geekify" the image of cyber security.

"There will be more jobs than candidates for [cyber defense jobs] in the next 20 years, and we need to lift the profile of cyber security careers in the UK to address this skills gap," he continued.

Over the course of two days (March 12 to 13), the amateur cyber defenders were tasked with finding the vulnerabilities and flaws placed in the operating system set up by the competition developers. Primarily, the competitors had to race against the clock to regain control of the ship's gun systems. Secondly, they searched for weaknesses within the IT system of fictitious physical infrastructures, such as water treatment plants and manufacturing facilities, in order to defend these against the rogue cyber terrorist group.

As countries the world over make a push to establish smart cities, the physical infrastructures sustaining our societies are increasingly under threat from cyber attacks. As more systems are brought online, maintaining the security and stability of critical national infrastructure becomes paramount.

As part of the competition, Airbus' SCADA Challenge Brief encourages competitors to conduct a security validation test in real time. This allows competitors to practice identifying flaws and the best cyber security solutions before systems are deployed in the real world—or in this case, within the fictive one created by the challenge.

"Airbus group understands that the industrial controls system that underpin our critical national infrastructures, such as water treatment facilities, electricity grids, and our logistics and supply chains, must also be considered for the cybersecurity solutions that we bring in place," Kevin Jones, Head of Cyber Operations Research Team of Airbus Group, told WIRED UK.

"As these systems go online and become increasingly interconnected, we also need to take action to secure them," he adds. The cyber attack that physically affected the furnaces of a German steel mill back in December 2014 demonstrates the extent to which Internet crimes are infiltrating physical structures, he explains.

"Cyber attackers are looking to perform malicious actions against such industrially controlled systems, and as security professionals, we have to make sure we're building up the defenses," adds Jones.

EIN News

Jobs for Cyber Superstars

Raytheon has emerged as an industry leader in developing Cyber resilience, and their Cyber specialists help other organisations in government and industry to develop their Cyber capabilities.

The job opportunities are at a variety of levels within Raytheon's new Cyber Innovation Centre; including Cyber Research, Software Development, Systems Engineering and many other roles.

The benefits of the digital era come with a steep price; cyber attacks and cyber crime have quadrupled in recent years, and estimates of the losses exceed US$1 trillion.

With this type of crime and terror escalating, today's organisations require a new generation of digital defence warriors ready to face these global threats in this rapidly changing environment.

Experts predict 75 billion devices will hook into the Internet by 2020. That's a lot of open doors for hackers to exploit, and Raytheon Cyber engineers are working to stop them.

So, if code breaking excites you, reverse-engineering seems pretty straightforward and you like to break down information security systems just to build them back, stronger and better, then you are the kind of person we are actively seeking to recruit at Raytheon: a Cyber defence warrior who can help us protect the world's most critical data from breach, fraud, theft and sabotage.

Raytheon UK has announced bursaries for the cyber programme at Lancaster University. The innovative cyber course is accredited by GCHQ and takes a comprehensive view of cyber, with modules in business, law and psychology to ensure students are well prepared for the working environment.

Raytheon UK will provide a £3000 bursary each to six students towards their annual fees – three students reading data science and three reading cyber security. Students who want to be considered for the scholarship will be set technical challenges and will be asked to respond with a one-page project outline.


UK Police Should Retry Gun Technology Sensors

San Francisco is scaling up its use of an intelligent gunshot sensor system but when the same scheme was trialed in the UK it was abandoned after two years. The technology of the sensors has improved, so is it time to retry the system?

It sounds like a no-brainer: a tried and tested network of listening sensors is placed around a city and can pinpoint where a gunshot came from within seconds of the weapon being fired.

ShotSpotter promises to save police having to hunt door-to-door in the vague vicinity of a blast. It analyses the way sound waves from the shot radiate outwards, reaching its microphones at slightly different times.

Its maker, SST, says it can distinguish the sound of a bullet being fired from fireworks and other types of explosion, count how many shots were fired, and even deduce how many gunmen were involved.
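The arrival-time trick the article describes can be sketched in a few lines of Python. Everything here is illustrative: the sensor coordinates, the shot location and the brute-force grid-search solver are assumptions for the demo, not details of SST's actual system.

```python
import math

SPEED_OF_SOUND = 343.0  # metres per second, in air at roughly 20C

# Hypothetical sensor positions (metres) on a 500 m x 500 m patch.
sensors = [(0, 0), (500, 0), (0, 500), (500, 500)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Synthesise arrival times for a known shot location (fired at t = 0),
# so we can check the solver recovers it.
true_shot = (120, 340)
arrivals = [dist(true_shot, s) / SPEED_OF_SOUND for s in sensors]

def locate(sensors, arrivals, step=10):
    """Brute-force grid search: return the grid point whose predicted
    arrival-time differences (relative to sensor 0) best match the
    observed ones, in the least-squares sense."""
    best, best_err = None, float("inf")
    for x in range(0, 501, step):
        for y in range(0, 501, step):
            pred = [dist((x, y), s) / SPEED_OF_SOUND for s in sensors]
            err = sum(((a - arrivals[0]) - (p - pred[0])) ** 2
                      for a, p in zip(arrivals, pred))
            if err < best_err:
                best, best_err = (x, y), err
    return best

est = locate(sensors, arrivals)
```

Real deployments solve the same least-squares problem in closed form (multilateration) and must also reject fireworks and echoes, but the core idea is the one above: differences in arrival times pin down the source location.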

ShotSpotter allows the police to see how many shots have been fired, and from where. San Francisco is scaling up its use of the tech, and it has also been deployed in Miami, Boston, Puerto Rico and Rio de Janeiro.

But an effort to use it to combat gun crime in the UK was abandoned when authorities in the city of Birmingham reported "technical difficulties". So, what went wrong - and would it be worth reconsidering?

Privacy concerns

In December 2010, West Midlands Police were optimistic about what the innovation could achieve.

The cost of investigating a single murder could run to £1m. By contrast, installing the system cost £150,000 and a further £21,000 a year to maintain.

"We're delighted to be the first city in the UK to secure this technology," said Chief Supt Chris McKeogh at the time. Some residents expressed concern that their conversations might be picked up - a previous effort to install hidden CCTV cameras in the city had proven controversial and had to be abandoned - but the police assured them this would not happen.

But just 20 months later ShotSpotter was judged to be a failure.

In August 2012 West Midlands Police said of 1,618 alerts produced by the system since November 2011, only two were confirmed gunfire incidents.

Sensors were placed on buildings, but police would not say where or what the equipment looked like. What's more, ShotSpotter had also missed four confirmed shootings. The force concluded that resources would be better spent elsewhere.

With gunfire rates as low as they are in the UK, the cost/benefit equation needs to be carefully thought through.

Ch. Supt Clive Burgess said the system had "struggled to work" and that in future officers would instead focus on day-to-day community policing, anti-gun education programmes and the work of the counter-gang task force.

Now that the dust has settled, SST is willing to discuss what went wrong.

James Beldock, the firm's senior vice president of products, said the figures quoted two years ago were misleading.

"There were only two cases of an actual firearm shooting being missed [by SST] over an 18-month period," he said.

"The other two were air guns, which ShotSpotter is explicitly not designed to detect."

He acknowledged there were "technical problems", which caused the system to be less accurate than normal, but suggested this could have been avoided if the city had been more committed to the idea.

The ShotSpotter system is not designed to accurately identify air gun shots

"SST originally proposed a density of ShotSpotter sensors of approximately 10 per square kilometer," he said.

"Such sensor densities are standard for our international deployments - Brazil, South Africa, Panama, etc.

"Unfortunately, budget constraints pushed West Midlands Police to reduce that density. We take partial responsibility for permitting the budget to drive the decision, along with West Midlands Police."

The firm had learnt from this mistake and made other changes to improve the system.

SST staff now monitor all the sensors deployed worldwide through a central base in the US to confirm the cause of each explosion, rather than leaving such a judgement to local law enforcers on the ground.

Computers are able to differentiate between the sounds of gunshots and other noises

And a new generation of sensor, with approximately 10 times the processing power, has now been introduced, Mr. Beldock said.

Even so, Birmingham, and the other UK cities that eyed ShotSpotter, might be wise to remain cautious.

ShotSpotter is optimised to handle the very specific noises, frequencies and decibel levels created by conventional weapons.

But while such weapons may be relatively easy to come by in the US and parts of Latin America, they are less common in the UK.

As a result, criminals in Britain often resort to other types of firearms, including ones that shoot pellets and electric stun guns.

ShotSpotter's software can highlight gunfire hotspots to help police predict where the next incidents are most likely to occur

A review of the 22 injuries caused by guns in Birmingham's west and central areas between April 2011 and March 2012 reveals that the majority were the result of air-rifles and BB air guns.

"A higher sensor density might permit such modified weapons to be detected, but the economic equation would, again, need to be reviewed," said Mr. Beldock.

It's not impossible that ShotSpotter will return to the UK. The Home Office notes that it is "down to each regional police force" as to whether it invests in the equipment.

But for now it seems this is one instance where traditional methods trump cutting-edge tech - at least where British cities are involved.

BBC Tech


Self-driving Cars May Lead to Human Driver Ban

GTC 2015 Self-driving cars are "almost a solved problem," Tesla Motors boss Elon Musk told the crowds at Nvidia's GPU Technology Conference in San Jose, California.

But he fears the on-board computers may be too good, and ultimately encourage laws that force people to give up their steering wheels. He added: "We'll take autonomous cars for granted in quite a short time."

"We know what to do, and we'll be there in a few years," the billionaire SpaceX baron said.

Musk is no stranger to robo-rides: the Tesla Model S 85D and 60D have an autonomous driving system that can park the all-electric cars. Now he thinks Tesla will beat Google at its own game, and conquer the computer-controlled driving market, even though he's cautious of a world run by "big" artificial intelligence: "Tesla is the leader in electric cars, but also will be the leader in autonomous cars, at least autonomous cars that people can buy."

Although Musk said on stage at the conference on Tuesday that human driving could be ruled illegal at some point, he clarified later on Twitter: "When self-driving cars become safer than human-driven cars, the public may outlaw the latter. Hopefully not."

It's no coincidence Musk was making noises about AI-powered cars at the Nvidia event: the GPU giant is going nuts for deep-learning AI, and just officially unveiled its Titan X graphics card, which it hopes engineers and scientists will use for machine learning.

And if you want to be like Musk, and develop your own computer-controlled car, Nvidia showed off its Drive PX board that will sell for $10,000 in May. The idea here is to have a fleet of cars, each with a mix of 12 cameras and radar or lidar sensors, driven around by humans, and data recorded by the team uploaded to a specialized cloud service.

This data can be used to train a neural network to understand life on the road, and how to react to situations, objects and people, from the way humans behave while driving around. The Drive PX features two beefy Nvidia Tegra X1 processors, aimed at self-aware cars.

Each X1 contains four high-performance 64-bit ARM Cortex-A57 compute cores, and four power-efficient Cortex-A53s, lashed together using Nvidia's own interconnect, and 256 Maxwell GPU cores.

Car brain ... Jen-Hsun Huang, Nvidia CEO, holds up the Drive PX circuit board

Nvidia engineers have already apparently captured at least 40 hours of video, and used the Amazon Mechanical Turk people-for-hire service to get other humans to classify 68,000 objects in the footage. This information – the classifications and what they mean in the context of driving – is fed into the neural network. Now the prototype-grade software can pick out signs in the rain, know to avoid cyclists, and so on.

Nvidia's self-driving car software can pick out what's on the road

This trained neural network is transferred to the Tegra X1 computers in each computer-controlled car, so they can perform immediate image recognition from live footage when on the road.

The car computers can't learn as they go while roaming the streets. Deep-learning algorithms take days or weeks to process information and build networks of millions of neuron-ish connections to turn pixels into knowledge – whereas car computers need to make decisions in fractions of a second. This is why the training has to be done offline.

It's like a human learning how to drive: spend a while being taught how to master the vehicle, and then apply that experience on the road.

The Drive PX-controlled cars can also upload live images and other sensor data to the cloud for further processing if a situation confuses them, or they aren't sure they handled it right – allowing the fleet to improve its knowledge from experience even after the bulk of the neural net training has been completed.
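The split described above - train offline in the cloud, run only fast, frozen inference in the car, and flag uncertain frames for later retraining - can be sketched in a few lines. This is a hypothetical illustration, not Nvidia's actual API; the class and function names, and the confidence threshold, are invented for the example.

```python
# Minimal sketch of the train-offline / infer-online pattern described above.
# FrozenNet, drive_step and the 0.90 threshold are illustrative assumptions.

CONFIDENCE_THRESHOLD = 0.90  # frames below this are flagged for cloud review

class FrozenNet:
    """Stand-in for a pre-trained network; its weights never change in the car."""
    def predict(self, frame):
        # A real network would run a forward pass on the Tegra X1's GPU cores.
        # Here we return a canned answer so the sketch is self-contained.
        return frame.get("label", "unknown"), frame.get("confidence", 0.0)

def drive_step(net, frame, review_queue):
    """One real-time step: classify a frame in milliseconds, no learning."""
    label, confidence = net.predict(frame)
    if confidence < CONFIDENCE_THRESHOLD:
        review_queue.append(frame)  # material for offline retraining, not on-car learning
    return label

net = FrozenNet()
queue = []
clear = {"label": "cyclist", "confidence": 0.97}
murky = {"label": "sign", "confidence": 0.55}
print(drive_step(net, clear, queue))  # cyclist
print(drive_step(net, murky, queue))  # sign (but queued for cloud review)
print(len(queue))                     # 1
```

The key design point is that the expensive, days-long learning never happens in the vehicle: the car only runs the frozen network and records the cases it found hard.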

The Register

FBI Plans to Expand its Hacking Powers

A judicial advisory panel Monday quietly approved a rule change that will broaden the FBI's hacking authority despite fears raised by Google that the amended language represents a "monumental" constitutional concern.

The Judicial Conference Advisory Committee on Criminal Rules voted 11-1 to modify an arcane federal rule to allow judges more flexibility in how they approve search warrants for electronic data, according to a Justice Department spokesman.

Known as Rule 41, the existing provision generally allows judges to approve search warrants only for material within the geographic bounds of their judicial district.

But the rule change, as requested by the department, would allow judges to grant warrants for remote searches of computers located outside their district or when the location is unknown.

The government has defended the maneuver as a necessary update of protocol intended to modernize criminal procedure to address the increasingly complex digital realities of the 21st century. The FBI wants the expanded authority, which would allow it to more easily infiltrate computer networks to install malicious tracking software. This way, investigators can better monitor suspected criminals who use technology to conceal their identity.

But the plan has been widely opposed by privacy advocates, such as the American Civil Liberties Union, as well as some technologists, who say it amounts to a substantial rewriting of the rule and not just a procedural tweak. Such a change could threaten the Fourth Amendment's protections against unreasonable searches and seizures, they warn, and possibly allow the FBI to violate the sovereignty of foreign nations. The rule change could also let the agency target millions of computers at once, even potentially those belonging to users who aren't suspected of any wrongdoing.

Google weighed in last month with public comments that warned that the tweak "raises a number of monumental and highly complex constitutional, legal and geopolitical concerns that should be left to Congress to decide."

In an unusual move, Justice Department lawyers rebutted Google's concerns, saying the search giant was misreading the proposal and that it would not result in any search or seizures not "already permitted under current law."

The judicial advisory committee's vote is only the first of several stamps of approval required within the federal judicial branch before the rule change can formally take place—a process that will likely take over a year. The proposal is now subject to review by the Standing Committee on Rules of Practice and Procedure, which normally can approve amendments at its June meeting. The Judicial Conference is next in line to approve the rule, a move that would likely occur in September.

The Supreme Court would have until May 1, 2016 to review and accept the amendment, which Congress would then have seven months to reject, modify or defer. Absent any congressional action, the rule would take effect on Dec. 1, 2016.

Privacy groups vowed to continue fighting the rule change as it winds its way through the additional layers of review.

"Although presented as a minor procedural update, the proposal threatens to expand the government's ability to use malware and so-called 'zero-day exploits' without imposing necessary protections," said ACLU attorney Nathan Freed Wessler in a statement. "The current proposal fails to strike the right balance between safeguarding privacy and Internet security and allowing the government to investigate crimes."

Drew Mitnick, policy counsel with digital rights group Access, said the policy "should only be considered through an open and accountable legislative process."


US Loses Contact with Drone Aircraft in Syria

An unarmed US Predator drone aircraft went down in Syria, but it's not clear whether it was shot down as claimed by the Syrian government, US officials have said.

Syria's SANA state news agency said that the country's air defenses shot down a US drone in a northwestern province along the Mediterranean coast. US drones are operated over Syria to provide reconnaissance of certain parts of the country.

A US defense official said that military controllers lost contact with an MQ-1 Predator over northwest Syria. The official said there was no information to corroborate the claim that it had been shot down. The US official was not authorized to discuss the matter publicly, so spoke on condition of anonymity.

Fox News

Oxford Cyber Risk for Leaders Programme

Any organisation that relies on computer networks, digital information, the Internet or an Intranet is vulnerable to cyber security risks. Sabotage, hacking, malware, even uncontrolled use of social media: all these can lead to financial loss, disruption of your operations or service, and, inevitably, reputational damage. The threats are real, and they are changing all the time.

Managing these risks is not the sole responsibility of the IT department, or even of your Chief Information Security Officer (if you have one). As a leader, it is your job to understand and oversee your organisation's response to cyber risk.

This programme will enable you to develop leadership skills in the cyber arena, and to take effective action when dealing with an incident. It will help you build an awareness of the kinds of threat your business is likely to be facing, the operational dilemmas you will need to address, and, crucially, what questions you should be asking of your security advisors.

Building on world-leading research into cyber-security, and drawing on cross-disciplinary expertise from throughout the University of Oxford, the programme combines interactive lectures with simulations and discussions based on real, current cases. You need no background in cyber security: the focus is on general managers and directors.

Course length: 2.5 days (Monday - Wednesday)
Dates: 8 - 10 June 2015
Cost: £3,950 (accommodation not included)

Oxford Saïd Business School