Get to know 5 of the scariest predictions about artificial intelligence
Published in Amwal Al Ghad on 02-08-2018

When people think of artificial intelligence (AI), scenes of killer androids and computers gone rogue often come to mind.
Hollywood films like "Blade Runner" and "The Terminator" franchise have instilled in us a sense of dread at the thought of an AI going against its programming and turning on humans.
For an industry that could generate over $1 trillion in business value this year, and almost $4 trillion by 2022, any major doubts over its ethical implications could hold significant consequences.
AI is a buzzword that gets tossed around often in the business world and in the media, but it is already having tangible effects for a slew of industries — not least those that rely on a significant amount of manual labor.
As AI comes increasingly closer to maturity, and businesses continue to ramp up investments in it, some worry that not enough attention is being paid to the broader social and moral implications of the technology.
CNBC spoke with some experts to see what they think are the five scariest potential future scenarios for AI.
Mass unemployment
A common fear among analysts, and indeed workers, is the likelihood that AI will result in mass global unemployment as jobs increasingly become automated and human labor is no longer required.
"Job losses are probably the biggest worry," said Alan Bundy, a professor at the University of Edinburgh's school of informatics.
According to Bundy, job losses are the primary reason for the rise of populism around the world — he cites the election of U.S. President Donald Trump and the U.K.'s decision to withdraw from the European Union as examples.
"There will be a need for humans to orchestrate a collection of narrow-focused apps, and to spot the edge cases that none of them can deal with, but this will not replace the expected mass unemployment — at least not for a very long time," he added.
Proponents of AI say that the technology will lead to the creation of new kinds of jobs.
The need for engineers will be heightened, as the sophistication of new technology requires the right talent to develop it.
Humans will also have to use AI, advocates say, in order to perform new functions in their day-to-day roles.
Research firm Gartner predicts that by 2020 AI will create 2.3 million jobs and eliminate 1.8 million, a net increase of 500,000 jobs. That net gain, however, does not rule out steep layoffs in particular industries and regions around the world.
A frequently referenced 2013 study by Oxford University points out that some of the most replaceable jobs include brokerage clerks, bank tellers, insurance underwriters and tax preparers — critical, though less skilled, occupations that keep the financial industry in motion.
Although it is possible to minimize the damage to the labor market from AI through upskilling and the invention of new jobs — and perhaps even introducing a universal basic income — it's clear the issue of job losses will not go away anytime soon.
War
The advent of so-called "killer robots" and other uses of AI in military applications has experts worried the technology could end up resulting in war.
Tesla Chief Executive Elon Musk, known for his outspoken views on AI, warned last year that the technology could result in World War III.
Though known for his hyperbole, Musk's comment channeled a very real fear for experts. Some analysts and campaigners contend that the development of lethal autonomous weapons and the use of AI in military decision-making creates a multitude of ethical dilemmas, and opens the possibility of AI-enhanced — or AI-led — wars.
There's even a group of NGOs (non-governmental organizations) dedicated to banning such machines. The Campaign to Stop Killer Robots, set up in 2013, calls on governments to prevent the development of AI-powered drones and other vehicles.
Frank van Harmelen, an AI researcher at the Vrije Universiteit Amsterdam, said that although he did not believe using the word "scary" to describe AI was entirely accurate, the use of these weapons should scare anyone.
"The only area where I genuinely think the word ‘scary' applies is autonomous weapon systems… systems that may or may not look like a robot," Harmelen said.
"Any computer system, AI or not, that automatically decides on matters of life and death — for example, by launching a missile — is a really scary idea."
Earlier this year, the U.S. defense think-tank Rand Corporation warned in a study that the use of AI in military applications could give rise to a nuclear war by 2040.
The thinking behind that bold prediction was that the chance of a military AI system making a mistake in its analysis of a situation could lead nations to take rash and potentially catastrophic decisions.
Those worries stem from an infamous incident in 1983, when Soviet military officer Stanislav Petrov saw that Soviet early-warning computers had incorrectly reported that the U.S. had launched nuclear missiles. His decision to treat the alert as a false alarm, rather than escalate it, averted a nuclear war.
Robo doctors
While experts mostly agree on the benefits AI will bring medical practitioners, such as diagnosing illnesses earlier and speeding up the overall healthcare experience, some doctors and academics are wary that the field is heading toward data-driven medical practice too quickly.
One fear among academics is that people are expecting too much of AI, assuming it can form the kind of general intelligence that humans possess to solve a broad range of tasks.
"All the successful AI applications to date are incredibly successful, but in a very narrow range of application," said the University of Edinburgh's Bundy.
According to Bundy, these expectations could have potentially dire consequences for an industry like healthcare. "A medical diagnosis app, which is excellent at heart problems, might diagnose a cancer patient with some rare kind of heart problem, with potentially fatal results," he said.
Just last week, a report by health-focused publication Stat cited internal IBM documents showing that the tech giant's Watson supercomputer had made multiple "unsafe and incorrect" cancer treatment recommendations. According to the article, the software was trained only to deal with a small number of cases and hypothetical scenarios rather than actual patient data.
"We created Watson Health three years ago to bring AI to some of the biggest challenges in healthcare, and we are pleased with the progress we're making," an IBM spokesperson told CNBC.
"Our oncology and genomics offerings are used by 230 hospitals around the world and have supported care for more than 84,000 patients, which is almost double the number of patients as of the end of 2017."
The spokesperson added: "At the same time, we have learned and improved Watson Health based on continuous feedback from clients, new scientific evidence and new cancers and treatment alternatives. This includes 11 software releases for even better functionality during the past year, including national guidelines for cancers ranging from colon to liver cancer."
Another concern is that the volume of data collected and shared by computers, together with the data-driven algorithms that use it to automate applications, raises ethical questions about patient privacy.
The dawn of big data, now a multi-billion dollar industry covering everything from trading to hospitality, means that the amount of personal information that can be collected by machines has ballooned to an unfathomable size.
The phenomenon is being touted as a breakthrough for mapping various diseases, predicting the likelihood that someone will become seriously ill, and planning treatment in advance. But concerns over how much data is stored, and where it is shared, are proving problematic.
Take DeepMind, for example. The Google-owned AI firm signed a deal with the U.K.'s National Health Service in 2015, giving it access to the health data of 1.6 million British patients. The scheme meant that patients handed their data over to the company in order to improve its programs' ability to detect illnesses. It led to the creation of an app called Streams, aimed at monitoring patients with kidney diseases and alerting clinicians when a patient's condition deteriorates.
But last year, U.K. privacy watchdog the Information Commissioner's Office ruled that the contract between the NHS and DeepMind "failed to comply with data protection law." The ICO said that London's Royal Free Hospital, which worked with DeepMind as part of the agreement, was not transparent about the way patients' data would be used.
Mass surveillance
Experts also fear that AI could be used for mass surveillance. In China, that fear appears to be becoming a reality.
In various Chinese municipalities, authorities are combining facial recognition technology with AI to clamp down on crime.
The world superpower is known for its social authoritarianism; the cult of personality around the late Chairman Mao Zedong remains pervasive more than four decades after his death. Critics say the nation's push toward total surveillance is nothing short of an Orwellian nightmare.
China is currently home to an estimated 200 million surveillance cameras, according to a New York Times report published earlier this month. It is also the only country in the world rolling out a "social credit system" that tracks citizens' activities and ranks them with scores that can determine whether they are barred from everything from plane flights to certain online dating services.
China, which is vying to be the global leader in AI by 2030, wants to boost the value of its AI sector to 1 trillion yuan ($146.6 billion). The country is pouring billions into the sector to help push this ambition.
One company leading the amalgamation of AI and facial recognition tech is SenseTime. Thought to be the world's most valuable AI start-up, with a valuation of $4.5 billion, Alibaba-backed SenseTime provides authorities in Guangzhou and the province of Yunnan with AI-powered facial identification.
SenseTime says on its website that Guangzhou's public security bureau has identified more than 2,000 crime suspects since 2017 with the help of the technology.
Toby Walsh, professor of AI at the University of New South Wales, said that surveillance was "high up" on his list of the frightening ramifications arising from AI.
Discrimination
Some readers may remember Tay, an AI chatbot created by Microsoft that caused a stir two years ago.
The bot was given a Twitter account, and it took less than a day for users to train it to post offensive tweets supporting Adolf Hitler and white supremacy. The problem was that the chatbot was trained to mimic the users interacting with it online. Given the often dark nature of some corners of the internet, the offending tweets were perhaps unsurprising.
The blunder forced Microsoft to pull the account. Although the incident proved somewhat humorous, it ignited serious debate about the potential for AI to become prejudiced.
Walsh said that discrimination was one of a number of "unexpected consequences" to expect from the technology.
"We're seeing this with unintended bias in algorithms, especially machine-learning that threatens to bake in racial, sexual, and other biases that we've spent the last 50-plus years trying to remove from our society," he said.
The issue, experts say, relates to the feasibility of making AI an objective, rational thinker, free of bias in favor of any particular race, gender or sexuality.
It's something researchers and developers have been thinking about seriously, examining things like the way facial recognition technology appears better at discerning white faces than black ones, and how language-processing AI systems can exhibit bias, associating certain genders and races with stereotypical roles.
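How such bias creeps in can be seen directly in word embeddings, the vector representations of words that many language-processing systems are built on. The sketch below is purely illustrative and assumes Python with the open-source gensim library and its downloadable, pretrained Google News word2vec vectors; neither tool is named in this article. Analogy queries expose the gendered associations a model absorbs from its training text:

import gensim.downloader as api

# Download and load 300-dimensional word2vec vectors trained on Google News text.
model = api.load("word2vec-google-news-300")

# Complete analogies of the form "man : X :: woman : ?". The answers reflect
# statistical associations in the training corpus, not objective truths.
for word in ["doctor", "programmer", "boss"]:
    neighbours = model.most_similar(positive=[word, "woman"],
                                    negative=["man"], topn=3)
    print(word, "->", [w for w, _ in neighbours])

Researchers have shown that queries of this form can return stereotyped completions such as "nurse" or "homemaker": the model has simply learned the prejudices latent in the text it was trained on, which is exactly the kind of baked-in bias Walsh describes.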
IBM even has researchers dedicated to tackling discrimination in AI, and earlier this year said it would release two datasets containing a diverse range of faces with different skin tones and other facial attributes to reduce bias in AI-powered facial recognition systems.
Source: CNBC

