Month: April 2017

A.I. has to deal with its transparency problems

Below is a summary of the article: Artificial Intelligence has to deal with its transparency problems by The Next Web

*Note: Image used belongs to agsandrew / Shutterstock

Artificial Intelligence breakthroughs and developments entail new challenges and problems.

These challenges can become real problems as we move forward and Artificial Intelligence becomes more prominent in our lives.

AI is not flawless, and does make mistakes, albeit at a lower rate than humans.

We humans make mistakes all the time, including fatal ones.

The same tolerance is rarely extended to Artificial Intelligence: its mistakes are judged far more harshly than ours.

So can we trust Artificial Intelligence to make decisions on its own? For non-critical tasks, such as advertising, games and Netflix suggestions, where mistakes are tolerable, we can.

The same cannot be said of scenarios where human lives are at stake.

For the moment Artificial Intelligence will show its full potential in complementing human efforts.

In the meantime, firms and organizations must do more to make Artificial Intelligence more transparent and understandable.

Eventually, we’ll achieve – for better or worse – Artificial General Intelligence, AI that is on par with the human brain.


To read more, visit the original article.
This summary has been auto-generated.

This Japanese App Can Speak To You Like A Real Person

Below is a summary of the article: This Japanese App Can Speak To You Like A Real Person by Geek


If you’ve ever wished you could talk to your phone or some sort of app the way you can speak to real people, there’s an awesome new smartphone application in Japan that’s working to do just that.

It’s called SELF, and it features a female robot character named Ai Furuse.

She’s a friendly and understanding artificial intelligence engine that has just received newly upgraded conversation functionality that promises communication closer to what you’d expect from a “real girlfriend or partner.”

Different expressions are always important for creating empathetic characters, and with AI she can react to different situations with different and unique expressions.

With a conversation library of over 30,000 words, she can also come up with the right words to say when you speak to her.

A great AI always learns too, and Ai is capable of doing that.

She can store and remember conversations from the past, as well as emotions.

This is an awesome use of AI technology, and I’m interested in conversing with Ai, but unfortunately, I don’t speak Japanese.

Hopefully, there’s some way in the future to interface with a similar construct, preferably one that’s not only programmed for Japanese conversation.

Ai herself is pleasant on her own, but I’m waiting for the day where this kind of thing is nearly indistinguishable from the real thing.



We Are Entering the Era of the Brain Machine Interface

Below is a summary of the article: We Are Entering the Era of the Brain Machine Interface by Backchannel

*Note: Image used belongs to Facebook

Last week at Facebook’s F8 conference, former DARPA head Regina Dugan, who leads its research group called Building 8, revealed that Facebook was working on a Brain Machine Interface project.

Yes, Facebook, whose goal is to connect everyone in the world to its network, now is exploring how to navigate the ultimate last mile problem - the gap between your brain and the keyboard.

I met with Dugan and the respective product heads of the brain and skin projects.

Based on the DARPA model of hiring scientists for two-year sprints to determine viability of ambitious projects, Dugan hired Johns Hopkins neuroscientist Mark Chevillet to work on a system that would transfer brain signals to text.

Using noninvasive optical light sensors, Facebook could analyze the neuro-signature of words that a user consciously directs to the “pre-speech” brain region - basically a launch pad for what someone wants to say or write - and then produce it on a computer screen or file, at a rate of 100 words a minute.

Bryan Johnson is an entrepreneur who founded Kernel, a company planning to develop tiny chips that could be implanted in a brain and mess with the way that neurons signal each other.

As with Johnson’s Kernel, Musk is working on an implant that at first would help people with brain impairments such as stroke.

Again like Johnson, Musk views brain augmentation as a way that humans can compete with AI. As for the weirdness factor, Musk says that we fail to admit how much we are already kind of cyborgs.

If by, say, 2025, we can accurately interpret the brain patterns of words in the regions where people consciously intend the thoughts to be shared, who’s to say that by 2040 or so, we won’t get good at shooting optical beams at your skull to figure out what you’re really thinking? It’s been more than 50 years since Bob Dylan wrote, “If my thought-dreams could be seen/They’d probably put my head in a guillotine.” Sharpen the blades!

I mean all of us - if most people are zooming around with Lamborghini intelligence, you certainly don’t want to be left behind in a Model T. In any case, we may one day look back at this week as the beginning of the Brain Machine Interface era.



Rise of A.I.: learning to love machines

Below is a summary of the article: Rise of artificial intelligence: learning to love machines by The Australian


Twenty years later, after learning much more about the subject, I am convinced that we must stop seeing intelligent machines as our rivals.

Many of the great early figures in computer science dreamt of creating a machine that could play chess.

A computer to run it didn’t yet exist, so he flipped through pieces of paper to run his algorithm, a “paper machine” that could actually play a recognisable game of chess.

It took much longer than most early experts thought it would for machines to challenge the best human chess players.

During my 20 years at the top of the chess world, from 1985 to 2005, chess-playing machines went from laughably weak to the level of the world champion.

These are the same sensations that many are feeling today as intelligent machines advance in field after field.

The storyline grew more ominous and pervasive during the robotics revolution of the 60s and 70s, when more precise and intelligent machines began to encroach on unionised jobs in manufacturing.

Now we have reached the next chapter in the story, when the machines “threaten” the class of people who read and write articles about them.

While experts will always be in demand, more intelligent machines are continually lowering the bar to creating with new technology.

Intelligent machines will continue that process, taking over the more menial aspects of cognition and elevating our mental lives towards creativity, curiosity, beauty and joy.



Emerging Ethical Concerns In the Age of A.I.

Below is a summary of the article: Emerging Ethical Concerns In the Age of Artificial Intelligence by Entrepreneur

*Note: Image used belongs to Amazon

Science fiction novels have long delighted readers by grappling with futuristic challenges like the possibility of artificial intelligence so difficult to distinguish from human beings that people naturally ask, “Should these sophisticated computer programs be considered human? Should ‘they’ be granted human rights?” These are interesting philosophical questions, to be sure, but equally important, and more immediately pressing, is the question of what human-like artificial intelligence means for the rights of those whose humanity is not a philosophical question.

If artificial intelligence affects the way we do business, the way we obtain information, and even the way we converse and think about the world, then do we need to evaluate our existing definition(s) of human rights as well?

Of course, what constitutes a human right is far from universally agreed.

Historically, technological improvements and economic prosperity – as measured by per capita GDP – have tended to lead to an expanded view of basic human rights.

Do human beings have a right to earn a livelihood? And, if they do, how far does that right extend? How much discomfort is acceptable before the effort required to find gainful employment moves from reasonable to potentially rights-infringing? If technology renders human labor largely obsolete, do humans have a right to a livelihood even if they cannot earn it?

Technology challenges our conception of human rights in other ways, as well.

Some of the most fascinating applications of improved artificial intelligence relate to the ability to quickly and efficiently analyze large quantities of data, finding and testing correlations and connections and translating them into usable information.

Typically, concerns around access to and use of personal data have centered on personal privacy concerns.

As we increasingly rely on data aggregation software not only to provide us with organized information, but to influence or direct actions, we may increasingly find ourselves asking the question – should we have the right to ensure data is used fairly?

While clear answers are unlikely to emerge any time soon, it will be equally important to ensure that we, collectively as a society, are asking the right questions to ensure that technological innovation equates to genuine progress.



Japanese Automakers Look to Robots to Aid the Elderly

Below is a summary of the article: Japanese Automakers Look to Robots to Aid the Elderly by Scientific American

TOKYO – Japanese automakers are looking beyond the industry trend to develop self-driving cars and turning their attention to robots to help keep the country’s rapidly graying society on the move.

Toyota Motor Corp said it saw the possibility of becoming a mass producer of robots to help the elderly in a country whose population is ageing faster than the rest of the world as the birthrate decreases.

Toyota, the world’s second largest automaker, made its first foray into commercializing rehabilitation robots on Wednesday, launching a rental service for its walk assist system, which helps patients to learn how to walk again after suffering strokes and other conditions.

Toyota’s system follows the release by Honda Motor Co of its own walking assist “Robotic legs” in 2015, which was based on technology developed for its ASIMO dancing robot.

“If there’s a way that we can enable more elderly people to stay mobile after they can no longer drive, we have to look beyond just cars and evolve into a maker of robots,” Toshiyuki Isobe, chief officer of Toyota’s Frontier Research Center, told Reuters in an interview on Wednesday.

Speaking to reporters, he added that mass producing robots would be a natural step for the company, which evolved from a loom maker in 1905 into an automaker whose mission is to “make practical products which serve a purpose”.

“Be it robots or cars, if there’s a need for mass produced robots, we should do it with gusto,” Isobe said.

Globally, sales of robots for elderly and handicapped assistance will total about 37,500 units in 2016-2019, and are expected to increase substantially within the next 20 years, according to the International Federation of Robotics.

Still, industry experts said that automakers were well placed to compete with medical technology companies including Switzerland’s Hocoma and robot manufacturers such as ReWalk Robotics of the United States, both of which have developed robotic walking assist systems.

“On top of that, many of them have been partnering with the likes of Google and other companies looking at applying artificial intelligence, which will put them in a strong position to compete in robot services for the elderly.”



Robots Have Learned to Paint in Second Year of Robotic Art Contest

Below is a summary of the article: Robots Have Learned to Paint in Second Year of Robotic Art Contest by News Sys-Con Media


It should come as no surprise that dozens of robots from around the world are now also painting with a brush, and many of them are quite skilled.

The Robot Art 2017 competition returns for a second year with 39 painting robots, more than twice as many participants as in its inaugural year.

The creativity of the teams and robots was evident not only in the artwork they produced, but also in how they went about making the art.

Of the 39 painting robots, no two teams took the exact same approach.

HEARTalion built a robot that paints based on emotional interactions with humans.

Other teams built custom robots that capitalized on their innate lack of precision to make abstract work such as Anguis, a soft snake robot that slithers around its canvas.

Other robots were built to collaborate with their artistic creators such as Sander Idzerda’s and Christian H. Seidler’s entries.

E-David submitted multiple abstract self-portraits not of a human, but of the robot itself.

The robot then used artificial intelligence and deep learning to make all other “artistic” decisions, including taking the photos, making an original abstract composition from its favorite, and then executing each brushstroke until it had calculated it had done the best it could to render its original abstract composition.

The Robot Art 2017 competition will be running between now and May 15th, when more than $100,000 in awards will be given to the top painting robots.



A.I. and robots will take our jobs – but better ones will emerge for us

Below is a summary of the article: AI and robots will take our jobs – but better ones will emerge for us by Wired

*Note: Image used belongs to Luke Bugbee, 8VC

As virtual reality hardware and software evolve, whole new historical novels and science fiction adventures will be tailored to people’s individual personalities.

Construction of new habitats on the Moon and Mars will create colony design, terraforming jobs, and work building vehicles to handle the new terrain.

Given how spoiled baby boomers are, these wild children of the 60s and 70s will no doubt respond well to new forms of attention and entertainment! Taking care of our nation’s elderly in a tender, loving way - if done well - will generate millions of new jobs.

Professional coaching will be in high demand as the American economy moves forward, also creating new jobs in the millions.

New technology will quantify aspects of your emotional reactions, self-discipline and baseline outlook, and a new class of psychological coaches will emerge to help people improve their personalities.

With new staff, the upper-middle class will live better than the lords and ladies of old.

New technology will open up richer worlds of human interaction as we develop new techniques for measuring and understanding our humanity.

We will witness large demand for chimera pets and emotional support animals, as well as new species of animals to use in new and existing industries.

We should also make it easier for entrepreneurs to start new firms and employ more people in new forms of work.

To make sure we’re creating new jobs we need to cut the red tape of over one million rules that make our economy sclerotic and deter new business formation - and allow the market to rapidly evolve on its own terms to find new ways of employing millions of people.



Robots with Guns: The Rise of Autonomous Weapons Systems

Below is a summary of the article: Robots with Guns: The Rise of Autonomous Weapons Systems by Snopes

Weapons that can select and engage targets without human intervention are known as lethal autonomous weapons systems.

In any case, autonomous weapons are surely under development by many nations, a reality so concerning to non-military robotics and artificial intelligence experts that many signed an open letter in 2015 urging a pre-emptive international ban on the weapons.

Artificial Intelligence technology has reached a point where the deployment of such systems is – practically if not legally – feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.

Many arguments have been made for and against autonomous weapons, for example that replacing human soldiers by machines is good by reducing casualties for the owner but bad by thereby lowering the threshold for going to battle.

If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow.

Another group sounding the call for a halt to the development of autonomous weapons is the Campaign to Stop Killer Robots, co-founded by Human Rights Watch in 2012.

Members of the group will participate later this year in a meeting of government experts called by the United Nations Convention on Conventional Weapons, from which a new international protocol could emerge addressing the issue of – perhaps ultimately prohibiting – lethal autonomous weapons systems.

The U.S. isn’t among the nineteen nations calling for a ban, though it is one of the few countries whose military has adopted an official policy governing the development and use of autonomous weapons.

One would think such a view would be fully compatible with a ban on lethal autonomous weapons systems, but at least one other member of the Trump administration, Steven Groves, Deputy Chief of Staff of the U.S. Ambassador to the United Nations, has said otherwise.

The CCW Group of Governmental Experts on Lethal Autonomous Weapons Systems will meet for its first session on 21 August 2017.



My doctor is an algorithm: ‘Medicine has always welcomed new technology’

Below is a summary of the article: My doctor is an algorithm: ‘Medicine has always welcomed new technology’ by the Independent


The ‘good GP’ we know will always know his patients well: their medical history, their personality, details about their family and their foibles – and will retain this background information and call on it when assessing patients.

Unlike the good GP, who may be overworked, and stressed by the demands of a busy clinic, computers with built-in AI have an almost unlimited capacity to store information in medical records, and to recognise patterns that may have been missed by the GP. They are also excellent at measuring things, such as blood pressure, and analysing results, such as a routine blood test.

The role of specialist medical consultants – not just GPs – is also coming under threat from sophisticated ‘deep learning systems’ which are already outperforming doctors in specialised areas of medicine, says Professor Barry O’Sullivan, director of the Insight Centre for Data Analytics at the Department of Computer Science in University College Cork, and a leading AI researcher.

Patients key in symptoms, and AI determines how urgent each case is, and whether the user should be told to go straight to A&E, the chemist or simply go home to bed.
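The triage flow just described can be sketched, purely as an illustration, as a rule-based mapping from reported symptoms to a disposition. The symptom sets and the `triage` function below are invented for this example; the article does not describe the app’s actual logic, which would be far richer than keyword matching.

```python
# Toy rule-based triage sketch, for illustration only.
# The symptom sets are hypothetical, not taken from any real app.
URGENT = {"chest pain", "difficulty breathing", "severe bleeding"}
MODERATE = {"high fever", "persistent vomiting", "rash"}

def triage(symptoms):
    """Map a list of reported symptoms to one of three dispositions."""
    reported = {s.strip().lower() for s in symptoms}
    if reported & URGENT:      # any urgent symptom dominates
        return "go straight to A&E"
    if reported & MODERATE:    # otherwise check moderate symptoms
        return "visit the chemist"
    return "go home to bed"    # nothing flagged

print(triage(["Chest pain"]))
print(triage(["mild headache"]))
```

A real symptom checker would weigh combinations of symptoms, duration and patient history rather than matching single keywords, but the overall shape – symptoms in, disposition out – is the same.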

This is an area in which AI excels, as it is very good at looking at medical data and finding specific treatments for patients which can, in some cases, even save their lives.

“Organisations such as Cancer Commons have helped cancer patients survive conditions that were regarded as fatal in their situations by combining AI with extremely rich and individual-level data,” says Prof O’Sullivan.

“It is less useful for other dimensions of medical care: contextualising findings in the context of the patient’s life, moving from recognising a pattern to agreeing a narrative between doctor and patient, and providing reassurance.”

“AI is also less useful for picking up on the unexpected elements in a patient’s presentation: noting, for example, that a patient presenting with a lesion on his ear also has swollen ankles. Or noticing that someone with chest pain smells strongly of alcohol at 11 in the morning and might have an alcohol problem. Or noticing that a woman who comes to have a prescription renewed brings along her child, whom the nurse remembers has missed a vaccination, and can receive it today.”

“Medicine has always welcomed new technologies and benefited hugely from them: stethoscopes, x-ray machines, MRI. But these tools amplify the effectiveness of healthcare professionals, rather than replacing them.”

“There is a requirement for the GP to conduct face-to-face consultation to develop a rapport, to physically examine a patient and read nonverbal cues.”




Copyright © 2018 Arboter
