Dave Farber
2018-07-17 02:22:44 UTC
Subject: [Dewayne-Net] Is There a Smarter Path to Artificial Intelligence? Some Experts Hope So
Date: July 17, 2018 at 2:22:36 AM GMT+9
Is There a Smarter Path to Artificial Intelligence? Some Experts Hope So
By Steve Lohr
Jun 20 2018
<https://www.nytimes.com/2018/06/20/technology/deep-learning-artificial-intelligence.html>
For the past five years, the hottest thing in artificial intelligence has been a branch known as deep learning. The grandly named statistical technique, put simply, gives computers a way to learn by processing vast amounts of data. Thanks to deep learning, computers can easily identify faces and recognize spoken words, making other forms of humanlike intelligence suddenly seem within reach.
Companies like Google, Facebook and Microsoft have poured money into deep learning. Start-ups pursuing everything from cancer cures to back-office automation trumpet their deep learning expertise. And the technology's perception and pattern-matching abilities are being applied to improve progress in fields such as drug discovery and self-driving cars.
But now some scientists are asking whether deep learning is really so deep after all.
In recent conversations, online comments and a few lengthy essays, a growing number of A.I. experts are warning that the infatuation with deep learning may well breed myopia and overinvestment now, and disillusionment later.
"There is no real intelligence there," said Michael I. Jordan, a professor at the University of California, Berkeley, and the author of an essay published in April intended to temper the lofty expectations surrounding A.I. "And I think that trusting these brute force algorithms too much is a faith misplaced."
The danger, some experts warn, is that A.I. will run into a technical wall and eventually face a popular backlash, a familiar pattern in artificial intelligence since that term was coined in the 1950s. With deep learning in particular, researchers said, the concerns are being fueled by the technology's limits.
Deep learning algorithms train on a batch of related data, like pictures of human faces, and are then fed more and more data, which steadily improves the software's pattern-matching accuracy. Although the technique has spawned successes, the results are largely confined to fields where those huge data sets are available and the tasks are well defined, like labeling images or translating speech to text.
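The "more data, better pattern matching" dynamic can be illustrated with a toy classifier; this is a minimal sketch in plain Python, not a deep network, and all the names and numbers in it are invented for illustration. "Training" here is just averaging each class's examples into a centroid, and accuracy on fresh data climbs as the training batch grows.

```python
import random

random.seed(0)
DIM = 8  # each synthetic "image" is just 8 noisy numbers

def sample(label, n):
    # A stand-in for "a batch of related data": noisy vectors around a
    # per-class center (class 0 near 0.0, class 1 near 2.0).
    center = 0.0 if label == 0 else 2.0
    return [(label, [random.gauss(center, 2.0) for _ in range(DIM)])
            for _ in range(n)]

def train(examples):
    # "Training" is just averaging each class's vectors into a centroid,
    # a crude stand-in for statistical pattern matching.
    centroids = {}
    for lbl in (0, 1):
        vecs = [x for l, x in examples if l == lbl]
        centroids[lbl] = [sum(col) / len(vecs) for col in zip(*vecs)]
    return centroids

def predict(centroids, x):
    # Assign the input to the nearest class centroid.
    return min(centroids,
               key=lambda l: sum((a - b) ** 2 for a, b in zip(centroids[l], x)))

def avg_accuracy(n_train, trials=30):
    # Average held-out accuracy over several independent runs.
    total = 0.0
    for _ in range(trials):
        model = train(sample(0, n_train) + sample(1, n_train))
        test = sample(0, 50) + sample(1, 50)
        total += sum(predict(model, x) == l for l, x in test) / len(test)
    return total / trials

acc_small = avg_accuracy(2)    # 2 examples per class
acc_large = avg_accuracy(500)  # 500 examples per class
print(f"2 examples/class:   {acc_small:.2f}")
print(f"500 examples/class: {acc_large:.2f}")
```

More data sharpens the centroids and lifts accuracy; it also shows the flip side the article goes on to describe, since nothing here works without a sizable, well-labeled data set for a narrowly defined task.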
The technology struggles in the more open terrains of intelligence: meaning, reasoning and common-sense knowledge. While deep learning software can instantly identify millions of words, it has no understanding of a concept like "justice," "democracy" or "meddling."
Researchers have shown that deep learning can be easily fooled. Scramble a relative handful of pixels, and the technology can mistake a turtle for a rifle or a parking sign for a refrigerator.
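The mechanism behind such fooling can be sketched with a toy linear model; real adversarial attacks target deep networks and use gradients, but the spirit is the same: nudge only the few input values the model is most sensitive to. The weights, "image," and class names below are all invented for illustration.

```python
# Toy "adversarial example": flip a classifier's decision by changing
# only a handful of the inputs it weighs most heavily.
weights = [0.9, -0.7, 0.05, 0.6, -0.02, 0.3, -0.8, 0.1]  # a tiny fixed "model"
image = [1.0, 0.2, 0.5, 0.8, 0.3, 0.6, 0.1, 0.4]         # a tiny fixed "image"

def classify(x):
    score = sum(w * v for w, v in zip(weights, x))
    return "turtle" if score > 0 else "rifle"

def perturb(x, k=2, step=2.0):
    # Nudge only the k "pixels" with the largest weights, each in the
    # direction that lowers the score (the direction of greatest effect).
    adv = list(x)
    for i in sorted(range(len(weights)), key=lambda i: -abs(weights[i]))[:k]:
        adv[i] -= step if weights[i] > 0 else -step
    return adv

adversarial = perturb(image)
changed = sum(a != b for a, b in zip(image, adversarial))
print(classify(image), "->", classify(adversarial), f"({changed} pixels changed)")
```

Two altered values out of eight are enough to flip the label, because the model matches surface patterns rather than understanding what a turtle is.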
In a widely read article published early this year on arXiv.org, a site for scientific papers, Gary Marcus, a professor at New York University, posed the question: "Is deep learning approaching a wall?" He wrote, "As is so often the case, the patterns extracted by deep learning are more superficial than they initially appear."
If the reach of deep learning is limited, too much money and too many fine minds may now be devoted to it, said Oren Etzioni, chief executive of the Allen Institute for Artificial Intelligence. "We run the risk of missing other important concepts and paths to advancing A.I.," he said.
Amid the debate, some research groups, start-ups and computer scientists are showing more interest in approaches to artificial intelligence that address some of deep learning's weaknesses. For one, the Allen Institute, a nonprofit lab in Seattle, announced in February that it would invest $125 million over the next three years largely in research to teach machines to generate common-sense knowledge, an initiative called Project Alexandria.
While that program and other efforts vary, their common goal is a broader and more flexible intelligence than deep learning. And they are typically far less data hungry. They often use deep learning as one ingredient among others in their recipe.
"We're not anti-deep learning," said Yejin Choi, a researcher at the Allen Institute and a computer scientist at the University of Washington. "We're trying to raise the sights of A.I., not criticize tools."
Those other, non-deep learning tools are often old techniques employed in new ways. At Kyndi, a Silicon Valley start-up, computer scientists are writing code in Prolog, a programming language that dates to the 1970s. It was designed for the reasoning and knowledge representation side of A.I., which processes facts and concepts, and tries to complete tasks that are not always well defined. Deep learning comes from the statistical side of A.I. known as machine learning.
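A rough feel for that reasoning-and-knowledge-representation style can be had from a few lines mimicking Prolog-like rules in Python (kept in Python rather than Prolog for consistency with the sketches above): a program is told a few facts and an if-then rule, then derives facts it was never directly given. This is only an illustration of the symbolic approach in general, not a description of Kyndi's actual system, and every name in it is made up.

```python
# Symbolic, Prolog-like reasoning: facts plus inference rules yield new
# facts by forward chaining, with no training data at all.
facts = {("parent", "ann", "bob"), ("parent", "bob", "cal")}

def rule_grandparent(fs):
    # parent(X, Y) and parent(Y, Z)  =>  grandparent(X, Z)
    derived = set()
    for (p1, x, y1) in fs:
        for (p2, y2, z) in fs:
            if p1 == p2 == "parent" and y1 == y2:
                derived.add(("grandparent", x, z))
    return derived

def forward_chain(fs, rules):
    # Apply every rule repeatedly until no new facts appear.
    while True:
        new = set().union(*(r(fs) for r in rules)) - fs
        if not new:
            return fs
        fs = fs | new

known = forward_chain(facts, [rule_grandparent])
print(("grandparent", "ann", "cal") in known)
```

The contrast with the earlier sketches is the point: no data set, no accuracy curve, just explicit facts and rules, which is why hybrid efforts try to combine the two branches.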
Benjamin Grosof, an A.I. researcher for three decades, joined Kyndi in May as its chief scientist. Mr. Grosof said he was impressed by Kyndi's work on "new ways of bringing together the two branches of A.I."
Kyndi has been able to use very little training data to automate the generation of facts, concepts and inferences, said Ryan Welsh, the start-up's chief executive.
[snip]
Dewayne-Net RSS Feed: http://dewaynenet.wordpress.com/feed/
Twitter: https://twitter.com/wa8dzp