The Atlantic has an amazing in-depth article on how Douglas Hofstadter, the Pulitzer Prize-winning author of Gödel, Escher, Bach, has been quietly working in the background of artificial intelligence on the deep problems of the mind.
Hofstadter’s vision of AI – as something that could help us understand the mind rather than just a way of solving difficult problems – has gone through a long period of being deeply unfashionable.
Developments in technology and statistics have allowed surprising numbers of problems to be solved by sifting huge amounts of data through relatively simple algorithms – something called machine learning.
Translation software, for example, long ago stopped trying to model language and instead just generates output from statistical associations. As you probably know from Google Translate, it’s surprisingly effective.
The Atlantic article tackles Hofstadter’s belief that, contrary to the machine learning approach, developing AI programmes can be a way of testing out ideas about the components of thought itself. This idea may now be starting to re-emerge.
The piece also works as a sweeping look at the history of AI, and the only thing I was left wondering was what Hofstadter makes of the deep learning approach, which is a cross between machine learning stats and neurocognitively-inspired architecture.
It’s a satisfying, thought-provoking read that rewards time and attention.
If you want another excellent, in-depth read on AI, a great complement is another Atlantic article from last year where Noam Chomsky is interviewed on ‘where artificial intelligence went wrong’.
Both will tell you as much about the human mind as they do about AI.
Link to ‘The Man Who Would Teach Machines to Think’ on Hofstadter.
Link to ‘Noam Chomsky on Where Artificial Intelligence Went Wrong’.