16 Nov 2012

A conference I'll soon be attending asked participants to submit the questions they find most important for A.I. I didn't submit any, but the organisers have now asked us to vote for up to ten questions from the resulting list of ~150. Here are my picks (chosen quickly):

  • Q12. DWIM - Do What I Mean problems. Can computers do what I think or intend or say?
  • Q15. Can machines and the people think together?
  • Q35. The computer processing power is increasing with the invention of new hardware devices, does this mean that the computation processing power is infinite and are humans able to achieve it?
  • Q54. Can computers exhibit the dynamics of the financial market , rather can we a) Model the financial market b) Make the computers to exhibit the same
  • Q78. Does a system become always smarter by integrating smart sub-systems or smart components together? If no, in what cases it becomes smarter, and in what cases it becomes not smarter even stupid? How to make a system smarter when integrating smart elements?
  • Q127. What is the difference between Turing test and Socratic method?

I was a bit disappointed by the list of questions. I could not find ten I agreed with (though I might have missed one or two). Most of them were not only badly written, but clearly just asked about whatever the researcher is currently working on (e.g. how can A.I. make use of mobile technology? how can social networking improve our lives?) or restated questions A.I. has been asking for 50 years already (e.g. what is thinking, really?).

I am currently not that keen on being called an A.I. researcher, and this exercise (to some extent) showed me why. The concepts are too fuzzy for me; everything goes under the A.I. hat. Too few people care about the topics that will deeply influence how we manage in the upcoming decades.

One of the themes I miss is complexity - only questions 54 and 78 really touch upon that (though I'm not sure question 54 has the intention I hope it has). The complexity of interactions in our interrelated decision systems is becoming less and less understandable and controllable. And while we head towards a future of scarce resources, the complexity of our economic system keeps increasing, with no sign that the financial sector has actually added any value along the way.

A.I. researchers should decide why they do things. If you build robots, is that really what humankind needs? And if so, is it likely that we can afford a robot for everyone in 15 years? Do we need smarter decisions for future actions, or rather better explanations of why systems actually fail? If you study complexity, is Facebook really where our fate is decided?

# lastedited 21 Nov 2012