Sunday, January 17, 2016

IBM is at it again; more lies about Watson and AI

IBM is at it again. They are still pushing Watson and still being fraudulent about it. Let’s look at the commercial that aired this weekend: 

Watson: Ashley Bryant, a teacher of small children.

AB: That's right.

Watson: I have read it is the hardest job in the world.

AB: That's why I am here.

Watson: I can offer advice from the accumulated knowledge of other educators.

AB: That's wonderful.

Watson: I can tailor a curriculum for each student by cross-referencing aptitude, development, and geography.

AB: Sorry to interrupt, but I just have one question: how do I keep them quiet?

Watson: There is no known solution.


Every single line of this is nonsense, so let’s take it line by line:


Watson: Ashley Bryant, a teacher of small children.  

How would Watson know it was talking to this woman? Does Watson know her? What would it mean to know her? Did Watson happen to recognize her when she walked up to a computer? How would that have happened exactly? AI can do some of this, but recognizing a person you have never met is complicated, and at this point beyond the abilities of AI except in a very superficial way.

AB: That’s right. 

Watson: I have read it is the hardest job in the world.  


Oh, Watson has, has it? Where exactly did it read that? How did it choose to say that as opposed to anything else it might have read about being a teacher? How did it know that this might be a reasonable thing to say? Does it have a conversational model of small talk with strangers? Why didn't it say "teaching is something teachers do"? It probably read that too. Or "I read about someone who hated their teacher"? What mental model does Watson have that helps it select what to say from everything it has "read" that contains the word "teacher"?

When people meet teachers in a new setting, is this something they are likely to say? Wouldn't that be a kind of condescending remark? Does Watson understand condescension? Does Watson understand intentions and goals in a conversation? NO. IBM is just pretending. IBM is making it all up. They are not working on how the mind works, how conversation works, or really on AI at all. They just like saying that that is what they are doing so they can sell Watson.
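For the record, here is roughly all the machinery this kind of "reading" requires. This is a toy illustration of my own (the corpus and the scoring are invented), not IBM's code, but keyword retrieval works on exactly this principle: rank stored sentences by word overlap with the input and return the winner.

    import re

    # A toy sketch of keyword retrieval (an illustration, not IBM's code).
    # It ranks stored sentences by crude word overlap with the input.
    # Nothing here models goals, intentions, or what is polite to say.

    def tokens(text):
        return set(re.findall(r"[a-z]+", text.lower()))

    corpus = [
        "Teaching is the hardest job in the world.",
        "Teaching is something teachers do.",
        "I read about someone who hated their teacher.",
    ]

    def respond(utterance):
        words = tokens(utterance)
        return max(corpus, key=lambda s: len(words & tokens(s)))

    # Overlap on the word "teacher" happily selects the hated-teacher line;
    # nothing in the scoring prefers the flattering one.
    print(respond("Ashley Bryant, a teacher of small children"))

Notice that the retrieval has no idea which of those sentences would be flattering, insulting, or condescending to say to a stranger. That choice was made by the people who wrote the commercial.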


AB: Thats why I am here. 

Watson: I can offer advice from the accumulated knowledge of other educators.

Really? It should be able to offer advice from me, then. I have knowledge about teaching, which includes knowing that when someone comes to talk with you, they typically have a reason for doing so, and that a good teacher asks what someone is thinking about or what their problems might be. Even in this fictional conversation, Watson makes clear that it has no idea how teaching or learning works at all. The right answer to "that's why I am here" would be to ask about her problem, not to make grandiose claims about things it can't do, namely matching what it has stored as text to her real problem. Asking the right question at the right time is one of the hallmarks of intelligence and of good teaching, and it is way beyond anything Watson can do.

AB: That’s wonderful.

Watson: I can tailor a curriculum for each student by cross-referencing aptitude, development, and geography.

Oh, it can, can it? Curricula are actually very difficult to build, and doing so requires a sense of what a student might want to learn and of the best ways to get them challenged and excited. Knowing the geographical placement of the student is sometimes relevant but hardly a major issue. Measurements of aptitude are tricky. Is Watson going to make a curriculum based on a student's SAT scores? Watson probably can provide more math problems to a student who got some wrong answers. That is probably something Watson can do, but it is not AI and it does not take intelligence to do. A good curriculum designer tries to figure out what is hard to comprehend and tries to make learning more fun, more engaging, more challenging, and more relevant to the individual goals of the students. Is that what Watson can do? Of course not. It can make absurd claims, however (or the people who wrote the commercial can).
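To see how little intelligence "more problems on what you got wrong" takes, here is a toy sketch (the topic names and problems are invented for illustration). It is a lookup table, nothing more:

    # A toy sketch of "serve more problems on missed topics" (invented
    # topics and problems, for illustration). A lookup table, not AI.

    problem_bank = {
        "fractions": ["1/2 + 1/3 = ?", "3/4 - 1/4 = ?"],
        "decimals": ["0.5 + 0.25 = ?", "1.2 * 3 = ?"],
    }

    def remediate(missed_topics):
        # Return every stored problem for each topic the student missed.
        return [p for topic in missed_topics
                for p in problem_bank.get(topic, [])]

    print(remediate(["fractions"]))  # ['1/2 + 1/3 = ?', '3/4 - 1/4 = ?']

No notion of why the student missed the topic, what confused them, or what would engage them appears anywhere in it.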

I heard about a company that uses Watson to help in education. It takes Wikipedia pages (or any text) and turns them into tests. A marvelous innovation. So, Watson can take a bunch of words and make up test questions about them. That's what it can do in education. And, by the way, that doesn't require understanding what the questions are even about, which is good, because Watson understands nothing.
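In case it isn't obvious how little understanding "turn any text into a test" requires, here is a guess at the simplest version (my sketch, not the company's actual method): blank out a longish word and call it a question.

    import random

    # A toy sketch of text-to-test generation (a guess at the simplest
    # version, not the company's actual method). It blanks one longish
    # word per sentence; no grasp of the content is involved.

    def make_cloze(sentence):
        words = sentence.split()
        candidates = [i for i, w in enumerate(words) if len(w) > 5]
        if not candidates:
            return None
        i = random.choice(candidates)
        answer = words[i]
        words[i] = "_____"
        return " ".join(words), answer

    question, answer = make_cloze(
        "Watson won Jeopardy by retrieving passages that matched the clue.")
    print(question)  # e.g. Watson won Jeopardy by _____ passages that matched the clue.
    print(answer)

A program like this can quiz you on a chemistry article without knowing any chemistry, which is exactly the point.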


AB: Sorry to interrupt, but I just have one question: how do I keep them quiet?

Watson: There is no known solution.

Why can’t Watson look up the word “quiet” and take words of wisdom from the accumulated knowledge of educators on the value of quiet?

Because this is a commercial, and like most commercials, it is selling based on minimal truth. IBM should know better, and it should stop doing this. They are trying to convince the world that computers are smarter than they are, that AI has succeeded far better than it has, and that they, IBM, know a lot about AI. All they seem to know about AI is how to retrieve text and lie about it.

Time to stop this crap, IBM.

3 comments:

Unknown said...

Maybe IBM is just desperate?
http://www.technologyreview.com/view/545801/dont-blame-watson-for-ibms-slide/?utm_campaign=newsletters&utm_source=newsletter-daily-all&utm_medium=email&utm_content=20160121

Doug Stowe said...

"How do I keep my kids quiet?" What kind of question is that? Who the hell says they should be? Real learning is messy and kids make noise. That may be an unpleasant circumstance and an inconvenience for AB, her administrators and educational policy makers at large. And Watson would never know that, having never actually paid any attention to students learning.

I write about hands-on learning and the necessity of it at http://wisdomofhands.blogspot.com. One of my readers referred me to this site, where he suspects I will find much in common with my own beliefs.

I often feel a sense of educational outrage of my own. Schooling as it is practiced in the US largely ignores all that we know about developmental psychology, how students learn, how their minds develop, and how they may best be lured to participate in democratic society.

logicmoo said...

> Roger Schank wrote:
> I started a company called Cognitive Systems in 1981. The things I was talking about then clearly have not been read by IBM (although they seem to like the words I used.) Watson is not reasoning. You can only reason if you have goals, plans, ways of attaining them, a comprehension of the beliefs that others may have, and a knowledge of past experiences to reason from. A point of view helps too.

I agree with nearly everything that you said in your well-written (scathing) piece about Watson. I really appreciate your point of view. Why do we not have voices like yours within the realm of public understanding?

I tend to create my own story of the history of AI (we all do), and you are an important part of that story. Forgive me for exaggerating or misrepresenting your role in some parts of the following narrative. The first book of yours I read was the one published with Abelson (SPGU). Towards the end of the book, either I came to the conclusion, or you did, that the software you outlined (involving SAM/PAM), which was capable of understanding complex scenarios, was held back only by the lack of prewritten scripts. I know that these prewritten scripts were intended to be very generalizable, meaning we needed only a trivial number (though a large enough number to offend the anti-"a priori" crowd at that time :) ). I don't remember if it was said specifically in the book (just due to the time period), but undoubtedly we could use hierarchies of generalizations to make a moderate number of scripts applicable to several different types of scenarios (later on we might call this ontological engineering, where we crafted methodologies for dealing with the various idiosyncrasies of subsumption equalities).

Your successes, I felt, were the energy behind the viability of the CYC project (I doubt that Doug Lenat's version of history coincides with mine). What I mean is that, with a handful of reasonable theories of machine understanding from various scientists who seemed to share a common problem (you one of them), we needed a blackboard that could hold an undefined amount of knowledge structures. For example, PAM and SAM have strong overlap in the types of knowledge structures they consume and, more importantly, produce. Meaning we needed an architecture capable of supplying your modules with the kind of generalizable/subsumptive knowledge structures that could be worked with over a long period of time and from the many modalities, contexts, and points of view that might be needed: not only your current theories (at the time) but future theories, and of course those from others too! Really, even to this day, I can't imagine it being possible to implement explanation patterns or script theory, SAM, PAM, etc. without having a system nearly identical to the architecture of CYC. I'm not referring to CYC as people describe it (my prescribed use of it is less about modeling human understanding and more about using it as a general blackboard), where the goals of each agent in various scenarios are represented homoiconically (I just mean readable by the agents themselves) and able to be accessed by various modules.

What really bothers me (fist-shakingly!) is that we have not yet disproven, or even tried (to my knowledge) to continue, the experiments that you have outlined on a scale that we seem to have available with Watson. Or have we?


Thanks in advance,

Douglas
