Building Watson Harvard Case Study Solution Analysis

Recommendation

While Watson is clearly not a perfect system and further research and development will be needed, the implications of Watson, and of technology like it, are profound. The field of artificial intelligence improves, changes, and evolves every day, and Watson represents only one stage in the advances and discoveries that will benefit this technology in the near future. It would be advisable to continue the research in order to realize those benefits. Watson processes information far faster than a human being, which should give it a clear advantage, and yet it gave a completely inaccurate answer to one question, which suggests that something is still missing in its decision-making about when to answer. A human being who is unsure of an answer might decline to respond; Watson, by contrast, chose to answer, and the answer it gave with low certainty was completely wrong.

Ultimately, Watson sets a precedent and encourages further effort and investment in the research and development of these technologies. Human beings' unpredictability and sense of discretion are still things that cannot be reproduced synthetically (Shih, 2012). Computers have none of humanity's emotional motivations, at least not in this case or at this level of technology, and it is these influences that shape the ethical, logical, and moral uses of our knowledge and thinking. There is a dynamic quality to human, organic thought that has yet to be captured in the programming of artificial intelligence and computer technologies.

Rationale

There are a number of reasons that Watson was chosen to compete on "Jeopardy" as a meaningful test of its abilities, including determining whether it could strategically choose when to answer and when not to, select categories from the board, and calculate its certainty when wagering money on "Daily Doubles" and "Final Jeopardy." At its core, Watson is a sophisticated computer, and computers are valued for their ability to compute, calculate, retrieve, and analyze data at incredible speeds, far faster than a human brain (Shih, 2012). A trivia challenge covering both common and specialized knowledge should therefore be an ideal task for a computer, one it should handle decidedly well, ideally better than any human opponent. There are, however, many human personality and psychological traits that are entirely foreign to artificial intelligence, from compulsive lying to truth-telling, from fair play to misdirection; these are just a few of the thousands of possible emotional, mental, and behavioral responses attributable to humans, none of which are shared by artificial intelligence (Shih, 2012). Given these criteria, could a machine win not only a match of intelligence but also a match of strategy and professional play?

While Watson answered many questions correctly throughout the course of the game, it was memorably wrong on one specific answer. Asked to name the U.S. city whose largest airport is named for a World War II hero and whose second largest for a World War II battle, Watson replied, "Toronto" (Shih, 2012). Toronto is not a U.S. city, is not in the United States at all, and has nothing to do with the clue's World War II namesakes. The degree of the error surprised the designers and showed that further study will be needed to determine why Watson chose an answer so obviously incorrect and then chose to share it; understanding whatever programmed "common sense" allowed it to think the answer could be correct will be extremely important.

Watson was programmed to signal its level of certainty by appending a different number of question marks to a given answer, which allows observers to gauge its confidence in that answer. On the "Toronto" question, Watson displayed several question marks, indicating considerable uncertainty. That said, with that level of uncertainty, why would Watson opt to answer at all? The designers believe that the flaws exposed during the "Jeopardy" challenge point directly to the need for more open and flexible software to handle unusual language, slang, and terminology that Watson may misinterpret, affecting its ability to judge correct answers (Shih, 2012).
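To make the decision the case describes more concrete, the minimal sketch below shows one way a confidence-threshold rule for choosing whether to answer could look. It is purely illustrative: the function names, data format, and threshold values are assumptions for explanation and do not reflect Watson's actual DeepQA implementation.

```python
# Illustrative sketch only: a simple confidence-threshold rule for deciding
# whether to attempt an answer. Names and threshold values are invented for
# explanation and are not IBM's actual implementation.

def choose_answer(candidates, buzz_threshold=0.5):
    """Pick the highest-confidence candidate and decide whether to answer.

    candidates: list of (answer_text, confidence) pairs, confidence in [0, 1].
    Returns (answer_text or None, confidence); None means decline to answer.
    """
    if not candidates:
        return None, 0.0
    best_answer, best_confidence = max(candidates, key=lambda pair: pair[1])
    if best_confidence < buzz_threshold:
        # Low certainty: a cautious player would stay silent.
        return None, best_confidence
    return best_answer, best_confidence


# Example: every candidate is weak, so the rule declines to answer --
# the cautious behavior one might have wanted on the "Toronto" clue.
answer, confidence = choose_answer([("Toronto", 0.14), ("Chicago", 0.11)])
print(answer, confidence)  # -> None 0.14
```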

Winning Jeopardy would demonstrate impressive problem-solving skill: adapting to the varied formatting and structure of questions and answers, and understanding when it is better to forgo answering because staying silent is, after all, strategically beneficial. Watson's performance can easily be called fascinating, intriguing, and a testament to the advancements in the technological sciences that take place every day, but it did not achieve the sweeping victory one might expect in a competitive question-and-answer game.

Background

Human beings are fascinated with the amazing potential of technology; from Frankenstein to time travel, and from lightsabers to starships, many concepts that once seemed like science-fiction fairy tales have become modern scientific realities. The idea of computers that could think, respond, and interact with human beings, once dismissed as foolishness, is now a mainstay of modern society. IBM invested a great deal of time and energy into developing technologies that could not only store, retrieve, and share information but also make judgment calls on the relevance and legitimacy of that information and rationalize its answer choices. Its first attempt in this direction was "Deep Blue," a computer that successfully defeated a world-class chess champion (Shih, 2012). That victory in a game like chess showed that an artificial opponent has the potential to exercise strategic, tactical, and counter-tactical planning in its efforts to beat an opponent.

The next endeavor in this field led to the development of Watson, a computer designed to make better judgments about the accuracy of information and to rationalize its use of that information. The best way to test this new system, it was decided, was to challenge it to a game of "Jeopardy." Jeopardy is a classic American television quiz show that has been on the air for decades. It is often regarded as one of the most demanding trivia challenges, requiring a great deal of both general and specialized knowledge (Shih, 2012). It also involves a certain level of strategy: at certain points in the game, contestants must wager money they have won based on their certainty in their answers. This is a critical point in the game, where an overconfident player can risk too much and a conservative player can retain an advantage even if their answer is inaccurate. It is this sort of varied play that makes the game such an ideal choice for challenging the advancing technologies that Watson represents. Its performance, however, revealed some surprising faults that could not have been anticipated but that will contribute to further research and study; perhaps the next effort to test this system, or another like it, will result in a truly challenging and impressive game with a more confident, more certain, and more accurate artificial opponent.
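As a rough illustration of why a wager should track certainty, the short sketch below computes the expected change in score for a given bet. The numbers and function name are invented for explanation; this is not the case's or IBM's actual wagering strategy, only a sketch of the overconfident-versus-conservative trade-off described above.

```python
# Illustrative sketch only: expected value of a wager as a function of the
# player's confidence in their answer. All figures are invented examples.

def expected_score_change(wager, confidence):
    """Expected change in score: the wager is won with probability
    `confidence` and lost otherwise."""
    return confidence * wager - (1 - confidence) * wager


# An overconfident player risking a large wager on a shaky answer loses
# ground on average, while a conservative wager limits the downside.
print(expected_score_change(10000, 0.4))  # -> -2000.0 (large bet, low certainty)
print(expected_score_change(1000, 0.4))   # -> -200.0  (small bet, low certainty)
print(expected_score_change(10000, 0.9))  # -> 8000.0  (large bet, high certainty)
```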

References
  • Shih, W. (2012). Building Watson: Not so elementary, my dear! Harvard Business School, 1-19.
