Why Your "TQ" May Matter More Than Your "IQ"

27 Dec. 2011

David Ferrucci, Principal Investigator for IBM's DeepQA/Watson project, wants me to understand something very, very important about what makes Watson work. The algorithms, software and massively parallel architectures underlying the Jeopardy-winning technology's design are undeniably important, he says. But the essential breakthrough, Ferrucci asserts, is that Watson — like a dog, a dolphin, a killer whale or a high school apprentice — could be successfully trained.

Programming — or learning how to use — your digital devices is yesterday's paradigm. Tomorrow's technologies are increasingly about their ability to learn from you and your ability to effectively train them. Writing decent code may prove a less marketable skill than excellence at tutoring a child or, quite literally, teaching an old dog new tricks. The locus of value creation is shifting away from assembling innovative ensembles of technology and towards training those ensembles to be smarter than you are.

This theme and thesis dominated MIT's "Race Against the Machine" symposium exploring the future of human employment in an era of digital business transformation. Watson's success was hailed as both precursor and avatar of a rapidly emerging genre of intelligent systems. What digitally distinguished Watson's design was not how much it "knew" but how quickly and easily it could be trained to apply its disparate knowledge. Watson didn't win because it "knew" more. It kicked human butt because it could learn — and act on its learning — far faster and with greater confidence. Watson was bred to be ultra-trainable.

Google research vice president — and ex-IBMer — Alfred Spector effectively agreed. What makes Google's collective intelligence algorithms so brilliant, he observed, is that they're constantly learning from — as well as about — their users. The kind of intelligence Google is gunning for is not greater knowledge and more information but greater ability and more flexibility in learning how to learn.

Apple's tremendously popular Siri is modeled on the same technical sensibility of "interactive adaptiveness." IQ is rapidly giving way to TQ — the Trainability Quotient — as the metric that matters most to the artificial intelligentsia.

This represents radically different expectations and definitions of machine intelligence from those of just a generation ago. When I studied AI, logic and ontology mattered most. Specifying rules and knowledge domains for "expert systems" was the state of the art. Today, data sets and statistics dominate. Machine Intelligence has quickly been overtaken by Machine Learning as the quantitative discipline redefining cognition, decision, language and psychology. Massively parallel computational architectures have superseded tightly written, dedicated algorithms for solving — or resolving — ambiguously complex problems.
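The shift described above, from hand-specified rules to behavior learned from labeled examples, can be sketched in a few lines of toy Python. Everything here (the function names, the word-counting "model," the sample messages) is invented for illustration; it is not from the article and is vastly simpler than anything Watson or Google actually runs.

```python
from collections import Counter

# Expert-system style: a human writes the rule directly.
def rule_based_is_spam(message):
    return "free money" in message.lower()

# Machine-learning style: a human supplies labeled examples,
# and the "rule" (here, simple per-word scores) is learned from data.
def train(examples):
    """Learn per-word spam scores from (message, is_spam) pairs."""
    weights = Counter()
    for message, is_spam in examples:
        for word in message.lower().split():
            weights[word] += 1 if is_spam else -1
    return weights

def learned_is_spam(weights, message):
    # A message is flagged when its learned word scores sum positive.
    return sum(weights[w] for w in message.lower().split()) > 0

examples = [
    ("free money now", True),
    ("claim your free prize", True),
    ("lunch meeting tomorrow", False),
    ("quarterly report attached", False),
]
weights = train(examples)
print(learned_is_spam(weights, "free prize money"))  # prints True
```

The point of the toy is the division of labor: in the first function a programmer encodes the answer; in the second, a trainer shapes the answer by choosing and labeling examples — exactly the skill the article argues is gaining value.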

The triumph of Machine Learning has spawned its complementary counterpart, Machine Training — which means that "the right answer" is increasingly determined not by the machines but by their users. People train, teach or tutor — but don't say program! — the machines to come up with answers, outcomes and scenarios that have the desired degree of precision. For the original Watson, the right answer was the right question in a fraction of a millisecond. But now Watson's TQ is being reengineered and rededicated to health care diagnostics. Different doctors in different specialties will no doubt train their Watsons differently.

As the learning abilities of machines dramatically improve, so does the importance of people to tutor and train them. Your career prospects are not bright if a machine has little to learn from you. Conversely, the more your technologies can learn — and add value — based on that knowledge, the more valuable you are likely to be.

The human capital implications are compelling: you might be far better off professionally investing time in becoming a better tutor and coach than learning a new computer language. Similarly, if you can be a Cesar Millan of machines — a digital disciplinarian who helps people get more value from their devices much the way the original helps people have healthier relationships with their dogs — you possess a core competence that virtually guarantees a high-impact professional life. With all due respect to Peter Drucker, you can learn a lot from an animal trainer. You need to learn how to help your machines learn if you want to succeed.

This blog first appeared on Harvard Business Review on 11/03/2011.

  • About the Author: Michael Schrage

    Michael Schrage, a research fellow at MIT Sloan School’s Center for Digital Business, is the author of Serious Play and the forthcoming Getting Beyond Ideas.

