Future Tech: An interview with Jason Leonard, IBM Watson

In this Future Tech interview we’re speaking with Jason Leonard, the Watson Business Leader for IBM across Asia Pacific, Greater China Group & Japan. Watson, the cognitive computer famous for its win on Jeopardy!, is now in the workplace doing everything from helping pharmaceutical companies accelerate research to providing automated self-service for customers. In this role Jason leads a rapidly growing team responsible for engaging customers on behalf of the Watson Business Unit. Jason has 20+ years of experience across many industries, including Defence, Government, Banking & Insurance and Software Development. Jason has a Master’s Degree in Applied Science (Artificial Intelligence) from RMIT University and a First Class Honours Degree in Computer Systems Engineering from Swinburne University.

Shara Evans (SE): Greetings. This is Shara Evans from Market Clarity. Today, I am delighted to be speaking with Jason Leonard about artificial intelligence. Jason is the Director of Watson’s Cognitive Systems at IBM, leading the service line across Asia-Pacific, China, and Japan. He has tremendous experience both on the business-application side and in computer science, and, in particular, working with R&D organisations, designing and architecting and developing artificial-intelligence applications.

Jason, thank you so much for joining us today.

Jason Leonard (JL): Thank you, Shara.

SE: I’d like to start our discussion with a bit of a backgrounder about artificial intelligence, AI. As I understand it, there are a couple of different paths that AI can take. The first is expert systems, or what people call narrow AI. The second is artificial intelligence that tries to mimic the human mind, called general intelligence. Perhaps you can give us a bit more insight into these different streams of machine learning.

In the beginning

JL: Sure. Artificial intelligence has been around for a very long time now. Some of the earliest methods date back to the 1950s and 1960s. In those first years, a lot of the work went into developing what were known as ‘expert systems’. People looked at methods by which they could use logic, so-called “if-then-else” rules, or rule engines more generally, to solve a particular problem.
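To make the idea concrete, here is a minimal sketch of an expert-system-style rule engine in Python. The facts and rules are invented purely for illustration; real expert systems of the era chained hundreds or thousands of such rules.

```python
# Minimal sketch of an expert-system rule engine: hand-written
# if-then rules applied over a set of known facts. The facts and
# rules below are invented purely for illustration.

facts = {"fever": True, "rash": True}

# Each rule pairs a condition over the facts with a conclusion.
rules = [
    (lambda f: f["fever"] and f["rash"], "possible measles: refer to a specialist"),
    (lambda f: f["fever"] and not f["rash"], "possible flu: rest and fluids"),
]

def infer(facts, rules):
    """Apply every rule and collect the conclusions whose
    conditions hold: the classic if-then-else approach."""
    return [conclusion for condition, conclusion in rules if condition(facts)]

print(infer(facts, rules))  # ['possible measles: refer to a specialist']
```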

By Tej3478 (Own work) [CC BY-SA 4.0 (http://creativecommons.org/licenses/by-sa/4.0)], via Wikimedia Commons

This early work was found to be rather limited and very narrow. A range of new techniques was developed and started to take off in the 1990s, involving things like neural networks and genetic algorithms.

Then, in more recent times, the explosion in the amount of data available to people has meant these systems have a lot more information to work with and a lot more opportunities to become useful. Now this explosion of data has come about, in part, because of the growth of the Internet and smartphones. Smartphones deliver a whole bunch of extra capabilities, like GPS, for example. So the applications for AI have started to expand from there as well.

Now when we talk about data, there are broadly two types. One is structured data; the other is unstructured. Structured data is the kind of stuff that computers have worked with very well for the last 60-odd years, for example information stored in relational databases. Unstructured data is the sort of stuff that’s really come with the explosion of information on the Internet: photos, videos, textual documents, emails, and tweets, for example. Everything that’s on social media tends to be unstructured. We are finding that this unstructured data – which has been ‘dark’ (unavailable) to computer systems – is now accessible and providing really interesting insights for organisations.
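A toy contrast between the two kinds of data, with invented examples: the structured record can be queried directly, while the unstructured text has to be interpreted before a program can use it.

```python
# Structured data: fixed fields a program can query directly.
customer = {"id": 42, "name": "A. Nguyen", "balance": 1250.75}
print(customer["balance"])  # 1250.75, no interpretation needed

# Unstructured data: free text whose meaning must be extracted.
tweet = "Loving the new phone, but the battery life is terrible!"

# Even a crude sentiment guess requires processing the language;
# the word list here is invented for illustration.
negative_words = {"terrible", "awful", "broken"}
tokens = {w.strip("!,.") for w in tweet.lower().split()}
print("negative" if negative_words & tokens else "positive")  # negative
```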

IBM Watson: General machine intelligence

SE: In terms of what IBM is doing with Watson, would that type of cognitive computing fall into the category of artificial general intelligence more so than a narrow AI?

JL: The future of such technology, which we believe will be cognitive, not “artificial”, has very different characteristics from those generally attributed to artificial intelligence, spawning different kinds of technological, scientific and societal challenges and opportunities, with different requirements for governance, policy and management.

One of the most important things to know about cognitive computing, or general intelligence, if we use that term, is the ability of Watson to learn at scale, reason with purpose, and interact with humans naturally. Watson learns by example rather than people trying to program it or to set up pre-learned rules. This was first demonstrated on the “Jeopardy” TV show.

SE: I remember that.

JL: With the “Jeopardy” TV show, Watson had to learn from previous “Jeopardy” games how to interpret a question and how to find the answer with a high level of confidence. When Watson played Jeopardy! it did one thing: natural-language Q&A, based on five technologies. Today, Q&A is only one of many Watson capabilities available as an application programming interface (API). Since then, we have developed more than two dozen new APIs, powered by 50 different cognitive technologies.
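Those capabilities are typically consumed as REST APIs. The snippet below is only a generic illustration of that calling pattern; the endpoint URL, credentials, and JSON fields are placeholders, not the actual Watson API schema.

```python
# Illustrative only: a generic REST call in the shape of a cloud
# NLP/Q&A API. The URL, credentials, and JSON fields below are
# placeholders, not the real Watson endpoint or request schema.
import requests

response = requests.post(
    "https://api.example.com/v1/question",   # placeholder endpoint
    auth=("apikey", "YOUR_API_KEY"),         # placeholder credentials
    json={"text": "Who won the Jeopardy! exhibition match in 2011?"},
    timeout=10,
)
response.raise_for_status()
print(response.json())  # e.g. candidate answers with confidence scores
```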

IBM and The Weather Company Partner to Bring Advanced Weather Insights to Business (Source: IBM Watson)

SE: It seems to me like there’s a bit of a blur between what you would call expert systems and this general intelligence. It’s not just one or the other.

JL: There is also a third category, and that’s the whole area of advanced analytics. When we talk about things like predicting the weather, or trying to understand what is of interest to a person when they’re buying something, we would set up advanced models that may also use learning technologies. And now we are seeing applications with a combination of technologies: some solutions have elements of expert systems and elements of cognitive computing, and use analytics as well.

SE: Yes, so it really is quite a blend. I wanted to ask you about something else that I’ve been hearing a lot about, which is a term called deep learning as opposed to just general learning, with respect to artificial intelligence. What’s changed there in the last year or so?

JL: Just going back to what I was referring to, there’s a lot more data available these days. The Internet has opened up this area of deep learning. Deep learning is about learning by example.

AI and Emotional understanding

SE: There’s one other trend that I’d like to talk to you about, which is really exciting, which is emotional understanding by intelligent systems. I’d really like to get your feedback on how well you think computer systems can pick up on emotions today, and what’s required to do that? I’d imagine things like the cadence of voice, but also robot vision, and looking at facial expressions, and so on.

JL: Yes, there are actually quite a few different aspects to this one. Apart from the couple that you’ve just mentioned, there’s also feedback to the human. It’s one thing for the computer to understand the current emotional state of the person, but we also want the computer to be able to respond in a natural, human-like way to that person as well. The computer shouldn’t just answer in the same monotone voice every time.

Going back to that “Jeopardy” game show, it wasn’t really about winning a game show. It was actually about understanding human language. Let’s call that the first layer. Human language itself has a whole bunch of nuances, jokes, puns, ambiguities, so the computer needs to understand that.

Then a step along from that is: if the technology can learn to understand your personality a little bit better, like whether you’re extroverted or introverted, more adventurous, or conscientious, then the experience for the person engaging with a cognitive computing system is likely to be better.

SE: So that would draw on big data, wouldn’t it, being able to get information about the person you’re speaking to from lots of different sources, because it’s hard to pick up on a personality in a very short transaction?

JL: Well, that’s really interesting. Some things that we think might need lots of data in fact only need a comparatively small amount. Picking up someone’s personality (we have a Watson capability to do that) actually doesn’t need that much data at all. Watson, for example, can use a whole bunch of tweets that a person posts, and can start to form a pretty good idea of what your personality traits are based upon the words and the phrases you use.

Now think about this technology in the context of creating applications for different types of engagements. For example, if I’m a marketing person and I’m trying to appeal to a particular group of people, say general consumers versus a more technical audience, then I will want to use a different tone.
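As a toy illustration of that idea (and not Watson’s actual personality model), trait signals can be approximated by counting marker words in someone’s posts. The traits, marker lists, and tweets below are all invented:

```python
# Toy sketch of text-based personality scoring: count invented
# "marker" words per trait, normalised by text length. This
# illustrates the idea only; it is not Watson's actual model.
from collections import Counter

MARKERS = {
    "extroverted": {"party", "friends", "fun", "we"},
    "adventurous": {"travel", "new", "explore", "risk"},
    "conscientious": {"plan", "schedule", "finish", "careful"},
}

def trait_scores(tweets):
    # Tokenise crudely and count marker-word hits per trait.
    words = Counter(w.strip("!,.?") for t in tweets for w in t.lower().split())
    total = sum(words.values()) or 1
    return {trait: round(sum(words[m] for m in markers) / total, 2)
            for trait, markers in MARKERS.items()}

tweets = ["Off to explore a new city with friends!",
          "We plan, we travel, we have fun."]
print(trait_scores(tweets))
# {'extroverted': 0.33, 'adventurous': 0.2, 'conscientious': 0.07}
```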

SE: Yes, so it would go to whether the conversation is one-to-one versus perhaps one-to-many, or one-to-one with the expectation that it really is a one-to-many conversation that others would listen in on.

JL: Yes, quite a lot of different circumstances, personal conversation versus a more business-oriented or a marketing conversation.

SE: That’s fascinating.

Neural Nets

One of the things you mentioned earlier, Jason, was neural nets, and it’s a term I come across quite a lot in my reading on AI. I’m hoping you might be able to explain a little bit more about what AI researchers mean when they talk about assembling a neural net.

JL: The concept of neural nets goes back a few decades as well. AI researchers looked at how neurons work within the brain, how they connect to each other, and the feedback loops set up between them, and then said, “Let’s set up some systems that work in a similar way.”

Using a neural net is an example of supervised learning. It recognises some objects, and then you give it more examples: that’s a different object, or maybe it’s the same object from a different angle. We start to get clusters. Those clusters can be assigned a value: this is a dog; this is a chair; this is a building. From that, we can start to say that whenever a photograph or an image comes in with this set of attributes, the neural net will say, “Well, that’s a dog.”

Now this is where it gets interesting. There are little dogs, big dogs and dogs that are visually very different from each other. When we get a new image coming in, it might be miscategorised, and by miscategorised it’s probably not saying something wrong. It’s probably saying, “I think it might be a dog, but I’m not quite sure.” We try to find those examples when we’re training these systems, so that the system starts to recognise detailed attributes.

The more images of a dog the cognitive computing system sees, the more likely it is to get it right with a high level of confidence. When we talk about AI, neural nets, or cognitive systems, we are talking about probabilistic systems rather than the deterministic systems we are used to. Probabilistic systems look for evidence and provide a confidence level; deterministic systems know that 1+1=2.
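A minimal supervised-learning sketch in that spirit, using scikit-learn: a small neural network trained on invented two-number “features” standing in for image attributes, answering with a confidence rather than a hard yes/no.

```python
# Minimal supervised-learning sketch: a small neural network learns
# from labelled examples and reports a probability (confidence) for
# each class instead of a single deterministic answer. The features
# are invented stand-ins for attributes extracted from images.
from sklearn.neural_network import MLPClassifier

X = [[0.2, 0.3], [0.3, 0.2], [0.25, 0.25],   # "dog"-like examples
     [0.8, 0.9], [0.9, 0.8], [0.85, 0.85]]   # "wolf"-like examples
y = ["dog", "dog", "dog", "wolf", "wolf", "wolf"]

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X, y)

# A borderline example: the output reads as "I think it might be a
# dog, but I'm not quite sure", i.e. a probabilistic judgement.
probs = clf.predict_proba([[0.55, 0.55]])[0]
print(dict(zip(clf.classes_, probs.round(2))))
```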

SE: Yes, and I can see where it can be a bit fuzzy. If we go to your example of dogs, one could show a picture of a wolf, and I can imagine that an AI would categorise that as a dog because it does look similar, and you need a certain amount of training to distinguish between the two species.

JL: That’s right. There could be a bit of an overlap there, especially with the dog breeds that are similar to wolves, so we might train with examples of malamutes, say, compared to wolves, so the system will learn to distinguish the difference, if that difference is actually important to the application at hand.

SE: Well, if it came to human danger, it probably would be.

Watson in Healthcare

IBM Enhances Watson’s Ability to “See” Medical Images (Source: IBM Watson)

I’d like to turn to some of what you’re doing at IBM Watson and talk about the healthcare vertical in particular, because there’s quite a bit of exciting work that you’ve done there. Perhaps you can start off, Jason, by talking about some of the latest advances of Watson in the healthcare space.

JL: There’s a range of Watson capabilities on the Watson platform. IBM works directly with organisations using this platform, and in addition, there are many partners who are building interesting applications in healthcare and other industries with the platform as well.

What is IBM doing in healthcare?

Well, a variety of things. IBM has been working with organisations such as Memorial Sloan Kettering in the field of oncology. Watson is being used to assist the expert, the expert in this case being an oncologist.

So why do these oncologists need these systems at all? They’re experts in their field and have had many years of training and experience. The problem is this explosion of data again. In the field of oncology there are thousands of new medical research reports coming out all the time, new drugs being discovered all the time, new clinical trials, and information being shared at conferences.

How does an oncologist keep up with all this? And then, between patients, they might only have a very short time, maybe five minutes or so, to look at this incoming patient’s medical record, and there might be 30, 40, 50 pages in that medical record.

Also, let’s imagine the superhuman oncologist who’s been able to read everything that comes out every night and who has a photographic memory. Are they still going to be able to make the connection between this patient and a clinical trial that was released just two days ago?

Screenshot from Under Armour smartphone application (Source: IBM Watson)

It’s very challenging for anyone to be able to keep up with this volume of information.

So we’re trying to meld the best of what the oncologist does with the best of what cognitive computers can do. Cognitive computers are very good at making connections and very good at reading and making sense of massive loads of information, in this case in the domain of oncology. To sum it all up, Watson is helping the oncologist with data to make the right decision, but the decision still rests with that oncologist. That’s the first one: oncology.
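A toy sketch of the trial-matching part of that workload, with an invented patient record and invented trial criteria; real systems have to extract these fields from free-text records and research literature first.

```python
# Toy sketch of clinical-trial matching: check a patient's record
# against each trial's inclusion criteria. The patient, trials,
# and criteria below are invented for illustration.
TRIALS = [
    {"name": "trial_001", "condition": "lung cancer", "min_age": 50},
    {"name": "trial_002", "condition": "melanoma", "min_age": 18},
]

patient = {"age": 63, "condition": "lung cancer"}

matches = [t["name"] for t in TRIALS
           if t["condition"] == patient["condition"]
           and patient["age"] >= t["min_age"]]
print(matches)  # ['trial_001']
```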

SE: That’s an exciting area. I’m wondering, Jason, whether you’re also drawing in information that might come from wearable devices or other sensor devices that an individual patient may have on their person.

JL: Some of our partners are working on applications in this area. One of our partners, Under Armour, the clothing company, is going down this path.

SE: And it makes a lot of sense, because the more of this kind of information a physician or a specialist has, the better the decisions they can make.

JL: Absolutely. Then there’s being able to connect it: if the person has, for example, a slightly elevated heart rate, what does that mean in the context of the other issues that person has?

SE: That’s right.

What are some of the other healthcare areas that you’re working on in the Watson team?

Genomics Research

JL: Well, there’s been an explosion in genomics that people might have heard about in recent times. It’s kind of similar but takes DNA testing into account as well. By adding information about the tumour in the context of a patient’s DNA, medical researchers and practitioners can be very specific about research, diagnosis and treatment plans, considering drugs that target that specific change in the DNA chain.

SE: That’s a fascinating area. Do you see humans being taken out of the loop in any of this? I mean, that’s certainly a concern for many applications of AI. From what you’ve been saying in the medical realm, it would seem that it’s more of a partnership between the AI agents and the physician or specialist.

JL: It’s a partnership. Oncologists, specialists, professionals: these applications are all about supporting the experts and enhancing their daily work. This is particularly useful since the productivity and outcomes of best-in-class people are often 10x those of the average (competent) professional. Cognitive computing can help close this gap.


Robo-advisors

SE: Let’s turn to the finance industry, because there are some interesting things happening there as well, in particular with wealth management. You’ve probably heard about some of the FinTech disruptors like robo-advisors; they would certainly be using elements of artificial intelligence in making these wealth-management decisions. What is IBM Watson doing in that area?

JL: It really is a fascinating area. How does a Wealth Manager help their clients? Again, this technology is about helping the expert tap into the vast volume of data and tailor their advice very specifically to the needs of their client, faster. Everyone’s situation and portfolio is different.

For example, if the US Federal Reserve decides to change the interest rate, there could be an impact to the Indian shipping industry. If we know there’s an impact to the Indian shipping industry, the Wealth Manager would want to know which of their clients have invested, and their degree of exposure to this change.

Trying to draw those highly variable threads together is incredibly difficult for people to do while providing the higher level of service expected by customers.
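A sketch of the underlying bookkeeping, with invented data: propagate a market event through a hand-coded impact map and surface each client’s exposure. In a real system the impact links would themselves have to be learned or mined from data.

```python
# Sketch with invented data: map a market event to affected sectors,
# then report which clients hold positions in those sectors and the
# weight of each holding. The impact links are hard-coded here; a
# real system would have to infer them from data.
IMPACTS = {"US rate rise": {"indian_shipping", "us_housing"}}

PORTFOLIOS = {
    "client_a": {"indian_shipping": 0.30, "tech": 0.70},
    "client_b": {"us_housing": 0.10, "bonds": 0.90},
    "client_c": {"tech": 1.00},
}

def exposed_clients(event):
    sectors = IMPACTS.get(event, set())
    return {client: {s: w for s, w in holdings.items() if s in sectors}
            for client, holdings in PORTFOLIOS.items()
            if sectors & holdings.keys()}

print(exposed_clients("US rate rise"))
# {'client_a': {'indian_shipping': 0.3}, 'client_b': {'us_housing': 0.1}}
```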

SE: It almost sounds like you’re talking about a level of frontline customer service as well.

JL: In some cases, we’re dealing with organisations who are overloaded with the number of customers they’re dealing with and are looking to offload some of that work to a system like a virtual agent. We’ve all had the experience of waiting extended periods of time for a call centre to take our call, and when you do get through, you get transferred between departments until you get the right help.

SE: Oh, yes.

JL: We’re also helping those call-centre staff with information they can use in natural language so they can respond accurately and consistently to a wider scope of customer inquiries.

SE: Yes, and I’ve got to say some of the voice-recognition systems are absolutely appalling. They don’t recognise accents, and they don’t even recognise native English speakers speaking to them. If you say a word that’s not in their vocabulary, they’re completely thrown. I could see where AI could play a role there.

JL: Well, absolutely. Because of AI and deep-learning platforms, voice-recognition systems are getting a lot stronger. They’re able to recognise what you’re saying even in the presence of different accents, background noise, or other compounding factors that might throw a system off.

This kind of gets a little bit to how AI can be applied. When I’m talking to organisations, I’ll say, “Take it a step at a time.” Using an example of a call centre, does the organisation really need voice interaction? Lots of people are using text messaging or live chat today, and this can remove the whole voice-recognition issue. Then organisations could later decide to build voice recognition as a layer on top of existing systems.

SE: Yes, and it’s less frustrating for a customer, too, when they don’t have to constantly repeat themselves.

Watson: Providing customer service in the finance sector

Are you doing anything with Watson in terms of customer service in the financial segment?

JL: Yes. Let me provide another example from the Insurance industry on how these cognitive systems can be used to improve customer interactions and provide help.

First of all, using this technology to explain the basics, answering questions like: “How do I fill out this form?”, “Why are you asking me this question?”, “What am I covered for under my insurance policy?”, and so on.

By Sharique.m3em (Own work) [CC BY-SA 4.0 (http://creativecommons.org/licenses/by-sa/4.0)], via Wikimedia Commons

Secondly, helping to process customer claims. For example, if you’re filling out a claim form describing how a car crash occurred, you might say either, “I crashed into the back of the car,” or, “The car crashed into the back of me.” Now for a human, the difference in natural language is easy to understand. But for traditional computing systems it’s difficult, because almost the same words are being used, just in a different order.

This is how Watson systems, or cognitive systems, can be smarter and help with things like faster claims processing: they can understand the difference between those two sentences and work out who is at fault (under the law of the land) and who should pay for damages.

SE: So it’s working out the nuance on what’s being said rather than just focusing on keywords.

JL: Big, big difference. Search engines only use keywords. Cognitive computing can understand the nuances of natural language, and that the order of words can make a sentence mean something entirely different.
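A quick way to see why keywords alone fail on those two claim sentences: as unordered bags of words they differ only in a pronoun, so only a representation that keeps word order or grammatical roles can tell who hit whom.

```python
# As unordered bags of words, the two claim sentences are almost
# identical: every content word matches, only "I"/"me" differs.
# Telling them apart requires word order or grammatical structure.
from collections import Counter

a = "I crashed into the back of the car"
b = "The car crashed into the back of me"

def bag(sentence):
    return Counter(sentence.lower().split())

print(bag(a))             # {'the': 2, 'i': 1, 'crashed': 1, ...}
print(bag(b))             # {'the': 2, 'car': 1, 'crashed': 1, ...}
print(bag(a) == bag(b))   # False, but only because of 'i' vs 'me'
```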

Making sense of the information deluge

SE: In your example from healthcare, you mentioned the deluge of documents and incoming data that is a challenge for any professional. I’d imagine the finance industry has very similar challenges. Are you doing anything in the area of risk and compliance automation?

JL: In the finance industry, professionals must identify and act on emerging trends, spot operational issues or opportunities in real time and respond proactively. Cognitive technologies, coupled with human experience and insights, can enhance and help inform timely decision making and help to mitigate risk.

SE: That’s fascinating. I’d imagine that there are applications in the government sector as well as the finance sector because there are governance issues for corporations and for government entities that involve compliance and assessing risk. Is that one of the targets that you’re focusing on for Watson?

JL: More generally, an industry that has a lot of unstructured data is a likely candidate. Government is certainly one.

Digital Assistants

SE: That’s a fascinating application. Another application that I find particularly interesting is this concept of having a personal digital assistant, a cognitive assistant, a digital avatar that basically acts like my expert EA, but it’s virtual. It’s a software entity that helps me navigate my life. Is IBM doing anything in this area?

JL: IBM works in partnership with our enterprise customers and together with our partner community; we are helping them build their own applications.

We’re getting to some really interesting areas. Imagine a virtual health assistant that has an understanding of what’s going on with your life at the moment. Earlier we talked about emotion and personality and technology, so how do you combine some of those things?

Then imagine a virtual assistant that is not only helpful but has a bit of a personality associated with it. It talks to you in a way that you’d like to be spoken to. For example, imagine if your virtual assistant could say and do this for you: “I see you’ve had a really busy week. I’ll tell you what I’m going to do, because I really want to look after you. I’m going to order some food to be delivered to your door, because I know you’re going to be exhausted and won’t want to cook tonight. And because you’ve had a really busy week, I know you probably need to stock up on your nutrition, so I’m going to make sure you eat fresh and healthy food. And I know you like the taste of ginger but hate celery, so I’m taking that into account as well.”

SE: What kind of timeframe are we looking at for these kinds of assistants to be available? I know that there are some experimental prototypes, but when do you think the average person might have access to this kind of digital assistant?

JL: Well, we already have digital assistants today that people are using. It’s not quite at the level of the vision I was describing, but it won’t be long before that becomes available to the average person.

SE: Right, and I guess it also goes to: will it be cloud based or primarily app based, perhaps on your smartphone? I imagine there’ll be a combination of both at some stage.

JL: Yes, definitely a combination of both is the answer.

SE: And I imagine having access to other databases. For instance, information about traffic congestion in an area that you’re planning to travel to could allow that avatar to do much more interesting things than if it was confined only to data you have about yourself.

JL: Yes, traffic and transport is another area that people are working on to create solutions to provide commuters with improved services.

SE: It sounds so fascinating.

The Watson ecosystem

One last thing I’d like to ask you about with respect to Watson: you’ve mentioned it’s a platform that you work with across a very wide range of partners. How accessible is that platform? Is it open to any developers who might want to take advantage of it, or is there a gating process that you use to control access to Watson?

JL: IBM has opened its Watson platform to the world, allowing a growing ecosystem of over 80,000 developers, students, entrepreneurs and tech enthusiasts globally to easily tap into the most advanced and diverse cognitive computing platform and APIs available today.

SE: Do you think AI will ever become equivalent to human intelligence, or even surpass it, and what sort of timeframe might we be looking at if you think that will happen?

JL: I think that cognitive systems, or AI, or other branches of this field of science and technology, are different to human intelligence. Humans will always be better in the areas of relationships, judgment and intuition. Where computers are going to be a bit stronger is in the areas of computation and access to data, being able to draw on extensive data and bring it together for consideration by humans.

SE: Jason, thank you so very much. This has been an absolutely fascinating conversation.

JL: Thank you.

About the author: Shara Evans is internationally acknowledged as a cutting edge technology futurist, commentator, strategy advisor, keynote speaker and thought leader, as well as the Founder and CEO of Market Clarity.
