Future Tech 2024: An Interview with Dr Robert Fitch (Australian Centre for Field Robotics)

In this Future Tech interview, we’re speaking with Dr Robert Fitch, a leading research scientist in the area of autonomous field robotics and a senior research fellow with the Australian Centre for Field Robotics (ACFR) at The University of Sydney. He is interested in the science of making complex decisions under uncertainty, applied to key robotics problems in commercial aviation, agriculture, and environmental monitoring.

Shara Evans (SE): Today I am delighted to be at the Australian Centre for Field Robotics (ACFR), speaking with Dr Robert Fitch, who is a senior research fellow specialising in robot motion planning. Robert, thank you so much for taking the time to speak with me today and to show me around the lab.

Robert Fitch (RF): My pleasure.

SE: First can you please describe the types of research that you and your team are doing at the ACFR?

RF: Sure. The Australian Centre for Field Robotics is one of the largest outdoor robotics research labs in the world. We work with robots in many different environments, ranging from ground-based robots to flying robots to underwater robots, all the way to robots that are working in social spaces. There’s a huge range of types and sizes and environments where we can use robotics and investigate their capabilities.

Social Robots — Not what you’d expect them to look like

SE: I have to ask you a question about some of these social robots — that is, the robots that interact with people — what do they look like, and what do they do?

RF: As with all kinds of robots, they don’t necessarily look like the ones you see in the movies. We often get our impression of what a robot should look like from what we see in films or read about in science fiction. In reality, robots sometimes look like that, but more often they don’t.

The social robotics group here at the Australian Centre for Field Robotics works with many different kinds of these. One of their very interesting projects explores robots in social spaces, meaning public spaces. A main interest is putting robots into art galleries, into interactive displays and interactive installations. One of their systems was actually a pair of robot wheelchairs. There’s a back-story there: one’s a fish and one’s a bird. They’re in love, but they can’t really communicate directly with each other, so they have to, in a sense, communicate through you as a visitor. They have little printers that can print out messages.

Fish-Bird installed at Artspace, Sydney, 2006

SE: The robots print out messages as they’re going around the museum?

RF: That’s right. They print out little pieces of paper with messages. You can pick them up, read them and then move around. You may get the idea that one is shy, or that one sort of likes you. The idea is to explore what they call a dialogue between the human and the robot, but it’s a dialogue in terms of emotion and proximity.

There are other robots as well that investigate that same thing.

Autonomous Robots

SE: Are these robots completely autonomous, or is a human controlling them?

RF: They’re completely autonomous; most of the ones that we work with are. They have sensors that tell them where they are in the world and what else is around them, such as other people. And they have algorithms, mathematics, that allow them to make decisions about where they should go and what they should do.

SE: What considerations are there in trying to design and control a robot? For instance, are there differences in the algorithmic design for robots on the ground, as opposed to robots underwater or robots in the air? I’d imagine that they all have different axes of movement, and that would have to be taken into account in the motion planning. Can you give us a bit more background?

RF: Most of the robots that we have are really designed for a specific purpose or a specific environment. A flying robot, obviously, has to be able to fly in the air, so there are different designs for that. Some of them look like airplanes, which have fixed wings. Some of them look like small helicopters. Some that you’re familiar with have multiple rotors: multi-rotors.

In terms of the ground robots, they often have wheels or tracks, and they’re designed to operate in some particular type of environment. If you have a wheeled robot that looks something like a car, it goes to the places that cars go: on the street or off-road.

Then you have other kinds of ground robots that are designed for rugged terrain. They may still have wheels and they may have tracks, but they’re much more robust, to handle the type of terrain you may find out there in the natural environment, such as in an agricultural context or unprocessed land.

SE: Yes, or perhaps even in a mining environment.

RF: Or a mine site—that’s right. There have been efforts in robotics to break that dependency between the robot design and its environment and to actually design robots that change their shape to match what they have to do.

Transformer Robots!

ultimate-bumblebee

Transformers Ultimate Bumblebee

SE: It sounds like the Transformers.

RF: Like the Transformers.

SE: Is that realistic?

RF: I’ve worked on this in the past. It’s an area called self-reconfiguring robots, where we’re building robots out of modules. Each module can move with respect to the others: it can detach from the module it’s attached to, move, and reattach somewhere else. As a research idea, it’s very fascinating. You can imagine how these things could be like Transformers, just completely changing their shape.

SE: Yes, and they’d each have their own processor, wouldn’t they, or multiple processors?

RF: They do. That’s right. That’s one of the most interesting aspects of that research: you have a robot that actually has many individual brains. There’s not one brain controlling it. It’s more like a swarm of bees, where the robot acts collectively. The mathematical challenge is how to design the algorithms that allow that robot to do something globally, for example, assume some given shape, based on these individual, decentralised interactions between mobile modules.
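As a rough illustration of the decentralised control Dr Fitch describes, here is a minimal sketch (not ACFR’s actual algorithm) of an average-consensus rule: each module updates its own value using only its neighbours’ values, yet the whole group settles on a single shared answer. The module IDs, neighbour lists and numbers are all hypothetical.

```python
# Illustrative sketch only: decentralised average consensus, the kind of
# local-interaction rule used to coordinate modular robots without any
# central "brain". Each module repeatedly nudges its value toward the
# average of its immediate neighbours' values.

def consensus_step(values, neighbours, weight=0.5):
    """One synchronous round. `values` maps module_id -> float,
    `neighbours` maps module_id -> list of neighbouring module ids."""
    new_values = {}
    for module, v in values.items():
        if neighbours[module]:
            avg = sum(values[n] for n in neighbours[module]) / len(neighbours[module])
            new_values[module] = (1 - weight) * v + weight * avg
        else:
            new_values[module] = v
    return new_values

# Five modules connected in a chain, each starting with a different
# "opinion" (say, a target height for the structure).
values = {0: 0.0, 1: 1.0, 2: 2.0, 3: 3.0, 4: 4.0}
neighbours = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
for _ in range(50):
    values = consensus_step(values, neighbours)
print(values)  # every module ends up near the same agreed value (2.0 here)
```

Shape formation involves far richer state than a single number, but the pattern is the same: purely local updates that converge to a global result.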

SE: It’s parallel processing on steroids, really, because you might have 30 or 50 or 100 processors all in very close proximity that are in effect talking to each other and coming up with a hive decision.

RF: That’s right. Exactly. It’s communication, and it’s decentralised control. They’re really some of the most interesting robots. We don’t see those robots out in the field yet; this is the kind of research that you might find in a lab, whereas most of the work here involves robots that actually do real work.

SE: Yes. Do you have these little robots — can I call them hive robots or swarm robots? Do you have any in the lab now that you’re playing with?

RF: Yes, we have built some modular robots.

Autonomous Robot Control

RF: The second part of your question is really about the means of controlling these types of robots. There are a variety of considerations, whether they’re underwater or aerial: how do we, at a mathematical level, make decisions about motion, or about what the robot does? What we try to do in robotics is think about these problems in general. We try to think about the problems in abstract form, so that scientifically we can build a knowledge base of how robots move. When it comes down to a particular robot, it’s just a variation of that central problem.

SE: So it’s just a different physical configuration, but the actual algorithm is very similar and not really reliant on whether it’s an underwater robot or an aerial robot. The basic set of mathematical principles is common across them.

RF: That’s right. That’s what we’re trying to do, because then we can reuse that theory in robots that no one’s thought of yet. That’s the idea, in principle. Of course, at the end of the day, you do have to worry about how the robot actually moves in terms of its wheels, or whether it’s underwater, or what-have-you, but there are common elements that you can abstract. As a researcher in motion planning, especially planning for multiple robots, that’s what I’m interested in: commonalities. Because of that, I work with robots in various environments.
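To make that shared abstraction concrete, here is a minimal sketch of a planner written against an abstract state space. The function names are hypothetical and this is not any particular ACFR system; the point is that the same search is reused for different robots by swapping in robot-specific successor, cost and heuristic functions.

```python
# Illustrative sketch only: a generic A* planner over an abstract state
# space. The robot-specific details live entirely in successors() and
# heuristic(); the search itself never changes.
import heapq

def a_star(start, is_goal, successors, heuristic):
    """successors(state) yields (next_state, step_cost) pairs."""
    frontier = [(heuristic(start), 0.0, start, [start])]
    best_cost = {}
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if is_goal(state):
            return path, g
        if state in best_cost and best_cost[state] <= g:
            continue
        best_cost[state] = g
        for nxt, cost in successors(state):
            heapq.heappush(frontier,
                           (g + cost + heuristic(nxt), g + cost, nxt, path + [nxt]))
    return None, float("inf")

# A ground robot on a 2D grid; an underwater robot would supply a 3D
# successor function and a different cost model, and nothing else changes.
def grid2d_successors(state):
    x, y = state
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        yield (x + dx, y + dy), 1.0

goal = (3, 2)
path, cost = a_star((0, 0), lambda s: s == goal, grid2d_successors,
                    lambda s: abs(s[0] - goal[0]) + abs(s[1] - goal[1]))
print(path, cost)
```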

Robotic Sensors — Building a 3D Environmental Map

SE: I’d imagine that all of these different robots need to be cognisant of terrain, and that there are a variety of sensors and cameras and other devices that help them navigate their environment by sensing the environment. Can you share a little bit with us about the types of sensors and cameras that might be in use?

Radar sensor at ACFR, Photo: Market Clarity

RF: Sure. In robotics, one of the most common sensors is a laser rangefinder. The idea is that you want to be able to sense depth: the distance from you, whether you’re a robot or a person, to other things in the environment.

We, as humans, have two eyes. The reason we have two eyes is so that we can acquire stereo visual images and understand the three-dimensional picture that surrounds us. If you can’t do that, it’s more difficult. You can still sense depth with only one eye, but to a more limited degree.

In robots, most often this is done with lasers. Basically, a laser rangefinder emits energy into the environment, and that energy bounces off; it’s reflected from something. Then there’s a detector that senses that energy as it’s reflected back. Because we know the speed of light and we know how long the energy took to come back, we can infer how far away that object is from the robot. What a laser sensor does is essentially make those kinds of calculations very quickly.
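The calculation Dr Fitch describes is simple to write down. A rough sketch, assuming an ideal time-of-flight measurement:

```python
# Range from a single laser pulse: the pulse travels out and back, so the
# distance to the object is (speed of light x round-trip time) / 2.
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def range_from_time_of_flight(round_trip_seconds):
    """Distance to the reflecting object, given the measured round-trip time."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A pulse that returns after 100 nanoseconds bounced off something
# roughly 15 metres away.
print(range_from_time_of_flight(100e-9))  # ~14.99 m
```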

SE: Well, it would have to be; otherwise, you’d have a robot bumping into things.

RF: That’s right. Essentially you have a rotating element that scans the environment in a line, and you get measurements of the points along that line. Then there are lasers called 3D lasers, which essentially have multiple of these scan lines operating in parallel. One of the ones that we’ve worked with has 64 beams. What you get is a three-dimensional point cloud of the geometry of the environment around you.
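To illustrate how those scan lines become a point cloud, here is a small sketch that converts (azimuth, elevation, range) measurements into 3D points in the sensor’s own frame. The beam angles and ranges are invented for the example.

```python
# Illustrative sketch: converting spinning multi-beam laser measurements
# into Cartesian points. Each beam has a fixed elevation angle, the
# azimuth sweeps as the sensor rotates, and the range is what the beam measured.
import math

def scan_to_points(measurements):
    """measurements: iterable of (azimuth_rad, elevation_rad, range_m).
    Returns a list of (x, y, z) points in the sensor frame."""
    points = []
    for azimuth, elevation, r in measurements:
        x = r * math.cos(elevation) * math.cos(azimuth)
        y = r * math.cos(elevation) * math.sin(azimuth)
        z = r * math.sin(elevation)
        points.append((x, y, z))
    return points

# One pretend revolution: 3 beams x 8 azimuth steps, everything 10 m away.
measurements = [(math.radians(az), math.radians(el), 10.0)
                for el in (-10, 0, 10)
                for az in range(0, 360, 45)]
print(len(scan_to_points(measurements)))  # 24 points in the cloud
```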

Robotic sensor arrays at ACFR, Photo: Market Clarity

SE: The more points you have, the better your picture of the environment is, which means the finer-grained control that the robot, or you as the planner, will have over the robot’s movements. Is that right?

RF: That’s right. It’s basically an issue of resolution. If you think about a camera image, your HD camera has higher resolution than the camera you may have on your phone, although phone cameras are quite good these days. Cameras are measured in megapixels, which is really a measure of how densely spaced the pixels are. You have the same concept in lasers, where how densely spaced the laser points are determines the resolution of your depth image. The distance from the laser really affects that point density; in that way it’s slightly different to how a camera works, but the concept is roughly analogous.
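A quick back-of-the-envelope calculation shows how that point spacing grows with range. The 0.1-degree angular step below is an assumed, typical figure rather than the spec of any particular sensor:

```python
# The gap between neighbouring laser points grows linearly with range:
# spacing = range x angular step (in radians).
import math

ANGULAR_STEP_DEG = 0.1  # assumed horizontal angular resolution
for range_m in (5, 20, 100):
    spacing = range_m * math.radians(ANGULAR_STEP_DEG)
    print(f"at {range_m:>3} m, points are ~{spacing * 100:.1f} cm apart")
# at   5 m, points are ~0.9 cm apart
# at  20 m, points are ~3.5 cm apart
# at 100 m, points are ~17.5 cm apart
```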

SE: Would the same equipment be used in aerial robots as in ground robots?

RF: That’s a good question. The thing about aerial robots, especially smaller aerial robots, is that the amount of mass you can carry around is limited.

SE: That’s what I was thinking, because some of these 3D lasers would be quite heavy.

RF: Quite heavy, yes, much heavier than the robot itself. With larger flying robots, you can carry the large lasers. In general, especially when we’re talking about the multi-rotors, the sensing is really limited by the amount of payload the robot can carry. Certain small robots can carry small lasers; that’s certainly possible. Often you’ll see vision being used, with cameras carried on the robot.

Robotic sensor arrays at ACFR, Photo: Market Clarity

SE: Would it be a pilot or a remote operator using the camera to help navigate, or would the robot be doing that automatically?

RF: The robots that we work with are autonomous. By law, there’s a safety pilot who has responsibility for that flying robot and what it does. Really, however, that robot is controlling itself; it’s autonomous. It has GPS guidance that helps it know where it is, and it’ll have sensors like a camera or maybe a laser that help it know where it is and where it’s going. Oftentimes, the aerial robot’s job is to collect sensor information, sensory data that we can make sense of: making a map, collecting information about a crop in an agricultural setting, tracking an animal, or whatever you’re interested in.

SE: That’s quite different from a lot of what I would call commercial drones that are out there, perhaps carrying a video camera and used by someone doing videography, and controlled by a human as opposed to being autonomously controlled.

RF: That’s right. The commercially available, or commonly available, flying robots are primarily remote-controlled. It’s quite difficult to build the kind of algorithms that allow the robot to fly itself. There are such projects, and this is certainly accessible to some people, but it’s a lot more difficult than just buying a kit.

SE: Yes, and have a joystick-type of control.

What does a robot have in common with a 747?

SE: The robots that you and your colleagues are building, do they have anything in common with, say, commercial planes such as 747s?

This system will help pave the way for optimised flight routes that will improve operational efficiency and support greener commercial aviation, says Professor Salah Sukkarieh.

RF: That’s a good question. What does a robot have in common with a 747? It turns out that we have some experience with this kind of thing. We have a project with Qantas, where we are building the next-generation flight-planning system for them. In that case, the flight-planning system is a computer program that decides the path that the airplane will take, the path the pilot will fly from A to B. If you’re flying from Sydney to Los Angeles, there’s some path through space that the airplane is going to fly to make the best use of the wind energy that’s available. So you have some estimate of the winds and currents, and you want a computer program that takes that into account and finds the best path.

It turns out that the mathematics behind solving that problem is closely related to the mathematics that we use to plan motions for robots. In the case of a commercial airline, we have a pilot who is actually flying along that path; in the case of the robot, you have a controller, a computer controller, that’s following that path. The idea of trying to get from A to B while minimising some objective function, such as time (to get there fastest) or energy (to use the least amount), is really very similar. We’ve been able to apply the knowledge that we have about robot motion planning to commercial flight planning, and that’s what we’re doing in this Qantas project.
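As a toy illustration of that shared mathematics, here is a minimal sketch (in no way the Qantas system) that picks the quickest route through a small waypoint graph when each leg’s time depends on an assumed head- or tailwind. The waypoints, distances, winds and airspeed are all invented for the example.

```python
# Illustrative sketch only: Dijkstra's algorithm over a waypoint graph,
# minimising total flight time with wind-adjusted ground speeds.
import heapq

def leg_time(distance_km, airspeed_kmh, tailwind_kmh):
    """Hours for one leg: ground speed = airspeed + tailwind component
    (a negative tailwind means a headwind)."""
    return distance_km / (airspeed_kmh + tailwind_kmh)

def best_route(graph, start, goal, airspeed_kmh=900.0):
    """graph: {node: [(next_node, distance_km, tailwind_kmh), ...]}.
    Returns (route, hours) minimising total flight time."""
    frontier = [(0.0, start, [start])]
    visited = set()
    while frontier:
        t, node, route = heapq.heappop(frontier)
        if node == goal:
            return route, t
        if node in visited:
            continue
        visited.add(node)
        for nxt, dist, wind in graph[node]:
            heapq.heappush(frontier,
                           (t + leg_time(dist, airspeed_kmh, wind), nxt, route + [nxt]))
    return None, float("inf")

# Toy network: a southern and a northern track from SYD to LAX with
# made-up distances and winds; the tailwind on the northern track wins.
graph = {
    "SYD": [("MID_S", 6200, -40), ("MID_N", 6500, 120)],
    "MID_S": [("LAX", 5900, 0)],
    "MID_N": [("LAX", 5700, 80)],
    "LAX": [],
}
print(best_route(graph, "SYD", "LAX"))
```

Real flight planning optimises over continuous wind fields and many more constraints, but the underlying objective, the best path from A to B under a cost model, is the same idea.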

The Genesis of Google’s Self-Driving Car

SE: That’s interesting. There’s another project that’s been in the news a lot lately, and that is robot-driven cars. I know that ACFR has been involved in doing quite a bit of work in this area. Perhaps you can give us a little bit of background on the work that ACFR has been doing with driverless cars.

RF: We’ve worked with self-driving vehicles for quite a few years. It goes back to 2007, when we participated in a competition organised in the States called the DARPA Grand Challenge. Many other labs throughout the world took part; I think there were over 100 labs in that competition, and we were one of them. We took a standard Toyota RAV4 and turned it into a robot, a self-driving car. We did most of the work here in Sydney, and then we airlifted that robot to Berkeley, California. We were partnering with UC Berkeley. We did the field testing on the ground there and then participated in the competition. The point of that competition was to get to a point where a self-driving vehicle could obey the rules of the road, in this case, the U.S. rules of the road.

Google’s driverless car

SE: Yes, we wouldn’t want it to drive down the wrong side of the road!

RF: That’s right.

SE: You’ve got to put the programming in there correctly.

RF: That’s right. We had to make sure that we were driving on the right instead of the left in that case. The result of that competition in general was very good. Subsequent to that DARPA Grand Challenge, the results from most of the leading teams were published in scientific papers. The Google Car, the large project that Google is doing with self-driving cars, really builds on the results of that competition. With all that scientific information publicly available, some top roboticists moved to Google and started this project. That DARPA Grand Challenge competition was really the beginning of the Google Car.

Prior to that, there’s a long history of self-driving vehicles around the world. It wasn’t just that particular competition that kicked it off; it goes back to the ’80s, really. I think it was a combination of decades of work that allowed the technology to mature enough to be transferred over to the commercial space.

Rugged terrain driverless vehicle at ACFR, Photo: Market Clarity

SE: And then it broadened out into the public arena, onto roads, as opposed to robot machines driving around factories and warehouses fetching things, which typified earlier uses of robots navigating on their own.

RF: Yes, exactly. That was an example of really getting robots out of the lab and into the real world. The self-driving cars, even in the ’80s, were driving on public roads, but to a far more limited degree than what we see now.

SE: I have to ask you: what kind of sensors are on the cars themselves, or on the robots themselves, I guess, since that’s really what you’ve made these cars into? How many sensors do you need? What are the different kinds? What’s the minimum that would be required for a safe driverless car?

RF: That’s a good question. I don’t think there is necessarily a right answer to that question at this point. The principle, really, is that you have to have enough sensory information to understand the world around you and what obstacles there are. In our case, we were relying on multiple laser sensors, the two-dimensional lasers we talked about before. The three-dimensional laser is very useful here; in that competition, the most successful teams used 3D lasers, because you get that full three-dimensional point cloud around you and really understand the geometry of what’s there.
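For a flavour of what a self-driving car might do with that point cloud, here is a deliberately simplified sketch that flags points sticking up from the road surface in the lane ahead as obstacles. Real perception pipelines are far more sophisticated, and every threshold here is an illustrative assumption.

```python
# Illustrative sketch only: naive obstacle flagging from a point cloud,
# with x forward, y left, z up, and z = 0 taken as the road surface.

def obstacles_ahead(points, lane_half_width=1.5, max_range=30.0,
                    ground_tolerance=0.3):
    """Return the (x, y, z) points treated as obstacles: in front of the
    car, within our lane, and sticking up from the road surface."""
    return [(x, y, z) for x, y, z in points
            if 0.0 < x < max_range           # in front of the car
            and abs(y) < lane_half_width     # within our lane
            and z > ground_tolerance]        # above the road surface

cloud = [(5.0, 0.2, 0.05),    # road surface, ignored
         (12.0, -0.4, 0.9),   # something in our lane, flagged
         (18.0, 3.0, 1.2)]    # off to the side, ignored
print(obstacles_ahead(cloud))  # [(12.0, -0.4, 0.9)]
```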

Economies of Scale

SE: Aren’t they really expensive?

RF: They were very expensive at the time.

SE: How much are they now?

RF: I think in 2007, they were over USD $200,000. Now, they’re far less, less than USD $70,000.

SE: You put those on top of the cost of a car, and suddenly your car starts becoming very expensive.

RF: I think there’s a pattern in any kind of specialised technology development: you have a situation where specialised sensors are only being used by researchers, and there are not that many researchers in the world. Once the technology gets to the point where it can be used commercially, economies of scale come into play. The cost of the sensors goes down, new, lower-cost types of sensors are developed, and at some point the cost is no longer prohibitive.

SE: I suppose you could make an analogy to computing, say, in the early ’60s or ’70s, where you needed a room-sized computer that cost hundreds of thousands, if not millions, of dollars. Today, you get more than that on the chip in your child’s toy for a few cents.

RF: That’s right. The analogy that I like to use is that in the mid-’80s the most powerful supercomputer had less computing power than an iPad does today.

SE: It’s mind-boggling, isn’t it?

RF: Yes.

SE: So I suppose these 3D lasers could come down in price, in size and in weight the same way that computing has, following the same technology trends that we’ve seen proven for decades now.

RF: Yes, that’s right. Absolutely. We are seeing that today. Whereas seven years ago there was one 3D laser, now there are many, in several types, shapes and sizes.

About the author: Shara Evans is internationally acknowledged as a cutting edge technology futurist, commentator, strategy advisor, keynote speaker and thought leader, as well as the Founder and CEO of Market Clarity.

In Part 2 of the interview with Dr Robert Fitch, we continue our discussion about driverless cars, autonomous robots, and how the use of robots in agriculture has the potential to completely transform the agricultural industry.
