Future Tech: An Interview with Mark Bishop and Dr Ramin Rafiei

Dr Ramin Rafiei and Mark Bishop

In this Future Tech interview we’re speaking with Mark Bishop and Dr Ramin Rafiei of Ocular Robotics. Mark is a roboticist, seasoned businessman and founder of Ocular Robotics. He’s solved one of the most pressing challenges for mobile robotics: rapid and accurate collection of visual data. Under his leadership, Ocular Robotics is on track to become a leading Australian tech company. Ramin has joined Ocular Robotics as commercial director to help drive the company’s growth. He’s a PhD physicist, engineer and businessman whose career spans the aerospace, automotive, nuclear and photonics industries. He has a track record in new-generation sensor development and a passion for bringing innovation to market.

Shara Evans (SE): Hello. This is Shara Evans, CEO of Market Clarity. Today, I’m at Ocular Robotics speaking with Mark Bishop, the CEO and founder, and Dr Ramin Rafiei, the commercial director.

Welcome guys. Thanks so much for taking the time to show me around the lab.

Mark Bishop (MB): Well, thank you very much for coming to see us, Shara.

SE: Your RobotEye is amazing in terms of the speed at which it captures data, and it’s really cute too. Why don’t you begin, Mark, by telling us why you founded this company. How did you even get into this space?

MB: It all started a little way back when I was a researcher at the Australian Centre for Field Robotics (ACFR) at the University of Sydney. I was working on the problem of robot perception. At the time I was looking for ways to allow robots to interact more intelligently with their environment. Currently, robots build geometric maps of the world and work within that framework, and they’re driving where they think they can go.

A robot in an outdoor environment — a large coal truck in a mine, for instance — doesn’t really want to go around a tuft of grass, but at the moment that’s exactly what happens, because it thinks it’s a harder obstacle like a rock. The sensors in their systems can’t tell the difference between different types of objects, just their shape.

I was working on that robot perception task when I developed the technology to point multiple sensors in a single direction at the same time, and realised the technology solved a completely different robotics problem with a much more immediate commercial opportunity attached to it: the ability to deliver very timely and well-registered information to robotic platforms. When I say well-registered, I mean the robot having very precise knowledge of exactly where in its environment the information came from. That became the technology we’re commercialising at Ocular Robotic. Its principal advantage is its ability to sense simultaneously with speed and precision. That is what we do better than anyone else.

A rock is a hard place, but does a robot know that?

SE: It really is quite amazing. When you were showing me the demo earlier, I was quite impressed with how fast it was able to map out the room. As it happens, I’m also very familiar with a lot of the work being done at ACFR. I’ve been a visitor to the labs multiple times, and I have to say the work there always impresses me.

RobotEye Party

The example that you gave about not figuring out the difference between driving over a tuft of grass and a rock is actually quite important, especially when I think about things like driverless cars. If somebody steps out in front of a car, and the car knows that it’s supposed to stay on the road, there might be another option rather than slamming on the brakes. There might be some sort of soft hedge that it could drive into, as opposed to crashing into a wall. That way, people might be jolted but not necessarily mangled. I think that kind of technology would be really useful there.

Starting with your journey at ACFR, how did you begin the commercialisation of the algorithms that you were working on?

MB: Our core technology, which we’re commercialising at Ocular Robotics, is really an opto-mechanical system. The most important difference between what we do and how other equipment directs sensors is that we remove most of the mass that normally needs to be moved to direct the view of the sensor. In all other technologies, you move the sensor itself, or at least one of the motors, and those things are relatively heavy in their own right, so the supports for them need to be reasonably substantial as well, and the whole package becomes quite heavy. It’s hard to move anything that’s heavy quickly and precisely at the same time. A good example is the difference between the handling of a sports car and a semitrailer. Exactly the same principles apply. We’ve been able to minimise the weight that needs to be moved to redirect the view of the sensor.
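To make the sports-car-versus-semitrailer analogy concrete, here is a back-of-the-envelope sketch in Python. The torque and inertia figures are purely illustrative assumptions, not Ocular Robotics specifications; the point is only that, for a fixed motor torque, cutting the moving mass (and hence its moment of inertia) multiplies the angular acceleration you can achieve.

```python
# Illustrative only: hypothetical numbers, not Ocular Robotics' specifications.
# For a fixed motor torque, angular acceleration = torque / moment of inertia,
# so shrinking the inertia of the moving element directly multiplies how hard
# you can accelerate the view.

def angular_acceleration(torque_nm: float, inertia_kg_m2: float) -> float:
    """alpha = tau / I, in rad/s^2."""
    return torque_nm / inertia_kg_m2


if __name__ == "__main__":
    torque = 0.05                      # N*m, a small gimbal-class motor (assumed)
    whole_sensor_inertia = 5e-4        # kg*m^2, moving a full sensor head (assumed)
    light_element_inertia = 5e-6       # kg*m^2, moving only a light optical element (assumed)

    print(angular_acceleration(torque, whole_sensor_inertia))   # 100 rad/s^2
    print(angular_acceleration(torque, light_element_inertia))  # 10,000 rad/s^2
```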

SE: Well, certainly the couple of RobotEyes that I was holding were really quite light. I was holding two of them at the same time, and I don’t think that would be a problem for many people. Would this be the typical weight that you’d see in a production environment?

An incredible lightness of vision

MB: In our systems, we’ve paid a lot of attention to making the moving parts very light because that’s the core way in which we provide the advantage. In future systems, we’ll also be looking at making the entire package that you were holding light as well so it can even be applied to small airborne platforms. Yes, over time, the weight and size of our systems will be able to be tailored to all sorts of different applications. For wearable systems, we’ll be able to engineer products that are basically the size of your thumb.

SE: Really? How light would that be? Gosh, that would be very light.

MB: They would be. We can see them being used in wearable applications, perhaps police and soldiers and miners having sensors attached to them. Also, we’ve even had inquiries from people wanting to use them for sporting referees, and also for smaller robotic platforms and airborne platforms.

SE: Yes, they would be perfect for drones because they need to be light. The lighter the payload, the longer the battery will last.

Let’s continue with the story of how you came to form Ocular. You were working at ACFR as a researcher, or were you doing your studies there?

MB: I was a student at the time I developed the technology.

SE: Okay, so how did you go from being a student to launching a company?

MB: I had a business background in aquaculture. With my family I founded a greenfield aquaculture business near Tamworth in New South Wales, where we grew trout and delivered them direct to restaurants in Sydney and Brisbane and throughout north western New South Wales.

SE: Wow. So you’ve always had a bit of a commercial background?

MB: Yes, exactly. I guess that meant the initial step into business wasn’t something I had never done before or knew nothing about. So the fear of business itself wasn’t there.

SE: Yes, I think that is a big issue for a lot of researchers. They’re very comfortable in the world of R&D but when it comes to business and how to propose their system, promote it to potential customers, or raise money, that’s where it starts to fall apart.

MB: Exactly. I had the experience in business and wasn’t scared to actually step into business, but, obviously, commercialising new robotic technology is very different from growing fish.

SE: A little bit (laughing).

MB: There was certainly a steep learning curve on the differences in those business environments and what was required in terms of commercialising a new technology.

JEEVES Robot developed by Georgia Tech to be used as a domestic cleaner

SE: Was ACFR supportive in terms of helping you to launch a company?

MB: Not in a direct way. ACFR wasn’t involved in the spinout of the company, but certainly the relationships that I have with the people at ACFR were a great help in those initial stages.

SE: What year did you actually launch Ocular Robotics?

MB: The company was incorporated back in 2006. We had some very early grant funding from the federal government to do some of the early business planning and early IP protection work, but we started to operate as a business in 2009 after raising a seed round.

SE: Did you have employees at that stage, or was it just yourself?

MB: Initially it was just me.

SE: Yes, well, that’s how it is with a lot of businesses. No surprise there.

MB: Then there were two and three, and we’ve grown a little from there.

The first three years or so from 2009 were spent developing the technology core, and understanding it very well, knowing how to design it, and knowing how to make it perform. Then, as we grew the team some more and were able to conduct a productisation phase, we were able to take the core of what we had done, which we understood very well, and build products around it. You’ve seen some evidence of that in our 3D laser scanners, our hyperspectral systems, our rapid vision systems, and now of course we’re very much focused on the business-development phase and growing our customer relationships.

SE: Let’s turn for a moment to a couple of the products or applications, if you like, for the RobotEye. You’ve shown me the laser vision, the camera vision and the spectral vision. Could you perhaps describe each of those a bit more?

Ramin Rafiei (RR): I first want to communicate that our technology is simply revolutionary, redefining human-robot and robot-environment interactions. And here I refer to robots in the broadest sense. We are enabling robots to sense and experience their world in ways that were previously unimaginable. This is what I call a superhuman vision capability.

SE: I love that term: superhuman.

The all-seeing RobotEye

RR: Yes. We have built the world’s most dynamic eye. This eye, known as RobotEye, is the backbone of our entire range of sensors. RobotEye can see far beyond the light that is visible to the human eye. In addition to visible light, RobotEye can also see ultraviolet, infrared and radar wavelengths. As such, our family of products is divided into 3D LIDAR, vision, thermal, and hyperspectral. Regardless of the product, all RobotEyes sense their world with simultaneous speed and precision.

SE: The combination has to be there. You’ve got to do it fast, and it’s no good if it’s not precise.

RR: Absolutely. And no one else in the world can achieve the two simultaneously.

SE: Really? Wow.

RR: There are systems that can do it fast. There are systems that can do it precisely. RobotEyes are the only sensor platform in the world that brings the two together.

SE: Ramin, it would seem to me, with that combination, there are a lot of applications in the world of robotics, and here I’m thinking robots in agriculture, driverless cars, drones. I think you mentioned mining applications before as well. Can you give us an idea who some of your customers are and the kinds of applications that they’re using RobotEye for?

RR: RobotEye is a true platform capability and the diversity of our customers is testimony to the limitless number of applications possible for this technology. Especially now with RobotEye as a contender for a space-based project, the sky no longer limits its applications. Current deployments of the RobotEye family of products include autonomous navigation, robotics, 3D mapping, mine automation, situational awareness, emergency response, port automation, critical infrastructure and border protection. Many of our customers purchase our RobotEye products to integrate with their platforms. By doing so they drastically increase the capability of that platform to sense and interact with its environment.

SE: Robot vision is such an important part of any kind of autonomous platform because there’s no human with a joystick controlling these devices, and some of them can be absolutely huge, especially in the mining area and in cargo-loading, freight containers, and so on.

RR: Shara, I have a very good example. On the 4th of October a 60 Minutes segment titled “Hands Off the Wheel” was broadcast across the US. The program looked at the advanced state of driverless cars. One of the greatest technical challenges standing in the way of a driverless vehicle having a place in your garage, my garage, and on the streets of Sydney or anywhere else across the globe, is the ability of that car to deal with unexpected events.

SE: That happens on roads every second of every day.

RR: The real world is both unstructured and unpredictable. We recently wrote a thought piece on that, highlighting the need for both adaptive and active sensing to address this challenge. That is precisely what RobotEye enables: the ability to adaptively acquire information from your environment at ultrahigh speed and precision.

The intelligent eye

If you think about a typical laser scanner, which serves as the eyes of a driverless car, it is by nature dumb. Specifically, it has no ability to be selective about the information it captures. With RobotEye, you can acquire only the 3D information the vehicle requires, enabling unmatched rapid scrutiny of the unexpected in your environment. This is the sensing gap that a RobotEye can now fill.

Images generated using different renderings of a single information rich 3D point cloud. The point cloud was rapidly captured in a region scan mode by the RobotEye RE05 3D LiDAR.

SE: There are artificial-intelligence-type algorithms in RobotEye that are able to detect specific kinds of things, if I’m understanding you.

RR: RobotEye empowers the AI to take control and reach decisions much faster than could otherwise be possible with a traditional sensor or scanner.

SE: Give me an example of a couple of things that RobotEye would recognise as being important. I’m imagining that people would be in that category, anything living, maybe animals, maybe other vehicles. What else?

RR: Let’s say a car is driving down the highway. A RobotEye may just be looking out at the far horizon for navigation purposes to make sure that the road is clear and that the vehicle can continue moving forward. All of a sudden there is an object on the road which the vehicle needs to be able to instantly identify. Is it a paper bag or a rock? Can the car drive over it or does it need to steer around it? To enable it to recognise what that object is, RobotEye can seamlessly adapt its behaviour to capture high-resolution data of only the object under scrutiny for instant recognition by the AI.
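The paper-bag-or-rock scenario can be sketched as a simple control loop. The code below is a hypothetical illustration only: the class and function names are not Ocular Robotics’ SDK, and the object detector is a stub. It just shows the idea Ramin describes, sweeping the horizon coarsely for navigation and, when something unexpected appears, redirecting the next scan to a narrow, dense window around it.

```python
# Hypothetical sketch of adaptive, selective scanning (not Ocular Robotics' API).
from dataclasses import dataclass
from typing import List, Optional, Tuple


@dataclass
class ScanRegion:
    az_range: Tuple[float, float]   # azimuth window, degrees
    el_range: Tuple[float, float]   # elevation window, degrees
    points_per_degree: float        # angular sampling density


def navigation_region() -> ScanRegion:
    # Wide, coarse sweep of the road ahead: cheap to acquire, enough to confirm the path is clear.
    return ScanRegion(az_range=(-60.0, 60.0), el_range=(-5.0, 5.0), points_per_degree=1.0)


def scrutiny_region(az: float, el: float) -> ScanRegion:
    # Narrow, dense window centred on the detected object, for instant recognition by the AI.
    return ScanRegion(az_range=(az - 3.0, az + 3.0), el_range=(el - 3.0, el + 3.0),
                      points_per_degree=20.0)


def detect_unexpected_object(points: List[Tuple[float, float, float]]) -> Optional[Tuple[float, float]]:
    # Placeholder detector: return the bearing (az, el) of an obstacle, or None if the road is clear.
    return None


def adaptive_scan_step(acquire) -> ScanRegion:
    """One iteration of the adaptive loop: coarse scan, then re-aim if something turns up."""
    points = acquire(navigation_region())
    bearing = detect_unexpected_object(points)
    if bearing is None:
        return navigation_region()        # keep watching the horizon
    return scrutiny_region(*bearing)      # concentrate the next scan on the object


if __name__ == "__main__":
    # Simulated acquisition so the sketch runs stand-alone.
    next_region = adaptive_scan_step(lambda region: [])
    print(next_region)
```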

SE: That AI software that you’re describing right now — is that part of the RobotEye sensor, or is that something that the driverless car manufacturer would put in?

MB: We allow the programmers who are writing the AI software in the background to do new things with the sensing on the frontend. It’s really our sensing sitting at the frontend that enables them to get the information from where they want it, when they want it, at the resolution that they want it, which in turn enables the AI sitting in the background to make the decisions it needs to make.

SE: It has to happen so fast.

MB: It does, yes.

RR: Let me share some numbers with you. The RobotEye is 20 times faster than the fastest saccadic movements of the human eye. The RobotEye accelerates five times faster than the fastest reflex of the human eye. That’s fast, Shara!

SE: That’s pretty fast.

RR: What’s amazing is that we achieve speed without compromising on the precision with which we capture data. Pinpoint precision is how the RobotEye moves. At any instant in time, you know where you’re looking.

SE: Another company, a competitor, might have a really fast eye but it’s not as precise as yours, from what you’re saying.

RR: Competing systems are based on the re-engineering of a century-old solution, so precision can only be achieved by compromising on speed and vice versa. Not to mention that competing systems are bulky and heavy.

SE: Well, a lot of companies might make claims about how good their technology is, but I understand that you’ve just won a major award in the last few weeks that really quantifies how unique your solution is.

Game-changing award

MB: Yes, Shara. We were very, very proud to be the winner of the Next-Gen Game Changer Award in the Game Changer Awards, run at the RoboBusiness Conference in San Jose recently.

SE: Well, congratulations. That’s fabulous. What is the Game Changer category?

MB: The Next-Gen Game Changer category is a category they have for a product or technology that they believe will really change the game, as the name suggests, in the evolution of robotics technologies moving forward: game-changing in terms of how robotic technologies perform and the tasks they can undertake.

SE: This is a US-based award, isn’t it?

MB: It is, yes. It was delivered in San Jose, in the middle of Silicon Valley, and so we’re very proud to have received it in a location that might be considered the focus of the robotics revolution.

SE: That’s fabulous.

RR: Shara, the competitive landscape was tough. We were up against the greatest robotics companies in the world, most of which are based in Silicon Valley. What we were recognised for is that we’re not just another evolutionary-based technology. RobotEye is revolutionary. That was the difference.

SE: That is wonderful. I love to see Australian technology success stories because I’ve seen so many great things developed here, and so many tend to be under-recognised.

In terms of your customer base, are your customers mainly based in Australia, or are they all over the world?

MB: They’re certainly all over the world, and that’s been the case right from the very start. Ninety percent of everything we output is exported.

SE: Why don’t you share with us, if you can, who some of your customers are and the kinds of applications that they’re using RobotEye for?

MB: Some of our customers are in the security space, and they’re looking at using them for various applications.

SE: Is that US? US Homeland Security?

MB: You know, I can’t say.

SE: You can’t say? Okay. Otherwise you’d have to shoot me. Okay, don’t shoot me.

Digitising the Catacombs

MB: The bulk of our customers at the moment sit in the robotics and automation space. They range from automating the capture of information from the Catacombs in Rome, as the Rovina Project is doing; automating the functions of an oil and gas industry robot, which a company from Spain is working on; through to automating coal loading at a port in Northern Queensland.

SE: Are you able to provide any of your customers’ names, or is that taboo?

MB: The Rovina Project in Europe is a coalition of academic and commercial organisations looking to use robotics to autonomously digitise sites that would be difficult to digitise by conventional means.

ROVINA Robot

MB: GMV is an aerospace company in Spain that’s competing in the ARGOS Challenge, a competition that Total (a French oil and gas company) is running for the oil-and-gas industry, a bit like the DARPA Robotics Challenge. GMV is one of five participants, I believe, that Total is funding to develop a robot that can work in oil-and-gas and explosive environments, to enable people to be removed from those dangerous situations.

SE: You’d certainly need very high precision there.

MB: Yes, you certainly do.

ARGOS Challenge - FOXIRIS Robot

SE: The consequences of wrong data could be catastrophic.

MB: They could.

SE: Earlier, you mentioned that you have a couple of customers in the automotive industry. Are you able to tell us anything about those customers?

MB: We do know that both Toyota and Hyundai have purchased systems for use in their advanced driver assistance and driverless car development. Beyond that, they’re obviously not telling us or many other people exactly what they’re up to.

SE: That’s their secret sauce. It’s wonderful to see that you have some really big-name brands that are using your technology. Going back to the company, what are your plans for next-generation products? You’ve got the RobotEye, and I’ve got to say I’m really impressed with what I’ve seen so far, but there must be further plans in the cooker.

Future plans for RobotEye

MB: For us, the RobotEye is an amazing technology platform, and you’ve seen a couple of different versions of the core eye technology here. It’s a technology platform that can both go to larger sizes to work with wider aperture sensors and go right down to very small sizes, the size of your thumb, as we discussed a little earlier.

There are other aspects of the way we build a product around a RobotEye involving the optics that are integrated into the system, the sensor that’s integrated and then the firmware and software layers we put around those to build a complete product.

That means, from a single embodiment of our core technology, we can develop many different products. You’ve seen here today how we’ve developed 3D LIDAR sensors from our 25-millimetre aperture systems. We’ve developed vision sensors, stabilised vision sensors, hyperspectral sensors. That ability to continue to build products around the core embodiment and technology is a real strength of the company in terms of the commercialisation of its products.

SE: With these different kinds of sensors, I’m assuming that, in most cases, you need different RobotEyes for different purposes. Although that’s not always the case, is it? For instance, you showed me one RobotEye that was able to switch between laser and high-definition camera.

MB: It is possible. Most of our existing products are single-sensor products, but it is possible to use multiple sensors with a single eye.

SE: Is that a direction that you are looking to go in?

MB: Yes, there are reasons why you would do that.

SE: The cost would be one. The fewer the sensors that have to be loaded on to a robot, the more cost effective it would be.

MB: The cost, space, weight, all sorts of reasons. In terms of fusing laser data and camera data, that’s an obvious one of course. There’s lots going on in that space. There’s another company in Queensland using that sort of data to digitise the entire electricity network.

SE: That’s fascinating.

MB: That’s a real thing that we will see developed…

SE: What are they doing with the digitisation? Are they using something like drones to fly along the power lines?

MB: Yes, they’re using fixed-wing aircraft and helicopters and ground platforms to capture data from different angles and at different resolutions.

SE: Are they capturing them in 3D form?

MB: They’re capturing them in separate forms and then putting these together to create a 3D model.

SE: That’s fabulous.

MB: Yes, some very interesting work there.

SE: Yes, my mind is racing ahead. I’m thinking of virtual reality walkthroughs of some of these places. You’ve got all of this really rich data, and next thing you know you can have a VR system where literally you can start to see exactly what’s happening from anywhere in the world.

ROVINA Robot on Italian National TV

MB: The Rovina Project is a great example of that. When they’ve completed their work there, all of us will be able to trundle around the nooks and crannies of the Catacombs in Rome and discover all of the sites that we wouldn’t normally be able to get to, and that certainly would be very difficult to survey and digitise by conventional means. They have small robot platforms that can scour the catacombs and elsewhere and build up the entire map. It’s a really interesting application of the technology.

SE: Well, there’s likely a whole future of travel, and future of tourism in the business there, too.

RR: Indeed.

SE: My mind is still going to the catacombs. I’m thinking about taking a bit of a walking tour through the catacombs without leaving my living room. Although I have to admit I am the sort of person that would want to go there in person and feel it as well, so you’ll need to somehow get that haptic touch integrated. That would be another add-on for you: get that haptic touch and feel.

MB: Yes, there’s something for the future.

Watched over by machines of loving grace

SE: Something for the future. One of the other applications that comes to mind as being really useful for robot vision is just in the house. If you think about the things that are going to impact our lives as we get older, there’s always the chance of tripping and falling and being hurt, and nobody knowing that you’re hurt because your phone is too far away. You might be in severe pain for a long time or in some sort of distress. Do you see a way that robot vision could be incorporated into either home-monitoring systems or robots in the house, or something similar that might help us navigate our lives better in the future?

MB: Exactly. The service robotics sector is growing at a very rapid rate at the moment, and so that’s already happening. Vision and sensing of a robot’s environment as it moves around the house, being able to understand what it sees, are absolutely critical components of monitoring the people in the environment, and in an aged-care scenario, understanding whether the situation is normal or not, and being able to respond in an appropriate way.

SE: It’s an intelligence layer that you have to add on there, too, so there’s the very quick recognition that something is wrong, and then there’s the intelligence that says, “Hey, there is really something wrong. What do I do about it?” Would the sensors have to be mounted on robots, or could they be actually mounted on cameras or camera-like things at strategic locations?

MB: It could be done either way. It could be done by fixed monitoring points, like you say, with cameras mounted, similarly to security cameras at the moment, or on robotic platforms that can wander through the house or building in a structured or unstructured way and monitor that everything is as it should be.

SE: For this to really go mainstream, the price point for the sensor, the RobotEye, would have to be pretty low to be able to do mass customisation, but, as I understand it, because we’re still in the beginning of the robotics age — I know robots have been around for several decades, but I still see us at the very infant stage — things like sensors tend to be fairly expensive. Are you able to talk at all about price points and where you see that going over the next 10, 20 years?

RobotEye 3D scan of the neighbourhood

MB: Certainly, our current products address what we would call the industrial market. The cost is relatively high and the applications that can bear those costs are in industry, in mining, in security, and those types of applications. But our core technology, what Ocular Robotics is commercialising, really is a platform technology. We can take that technology platform and engineer it across price points that will certainly meet the needs of the commercial market and even the consumer market over time.

SE: I imagine that, as we go on over the next 10 years and beyond, there’ll be more and more robots or devices that will have use for these things, and as you start to produce in volume, the costs go down.

MB: That’s definitely the case, yes, but it is also a case of building a product that specifically targets the price/performance needs of a range of markets.

SE: Yes.

RR: We recognise the significance of the consumer market and we’ll reveal those plans in the near future.

SE: We can talk about those in the next interview we do after you come out with some more things. There was one other thing that you mentioned when we were touring through the lab, and that was a gigapixel panorama. Can you tell me a bit more about that, Ramin?

RR: The latest addition to the Vision range is an automated, rapid 360-degree panorama capture system which has the ability to acquire and immediately display gigapixel panorama imagery, all in under one minute.

SE: Oh my gosh. That is fast.

360 degree 3D images — instantly

RR: Absolutely. If you think, for example, of law enforcement: when a police vehicle arrives at the scene of an accident or a crime, RobotEye can rapidly capture the surroundings in immense detail in order to preserve the original conditions of the scene.

SE: That’s like a 360-degree panorama.

RR: That’s right. There is no system in the world that can deliver that level of detail in that short span of time.

SE: Is that a camera-type of image or laser image as well? I’m wondering whether you’d actually get the 3D depth, or whether it would look just like a high-definition photo.

RR: Certainly. Again, if you couple a vision system with the LIDAR system, which we do very well, then you can use the two of them to build that depth map. You have your laser data, and you overlay the pixel information, the RGB, on top of that, and you can generate an information-rich RGB depth map. We call this capability multi-RobotEye fusion, delivering unparalleled depth of information to our customers in real time.
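For readers who want to see what that fusion step looks like in practice, here is a generic sketch (not Ocular Robotics’ implementation) of colourising a LiDAR point cloud with camera pixels: each 3D point is projected into the image with a pinhole model and picks up the RGB value it lands on. The intrinsic matrix K and the extrinsics R, t are assumed to come from calibration.

```python
# Generic LiDAR-camera fusion sketch; calibration matrices are assumed inputs.
import numpy as np


def colourise_point_cloud(points_xyz, image_rgb, K, R, t):
    """Return an (N, 6) array of [x, y, z, r, g, b] for points that fall inside the image."""
    cam = (R @ points_xyz.T + t.reshape(3, 1)).T          # LiDAR frame -> camera frame
    in_front = cam[:, 2] > 0                               # keep points in front of the camera
    cam = cam[in_front]
    uv = (K @ cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]                            # perspective divide -> pixel coords
    u, v = uv[:, 0].round().astype(int), uv[:, 1].round().astype(int)
    h, w = image_rgb.shape[:2]
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    rgb = image_rgb[v[inside], u[inside]]
    return np.hstack([points_xyz[in_front][inside], rgb])


if __name__ == "__main__":
    # Tiny synthetic example: 100 random points, a flat grey image, identity extrinsics.
    pts = np.random.uniform([-1, -1, 2], [1, 1, 6], size=(100, 3))
    img = np.full((480, 640, 3), 128, dtype=np.uint8)
    K = np.array([[500.0, 0, 320.0], [0, 500.0, 240.0], [0, 0, 1.0]])
    coloured = colourise_point_cloud(pts, img, K, np.eye(3), np.zeros(3))
    print(coloured.shape)
```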

SE: Then tying that together with augmented reality or virtual reality, you could literally do walkthroughs back through a crime scene after the fact.

RR: You can just think how that would transform the whole legal process and so on. Now the judge can take a walk through the crime scene.

SE: Yes, wow. That would bring a new meaning to the word justice. What other applications would you have for that? I’m actually thinking that could be huge in the entertainment industry. You’d be able to film a movie or a scene where people could feel like they’re there and walk around.

RR: RobotEye is ideal for capturing reality TV shows or sports broadcasting. Only the world’s fastest eye can capture such high-speed events in immense detail. Further, we can capture those ultra-high-resolution panoramas and videos, and track multiple targets simultaneously, on any moving platform, even on a boat that is rocking like crazy.

SE: Wow. My mind is just getting around that one. You’re able to make it look like the boat is still.

RR: That’s right. No matter how erratically the platform on which the camera is mounted is moving, we can subtract that motion from the image that we capture.

SE: And that’s directly in the RobotEye, that’s not separated out…

RR: That is part of the RobotEye, and that’s a stabilised vision RobotEye.
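The gaze-stabilisation idea can be sketched in a few lines: read the platform’s orientation (for example from an IMU) and counter-rotate the commanded look direction so it stays fixed in the world frame. This is a generic illustration, not the RobotEye’s internal implementation; all names and numbers below are assumptions.

```python
# Generic gaze-stabilisation sketch; the IMU source and all figures are hypothetical.
import numpy as np


def rotation_from_rpy(roll, pitch, yaw):
    """Rotation matrix (world <- platform) from roll/pitch/yaw in radians."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx


def stabilised_gaze(target_world, platform_rpy):
    """Direction the eye must point, expressed in the platform frame, so it keeps
    looking at a fixed world direction while the platform moves underneath it."""
    R = rotation_from_rpy(*platform_rpy)          # world <- platform
    return R.T @ target_world                     # platform <- world: counter-rotate the gaze


if __name__ == "__main__":
    horizon = np.array([1.0, 0.0, 0.0])                          # look straight ahead in the world frame
    rocking_boat = (np.radians(12.0), np.radians(-4.0), 0.0)     # roll, pitch, yaw from an IMU (assumed)
    print(stabilised_gaze(horizon, rocking_boat))
```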

SE: That’s incredible. What other applications are you imagining that this gigapixel panorama would have? I’m even thinking in tourism, too.

MB: That capability is certainly very relevant to tourism, and to real estate. Another one is disaster relief.

SE: Certainly.

MB: If there was a disaster scene, and a robot platform or even a human platform was able to get into the scene in a very short amount of time, we could capture incredible detail about what was going on there, and that imagery could be transmitted back to a control room somewhere where the people making decisions about the response could see in detail what had happened and respond in the appropriate way, without making guesses about what might have gone on.

SE: I would imagine that, in almost any kind of disaster situation, especially one that’s hazardous for humans, that kind of information could be incredibly useful.

MB: Yes, we believe that’s exactly the case.

SE: There’s one other application I wanted to ask you about, which goes right back to your background, Mark, in aquaculture, and that is submarine robots. Have you applied RobotEye to an underwater scenario?

MB: We haven’t done that ourselves. Actually, we had an inquiry just recently from a firm that deals with UVs. Yes, that’s a really interesting area. I guess it couples with another area that you may be interested in, in terms of applications. When a person is using a head-mounted display such as the Oculus Rift connected to a remote camera that tracks their head movements, they get nauseous very quickly because the remote camera is not able to track the motion of their head closely enough. Our technology solves that problem, which means that people can now use head-mounted display technology to view sites remotely in a way that wasn’t possible before because of the nausea issue.

SE: I imagine there’d be a big market impact in the gaming industry as well.

MB: Certainly, with the Oculus Rift there’s been a big impact in the gaming industry, but what we’re mainly talking about is using that head-mounted display technology to view real places. In terms of viewing within a videogame, that’s handled between the head-mounted display and the GPU in the computer. Where we’re most relevant is in using that same head-mounted display technology to view and interact with real locations.

SE: I think where my mind was going is marrying the two: a game that takes place in a real location.

MB: Yes, yes, yes.

RR: Spot on, Shara.

SE: My mind was just too fast. My mouth didn’t catch up (laughing).

Just in closing, is there anything else that you would like to share with us about your vision for Ocular Robotics, I suppose, for the product or other cool applications?

MB: In terms of cool applications, I think one that would be fun to work with is sports broadcasting. We can see our eye being used for lots of fun things in sports broadcasting. The speed with which our system can move would enable the capture of a tennis ball from racket to racket in close up, watching the ball onto the racket and seeing it deform and spin off. You’d be able to see all those details in close up as it happened. It wouldn’t be a great way to watch the entire match, of course, but you can imagine what reporters or broadcasters would do with it. Similarly, in other sorts of ball sports: cricket, baseball, golf, you name it, it’d introduce some really interesting close-up information about the way things are moving.

SE: That’s fabulous. Well, thank you both so much for your time. It’s been a great pleasure to come out here and to see RobotEye in action.

MB: Yes, certainly, Shara. Thanks for coming out to visit us.

RR: We appreciate your time, Shara. We’re grateful to see companies like Market Clarity giving recognition to innovative Australian companies that are quietly changing the world with their technology. After all, the key to Australia’s continued prosperity is to become more innovative, and Market Clarity is a leader in showcasing and broadcasting Australian innovation. We’re grateful that you’ve joined us today. Thank you.

SE: Thank you so much. It’s been my pleasure.

About the author: Shara Evans is internationally acknowledged as a cutting-edge technology futurist, commentator, strategy advisor, keynote speaker and thought leader, as well as the Founder and CEO of Market Clarity.
