Future Tech 2025: An Interview with Ross Goodfellow, Managing Director, Serenus
In this Future Tech interview, we’re speaking with Ross Goodfellow, Managing Director at Serenus. Ross Goodfellow is an experienced ICT executive who has held senior roles with Equant, BT/Infonet, Fujitsu and Macquarie Telecom. Outside Australia and New Zealand, Ross also has extensive international experience in the APAC region, including India, China, Japan and Singapore. Ross’ expertise spans business strategy, management, sales and technology innovation, with a focus on networking, infrastructure and Cloud Computing. Ross sees the tremendous potential Cloud Computing holds for business services and, in response, he founded Serenus in 2013. Serenus provides innovative Cloud management technologies, Cloud networking services and management consulting services.
Shara Evans (SE): Hello. This is Shara Evans from Market Clarity. Today, we’re focusing our Future Tech series on the world of the cloud and the Internet of Things.
Joining me this afternoon is Ross Goodfellow, an ICT industry veteran, with a history of executive roles in companies such as Equant, BT Infonet, Fujitsu, and Macquarie Telecom. In 2013, Ross launched into the start-up space, forming his first company, Serenus, which focuses on the cloud services segment. Serenus has already been granted several patents pertaining to cloud performance measuring and monitoring technologies. I’m delighted to be interviewing Ross for my Future Tech series.
Ross Goodfellow (RG): Good morning, Shara.
SE: Hello, and welcome. One of the most disruptive technology trends right now is what people are calling the Internet of Things or IoT. Let’s start our conversation today by defining what we mean by IoT. Ross, you’re deep into this space. What’s your definition?
Defining the Internet of Things
RG: Well, Shara, my definition would be that we forget all of the preconceptions about what an IT device is, such as a PC or even a smartphone like an Android or Apple device, and we throw that out of the window, and we just assume that anything that can have intelligence in it, either via a chip or some form of identifying device, that anything — a fridge, an electric kettle, whatever it is — can be addressed, communicated with, and have some sort of intelligent dialogue via the Internet or other networks. It’s really machine-to-machine communications in its broadest possible sense.
SE: Some might even argue that it includes people, too, especially as we start thinking about wearable devices, which is obviously still a machine, but, as we go further into the future with ingestibles and implantables, those little things that are communicating to the rest of the world may well be embedded inside our own bodies.
RG: Absolutely. I have absolutely no doubt that people will be either wearing or hosting some form of chip within the next decade or so.
SE: I think it’s going to be a lot sooner than that.
Back to IoT, the way that I see this whole ecosystem developing is that it’s not just about the devices. It’s about how these devices connect and interact with other devices and data sets, and then ultimately deliver value to an end user, either in their personal lives or in their business function role. That’s where cloud services fit in, in my view. It’s an area where you’ve been doing a lot of work. Ross, perhaps you can elaborate on some of the things that you’re seeing.
RG: Yes, certainly. I think you’re right about delivering value with respect to cloud services. From Serenus’ perspective, we’re focused more on the business market than consumer markets, but we’ve seen tremendous uptake of cloud services in consumer markets, and I think most people use things like Dropbox or Skype. There are a plethora of cloud services out there that people are using today.
Let’s turn to the business market — and to some extent, the uptake of the use of cloud services in business has been probably a little slower than one would imagine, given the potential economic and operational benefits that cloud offers in terms of scalability, unit-cost reduction, et cetera. There are a whole lot of reasons behind that.
Businesses are inherently conservative. They’ve already sunk a lot of cost and effort into developing IT systems, whether in-house or hosted in data centres. Changing and disrupting that is very risky, and it’s onerous for a business to make immediate changes whenever new services and technologies come along. Even so, many businesses have made sweeping moves to the cloud.
One of the main obstacles that we see is this word control — the perception of lack of control. Companies need to know where their data is, and there are plenty of discussions about data residency, off-shoring, and on-shoring, and of course there are regulatory implications as well.
Companies really need the surety of knowing where their data is and how it’s being carried around, but they also need guaranteed performance. When they move their IT functions out of the on-premise or in-house environment and into a data centre or the public cloud, the end users are still sitting where they’re sitting, accessing these services and IT functions, and businesses expect the same sort of productivity; in fact, increased productivity, because they’re accessing new cloud services.
I think one of the real areas of concern for businesses is: how do you continue to deliver the level of productivity, responsiveness, and guaranteed performance that end users got when all the systems were in-house, controlled, managed, and visible? How do you get that same productivity and guaranteed performance when you move the processing, storage, and responsibility for those services outside the corporate environment?
SE: Yes, it’s an interesting conundrum. Part of what I’m seeing is that small businesses seem to have taken to the cloud faster in many cases than the larger enterprise businesses. From a small-business perspective, it makes sense. Rather than buying expensive software that might need a lot of customisation, today, a smaller business can take advantage of many services that are cloud-based without a capital expenditure.
RG: Absolutely. It’s the classic movement of CapEx to OpEx, and most businesses are looking for that.
SE: Yes, and even the larger businesses are willing to outsource to cloud providers certain things — for instance, sending out newsletters. Lots of companies, big and small, use hosting facilities because they’re spam compliant; they’re easier to manage; and it just makes sense.
Cloud Service Performance
When it comes to controlling core functions in an enterprise environment, you’re very right that the CIO or IT Manager wants to have a handle on the performance of their systems and needs to understand where data is. And, importantly for cloud services, whether the format that the data is stored in is transportable or can be duplicated in other places in case the cloud company goes bust — because these things do happen as well.
RG: That’s correct. I think one other major concern is, as the cloud grows, as uptake in the cloud increases, will the performance remain the same? There’s the performance that you might get today, but will that performance still be guaranteed and be consistent tomorrow? Will it be equally as good in a year’s time, in ten years’ time? Of course, once you’ve committed core functions to the cloud, is there a way back?
SE: I’ll raise one other issue, Ross, and that is that everybody talks about the cloud as if it’s one thing, but there are lots of clouds. There’s a Google Cloud for its services and Android devices. There is an Apple cloud, an Amazon Cloud, a Samsung cloud, a Facebook cloud. There are carrier clouds based on different vendor architectures, like Cisco or Alcatel-Lucent or HP. Then we’ve got hosted business clouds and private clouds, and now we’re even starting to see sensor clouds in particular vertical industries. This data spans public, private, and hybrid infrastructure. I see something that I’m calling cloud wars on the horizon. How are we going to handle this? Most importantly, from an enterprise perspective, how can you measure and monitor it? Don’t we need something like a cloud ecosystem?
RG: Absolutely right. You’ve struck on the word that I use increasingly, and that’s cloud ecosystem. I think we have to accept that, just as was the case in hardware, you have proprietary operating systems and proprietary network protocols. IT companies are in business to make money, and proprietisation has always been a thrust. That will happen with the cloud. It is happening with the cloud, and it’s only going to get more and more complex.
On the other hand, you’ve got smart people developing open standards like OpenStack, which is an attempt to orchestrate what’s going on with all of these clouds and the cloud wars, as you say. Orchestration is really important, and it’s orchestration across the cloud ecosystem.
SE: That means that, if you’re an enterprise and you’re using services from multiple cloud providers, they must all play within that same ecosystem framework.
RG: There’s one other angle to this we haven’t really touched on, and that is the connectivity. When people say “the cloud”, the definition really depends on many things. My definition of the cloud is everything within the data centre concerned with providing processing power, data storage, and software, right through to the delivery of those services to an end-user device, whatever that is, and irrespective of the connectivity or the networks involved in delivering those IT services. I think that’s what end users, and business customers in particular, have to be mindful of: that the cloud is about the effective, high-performance delivery of those services to the end user.
If you just think of the cloud as what’s going on inside the data centre, that’s really only half the story. There’s no point getting the productivity or the economic gains or the scalability offered by cloud services if the end users can’t effectively, efficiently gain access to those cloud services in a way that increases their productivity overall.
Extending the cloud to include end-to-end connectivity
SE: I would agree with you, Ross. In fact, what I would call it is quality of experience, and it’s an end-to-end experience that goes from whatever device or sensor is connecting at one end to a person or device at the other end.
This gets complicated because there are lots of different devices that play a role in end-to-end cloud performance, and they’re in lots of locations. They may be in a data centre, an enterprise’s internal network, or a service provider’s network. It could be a sensor device or even an end-user gadget like a smartphone or a tablet, and they may use lots of different networks to connect. How can all of these different devices be monitored and measured in some sort of systematic manner when we’ve got such a diverse range of things and networks that may be involved in the path?
RG: That’s the big question, isn’t it? That’s exactly where we’ve come from with our thinking in Serenus. We understood that we’re moving away from a Layer 2 private network topology where everything is deterministic. We’re moving into a world where more and more data is passing over the public Internet or public cloud or private cloud, 3G, 4G networks, wireless Internet VPNs, or carrier Ethernet networks. It doesn’t matter. There’s connectivity of all sorts. There are devices of all sorts. That’s why we set out with an architecture that was abstracted above the Layer 2 or physical network interconnectivity topology.
We set out to design a cloud-management control automation system that really was network agnostic and device agnostic. We look at objects within a cloud, and we quite frankly don’t care what those objects are. Then we look at the data that passes between those objects, and we capture the performance of that data, and through data analytics we can do all sorts of smart things about understanding trends, performance, end-to-end performance, and making some intelligent decisions about resource optimisation, and do some signalling and functions like that which I can talk about.
The short answer is: we kind of didn’t look at the problem within the framework that currently existed. We looked at an abstracted view of what clouds would become and are becoming, and tried to develop an open architecture that was as adaptable to the coming changes as possible. We believe that the more a product or technology ties itself to a specific infrastructure, the more it limits its future adaptability to what’s happening and changing out there.
SE: In many ways, what you’re describing as an abstracted layer reminds me of Web browsers and how they’re at a completely different abstraction layer from the network or device that they reside on. Who would have imagined 20 years ago how popular the World Wide Web or the browser would be? Today, we use that same interface on everything from our computers to our tablets and phones and smart watches and who knows what other devices in the future. Would you see that sort of analogy happening with this abstraction layer but obviously from a control and monitoring perspective for cloud services?
RG: Yes, I think that analogy holds, and I’d extend it. TCP/IP was an incredible breakthrough in technology: really, it’s the pivotal moment that made the Internet possible. It was an abstraction in the sense that, all of a sudden, it didn’t matter what devices were communicating with each other or what networks the data packets were passing over. It pushed all of that down into Layer 2, as it were, and created Layer 3, which was abstracted above the network technology.
I look at this cloud-management challenge in very much the same way, and it’s directly applicable to the challenge we face with the Internet of Things and your so-called cloud wars: how do we get above the individual devices, protocols, proprietary technologies, et cetera, and build an overarching orchestration of all these things?
SE: Is there a particular signalling protocol that would be used, or does it need to be invented?
RG: I think they will evolve, but our technology does use standard protocols.
It uses standard network monitoring protocols so that it can draw performance data from network infrastructures. We also use existing protocols to probe software agents that can reside on these cloud objects, wherever they are in the enterprise cloud.
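To make the device- and network-agnostic polling Ross describes a little more concrete, here is a minimal sketch. It is an editorial illustration only: the names (`CloudObject`, `poll`, `collect`) are invented, not Serenus APIs, and a simulated latency reading stands in for what would really be an SNMP query or a software-agent probe.

```python
import random
import statistics

class CloudObject:
    """Any monitored element: a server, a router, an end-user device."""
    def __init__(self, name):
        self.name = name

    def poll(self):
        # In a real deployment this would be an SNMP GET or an agent query;
        # here we simulate a latency reading in milliseconds.
        return {"latency_ms": round(random.uniform(5, 50), 1)}

def collect(objects, samples=5):
    """Poll each object several times and summarise mean latency per object."""
    report = {}
    for obj in objects:
        readings = [obj.poll()["latency_ms"] for _ in range(samples)]
        report[obj.name] = statistics.mean(readings)
    return report

# A hypothetical end-to-end path from data centre to branch office
path = [CloudObject("data-centre-lb"),
        CloudObject("carrier-edge"),
        CloudObject("branch-router")]
print(collect(path))
```

The point of the abstraction is that `collect` never needs to know whether an object is a hypervisor, a carrier Ethernet switch, or a tablet; it only sees named objects and the data passing between them.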
There are also protocols emerging for the signalling to service providers in order to do things like bandwidth on-demand signalling or process on-demand signalling, which of course really closes the loop for the whole cloud story.
Whilst we’ve seen a lot of automation and automated provisioning in the data centre for things like turning up CPU processing or adding additional storage, we haven’t seen that same kind of flexible, automated provisioning in the network world. For example, if you had a 10 Mbps broadband pipe in one of your offices and you wanted to upgrade that to a 100 Mbps pipe, well, I’m sorry, but most service providers will still take a paper order and give you a six- to eight-week lead time for the bandwidth upgrade. That is just unsustainable in today’s resource-on-demand cloud environment.
SE: You bring up a good point there because, as an observer of the telco world of many years standing, I have to say that real bandwidth on-demand is something that seems to be lacking from many providers. I’ve seen a couple of companies that are geared up to do this. For instance, Megaport, which is a fairly new player, has an elastic connectivity platform, but I’m still not seeing this as a widely available feature. Do you expect that this is something that service providers will increasingly offer?
RG: Absolutely. Look, it’s something that I think is technically feasible now. I think the main barrier has been commercial concerns: how to price these things, and how to adapt the billing and ordering systems to be more flexible. It’s really the BSS side, the business support system side, of telcos and service providers that has held back this flexibility, I believe.
In fact, when you look at a typical connection to a service provider or a carrier, if you buy an enterprise broadband service, you’re most likely getting a 1 Gbps interface, which is then throttled down in software to whatever bandwidth you’re paying for.
It’s not a great leap to see that through software-defined networking (SDN), the allocated port speed can be much more flexibly configured based on signalling from the end user or from a third party as to what sort of bandwidth requirements are needed or anticipated.
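The software-defined reallocation Ross describes can be sketched as a toy controller. This is purely illustrative: the class and method names are invented for this example, and a real SDN controller would of course push the change to the network element rather than update a Python attribute.

```python
class PortController:
    """Toy model of software-throttled port speed on a 1 Gbps interface."""

    ALLOWED_MBPS = (10, 100, 1000)  # speeds the physical interface supports

    def __init__(self, allocated_mbps=10):
        self.allocated_mbps = allocated_mbps

    def signal_bandwidth(self, requested_mbps):
        """Grant the smallest allowed speed that satisfies the request."""
        for speed in self.ALLOWED_MBPS:
            if speed >= requested_mbps:
                self.allocated_mbps = speed
                return speed
        raise ValueError("request exceeds physical port speed")

port = PortController()
print(port.signal_bandwidth(80))  # upgrade from 10 Mbps, granted 100 Mbps
```

The contrast with a paper order and a six-to-eight-week lead time is the whole point: the physical capacity is already in place, so the upgrade is just a signalled configuration change.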
The short answer: yes, I think within the next couple of years we’ll see a lot more innovation in that space. I know that one particular carrier, Pacnet, recently acquired by Telstra, was already trialling SDN-based bandwidth on demand.
SE: I would agree with you that we’re going to see more and more of it. I still think, and again I’m agreeing with you, I think that the bottleneck is going to be on the backend, in the BSS systems: the billing, the ordering, and the support systems. The pricing is probably more easily solved than some of the legacy networks that are sitting in the telco backend.
Bandwidth on Demand: Telco Challenges
RG: Well, yes, but don’t underestimate the commercial concerns when it comes to holding back innovations. It’s quite common and has been common in the industry for a long time to simply advise customers to overprovision bandwidth in order to have excess capacity to meet demand spikes.
SE: It’s an expensive way to do it, especially if you’ve got very peaky demand.
Let’s turn back to the enterprise for a moment and look in a little more detail at some of the signalling required and some of the protocols being used. What is required for an enterprise to get an end-to-end cloud resource utilisation and performance trend view? Also, can an enterprise get a handle on bottlenecks or, better still, be able to predict issues before they occur?
RG: Well, I would say that with today’s products in the market, the answer is: with great difficulty. There are various products available that view various networks, but I don’t believe there’s a product today that has an end-to-end cloud ecosystem view.
That’s where Serenus’ VPNscope comes into play. We’ve built a beta prototype that does precisely what we’ve been talking about: it looks at the network, at multiple network elements, and takes real-time feeds to determine end-to-end network performance and make smart decisions about capacity requirements. It then calculates capacity optimisation and issues signals via NSI (Network Service Interface), which is emerging as a standard protocol for service providers to do the bandwidth on-demand auto-provisioning we’ve just been talking about.
Cloud Performance Index (CLIX)
In addition to that, we have newly developed, recently patented technology that hasn’t been incorporated into our product yet; that’s the next stage of development. It’s an end-to-end performance-measuring metric called CLIX, which stands for Cloud Performance Index (CLIX is the trademarked name).
What Cloud Performance Index is all about is a methodology and algorithms for calculating the performance of the individual infrastructure elements primarily concerned in the delivery of end-to-end services over a cloud network.
What CLIX does is give us a method of measuring a composite performance for all of the infrastructure elements concerned in the end-to-end delivery of a cloud service. By that, I mean it looks at CPU utilisation, I/O load, and utilisation in the data centre. It then combines those with various network characteristics, such as latency and throughput, to give a combined end-to-end performance indicator from the processing source, or the software, right through to where the end user is experiencing the services on their device.
There has not been a way of consistently measuring end-to-end cloud performance in this manner before, so I believe this technology gives us a new tool: a consistent metric that not only indicates the performance of cloud service delivery, but can also highlight the infrastructure bottlenecks, both within the data centre and out in the network, that are affecting the end-to-end delivery of cloud services.
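The actual CLIX algorithms are patented and not described in this interview, but the general idea of a composite end-to-end index can be sketched generically: normalise each data-centre and network reading onto a common scale, then combine them with weights. All weights, scales, and function names below are invented for illustration and do not represent Serenus’ method.

```python
def component_score(value, worst, best):
    """Normalise a raw reading to 0..1, where 1 is best.
    Works whether 'best' is the high end (throughput) or low end (latency)."""
    span = best - worst
    return max(0.0, min(1.0, (value - worst) / span))

def composite_index(cpu_util, io_util, latency_ms, throughput_mbps):
    """Combine data-centre and network metrics into one end-to-end score."""
    scores = {
        # high utilisation is bad, so best=0% and worst=100%
        "cpu":        component_score(cpu_util, worst=100, best=0),
        "io":         component_score(io_util, worst=100, best=0),
        # low latency is good; 200 ms treated as unusable here
        "latency":    component_score(latency_ms, worst=200, best=0),
        # higher throughput is good, capped at a nominal 100 Mbps
        "throughput": component_score(throughput_mbps, worst=0, best=100),
    }
    weights = {"cpu": 0.2, "io": 0.2, "latency": 0.35, "throughput": 0.25}
    return round(sum(weights[k] * scores[k] for k in scores), 3)

# A moderately loaded service: ~0.7 on a 0..1 scale
print(composite_index(cpu_util=60, io_util=40, latency_ms=30, throughput_mbps=80))
```

A useful property of any index of this shape is that the per-component scores survive inside the calculation, so a low composite value can immediately be traced to the offending component, which is exactly the bottleneck-location use case Ross describes.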
SE: Would it also take into account end devices like different computer systems or different phones, tablets, and so on because sometimes the performance perception is actually a result of an end-user device, and there’s nothing at all wrong with the service but you’ve got an old machine that the end user is interfacing through?
RG: It can be, but these days most of the processing typically goes on in the cloud, with only a very thin client or presentation layer running on the end-user device. Maybe there are some processing delays at the end-user device, but I would say they are minimal in the overall scheme of things.
Most of the delays and bottlenecks are really taking place in the network, or perhaps in processing loads and I/O loads, but primarily in the network, with things like congestion, packet loss, and latency. These things really are not visible to the end user, nor are they particularly visible to the network manager, because they’re hidden in the cloud, so to speak. Unless there are visibility, control, and management tools that can highlight these bottlenecks in a hybrid network world, the poor old end user is sitting out there, going off to make a cup of coffee or whatever they’re doing, waiting for their response.
SE: Yes. Well, the ability to do this in a hybrid network or across multiple cloud networks, I think, is quite interesting because typically you’d get into a finger-pointing issue when it’s crossing different infrastructure provider boundaries.
Crossing Infrastructure Boundaries
RG: Absolutely right. It’s all about who owns the problem. If you think about cloud services today, the cloud provider is typically concerned with services taking place in a data centre somewhere, and the end user is subscribing to some sort of an access service, and that may be a carrier providing 4G or it might be an Internet service provider providing broadband. You’ve got multiple parties there, for a start. That’s where finger pointing will take place.
Unless you have a systems integrator or end-to-end service provider that is managing all of those service elements and can pinpoint exactly where problems lie, then what are you going to do? Put a service call into Equinix if your cloud app is going slow? It’s going to be very, very difficult in a multi-cloud environment to understand where the problem is.
SE: Even from a service provider perspective, if they offered a managed cloud service, they would still need tools that allow them to do performance measurement and monitoring across multiple cloud domains.
RG: Precisely right. Perhaps I should clarify now where we see value for this product.
One is in enterprise environments, where the enterprise manages its own cloud environment or indeed has multiple cloud providers, wants an overarching view over those multiple clouds to ensure that they can see what’s going on with their enterprise environments. That’s environment number one.
Environment number two is for service providers, which you just mentioned. Service providers deliver managed services on behalf of multiple clients. By using this tool, service providers can, in effect, extend the managed-service cloud edge right to the end-user device, and view and manage multiple discrete customer cloud environments with an end-to-end management tool.
The third environment is with networking equipment vendors. If you think about this technology, whether it’s the resource optimisation and monitoring technology or CLIX, the Cloud Performance Index technology, think about the opportunities to embed these technologies in networking devices such as routers. It adds more intelligence to the network, making the device much more autonomous and cloud-enabled.
The last category of application is really the middleware vendor. And there are many — most of the large IT players today, whether it’s HP, IBM, you name it, are developing some sort of cloud deployment or cloud management suite. They really need these sorts of technologies, and are struggling to come to grips with how they adapt existing network management systems to address the complexity of cloud. I see technologies like CLIX, Cloud Performance Index, really playing a lead role in some of these control platforms as they emerge.
SE: Aren’t there other network management vendors also looking at this space? Who are your competitors here, and how are VPNscope and CLIX different from what some of the other vendors are doing?
RG: Look, I think there’s lots of stuff going on all over the place, and always will be. There will always be something new. There are legacy providers of network management systems, and they’re trying to forward engineer legacy products. There are also very interesting new companies like New Relic, and they’ve got technology that looks at application-level performance, but New Relic still doesn’t really address the connectivity issue of hybrid cloud environments. There are also OpenStack components such as Nova, which is a fabric controller. Again, that’s a standard. That’s not a product. No doubt products will be built based on that standard. Yes, there’s going to be a lot of work happening in this space over the coming years.
VPNscope: Innovate NSW R&D Grant Winner
SE: Is VPNscope something that an enterprise could purchase perhaps on a beta basis or a service provider could purchase? Is it that far along at this stage?
RG: Well, VPNscope has been trialled in a service provider environment. The background to this is that Serenus was successful in winning an R&D grant from Innovate NSW — that’s a branch of New South Wales Trade & Investment — to develop a minimum viable product prototype. That’s where it stands at the moment. We also have alongside of that two patents, which underpin the technology that VPNscope is based on.
It needs more work. That’s the stage we’re up to now. We’re looking for further partnerships and involvement from collaborative partners or investment partners to develop this to the stage where we can deploy it as a product both in an enterprise environment, as well as a service provider environment.
It’s by no means a shrink-wrap product. I am talking to prospective beta partners, looking at deployments in their various applications, and again it will need some further development. That’s the opportunity.
SE: That’s also the nature of start-up companies in new product areas, which is exciting.
One of the last things that I’d like to ask you about is how Serenus’ cloud monitoring tools might fit in with the emerging war zone for IoT operating systems. In the last few weeks, we’ve heard about Google’s Brillo OS. Apple has HomeKit, which it is hoping will be an IoT OS. Huawei has recently announced its LiteOS, and no doubt there’ll be others as well.
RG: Yes. There are going to be developments ongoing, no doubt. My understanding of a lot of these new offerings is that they’re operating systems for devices. I still think the missing link here is the connectivity or the network. Making devices smarter or making them able to be connected to create this Internet of Things world will happen. That’s all well and good. People will put the smarts in the operating system, which can go in the fridge or go in the watch or go in the chip, but there’s still the connectivity requirement.
Those devices, whatever they are, still need to connect to some smart software somewhere — there has to be a purpose for that fridge to have intelligence. It’s one thing to give intelligence to a fridge. It’s another thing to do something smart with it. If it’s simply open the door and turn on the light when I walk in the front door, that’s fine. There are plenty of home applications for IoT. If it’s something that needs connectivity back to software in the cloud or a corporate environment, then it’s going to need network and connectivity. That’s where the real bottlenecks and challenges are going to be seen, particularly as the network becomes more and more congested and overloaded with things like video traffic.
Can I just throw in a statistic at this point, Shara? It’s a really interesting fact that the total growth of data, globally, is occurring at 57% compounding year on year. Just think about that, and project that over the next five years or so, and we see all that video, triple play, and Internet of Things traffic being thrust into the network. The network is becoming a huge potential bottleneck, and the control and management and visibility of data within that network is becoming a massive challenge.
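Taking Ross’s figure at face value, it is worth seeing what 57% year-on-year compounding actually implies over his five-year horizon:

```python
# Project Ross's quoted 57% compound annual growth rate over five years.
rate = 0.57
for year in range(1, 6):
    multiple = (1 + rate) ** year
    print(f"year {year}: {multiple:.1f}x today's data volume")
# By year 5, total data volume is roughly 9.5x today's
```

In other words, a network sized comfortably for today’s traffic would need close to a tenfold capacity increase within five years just to stand still, which is why he frames the network as the coming bottleneck.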
SE: Well, we’ll leave it there. Lots of challenges ahead, not the least of which is network bandwidth and how to manage it, but also what I was calling cloud wars and maybe even an emerging IoT operating system war zone as well.
Thank you so much for taking time to share with us some of the developments that you’ve been working on to help solve these problems. It’s been a pleasure, Ross.
RG: My pleasure, Shara.
About the author: Shara Evans is internationally acknowledged as a cutting edge technology futurist, commentator, strategy advisor, keynote speaker and thought leader, as well as the Founder and CEO of Market Clarity.