Operationalize your automation investment
Is your automation investment adding value to your business? Let’s start by asking ourselves the following questions:
1. Have you invested in an automation framework?
2. Are your operations teams happy?
3. Can you measure a significant improvement?
Depending on whom you ask within an organization, you will probably receive different answers. But at the end of the day, what matters most is that automation leads to improvements in agility and quality at an operational level. Furthermore, these improvements should be obvious to both clients and users of the services. Ordering and running PoCs is one thing; deploying automation on a selected platform and making use of it in daily operations is another.
Is it time to operationalize the automation platform?
What do we mean by operationalize? At a very basic level, operationalize means to make use of automation tools to gain benefits. To get a more holistic view we can use the definition from Wikipedia:
“In research design, especially in psychology, social sciences, life sciences, and physics, operationalization is a process of defining the measurement of a phenomenon that is not directly measurable, though its existence is indicated by other phenomena. Operationalization is thus the process of defining a fuzzy concept so as to make it clearly distinguishable, measurable, and understandable in terms of empirical observations.”
The latter part of this definition is the most relevant for operations. For instance, how can we measure the impact of the automation platform? By this I don’t just mean installation and operation, but rather the effect of the automation project as a whole. Basically, to be successful it must have a direct, measurable impact on different parts of the organization:
1. Provisioning time is reduced from weeks to minutes.
2. Provisioning errors, including changes of services, are minimized.
3. Operations teams are happy with the solution and use it.
4. Customer relations have to deal with fewer problems caused by provisioned services.
5. New types of services can be introduced faster.
6. And more.
The success of the automation project can never be measured by the tools installed, but rather by their effect on other phenomena.
With the rise of SDN/NFV and automation, a lot of emphasis has been placed on selecting tools, presenting marketing results of PoCs, and so on. However, operations teams are still doing a lot of manual work and are therefore skeptical of product vendors. And rightly so: if automation does not solve operational problems, it is just an unnecessary cost.
In order to be successful, operations teams need to be much more involved in automation projects: they have to be users, owners and developers of the solution. I will not use the word DevOps, because it is overused these days. But without the users taking ownership and actually developing part of the solution, it will never become truly operationalized.
You need to be tough during the operational phase: if something does not prove to add value across the organization, then it is not relevant. Also, if the operations staff do not test and accept the features, the project will fail.
Can you measure your automation project without talking about the actual tool? We know how to get the most out of your automation investment – contact us today!
Wikipedia contributors. (2017, November 8). Operationalization. In Wikipedia, The Free Encyclopedia. Retrieved 10:55, May 29, 2018, from https://en.wikipedia.org/w/index.php?title=Operationalization&oldid=809332909
Is Big data the big elephant for service assurance?
As telecom providers onboard new services and customers, network and service assurance becomes more complex and more difficult to manage. One key reason for this is the poor quality of the network and service assurance data, which often suffers from:
- A lack of priorities
- A lack of service context and customer context
- And too much irrelevant data
The industry has made repeated attempts to address this issue, yet without any real breakthrough. Alarm correlation efforts, for example, often fail because maintaining the correlation rules is practically impossible. Similarly, initiatives focusing on inventory system lookup – to help build context – fail because inventory data is incorrect and incomplete.
Don’t pin all your hopes on Big data
The industry has now turned to Big data in the hope that it will help solve the assurance data problem. The general belief seems to be that we can throw even more low-quality data into a Big data platform and magically get the answers we need. However, Big data scientists concur that this simply isn’t possible. For the data issues mentioned above there is no silver bullet, and to think Big data is the easy answer boils down to either a disproportionate belief in the technology or an overzealous product vendor over-selling its capabilities.
And why is this? Successful Big data projects have two preconditions:
- Highly relevant, high-quality data must be available – quantity is not quality
- Clear definition of questions to answer – there’s no magic wand for all questions
Service-focused assurance is a way forward
At Data Ductus, we strive to take a more service-focused approach. In the solutions that we deliver with our partner Netrounds, for instance, we provide high-quality data at the service layer. Focusing on data quality at the source in this way enables you to answer specific questions such as:
- Does the service work at turn-up?
- What is the network loss, latency and jitter?
- What is the Mean Opinion Score (MOS) for Session Initiation Protocol (SIP) calls?
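To make these metrics concrete, here is a minimal, self-contained sketch of how loss, jitter and an approximate MOS can be derived from probe measurements. This is not the Netrounds implementation, and the simplified E-model constants are purely illustrative; real voice-quality scoring follows ITU-T G.107.

```python
def packet_loss_pct(sent, received):
    """Packet loss as a percentage of packets sent."""
    return 100.0 * (sent - received) / sent

def interarrival_jitter(transit_times_ms):
    """RFC 3550-style smoothed interarrival jitter (milliseconds)."""
    jitter = 0.0
    for prev, cur in zip(transit_times_ms, transit_times_ms[1:]):
        jitter += (abs(cur - prev) - jitter) / 16.0
    return jitter

def mos_estimate(latency_ms, loss_pct):
    """Rough MOS via a heavily simplified E-model R-factor.
    Constants are illustrative, not a calibrated G.107 model."""
    r = 93.2 - latency_ms / 40.0 - 2.5 * loss_pct
    r = max(0.0, min(100.0, r))  # clamp R-factor to its valid range
    return 1.0 + 0.035 * r + 7.0e-6 * r * (r - 60.0) * (100.0 - r)

loss = packet_loss_pct(sent=1000, received=990)    # 1.0 % loss
mos = mos_estimate(latency_ms=80.0, loss_pct=loss)
```

Even a simple turn-up measurement like this, executed automatically at delivery, answers the questions above far better than a ping ever could.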
For more information about this approach, see our joint white paper on small data versus Big data at: https://www.netrounds.com/service-assurance-need-big-data-small-data-white-paper/.
If you are interested in discussing these topics with us, get in touch.
The catch-22 of service orchestration
Automating service deployment can cause a dilemma if not done properly. Deploying services manually has the benefit of unlimited flexibility: smart engineers can, in principle, configure services to meet any customer requirement. However, that way of working is infeasible. Service deployment projects take too long and introduce too many errors. Furthermore, the operational cost is too high, since you need a larger staff to cope with the demand.
Accordingly, many service providers and enterprises automate service delivery. Ironically, this type of automation is often counterproductive. Service providers tell us this leads to a culture of “it works, don’t touch”. We refer to this as the “catch-22 of service orchestration”.
In a typical example, part of the catalog is automated with a “hard-coded” solution that required a long and costly software project. The solution then fulfills the goals of fast and error-free configuration, but it does not offer the flexibility to adapt the portfolio to new customer and market needs. It takes yet another long and costly software project to achieve this. As a result, there is a risk that the organization falls back to manual configuration.
How can we use service orchestration to avoid this situation?
There are several things to consider to remain flexible and still automate:
- In-house DevOps teams: Do not outsource everything to an external partner. You need the skills internally to implement changes.
- DevOps culture: Your product owners, and operations and development teams need to work together in an efficient manner. See a presentation on the topic here: https://www.slideshare.net/stefanvallin/devops-for-network-engineers
- An automation/orchestration platform that supports both design-time and run-time features. At design time, the design team must be able to design, implement and test new or changed services within days or weeks; operations can then easily automate the deployment. A good litmus test when selecting the platform: how quickly can you implement a simple service yourself? Is it hard-coded? Aim for model-driven platforms that render themselves from data models such as YANG or TOSCA, and evaluate them seriously with your own development teams.
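As a sketch of what “model-driven” means in practice, consider the following. The class and template names are hypothetical, not a specific product API: the point is that the service exists purely as data, and device configuration is rendered from that model, so a new service type means a new model and template rather than another hard-coded software project.

```python
from dataclasses import dataclass

@dataclass
class L3VpnService:
    """A service instance expressed purely as data (cf. a YANG model)."""
    name: str
    vlan: int
    endpoints: list  # device hostnames

# Illustrative per-device config template; a real platform would
# render this from the data model itself.
CONFIG_TEMPLATE = (
    "interface Vlan{vlan}\n"
    " description {name}\n"
)

def render(service):
    """Render per-device configuration from the service model."""
    return {
        ep: CONFIG_TEMPLATE.format(vlan=service.vlan, name=service.name)
        for ep in service.endpoints
    }

svc = L3VpnService(name="acme-vpn", vlan=100, endpoints=["pe1", "pe2"])
configs = render(svc)
```

Changing the service offering then means editing the model and template at design time, while the run-time engine keeps deploying error-free configurations.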
A fast turn-around in the design phase delivers a quick turn-around of new services to the market. A fast and error-free run-time engine gives fast service delivery to customers.
It is also important to have a partner that guides the organization towards this way of working. The focus should be on training, on technical expertise to help implement the first services, and on working towards a model where you maintain and further develop the services internally.
At Data Ductus, we have helped clients around the globe to establish efficient in-house DevOps procedures. We’d be happy to share our experiences with you.
Assurance is dead. Long live orchestrated assurance.
Legacy service assurance procedures are not living up to today’s requirements for customer experience and service quality. Thus, we need to rethink service assurance, abandon practices with a strictly reactive focus on events and alarms, and instead start to think about what really matters. Simply renaming traps and syslog streams to “telemetry” will not help. Neither will dropping event and alarm information into a Big data lake.
Let us start by analyzing the underlying problems.
Disconnect between Service Fulfillment and Service Assurance
First of all, the service fulfillment/delivery and service assurance processes are disconnected. The delivery team provisions a service without knowing whether it really works. There is very little automated hand-over to the operations team on how to monitor the service. In many cases, the assurance team has to start from scratch, perhaps not even realizing the service exists until a customer calls and complains. Furthermore, the service discovery tools and inventories meant to “help” them understand the service are incomplete.
Sub-optimal activation testing
Frequently, services are not tested at delivery, which, as mentioned above, is a considerable problem. In many cases, no activation testing is carried out at all; it is left to customers to detect whether the service is working as expected. In other cases, a simple ping is done at service delivery, but that has very little to do with the customer experience. Furthermore, legacy testing techniques, when performed at all, often require manual and expensive field efforts. This, of course, is too slow and inefficient.
Very often, neither customer care nor the operations team has real insight into how the service is actually working.
A poor understanding of the end-user experience
Today, service assurance practices focus on resources: servers, applications and network devices. Accordingly, assurance data consists of log files and counters relating to these resources. This, however, has little to do with how services are working. You can have a fault on a device that is not affecting any customer and, conversely, many customer issues have nothing to do with a fault. Most under-performing services are due to a less-than-optimal configuration, and alarm or performance systems will not detect these problems.
The industry has begun to realize that service assurance is not living up to requirements. But rather than identifying the root cause and doing something about it, all too often we are looking for a free lunch instead: the Big data promise. You can’t just throw incomplete, low-level data into a Big data repository and expect to draw conclusions about service health.
Fear the mapping-machine
Unfortunately, Big data alone does not bring us closer to the goal. Calculating service health from low-level resource data is far from straightforward: the mapping function is simply not available in Big data frameworks, and for machine learning, the training sets that relate resource data to service health are lacking.
At Data Ductus, we work with technology partners to provide solutions which we believe bring us closer to a resolution. Our two product partners, Cisco and Netrounds, have defined a concept and implemented a design pattern called Orchestrated Assurance to address the underlying problems and move to service-focused assurance; see: http://orchestratedassurance.net
The principles are the following:
- Measure service metrics directly. Do not try to infer them from resource data.
- Use automated tests in every service delivery.
- Tie the orchestrator and assurance systems together so that the orchestrator automatically performs the testing and enables monitoring.
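A minimal sketch of these principles follows. All function names here are illustrative stubs (not a real orchestrator or Netrounds API): the point is the control flow, where delivery is only complete once the activation test has passed and monitoring has been handed over automatically.

```python
def provision(service):
    """Stub: push service configuration to the network."""
    return {"service": service, "state": "deployed"}

def run_activation_test(deployment):
    """Stub: in a real system this measures loss, latency and
    jitter end to end, directly at the service layer."""
    return {"passed": True, "latency_ms": 12.0}

def enable_monitoring(deployment, baseline):
    """Stub: hand over to continuous monitoring, seeded with the
    activation-test result as a baseline."""
    deployment["monitoring"] = {"baseline": baseline}
    return deployment

def deliver(service):
    """Orchestrated delivery: provision, test, then monitor.
    No hand-over to operations without a passed activation test."""
    deployment = provision(service)
    result = run_activation_test(deployment)
    if not result["passed"]:
        raise RuntimeError("activation test failed; do not hand over")
    return enable_monitoring(deployment, baseline=result)

delivery = deliver("l3vpn-acme")
```

Because the orchestrator drives both the test and the monitoring hand-over, the fulfillment/assurance disconnect described above simply cannot occur.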
We are always eager to learn from others and to share ideas. If you have comments or would like to assess potential joint initiatives, do not hesitate to get in touch.