Insights

IoT defined

People talk a lot about IoT these days. But they rarely seem to mean the same thing. Not surprisingly, this leads to some confusion. We recommend the definition proposed by the European Research Cluster on the Internet of Things (IERC Cluster SRIA 2015) and think important conclusions can be drawn from it.

A dynamic global network infrastructure
with self-configuring capabilities
based on standard and interoperable communication protocols
where physical and virtual “things”
have identities, physical attributes, and virtual personalities,
use intelligent interfaces,
and are seamlessly integrated
into the information network

The European Research Cluster’s definition clearly highlights that the “I” in the Internet of Things is essential. With an infrastructure of globally connected “things”, we have moved beyond Machine-to-Machine (M2M) communication.

The fact that these “things” come with unique capabilities (“personalities”), built-in intelligence and self-configuration capacities means they truly enable edge computing. And the edge to which your system delegates processing tasks may reside in a component that by nature belongs to a different system, a system that perhaps was not even available at design time.

The definition of IoT brings new architectural requirements

The conclusions you can draw from this IoT definition are far-reaching. Architectures, the organization of systems into components that relate to their environments and to one another, are severely impacted once components become intelligent and connected. IoT therefore introduces new requirements on software architectures. The impact is further discussed in IT goes IoT and Everything as a Service – in your cloud.

It is also a defining thought in our IoT workshops, which have a strong focus on modern architectures and how they can complement existing infrastructures.

At Data Ductus we specialize in helping companies and public organizations embrace these new IoT challenges and use them to drive change and seize new opportunities. Getting started is just a workshop or pilot project away.

IT goes IoT

Considering the speed with which IT landscapes are currently changing, how we manage the future is more important than the track records of our past. As of 2017, “things” connected to the internet outnumber the computers, tablets and phones we are used to. Thus, IT is frequently spelled IoT and the way we handle this shift will determine our future.

Onboarding new data models in an application has always been a challenge in software design. The ability to automate the inclusion of new connected devices, sensors, actuators or entire subsystems into a new service offering has become crucial. And the inclusion of “things” is not enough. Your new information models also consume the APIs of various cloud services. Combined, “things and clouds” equate to new opportunities. But they emerge faster than you can plan your next release. So, how do you cater for this? One way is to make onboarding data-driven, as the sketch below illustrates.
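The following minimal sketch shows one way to onboard a new device type from a declarative descriptor at run-time instead of waiting for a code release. The descriptor format and all names are invented for illustration; a real platform would use richer schemas and validation.

    import json

    # Hypothetical descriptor for a device type first seen at run-time.
    DESCRIPTOR = '''
    {
      "type": "soil-moisture-sensor",
      "fields": {"moisture": "float", "battery": "int"}
    }
    '''

    CASTS = {"float": float, "int": int}

    def build_parser(descriptor_json):
        """Create a message parser for a device type described declaratively."""
        spec = json.loads(descriptor_json)
        fields = spec["fields"]
        def parse(raw_message):
            return {name: CASTS[ftype](raw_message[name])
                    for name, ftype in fields.items()}
        return spec["type"], parse

    device_type, parse = build_parser(DESCRIPTOR)
    print(device_type, parse({"moisture": "41.5", "battery": "87"}))
    # -> soil-moisture-sensor {'moisture': 41.5, 'battery': 87}

New device types can then be shipped as configuration rather than code, which shortens exactly the release cycle the paragraph above worries about.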

Learning from others

The changes we face emphasize the importance of learning from others. In the IoT shift, “Enterprise IT” has a lot to learn from Telecom. After all, managing vast numbers of network components and rapidly onboarding new network services is what Telecom is all about. And this is quite similar to the onboarding of new, more or less smart, IoT devices or APIs. Bringing software development teams and teams with a telecom background together is therefore an ideal way forward when you make strategic decisions about the role of IoT and APIs in your digital transformation.

IoT platforms and Hybrid IT

In Gartner’s 2017 Emerging Technologies Hype Cycle, “IoT platforms” are approaching the peak of expectations. The tasks of these platforms are not simple. They have to join and combine the services of legacy IT infrastructures with the new and “agile” services introduced in the course of your digital transformation. They also have to incorporate artificial intelligence and machine learning into what will be your future business as usual. As a result, Hybrid IT and Hybrid Integration Platforms emerge as the next generation of hype. For examples, see the Telecom and Network Orchestration & Automation page on this site and how Data Ductus combined its Telco and software development skills to develop the IoT platform of E.ON’s smart home solution.

Agility beyond software development

IoT impacts the way we all work today. The new focus on service orientation that it requires leads to a re-invention of agile. Developers have used agile methods since the 1990s. But IoT also requires agility in business development to motivate software development efforts. Furthermore, to avoid the pitfalls of creating isolated IoT silos, you need a concept of agility and Continuous Delivery built into your infrastructure. For more on this topic, see Architecting agile and Agile teams need agile infrastructures.

Enabling the business development of your digital transformation

New service-oriented business models drive the demand for IoT. On the flip side, the capabilities of IoT devices enable and motivate the creation of new service-oriented businesses. Consequently, agile business development and agile system development become two sides of the same coin. The take-home from this? To meet the new demands, developers must adapt their architectures to the faster turnarounds of business development and embrace the concepts of hybrid IT.

At Data Ductus we specialize in helping companies and public organizations embrace the challenges posed by IoT and use them to drive change and seize new opportunities. Getting started is just a workshop or pilot package away.

Do you need expert help with an IoT challenge?


Everything as a Service – in your cloud

Software design and system development are impacted by the current cloud shift. For new software solutions, the ability to produce and/or consume APIs and related services is key. Whether you are building new applications or integrating existing ones, your teams must adapt. And the change is profound.

Monolithic architectures, and the waterfall models creating them, are not particularly popular these days, and for good reason. However, while decoupling in software architectures is fashionable buzz, much of software development still carries the mindset of monolithic dinosaurs. Project teams may make the proper, fashionable moves but continue to think inside their old boxes. If so, they fail to leverage the capabilities of their new infrastructures. And they introduce new risks.

APIs bring information hiding and separation of concerns to a new level

The principle of information hiding has dominated software engineering practices for many years. It recommends that software modules hide their implementation details from other modules to make them all less interdependent. In software architectures, you thereby achieve modularization and a separation of concerns. As a result, your software becomes easier to maintain and less vulnerable. These ideas are by no means new. In fact, they’ve been around since the 1970s.

To a certain extent, (cloud) service APIs represent yet another way to achieve this separation. But there is a difference between legacy enterprise application architectures, which internally encapsulate their modules from one another, and modern applications depending on the APIs of third parties. The chief architect of the legacy enterprise application made sure the modules were combined to meet overall objectives. If desired, techniques such as dependency injection could streamline cross-cutting concerns in otherwise encapsulated modules. In our new world, by contrast, the API consumer has no means to inject into, or influence, the inner workings of the APIs it calls.

This insight must also be considered in other software design aspects, especially when it comes to an application’s cross-cutting concerns.

Information security – a cross-cutting concern impacted by API consumption

Information security is an area that illustrates how APIs change the requirements on software design. Some examples:

  • Authorization. Commonly used design patterns group user permissions into roles based on the RBAC concept (Role Based Access Control). The thinking behind RBAC goes back to the 1980s. Rooted in their traditions, software developers often continue to think of access permissions in terms of RBAC even after they have moved their software to an API-consuming cloud among clouds. However, since there is no chief architect among clouds ensuring that roles are globally and equally understood, the RBAC concept is outdated. Instead, the notion of roles can become a severe security hazard. To orchestrate APIs in a secure fashion, we must externalize authorization decisions from the individual API producers and consumers. They should be moved to yet another service capable of determining who gains access to what, where, why, when and how. In its decision making, the authorization service, typically a Policy Decision Point (PDP), applies the information owner’s security policies. Other services can then query the PDP using Attribute Based Access Control (ABAC) to ensure they share information in a policy-compliant fashion only. A sketch of such a policy query follows this list.
  • Authentication. Before we can even think of authorizing users, we need to establish their identities. In the enterprise, this was typically done through a call to an enterprise directory service (LDAP or Active Directory). Among our clouds, however, there is no single enterprise directory in charge. Instead, we need yet another type of externalized service: a trusted Identity Provider which federates identities.

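To make the externalized authorization decision concrete, here is a minimal sketch of an API producer querying a PDP before releasing data. The REST-style endpoint and attribute names are invented for illustration; real deployments typically speak XACML or a policy engine’s own API.

    import requests

    # Hypothetical PDP endpoint; not a specific product's API.
    PDP_URL = "https://pdp.example.com/v1/authorize"

    def is_permitted(subject_attrs, action, resource_attrs):
        """Ask the external Policy Decision Point for an ABAC decision."""
        decision_request = {
            "subject": subject_attrs,    # e.g. department, clearance level
            "action": action,            # e.g. "read"
            "resource": resource_attrs,  # e.g. type and owner of the data
        }
        response = requests.post(PDP_URL, json=decision_request, timeout=2)
        response.raise_for_status()
        return response.json().get("decision") == "Permit"

    # The API producer enforces the decision it receives:
    if is_permitted({"department": "billing"}, "read", {"type": "invoice"}):
        print("release the data")
    else:
        print("deny the request")

Note that the policy itself lives with the PDP, so the information owner can change it without touching any of the producing or consuming services.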

New security protocols

Lastly, the way externalized security services pass identity and authorization information to consuming services needs to adapt to the needs of modern API consumption as well. Applications firing hundreds or thousands of queries to microservices per second cannot wait for each microservice to do its authentication and authorization in old ways. Instead, we need new carriers, such as JWTs (JSON Web Tokens), issued by our new trusted identity and authorization services, that we pass along with our calls, as sketched below.
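As an illustration, a microservice can verify such a token locally against the identity provider’s public key instead of calling the provider on every request. This minimal sketch uses the PyJWT library; the key file, audience and scope names are assumptions.

    import jwt  # PyJWT

    # Public key of the trusted identity provider (hypothetical file).
    IDP_PUBLIC_KEY = open("idp_public_key.pem").read()

    def handle_request(token):
        """Verify the caller's JWT locally, then check its claims."""
        try:
            claims = jwt.decode(
                token,
                IDP_PUBLIC_KEY,
                algorithms=["RS256"],       # never accept unsigned tokens
                audience="orders-service",  # hypothetical audience claim
            )
        except jwt.InvalidTokenError:
            return 401, "invalid or expired token"
        # The claims carry identity and authorization data issued upstream.
        if "orders:write" not in claims.get("scope", "").split():
            return 403, "insufficient scope"
        return 200, f"accepted order for {claims['sub']}"

Because verification is a local signature check, it adds microseconds rather than a network round trip per call.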

The fact that applications, in spite of their API dependence, rarely use such externalized security services in a consistent manner is a sign that many software developers and architects have not yet fully embraced the change that the API economy introduces. And this is the root cause of some of the security breaches we hear about in the news.

The importance of double loop learning

So, what can we learn from failures to consistently adapt to the requirements of new operational principles? Single loop learning may enable software developers to repeat popular buzzwords. To fully embrace change, they need to take yet another loop and internalize the overall objectives of the shift. Software developers and architects who stick to old habits may jeopardize information security.

If you are a buyer of IT consultancy services, make sure your partners in software design are double loop learners. If not, you should call Data Ductus.

Architecting agile

In software development, agility typically refers to iterative and collaborative methods of working. Agile methods bring “just-in-time” to software development. But without a supporting agile architecture, timely delivery will rarely happen. And for IoT, “just-in-time” is simply not good enough. Your IoT platform needs to be ahead of time.

Software Development Lifecycles (SDLC) are impacted as software development increasingly becomes agile through Continuous Delivery and DevOps schemes.

Of course, HOW you develop your software is of importance. Yet, WHAT you build is equally important. The need to easily adapt to change drives agility. And for that same reason, software architectures must cater for future change from day one.

For years, we have striven to decouple software functions to make them less dependent on one another, which is all good. Yet, once our software moves into clouds and/or incorporates increasing numbers of intelligent “things”, decoupling is not enough.

Our E.ON case study provides an example of an IoT platform built with such architectural agility in mind. We’d be glad to share the experience with you. In fact, we have already packaged our lessons learned into our IoT workshops.

IoT and edge computing

Pushing data processing to the edge, near the data source, becomes a necessity for efficient IoT platforms. And this, in turn, introduces a new type of dependency: “unknown unknowns”. If you design your software solution based on what can be pushed to the edge of currently known and connected “things” and the capabilities they offer, your edge computing becomes hard-coded and stale from day one.

Thus, our architectures must enable the detection of new capabilities and the generation of new services at run-time. We have to automate automation. We must embed the ability to cater for change not only in the software we build ourselves, but also account for change in the third-party services we may want to consume in the future. The sketch below illustrates the idea.
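A minimal sketch of run-time capability discovery, with invented names: devices announce what they can do, and the platform binds tasks to whatever capabilities are available right now rather than to device types fixed at design time.

    # Invented registry; real platforms would use a discovery protocol.
    class DeviceRegistry:
        def __init__(self):
            self._devices = []

        def announce(self, device):
            """Called when a new 'thing' joins and describes itself."""
            self._devices.append(device)

        def find(self, capability):
            return [d for d in self._devices if capability in d["capabilities"]]

    registry = DeviceRegistry()

    # A device unknown at design time announces its capabilities.
    registry.announce({"id": "edge-cam-17",
                       "capabilities": {"detect_motion", "count_objects"}})

    # The platform composes a service from what is available right now.
    for device in registry.find("count_objects"):
        print(f"delegating counting task to {device['id']} at the edge")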

IoT and Industrial Control Systems (ICS)

Legacy Industrial Control Systems (ICS) have some things in common with modern IoT platforms. They receive data points from large numbers of monitoring devices, process the information and act upon the results by sending instructions to device controllers. For this reason, industrial IT can potentially benefit from the rapidly growing availability of “smart devices” capable of monitoring and controlling industrial equipment. Yet, industrial IT experts have been hesitant. The requirements on “near real-time” are strict in ICS. If a controller is delayed by just a few milliseconds, the disaster may already be unavoidable. While native SCADA protocols are trusted to meet near real-time requirements, the internet protocols of IoT have (rightfully) been questioned.

Faster than real-time – ahead of time

The real-time objection is, however, becoming less relevant. For one, IoT protocols have become faster. But more importantly: the power you gain by collecting and analyzing vast amounts of data from (fairly cheap) standardized and componentized (IoT) actuators enables far more intelligent and proactive decision making.

A parallel illustrates the problem: airbags in cars inflate in your face in the event of an accident. The “real-time” requirement is literally a matter of life or death. If the sensor identifying a crash event triggers the inflation of the airbag a millisecond too late, it is useless. However, in modern cars, “IoT” adds vast amounts of additional data that help identify potential catastrophes ahead of time. The distance to other objects on the road, lane departure warnings, pedestrian detection and other “smart” features in modern cars help ensure we become less dependent on the airbag inflation control. In similar ways, our experience from industrial IoT shows that ICS and SCADA environments greatly benefit from the enhancements offered by modern IoT platforms. More data, more intelligence, more proactivity. Rather than being reactive in “real-time”, your systems become proactive ahead of time.

Our industrial kiln case study provides an example of IoT technology built with “near real-time” requirements in mind. We’d be glad to share the experience with you. In fact, we have already packaged our lessons learned into our IoT workshops.


System of systems, machine learning and AI

We have yet to design a system that passes the Turing test. But the machine learning that takes place today certainly means our AI driven machines are getting smarter by the day.

The sheer amount of Big Data, and the new levels of analytics achieved on data-in-transit as well as on data-at-rest, bring modern IoT solutions to levels of intelligence that were unheard of just a few years ago. Thus, understanding and using this intelligence is a must for companies and organizations that want to stay ahead of the curve.

Harnessing this in larger IoT infrastructures opens up yet further opportunities. “Systems of systems”, whereby the capabilities of AI machines are pooled into one powerful system, offer greater intelligence and performance. Enough to fool Turing? Well, if not today, potentially tomorrow.

IoT and AI for industrial IT

At Data Ductus we specialize in bringing together IoT, AI and smart-machines for our industrial clients. We help organizations assess the interoperability of existing solutions, define new infrastructure and ecosystem requirements, and ultimately combine the different elements of a system to deliver day-by-day improvements. And the results speak for themselves.

In industrial bakeries, we are freeing up time for people to develop new and exciting recipes as AI runs the ovens and ensures the best baking results. We orchestrate actuators in industrial kilns to communicate with an intelligent central system so wood is dried at the right temperature and to the millisecond. We have added analytical intelligence to already quite “smart” cameras to help minimize waste through benchmark learning in furniture factories. In mines, our software solutions help optimize and automate operations. Public water supply agencies use our solutions for early detection of water quality issues. And these are just a few examples.

A new industrial revolution

The conclusions drawn from these engagements with industrial clients: digital transformation revolutionizes not only the business models and go-to-market strategies of the producers we have worked with. It also profoundly changes the way they produce and the quality of their products. Machines that fail to learn in time, and production lines without any AI, increasingly become a business risk.

What machine learning does for machines, workshops do for people. As consultants, we are always keen and curious to learn from our clients. Therefore, we do our best to package these experiences into workshops relevant to others.

Digitalization transforming human performance

Digitalization changes the way humans act, live and work. Amazon has replaced thousands of stores but built a huge market for logistics companies. Doctors can diagnose patients faster thanks to online medical journals, while people carry out self-diagnosis using medical websites. Knowing the threats and opportunities for your organization and its customers is key to a successful digital transformation.


Illustration of new services made possible through digitalization.

Increased mobile capabilities, real-time data analytics, cloud-enabled services, AI, and social media integration, all contribute to new, innovative and often disruptive services.

If your company isn’t taking advantage of these, another company most certainly is. New competitors emerge and old, “stuck in their ways” vendors fade away. Therefore, companies have to reassess their markets and understand how digitalization impacts their customers and their ecosystem as a whole. These are some fundamental questions you need to answer:

  • What type of digital disruption(s) must our organization prepare for?
  • Who is preparing to take over if we don’t act?
  • When will this happen?
  • How big will the impact on our business be?
  • Where in our value chain does it hit?
  • What can we do to counter this?


Focus on human rather than digital performance

However, in the end, digitalization is all about digital services that improve human performance. It should be seen as a facilitator rather than the driver of change.

The best approach in helping organizations transform is therefore to ask: how can we improve stakeholders’ lives?

At Data Ductus we specialize in helping companies and public organizations embrace the challenges posed by digitalization and use it to drive change and seize new opportunities. Getting started is just a workshop away.


What DevOps teams can learn from Dog Agility

Continuous Delivery is the promise on which DevOps too often fails to deliver. The extreme programming movement started 20 years ago with the explicit goal of improving software quality through a “just-in-time” concept for software development, also known as agile development. But still, according to last year’s edition of the often-quoted Standish CHAOS report, only 29% of software projects deliver on time and on budget.¹ What can we learn from this?

Lessons learned from many years of Standish reports offer some obvious conclusions.

  • Winning hand – these are the software project characteristics which reach a success level amounting to 81% in the 2016 Standish CHAOS report. Their characteristics:
    • Small projects
    • Agile methods
    • Skilled teams
  • Losing hand – these are the software projects running the greatest risks with a failure rate of 64%:
    • Large projects
    • Waterfall methods
    • Unskilled teams

So the recommendation is clear. The DevOps movement is on the right track.

1. Presentation of conclusions from the 2016 Standish CHAOS report.

But agile development is not enough – learn from dog agility

A dog’s perspective on agility.

“Agility” means different things to dogs and software developers. Yet, there are things we can learn from agile dogs.

Canine agility is a discipline which requires a thorough understanding between dog and trainer. A dog’s trainer leads and gives instructions while the dog runs a course with obstacles. But the trainer is not allowed to physically help the dog by any other means than by giving direction. The dog has to do its problem solving and running all by itself.

In agile software development, product owners play a similar role in relation to agile teams. The product owner defines the course with obstacles, often referred to as a sprint backlog. The team then has to do its problem solving all by itself. In both sports, the athletes (the dogs and the DevOps teams) are unable to navigate the course without their handler (or product owner).

Picture published under a Wikimedia Commons license. For details about the copyright owner, see Golden Retriever agility teeter.

No roadmap, no performance

On the agile dog course it becomes obvious that a dog cannot find the track by itself. In agile software development, the importance of the handler’s role sometimes seems to be forgotten. Neglected product management is a common reason for project failures. Indeed, in many software engineering environments, a principal uncertainty exists with regard to product management.

Illustration of an agile development model.

Agile came about to enable software developers to adapt to changing requirements. Rather than making detailed plans up front, as in old waterfall models, agile development promised the ability to adapt and to onboard new feature requirements, if not on-the-fly, then at least “on-the-sprint”. In a DevOps culture, aiming at delivering a potential new release on a daily basis, the focus on the ability to adapt is even stronger.

With agile, the old-style product manager, with long-term plans detailed in Market Requirements Documents (MRD) and Product Requirements Documents (PRD), went out of fashion. In some engineering organizations this created a cultural gap. There is a common misunderstanding that agile somehow makes roadmap planning redundant, that we just change the roadmap and its vision with every sprint iteration. Yet, experience has proven that without an understanding of overall objectives, DevOps brings no value. In the Standish report, the skills of a project’s executive sponsors, the strategy and roadmap owners, are a key factor that often makes the difference between success and failure.

Hierarchy of objectives

A hierarchy of objectives sets the priorities for the daily prioritization of tasks:

  • The strategy defines the overall vision(s) and objective(s).
  • The product roadmap outlines high-level goals for upcoming product versions, aligned with the strategic vision.
  • The release plan juggles the three factors that can be altered to meet those goals: time (the planned release date), budget (resources) and feature scope. If the feature scope is too large, you can move the release date or add resources (budget) to get more done in a shorter time. In planning upcoming sprint backlogs, the product owner calibrates these factors over and over in relation to roadmap goals.
  • The sprint backlog is where the product owner puts the prioritized tasks for the DevOps team to focus on.

While this may seem obvious and simple, real-world software projects often fail to establish and communicate this hierarchy of objectives. As a result, DevOps teams risk running astray like agile dogs without a handler.

DevOps needs more

DevOps also adds further requirements on managerial control. For instance, in your DevOps vision, you have to take Delivery & Support into account. For one, agile teams need agile infrastructures, which we have written about in our ITSM section. Furthermore, your application lifecycle management must embed the support chain, from 1st line through 2nd line to 3rd line. This also impacts your DevOps organization.

At Data Ductus, we find these topics highly interesting. We are always happy to share our experiences with others. If you are interested, we’d be glad to present our DevOps workshop program. All dogs are welcome!

Services

Data Ductus contributes expertise and skills in service areas such as the ones listed below. In what ways may we help you? Try us! Get in touch and let us discuss your next project!

Packaged services

Our packaged services are clearly scoped offerings intended to achieve a measurable outcome for a clearly defined business challenge, in a given context, for a specific audience.

Case studies: IT and IoT

The case studies are examples of what we did for other clients. Click Download full report to see complete details about each project. Do not hesitate to contact us for more information or references.

Drying wood using IoT

Valutec is one of Europe’s leading suppliers of industrial timber kilns. With operations in Sweden, Finland, Russia and Canada, the company has annual sales of around 30M USD and has delivered over 4,000 kilns to the market.

The challenge

Assist Valutec in the implementation of a long-term smart control system for industrial kilns that is easy to use, can withstand the harsh kiln environment, and can deliver optimal timber drying conditions for different wood types and drying routines.

The solution

A robust hardware solution with Programmable Logic Controllers (PLCs) connected to temperature sensors, heat coils, fans, etc. A PC monitors and controls all PLCs. The PLCs and the PC are connected to a PROFIBUS or PROFINET network. The PC software uses OPC (Open Platform Communications) standards to communicate with the PLCs, along the lines of the sketch below.
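As a rough illustration of this kind of supervisory communication (not Valutec’s actual implementation), here is a minimal OPC UA read using the python-opcua library; the endpoint address and node identifier are invented.

    from opcua import Client  # python-opcua (FreeOpcUa)

    # Hypothetical OPC UA endpoint exposed by a kiln-site PLC gateway.
    client = Client("opc.tcp://192.168.0.10:4840")
    client.connect()
    try:
        # Node ids depend on the server's address space; this one is invented.
        temperature = client.get_node("ns=2;s=Kiln1.Zone1.Temperature")
        print("kiln temperature:", temperature.get_value())
    finally:
        client.disconnect()

In the same way, the supervisory PC can write set-points back to the PLCs, keeping the hard real-time control loop inside the controller while the PC handles monitoring and optimization.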

How we did it

We worked closely with Valutec’s in-house team, utilizing several open source and commercial add-ons from Valutec partners such as Siemens. A test system was used as a beta site to perform incremental updates and evaluate the results before solutions were deployed at live kiln sites.

Benefits

Thanks to the simple and intuitive user interface, a kiln can be configured locally to fit many different kiln types and suit individual wood drying requirements and processes. The system uses simulation software to fine-tune its operational settings in order to deliver optimal wood-drying results with minimal energy usage.

E.ON Delivers a Game-Changing Smart Home Solution to Customers

E.ON is one of the largest companies in the European energy markets. The company serves many millions of people in Europe and beyond.

The challenge

Since market deregulation in the 1990s, it’s been easy for consumers to switch between suppliers. Electricity companies need to offer customers a much better service than simply providing electricity. Our challenge was to help E.ON develop a way to do this.

The solution

100Koll is an app-based control system that enables customers to manage electricity in the home. The smart home solution is built on a modern IoT-based service platform. The architecture is designed for continued agile service and business development.

How we did it

Working closely with enterprise architects at E.ON, we developed a new platform for IoT services. We then trialed the service with 10,000 E.ON customers, before rolling out nationally across Sweden.

Benefits

E.ON offers customers added value through Smart Home functionality, transparent usage and billing, and potential energy savings. 100Koll meets the immediate needs of the market while catering for future IT and market developments.