Competing With Small Organisations (Using DevOps & Agile Thinking)

The advent of public cloud services has enabled smaller companies to compete more effectively against larger enterprises. This article discusses that shift and how DevOps and agile thinking can help large companies compete with small organizations.

The Agile Advantage

Smaller organizations tend to be more agile than their larger counterparts. When running a range of public and private cloud services, many large companies separate their business by technical function or capability in order to achieve scalability. Networking, capacity, operations, and application services are often managed by separate teams.

For large scale capabilities to work effectively it is important to have properly defined business interfaces that are measured. We need KPIs that are specific to our own processes, so that taken together they show a picture of our organization rather than only a picture of how we implement standard frameworks.

An automatic advantage that small companies have is that they often work end to end, covering everything they need to provide a service. That is easier to do when you do not have several teams involved in providing a single service. In a large organization, each team often has its own business interests. If an organization measures each business unit only by business growth or financial metrics, it can be challenging to get people to work together. The KPIs need to help here, and they need to be agreed at a business level between teams.

In a larger organization you still need to be able to see things end to end. Someone needs to be held responsible for the synchronization of all of the working parts that are needed to provide a service, and whoever is tasked with that needs visibility into the components that make up the full service. This could well be the job of an enterprise architect.

DevOps

People often lose sight of how to use KPIs, and of what DevOps really means. DevOps as a practice is capable of producing some crazy efficiency benefits – take a look at the Puppet State of DevOps Report and you will see clear examples.

DevOps is not only about application development – it can be applied to infrastructure services too. Since the advent of virtualization and mechanisms such as infrastructure as code, the differences between working with software and hardware are less pronounced. In both cases we need proper change and release management.
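
To make that concrete, here is a minimal sketch (Python with the PyYAML library; the specification format, field names and allowed values are made up for illustration) of treating an infrastructure definition as code: the specification lives in version control and an automated check gates its release, just as a failing unit test would gate an application change.

```python
# A minimal sketch of infrastructure as code going through the same change and
# release discipline as application code. The specification format, field names
# and limits below are hypothetical examples, not a real schema.
import sys

import yaml  # PyYAML

REQUIRED_FIELDS = {"name", "cpu_cores", "memory_gb", "environment"}
ALLOWED_ENVIRONMENTS = {"dev", "test", "prod"}


def validate_server_spec(path: str) -> list:
    """Return a list of problems found in a version-controlled server specification."""
    with open(path) as f:
        spec = yaml.safe_load(f) or {}
    problems = []
    missing = REQUIRED_FIELDS - set(spec)
    if missing:
        problems.append(f"missing fields: {sorted(missing)}")
    if spec.get("environment") not in ALLOWED_ENVIRONMENTS:
        problems.append(f"unknown environment: {spec.get('environment')!r}")
    return problems


if __name__ == "__main__":
    issues = validate_server_spec(sys.argv[1])
    if issues:
        print("\n".join(issues))
        sys.exit(1)  # fail the release pipeline, just as a failing test would
    print("server specification OK")
```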

Below are some important things that all large companies should consider.

Have Properly Defined KPIs

Not just the KPIs that come with ITIL and incident management. If you read my article on Risk Analyzing BPMN you will realize that processes carry a lot of potential risks. The way you mitigate some of those operational risks is to put KPIs in place to monitor potential hot spots. The KPIs recommended by specific frameworks such as ITIL will only get you so far because they are generic.

For example, if a process step is to deliver server hardware to a data centre, and the next step is to set it up, there is potentially a risk around the hardware delivery – either it not happening or not happening on time. It would make a lot of sense to measure and monitor delivery time. This should be a KPI.

Automating KPIs

If we are measuring time to deliver hardware, that KPI should exist as part of a dashboard. It is essential to automate KPIs as far as possible, with some kind of systems integration. In our example, if you have to manually track every delivery it is time consuming and carries a risk of human error. In a modern world there is not much excuse for not automating – tools exist to make it very easy. There may be some manual tasks that need to be performed by a person, but when that happens we can make it easy for them to receive the task and mark it complete as part of an automated system, rather than relying on one person to talk to another. Doing business in an email inbox is an outdated practice, and it does not scale well.
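
As a rough sketch of what that automation could look like (Python, with made-up record fields and a hypothetical five-day delivery target), the calculation below is the kind of thing a systems integration could run over every delivery record and feed straight into a dashboard, instead of someone tracking deliveries by hand.

```python
# A minimal sketch of automating the hardware-delivery KPI instead of tracking it by hand.
# The record fields and the five-day target are hypothetical; in practice the records
# would come from an integration with the ordering or ticketing system and the result
# would be pushed to a dashboard rather than printed.
from datetime import date
from statistics import mean

TARGET_DAYS = 5  # hypothetical agreed delivery target

deliveries = [
    {"order_id": "HW-001", "ordered": date(2019, 3, 1), "delivered": date(2019, 3, 4)},
    {"order_id": "HW-002", "ordered": date(2019, 3, 2), "delivered": date(2019, 3, 12)},
]


def delivery_kpis(records):
    """Return the average delivery time in days and the share delivered within target."""
    lead_times = [(r["delivered"] - r["ordered"]).days for r in records]
    on_time = sum(1 for days in lead_times if days <= TARGET_DAYS)
    return {
        "average_days": mean(lead_times),
        "on_time_ratio": on_time / len(lead_times),
    }


print(delivery_kpis(deliveries))  # {'average_days': 6.5, 'on_time_ratio': 0.5}
```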

Automating Services

When we are designing services we should be thinking hard about how to automate them. For a services company such as Tieto, I think it's important to have a balance here – because although we need to be able to scale and automate infrastructure in much the same way as companies such as Microsoft and Google do, we also need to maintain a customer connection. In designing systems we need an automation strategy, and we need to ask ourselves some questions:

  • What do we automate? Deciding which tasks and services to automate is important to ensure our customers still have a personal touch, whilst at the same time ensuring that the tasks which need no interaction are handled quickly. For example, password change is a no brainer for automation – as might be server creation – but what about a platform migration? It's complex, and a customer needs human interaction to help them feel more comfortable with the process.
  • What is our automation policy? Some things are too costly to automate. A password change, for example, is simple, repetitive work with a minimum of interaction, and automating it usually provides great benefit. A more complex system that is used less frequently may not even cover the cost of automating it. Deciding clearly where automation is a good idea saves time (see the sketch after this list).
  • How do we approach the automation of legacy infrastructure? This is part of the automation policy, but it is worth a special mention.
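
Here is a rough sketch of the kind of policy calculation I mean (Python, with entirely hypothetical figures): compare the yearly cost of doing a task manually with the cost of automating it, and the difference between the frequent password-change case and the rare, complex case becomes obvious.

```python
# A rough sketch of an automation policy check: how long until automating a task
# pays for itself? All figures are hypothetical illustrations.
def payback_months(runs_per_year: int, minutes_per_run: float,
                   hourly_cost: float, automation_cost: float) -> float:
    """Months until the automation investment covers its own cost."""
    yearly_manual_cost = runs_per_year * (minutes_per_run / 60) * hourly_cost
    if yearly_manual_cost == 0:
        return float("inf")
    return automation_cost / (yearly_manual_cost / 12)


# Frequent, simple task (password changes): pays back in about three months.
print(round(payback_months(runs_per_year=2000, minutes_per_run=10,
                           hourly_cost=60, automation_cost=5000), 1))
# Rare, complex task (a platform migration step): roughly 500 months, so probably not worth it.
print(round(payback_months(runs_per_year=4, minutes_per_run=120,
                           hourly_cost=60, automation_cost=20000), 1))
```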

Enabling Play

As a simple example of automation, you can use Microsoft Flow to manage approvals. You can tie this into a SharePoint list and then use Power BI to consume the information and create nice dashboards. This opens up a number of other opportunities for you around analytics.
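
As a hedged sketch of the dashboard end of that chain, the Python example below pushes an approval outcome into a Power BI streaming dataset using the push URL Power BI provides when you create one. The URL placeholder and column names are hypothetical, and the payload shape (a plain JSON array of rows) follows the sample code Power BI generates for streaming datasets, so verify it against your own dataset's API info; treat this as a sketch rather than a definitive integration.

```python
# A hedged sketch: push an approval outcome into a Power BI streaming dataset so a
# dashboard updates without anyone copying data around. The push URL placeholder and
# column names are hypothetical, and the payload shape (a JSON array of rows) follows
# the sample code Power BI generates for streaming datasets.
from datetime import datetime, timezone

import requests

PUSH_URL = "https://api.powerbi.com/beta/<workspace>/datasets/<dataset-id>/rows?key=<key>"


def push_approval(request_id: str, approved: bool) -> None:
    """Send one approval decision as a row to the streaming dataset."""
    row = {
        "request_id": request_id,
        "approved": approved,
        "decided_at": datetime.now(timezone.utc).isoformat(),
    }
    response = requests.post(PUSH_URL, json=[row], timeout=10)
    response.raise_for_status()


# Call this from whatever handles the Flow/SharePoint approval outcome, for example:
# push_approval("REQ-1024", approved=True)
```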

Smaller companies tend to leverage such advances in technology far more easily than larger companies do. When I worked in a smaller company, playing was easy. In a larger company it takes time to get anything done, especially as internal work tends to get de-prioritized. Not all goals in a large company should be customer related.

Security has a part to play here too. Smaller companies trust their teams. Whilst more people means more risk, as I have said before in my Information & Security Thinking blog, being too restrictive leads people to find alternative solutions for things.

It also demotivates passionate people when they have to jump through many hoops to do simple things, and it puts the business at a disadvantage. If an organization disables the use of Microsoft Flow because of its potential for abuse, it is also disabling the possibility to innovate, grow and create some fantastic things.

Security needs to be more about enabling people and making them aware than about restricting them.

Zero Click Deployment

To truly achieve scalability you should be asking the question “how do we achieve zero click deployment?”. By this I mean that operations has to do nothing because everything is automated. Whilst it's true that in some cases this is not possible because of a need for manual steps, the closer you get to this goal, the more efficient and scalable your systems become.

I have seen many people think that single click deployment on the service provider side is enough. It is very different from zero click. I have seen teams script deployments of services very well – getting to the point where they only need to create an XML config file and run the script against it. It's very good to be able to get to that level – but it still requires a person to sit down and do manual work. Even if it only takes 15 minutes, this accumulates over time.

If we have a system like ServiceNow in the background that customers interact with directly, we should look at creating mechanisms that get rid of the manual configuration. In doing so we are also getting rid of an unnecessary communication overhead; an unnecessary point where things can be miscommunicated, and where resources are needed.
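
As a minimal sketch of that idea (Python, with hypothetical request fields and XML element names), the example below generates the deployment configuration directly from the payload of a customer's service request, so nobody has to sit down and write the config file by hand.

```python
# A minimal sketch of removing the last manual step: build the deployment config
# directly from the customer's service request. The request fields and XML element
# names are hypothetical; in practice the payload would arrive via an integration
# (for example a ServiceNow business rule calling a webhook).
import xml.etree.ElementTree as ET


def build_deployment_config(request: dict) -> bytes:
    """Turn a service request payload into the XML config the deployment script expects."""
    root = ET.Element("deployment")
    ET.SubElement(root, "service").text = request["service_name"]
    ET.SubElement(root, "environment").text = request["environment"]
    ET.SubElement(root, "size").text = request["size"]
    return ET.tostring(root, encoding="utf-8", xml_declaration=True)


request = {"service_name": "web-portal", "environment": "prod", "size": "medium"}
config = build_deployment_config(request)
# Hand the generated config straight to the existing deployment script; nobody needs
# to write it by hand, so "one click" becomes zero clicks.
print(config.decode("utf-8"))
```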

But what about the people?

The world changes, and roles need to change with it. If we take away the need for that one click, we free up a resource to focus on things that provide more value to our customers. It doesn't necessarily mean we need to reduce resources; we are enabling our existing people to focus on value. In implementing DevOps we are abstracting away from technical minutiae and looking more at the things that really matter. Of course automated environments can also go wrong. Roles transform over time.

Where does Enterprise Architecture Fit Into This?

Silos need to be broken down – this means more than just telling one team they need to talk to another – it means aligning goals, objectives and working practices.

We need to define proper interfaces between business units and we need to make this all traceable & measurable. We have to enable innovation. Our key systems should have standard interfaces that we can consume information from.

If a finance system uses a 30 year old interface as the only mechanism for getting information out, we should use some imagination – maybe we can do something with Robotic Process Automation (RPA).

All of this needs to be part of a thought-out strategy. We need to balance the advantages carefully against the risks.

Summing it up

I do not advocate a totally open approach where we allow everyone to do everything in larger companies; risk does of course need to be considered, because as our headcount goes up, so does the risk of mistakes. I do ask that security teams think carefully about the implications of denying things. A company needs a level of trust in its employees.

DevOps and agility in business are essential. This only happens if there is a level of transparency, and if by implementing automation we enable our employees to bring business value to an enterprise rather than being caught up in a mundane, security-restricted environment.

Architecture Languages, Standards And Frameworks

A discussion of the different architecture languages, standards and frameworks I like to use in conjunction with each other when providing large scale IT & managed services.

My goal in these first posts is to introduce some of the basic ideas and concepts that we will build upon in later posts. I would love to present things such as the TOGAF ADM and the full ArchiMate model here, but they are copyrighted so I can't reproduce them for now.

Modelling Languages

There are three industry standard mechanisms I normally use when modelling architecture. Although you may choose to use others, the mechanisms below give very good coverage of the different aspects of architecture, so when I talk about the role of architect within our company, these are the languages I expect an architect to be able to communicate in.

An Example Implementation & Migration View in Archimate
  • ArchiMate – A modelling language that allows us to easily bring together different aspects of architecture (Strategic, Business, Application, Technology, Physical, Implementation & Migration, and Motivational). It's important we understand how the business is motivated, how it is realized, how our services are constructed, and how these things connect together with both the applications and the physical world below them. If we provide consistent architectures and have discipline, most ArchiMate tools can provide invaluable business intelligence – I will blog on this further at a later date. ArchiMate perfectly complements TOGAF. Read more here.
  • BPMN – ArchiMate is good, but when it comes to understanding how a complex set of tasks is performed with multiple stakeholders, BPMN shines – what would be done over several views in ArchiMate can be done on a single page with BPMN. It is designed to model processes that look much more like traditional flow diagrams within swim lanes. I normally align these more in-depth BPMN processes with ArchiMate – for example, in ArchiMate I model the process, with the different business events that align to the BPMN model, and normally assign the different roles and actors – but the actual steps happen within the BPMN model. Read more here.
  • UML – Very good for low level data modelling and class modelling, and it has been around since the beginning of time. In EA I tend not to use it much, because at a high level I can represent data objects and their interactions with ArchiMate – but UML still has its uses for those who focus more on breaking information down into detail and those with a more application based focus. Read more here.
An Example of BPMN

Project & Architecture Governance

  • Scaled Agile Framework (SAFe) – I like this framework because it is relatively simple to use and implement, and it provides us with an agile development methodology which supports a movement towards DevOps and the continuous delivery of value. Read more here.
  • The Open Group Architecture Framework (TOGAF). A lot of people have said to me that they feel this is a heavy framework that is monolithic and too documentation focused. It doesn’t have to be this way; we can adapt things to be fit for purpose. I think the framework nicely captures the things we need to consider in architecture. Read more here.

For me, a good working practice is actually a hybrid of these two mechanisms – because we need a flexible methodology that covers all the key areas but delivers continuous value.

ISO 42010

This is the international standard for architecture description – and it's one of my favorites. When I first read it I was very excited, because it formally said a lot of things I had been standing on my soapbox about for years. It talks about how to describe architecture and covers core needs such as concerns management, and how these need to map onto an architecture solution that considers its stakeholders. It's under 100 pages long and a recommended read for anyone who is serious about doing architecture.

The heart of ISO 42010

Adding Additional Value

This wouldn’t be complete without mentioning IT4IT and ITIL.

  • IT4IT – Provides a reference architecture for managing the business of IT. At its core is the IT Value Chain, and it is very much focused on providing value to the business. Read more here.
  • IT Infrastructure Library (ITIL) – A set of practices for IT service management – very well known, and very well supported. It can be used in conjunction with IT4IT. I do not know a good resource for free information on ITIL – there is a lot of material out there though.

Summing It Up…

There are of course many standards that are equally as good as the ones I show above, but these are the ones I have found work well together, and the ones I like to have implemented together when I have a choice.

ArchiMate is designed with ISO 42010 in mind, and it's by The Open Group, who also produce TOGAF and IT4IT, so there is a fair amount of alignment between these. SAFe and TOGAF require a bit of effort to establish good working practices, and IT4IT quite nicely facilitates ITIL, which is very much an industry standard.

As always, feel free to comment on my post.