The advent of public cloud services has enabled smaller companies to compete more effectively against larger enterprises. This article discusses that phenomenon, and how DevOps and agile thinking can help large companies compete with smaller organizations.
The Agile Advantage
Smaller organizations tend to be more agile than their larger counterparts. When running a range of public and private cloud services, many companies separate their business in terms of technical function or capability in order to achieve scalability. Networking, capacity, operations, and application services are often managed by separate teams.
For large-scale capabilities to work effectively, it is important to have properly defined business interfaces that are measured. We need KPIs that are specific to our processes, so that together they paint a picture of our organization rather than only a picture of how we implement standard frameworks.
An automatic advantage that small companies have is that they often work end to end, covering everything they need to provide a service. It's easier to do when you do not have several teams involved in providing a single service. In a large organization, each team often has its own business interests. If an organization measures each business unit only by growth or financial metrics, it can be challenging to get people to work together. The KPIs need to help here, and to be agreed at a business level between teams.
In a larger organization you still need to be able to see things end to end. Someone needs to be held responsible for the synchronization of all the working parts needed to provide a service, and whoever is tasked with that needs visibility into the components that make up the full service. This could well be the job of an enterprise architect.
People often lose sight of how to use KPIs, and of the meaning of DevOps. DevOps as a practice can produce remarkable efficiency benefits – take a look at the Puppet DevOps Report and you will see clear examples.
DevOps is not only about application development – it can be applied to infrastructure services too. Since the advent of virtualization and mechanisms such as infrastructure as code, the differences between working with software and hardware have become less pronounced. In both cases we need proper change and release management.
Below are some important things that all large companies should consider.
Have Properly Defined KPIs
Not just KPIs around ITIL and incidents. If you read my article on Risk Analyzing BPMN you will realize that processes carry a lot of potential risks. One way to mitigate some of those operational risks is to put KPIs in place to monitor potential hot spots. The KPIs recommended by specific frameworks such as ITIL will only get you so far, because they are generic.
For example, if a process step is to deliver server hardware to a data centre, and the next step is to set it up, there is potentially a risk around hardware delivery – either it not happening, or not happening on time. It would make a lot of sense to measure and monitor delivery time. This should be a KPI.
If we are measuring time to deliver hardware, the KPI should exist as part of a dashboard. It is essential to automate KPIs as far as possible, with some kind of systems integration. In our example, if you have to manually track every delivery it's time consuming and there's a risk of human error. In the modern world there's not much excuse for not automating – tools exist to make it very easy. There may be some tasks that must be performed by a person, but when that happens we can make it easy for them to receive the task and mark it complete as part of an automated system, rather than relying on one person to talk to another. Doing business in an email inbox is an outdated practice, and doesn't scale well.
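As a rough illustration of what automating such a KPI might look like, here is a minimal sketch in Python. The delivery records and field names are made up for the example; in reality this data would be fed in by a systems integration rather than hard-coded:

```python
from datetime import datetime

# Hypothetical delivery records: (promised date, actual delivery date)
deliveries = [
    ("2023-01-10", "2023-01-09"),
    ("2023-01-15", "2023-01-15"),
    ("2023-02-01", "2023-02-05"),
]

def on_time_rate(records):
    """Percentage of deliveries arriving on or before the promised date."""
    parse = lambda s: datetime.strptime(s, "%Y-%m-%d")
    on_time = sum(1 for promised, actual in records
                  if parse(actual) <= parse(promised))
    return 100.0 * on_time / len(records)

print(f"On-time delivery KPI: {on_time_rate(deliveries):.1f}%")
```

Once the records flow in automatically from the delivery system's API, the KPI needs no human tracking at all and can sit on a dashboard as-is.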
When we are designing services we should be thinking hard about how to automate them. For a services company such as Tieto, I think it's important to strike a balance here – because although we need to be able to scale and automate infrastructure in much the same way as companies such as Microsoft and Google do, we also need to maintain a customer connection. In designing systems we need an automation strategy, and we need to ask ourselves some questions:
What do we automate? Deciding which tasks and services to automate is important to ensure our customers still have a personal touch, while ensuring that the tasks which need no interaction can be handled quickly. For example, password change is a no-brainer for automation – as might be server creation – but what about a platform migration? It's complex, and a customer needs human interaction to help them feel comfortable with the process.
What is our automation policy? Some things are too costly to automate. A password change, for example, is simple, repetitive work with a minimum of interaction, and automating it usually provides great benefit. A more complex system that is used less frequently may never cover the costs of automating it. Deciding clearly where automation is a good idea saves time.
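One way to make that decision concrete is a simple payback calculation. The sketch below uses purely illustrative figures, not a real costing model, but it shows why the password change is a no-brainer while a rare, complex job may never pay back:

```python
def automation_payback_months(build_cost_hours, runs_per_month,
                              manual_minutes_per_run,
                              maintenance_hours_per_month=0):
    """Months until an automation repays its build cost.

    Returns None if the task never pays back (maintenance
    outweighs the time saved). All inputs are effort in hours
    or minutes; figures are assumptions for illustration.
    """
    saved_per_month = runs_per_month * manual_minutes_per_run / 60.0
    net_saving = saved_per_month - maintenance_hours_per_month
    if net_saving <= 0:
        return None
    return build_cost_hours / net_saving

# Password resets: cheap to build, run constantly -> fast payback
print(automation_payback_months(40, 400, 10))
# Rare, complex migration: never pays back -> None
print(automation_payback_months(300, 1, 120, 5))
```

A policy could then be as simple as: automate anything that pays back within, say, six months.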
How do we approach the automation of legacy infrastructure? This is part of the automation policy, but is worth a special mention.
As a simple example of automation, you can use Microsoft Flow to manage approvals. You can tie this into a SharePoint list and then use Power BI to consume the information and create nice dashboards. This opens up a number of other opportunities around analytics.
Smaller companies tend to leverage such advances in technology much more easily than larger companies do. When I worked in a smaller company, playing was easy. In a larger company it takes time to get anything done, especially as internal work tends to get de-prioritized. Not all goals in a large company should be customer related.
Security has a part to play here too. Smaller companies trust their teams. While more people means more risk, as I have said before in my Information & Security Thinking blog, being too restrictive leads people to think of alternative solutions for things.
It also demotivates passionate people when they have to jump through many hoops to do simple things, and it puts the business at a disadvantage. If an organization disables the use of Microsoft Flow because of its potential for abuse, it is also disabling the possibility to innovate, grow and create some fantastic things.
Security needs to be more about enabling people and making them aware than about restricting them.
Zero Click Deployment
To truly achieve scalability you should be asking the question "how do we achieve zero-click deployment?". By this I mean operations has to do nothing, because everything is automated. While it's true that in some cases this is not possible because of a need for manual steps, the closer you get to this goal, the more efficient and scalable your systems become.
I have seen many people think that single-click deployment on the service provider side is enough. It is very different from zero click. I have seen teams script deployments of services very well – to the point where they only need to create an XML config file and run it. It's very good to get to that level, but it still requires a person to sit down and do manual work. Even if it takes only 15 minutes, that accumulates over time.
If we have a system like ServiceNow in the background that customers interact with directly, we should look at creating mechanisms that remove the manual configuration. In doing so we are also removing an unnecessary communication overhead – an unnecessary point where things can be miscommunicated, and where resources are needed.
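As a sketch of what removing that manual step might look like, the function below renders an XML deployment config directly from an order payload – the kind of JSON a ticketing system could post to a webhook. The field names and config structure here are assumptions for illustration, not any real product's schema:

```python
import xml.etree.ElementTree as ET

def build_deploy_config(order):
    """Render a deployment config straight from an order payload,
    removing the manual step of hand-writing the XML file."""
    root = ET.Element("deployment")
    for field in ("service_name", "environment", "size"):
        ET.SubElement(root, field).text = str(order[field])
    return ET.tostring(root)

order = {"service_name": "web-frontend", "environment": "prod", "size": "medium"}
config = build_deploy_config(order)
print(config.decode())
```

From there, the same pipeline a human would have triggered manually can consume the generated file – the person, and their fifteen minutes, drop out of the loop.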
But what about the people?
The world changes, and roles need to change with it. If we take away the need for that one click, we free up a resource to focus on things that provide more value to our customers. It doesn't necessarily mean we need to reduce resources; we are enabling our existing people to focus on value. In implementing DevOps we are abstracting away from technical minutiae and looking more at the things that really matter. Of course, automated environments can also go wrong. Roles transform over time.
Where does Enterprise Architecture Fit Into This?
Silos need to be broken down – this means more than just telling one team they need to talk to another – it means aligning goals, objectives and working practices.
We need to define proper interfaces between business units and we need to make this all traceable & measurable. We have to enable innovation. Our key systems should have standard interfaces that we can consume information from.
If a finance system uses a 30-year-old interface as the only mechanism for getting information out, we should use some imagination – maybe we can do something with Robotic Process Automation (RPA).
All of this needs to be part of a thought-out strategy. We need to balance the advantages carefully against the risks.
Summing it up
I do not advocate a totally open approach where we allow everyone to do everything in larger companies – risk does of course need to be considered, because as headcount goes up, so does the risk of mistakes. I simply ask that security teams think carefully about the implications of denying things. A company needs a level of trust in its employees.
DevOps and agility in business are essential. This only happens if there is a level of transparency, and if by implementing automation we enable our employees to bring business value to an enterprise rather than being caught up in a mundane, security-restricted environment.
I get models and diagrams presented to me in many different forms and levels of quality. In this blog I want to talk about improving ArchiMate model quality by telling a story with your ArchiMate views, and cover some other basics.
Telling A Story
When I produce a new diagram I will tell a story with it – I will walk through the diagram and try to put it in a number of sentences, and I will think about the value that the view is trying to express to its stakeholders. I will show a fictional example of a motivational view that looks similar to what a junior architect may produce:
What it effectively says:
“We have a requirement for data protection, identity management, and ease of access. We are driven by a need for security services for Microsoft cloud. This has a positive effect on our goal to have end user identity and access.”
It doesn't really tell me much – I had to look at the documentation for the elements, hoping in particular to find whether the requirements had rationales and impacts defined. I was a little lost as to what the author had to say. I normally start by looking at the drivers. "Security services for Microsoft cloud" didn't make sense as a driver, because it is not a reason to do something. In strict terms I would question the value of this view. If I didn't actually know a little of the subject matter, I might have been really confused.
Finding Some Value In Confusing ArchiMate Views
The diagram above was easy to put into a story because of its layout – it was easy to follow the flow of the thinking because the arrows went in a single direction from point to point, even though the actual story was a little confusing from a strict ArchiMate perspective. In cases where relationships flow in many directions, it's hard to tell the story. As a disciplined architect it's easy to get a little confused by a diagram of the quality I have shown – it's arguable that those requirements should be goals, and so on; the usage of the elements is wrong. To get clarity we need to forget the strict definitions of the elements a little. If you look at it below, a non-architect may actually find it easier to understand.
I could have taken the influencing relationships off the diagram and started to think about reworking it. When I get models from less experienced architects, in my head I am often doing exactly that – trying to understand the thought behind the structure – and it normally helps me get some kind of value. I of course do the same thing with the relationships as with the blocks; I am abstracting away from the modelling language. So the diagrams above provide some value, but not much – if the usage of each element type and each relationship had been strictly understood, the diagram would look significantly different, and would fit into a larger architecture in a way that makes sense.
It wouldn't make sense to reuse the elements here because they are ambiguous and incorrectly typed, so their value to the overall architecture is greatly diminished. Still, we cannot make things better unless we have a starting point, and I would be positive towards the architect: this is a starting point for improvement, and we can now start a discussion around the things that either the architect or the stakeholders find important.
Looking At A Better View
With motivation layer diagrams I normally start by looking for the drivers – the reasons why – add a few assessments, and work from there.
Let's look at another view:
CWAD is the Consumer Workspace Architecture Domain – a team I used to run. The story this view tells:
A CWAD architect, domain lead, product manager and service delivery manager are assigned to do CWAD road mapping. This project will deliver a product-level road map, concern and motivation views, and work package implementation and migration views. The project needs to ensure that each service work package is mapped onto a product-level road map, is properly scoped, and that the rationale and related concerns are clearly identified; this will all give us improved traceability into CWAD workloads.
The story was easy to tell for a few reasons.
The arrows clearly go down the page and flow the same way.
The different layers (business, implementation and motivation) are clearly grouped together
The elements are well named and typed.
The author is clearly telling a story in the view's design – including everything that the reader needs to understand the scope of the project.
When I am trying to read a view, in general I start with the business layer or strategy layer and then work down, following the arrows. Within each layer I look for the external elements and work inwards – so I will look for external behavior elements (services) and other external active structures (such as interfaces), and work from there. A good diagram will highlight those elements. In the diagram above it's easy to see, because the business layer active elements are at the top and flow downwards towards the other elements.
When creating diagrams I try to follow the order of things in the ArchiMate core model – if I have strategy elements they go on top, then Business, Application, Technology, Physical, and Implementation & Migration going down the page, with motivation elements on the right-hand side because they connect to all layers. Sometimes it doesn't make sense to do it that way, but as a rule of thumb you end up with nice, consistent models. Relationships crossing over each other will sometimes push me to change the order of things; ultimately we try to tell a nice clean story that makes sense. You have probably seen the ArchiMate Full Framework, which has a nice diagram of the layers – I don't publish it here because that image is under copyright.
It can be tricky to take that approach on occasion, and you can be forced to do things very differently. This particularly happens to me when I work with implementation and migration, because you can be realizing elements in multiple layers, or with requirements realization views that have motivation elements everywhere. I still try to group layers together, but it's equally important to avoid a spaghetti view with relationships crossing everywhere. Another reason that following the flow of layers in the full framework is a good idea is the way some of the standard viewpoints fit together – for example, Service Realization goes from Business to Application, and Technology Usage sits beneath it connecting Application to Technology.
People in the western world find it much easier to follow diagrams that go from top to bottom, or left to right. There are of course times when you cannot avoid having arrows going in different directions. Sometimes diagrams have to be complex. As a rule of thumb, if a view has more than 30 elements I think about breaking it out into a separate view.
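If your modelling tool can export views as XML, that rule of thumb is easy to check automatically. The sketch below assumes a simplified export where each view element contains one node per element shown – the tag names are illustrative, so adapt them to your tool's actual exchange format:

```python
import xml.etree.ElementTree as ET

MAX_ELEMENTS = 30  # the rule-of-thumb threshold from the text

def oversized_views(model_xml):
    """Return (view name, element count) for views over the threshold."""
    root = ET.fromstring(model_xml)
    return [(view.get("name"), len(view.findall(".//node")))
            for view in root.iter("view")
            if len(view.findall(".//node")) > MAX_ELEMENTS]

# Tiny hand-built example: one oversized view, one small one
sample = ("<model><view name='big'>" + "<node/>" * 31 + "</view>"
          "<view name='small'><node/><node/></view></model>")
print(oversized_views(sample))  # -> [('big', 31)]
```

Running something like this over a repository gives you a quick candidate list of views to consider splitting.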
Some other miscellany
Some general things I do to keep things tidy and consistent:
Avoid relationships crossing each other on the canvas – sometimes this ends up as a jigsaw puzzle and it's not always possible. But mostly it is.
Don't use diagonals – I use right-angled connections.
Resize boxes to highlight importance – on a service realization view, I will make the service big.
Resize boxes to make the line connections simpler. See figure 3 – I made the work package larger in order to avoid having to bend my lines; the association relationships now go straight down.
Summing it up…
These are more guidelines than rules. There are times when it's not practical to follow all of the formatting suggestions I have put forward today, and that's OK. The most important thing to remember is that every view you create tells a stakeholder a story. We need to do that in a way that makes sense, so we have a little creative licence.
Don't be shy about getting your views peer reviewed by friends – or asking them to tell your story – it will help you improve your game. Any view is better than no view.
The more we model and review, the better we get. There's no one way to do things, but practice helps.
Creating a Business Continuity Plan (BCP) requires thought and planning. This blog explores what a BCP is, a high-level approach to defining one, and how it differs from a Disaster Recovery Plan (DRP).
So What’s The Difference Between BCP And DRP?
The obvious answer is that BCP deals with business continuity and Disaster Recovery Plans (DRP) deal with the restoration of systems after a disaster.
Normally DRPs are far more focused on actual technology and steps, whereas BCPs have to consider everything surrounding them. The Business Continuity Plan must look at risks to the business and the likely scenarios we need to manage, whereas DRPs are normally more specific, although they may also be scenario based. Typically the DRP is written by a technical specialist with experience of and scope around what happens with specific technologies.
BCPs are important because they consider the needs of the business and not only the technology. Technical subjects, such as daily backups, have a business implication. Performing daily backups implies a Recovery Point Objective (RPO) of 24 hours, which effectively means that at any point up to 24 hours of data can be lost. Is that acceptable in a large company? Possibly not. It's a business decision that is sometimes made by a technical resource with little thought for the fact that losing a day of business could result in extremely high costs. If one person loses a working day of information the cost may be considered to be 8 hours of work, but if 100 people lose 8 hours of information, the cost could be 800 hours.
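That arithmetic generalizes into a tiny model, sketched below. The figures are illustrative only, and a real impact assessment would count far more than rework hours (penalties, reputation, lost sales), but even this crude model makes the RPO a visible business decision:

```python
def data_loss_cost(rpo_hours, working_hours_per_day, headcount, hourly_rate):
    """Worst-case rework implied by an RPO: everyone loses up to a
    working day of information (the simple model from the text)."""
    lost_hours = min(rpo_hours, working_hours_per_day) * headcount
    return lost_hours, lost_hours * hourly_rate

hours, cost = data_loss_cost(rpo_hours=24, working_hours_per_day=8,
                             headcount=100, hourly_rate=60)
print(hours, cost)  # -> 800 48000
```

Halving the RPO halves this worst case, which is exactly the kind of trade-off the business, not a backup administrator, should decide on.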
If you have ever been in the unfortunate position of losing a large system and having to recover from backup due to a series of extremely improbable events, you will have seen some of these issues first hand – it can take months to restore things, causing all manner of financial penalties and chaos. BCPs should be tested at least once a year, because business and technology change. Even if you only use public cloud services, you still need a BCP in place.
In a large systems failure, a simple question can greatly reduce the cost to your business and customers – for example, "What order should we do restores in?"
Operations might restore in an order that makes sense to them, but that's not necessarily the right order for the customer or the business. It's possible that key critical services and infrastructure for our customers can be grouped together and restored first to minimize the impact on them.
The same question could be asked of a product team – in the event of a catastrophe, what do we really need to get things working as a bare minimum? Operations cannot know by itself what's business critical; the BCP guides them.
Customer vs Internal BCP
In an IT services operation it's important to remember that the customer and supplier are two different business entities. In pretty much every business model the customer doesn't want to spend money – they want to receive value. As a provider, we want to provide value in the most efficient way we can, so we can reduce risk, optimise our costs and improve our profit.
At the heart of a BCP we are managing risk to our business – and the customer must manage risk to theirs. In order to manage risk to a business you need an understanding of its strategy and goals; a customer's BCP is about managing risk to their business, not ours. Properly defining a BCP with a customer can be considered consultancy work, which may involve connecting with their stakeholders, understanding and modelling their strategy, and analysing their working practices, risks, and potential business impact. This requires a level of intimacy with the customer.
Similarly, a service provider does not want to expose to customers all the risks that it needs to mitigate; it needs to protect the business, and it's a level of detail customers do not normally need. Typically BCPs are classified as "Internal" or "Confidential".
For these reasons, it's essential that a service provider doesn't mix these up.
How do I build a BCP?
People often just pick up a template and fill it in, which often gives unpredictable results and doesn't really cover the things that are critical to the business. Consider a structured approach:
Risk Analysis (RA)
This is key to building a proper BCP. If we haven't identified our risks, how do we know the business continuity plan is providing value and mitigating key risks to the business? I have seen businesses that have not gone through risk analysis at all, leading to some very high-level scenarios which have no value, because at that level, in an emergency, just making things up would work equally well. There are formal mechanisms we can use, such as SABSA, or if we have modelled our business continuity scenarios and processes in BPMN, we could apply something like I suggested in my blog Risk Analyzing BPMN Models.
Thinking about your end-to-end delivery, and then drawing it as a process, is a good starting point for BCP work.
Requirements are also a good place to start; do not forget those come from all kinds of places – we have customer requirements and wishes, security non-functional requirements, and perhaps goal-related or other requirements from our business. Understanding priority requirements and looking at possible risks to meeting them can form the basis for a risk analysis.
Of course, a skilled architect designing solutions and documenting them to the ISO 42010 standard would already be managing stakeholder concerns, and would be able to identify the key concerns easily.
Business Impact Analysis (BIA)
Once we identify risks we need to establish their cost to the business. There may already be guidelines around this; many people like to assess in terms of potential financial loss. In a well-defined business there is normally a set of established metrics defined in the architecture, and/or a policy around how we measure risk impact. Very basic values can be calculated with a set of assumptions – for example, if we have defined a risk that there will be a loss of customer data, we could say the impact hits us in several ways: financial penalties, reputation, and potential loss of customers.
If we think a risk will impact multiple customers – as may happen if we lose a complete platform – we may wish to assess how many customers we might permanently lose, as the missing revenue may impact us in the long term.
We could make a rough guess at the percentage of customers we might lose, but we could also look at previous examples of similar events – for example, how many customers did we lose when we lost our servers during a previous outage? What did it cost? You can use such figures or percentages as a guideline when calculating potential impact. Think about how you can use the figures at your disposal to inform your assumptions. Once you have run through an impact analysis you may actually decide to re-prioritize your requirements.
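A common way to turn such estimates into a single comparable figure is the classic annualized loss expectancy (ALE): single loss expectancy times annualized rate of occurrence. A minimal sketch, with made-up numbers:

```python
def annual_loss_expectancy(single_loss_cost, events_per_year):
    """ALE = SLE x ARO, the standard risk-impact estimate."""
    return single_loss_cost * events_per_year

# Illustrative: a platform outage with 150k in penalties plus an
# estimated 50k in customer churn, expected once every five years.
sle = 150_000 + 50_000
aro = 1 / 5
print(annual_loss_expectancy(sle, aro))  # -> 40000.0
```

Ranking risks by ALE gives a defensible order for deciding which scenarios the BCP must cover first.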
Writing the BCP
Once you have done the RA and BIA you have a good starting point for all the key areas you need to cover in a BCP. With or without a template, we have a good idea of the scenarios we need to cover. A few things to note:
Normally I try to avoid repeating information I have in other places – I would rather refer to other documents. Doing that, however, means you need to ensure the referenced documents are accessible to the entire audience of your BCP. That audience is something that needs to be considered carefully. Of course you have all the security-related people, but that is the tip of the iceberg. All the players in our BCP process need to be aware of their part in it, and need to agree to it. The owner of the BCP must ensure access to the BCP and all related resources for its audience. At this point we can take a document template and start to build a document that really brings value.
There is another school of thought on where to keep information – when I discussed it with some members of my employer's security team, they preferred to copy and paste key material into the BCP template. The value of that is that all the information is in a single location; the disadvantage is that it's another place to maintain, which can easily become outdated.
In some previous jobs I have also had to maintain a printed copy in a physical safe. Of course, we are supposed to regularly test this, so it's arguable that the document will be kept up to date… you decide.
I alluded to the fact that there can be a hierarchy of BCPs, depending on the structure of the business, and there can also be dependencies on disaster recovery plans and on teams and people. It's important that as part of the disaster recovery planning exercise you ensure the availability of everything you depend on – be it resources, systems or documentation.
Bear in mind that if your services rely on other teams or other companies, they could well be an integral part of your BCP, and it's important to establish a proper interface and expectations. This becomes a lot easier if you have defined your process using a notation such as BPMN, as I mentioned earlier.
Discipline & Testing
Testing should be done regularly – at least once a year – and documented. If it's not documented, it never happened. Things change.
Losing The Value Of BCP
If it's not tested, it loses its value. If it's not communicated, it loses its value. If it's done as a copy-and-paste exercise without walking through your processes and thinking through its goals and your business… it loses value.
Like many things in architecture, the value of a BCP is not in the result but in the process you take to reach it. Without the risk analysis, or an idea of how your business actually works, the value is greatly diminished. The question isn't actually "Do I have a BCP document?"; the question is "Do I understand the key areas of risk in my business, and do I have a solid plan defined and communicated for when something catastrophic happens?"
Should I Be Doing A BCP?
Business Continuity Plans are rarely a single document. In a large corporation some scenarios are taken care of at corporate level, and then the different levels of the corporation should have their own – individual service areas, and in the case of Tieto, individual products and services.
At a corporate level a BCP will usually cover things such as loss of life, and how we should handle things like the media in a catastrophe. Bear in mind also that we may rely on other BCPs, and we should document it if that's the case.
I've sometimes heard from product managers in different service organizations that they shouldn't be responsible for BCPs – that it's a customer thing. The reality is, if you have a business that you value, you need to be able to protect it, which is why the BCP exists. There may be exceptions, but at the end of the day product teams and operations are running parts of the business too – often together. We should not forget the architecture side of this – we are defining a solution that needs to cover our aspects, looking at risk relating to people, processes and roles, tools and technology, organization, and information. Product managers have P&L responsibilities, so naturally the continuation of the business should be of interest to both.
Summing it up
If you have a business, how important is it to you that it continues? If it's important at all, then why not spend a little time protecting it – and rather than blindly running through a paper exercise, really think about what you need to do to protect it. Maybe bring in some key resources to work together and take a structured approach. It may be that your BCP exercise yields unexpected results, and improvements to your architectures.
Value is a key focus area of architecture which is often misunderstood. This blog explores the subject.
Value & Professionalism
It has sometimes amazed me how far architecture can be devalued in an organisation. As an Enterprise Architect, I have not only had to show the value to different business stakeholders, but on occasion I have even had to explain it to architects. The reality is, there are a lot of misconceptions around what architecture is and what architects should do. I started to explore expectations in my blog “What is an IT Architect?” because I wanted to get consistent value from my architects.
Getting our stakeholders to understand value can be a hard nut to crack. They often have their own ideas of what architecture is, which normally revolve around technology. I have found that the differentiation between a technical specialist and an architect can sometimes be blurry in the minds of our stakeholders. Communicating the different aspects of architecture in a language that stakeholders can understand is essential.
As architects, it's very important that we work to ensure we are not perceived as a painful function by our stakeholders. This happens when architects go from meeting to meeting just criticizing other people's work. It starts with a mindset.
Stakeholders need to understand that, as architects, we are working with solutions and not problems; we must be encouraging by nature and we must take time to express things in terms of positives and growth where possible. We want to be working with our stakeholders rather than against them. Our role is normally to advise – at the end of the day architects do not usually own the business. If we ask rather than tell, engaging our stakeholders with a positive attitude, we can show our value as architects, the value of architecture, and our stakeholders can become the architecture marketing department.
Showing value enables an Enterprise Architect’s stakeholders to look good. This helps us gain stakeholder commitment and makes life easier for everyone at the same time.
For me personally, the best moments come when someone who is not an architect stands up in front of an organization, shows some of the things that we have done together and passionately shows the value that has come through a project, attributing it to architecture not because they have to, but because they genuinely see the benefits, and want everyone else to.
The Reason I Started To Internally Blog
One of the reasons I started internally blogging in my company was that I could not get enough face to face time with stakeholders to express architecture value. I had to show them value by influencing the people around them.
When management faces multiple escalations and starts to breach service level agreements, they will often start to search for a root cause. Not having architecture is akin to not planning and not managing risk.
In those cases, where poor planning leads to a risk being realised, we do not scream "I told you so". It's an opportunity to show how, as professionals, we can help; in those situations people are normally under a lot of pressure.
There are a lot of techniques we can apply to help identify issues. When we have a model repository in place and a competent architecture team, we can not only help solve architecture issues as they unfold, but also put in steps to ensure those problems don't happen again. Having a good architecture concerns management process helps with that.
Blogging helps a lot in establishing a baseline of consistency and understanding. Normally I have meetings where I explain things as we tackle practical problems, but if I am introducing a lot of key concepts in a single meeting it's easy to overwhelm people with the technical side of architecture.
As an example, take the first time I introduced mechanisms for work planning with ArchiMate. Even though I simplified things, some people still didn't quite follow. The Planning Work With ArchiMate blog helped: it gave people a reference they could follow at their own pace, and it helped me because I could set a consistent expectation.
At the time of writing, I think it's the most popular blog I have; it's been re-blogged and shared a few times, and received a few comments, which is fantastic.
I think the reason for its success has a lot to do with the fact that it has clear value. It offers an easy way to use the ArchiMate modelling language to track and define work, which can be challenging – especially if your architects do not have direct line reporting to you.
Capturing the value from stakeholders
To begin with, I think it's a mark of any professional that they think about value regardless of what is being done. It's a general life idea – if you think about who you are doing things for and why, and try to anticipate needs, the chances are your resultant work will be better.
Normally at the beginning of a project I am establishing requirements; I sit down with the team and create a series of user stories.
To my mind, if you are having trouble defining fully formed user stories for your work, the chances are you need to talk to someone and get a little help in understanding the needs of the stakeholders. Most commonly when I see user stories go wrong, it's because people have missed the bit at the end that speaks of value. It's interesting to see how often people cannot directly express the value in the things they do; it's common to think that because value is implied it doesn't have to be explicitly defined. The problem there is that what is an obvious value for one person is not always obvious for another.
ISO 42010 gives a good baseline list of stakeholder types to consider in our architectures.
Understanding and taking time to explicitly define the needs of our stakeholders is the first step to building an architecture that maximizes the usage of the architecture and its value to all stakeholders. When we validate an architecture that we have created with a user story we are validating the creation of value.
Why Doesn’t Architecture Always Bring Value?
Not practicing architecture correctly leads to not getting value from it. If an architect starts the architecture work after a project, or views the creation of architecture as a documentation exercise, it has normally lost most of its value. If an architect only concerns themselves with producing a couple of technical diagrams, again, a lot of value is lost.
Much of the value in architecture is in its application and the process taken to define it. How we decide what appears in the documentation, all the decisions made around the architecture, as well as the needs of all the stakeholders should be managed & documented as a project progresses for architecture to achieve its intrinsic value.
A Common Problem With Commitment
Because stakeholders don't always understand what architecture is or how it's practiced, in some cases it simply isn't practiced.
We can have a good group of people that want to do architecture but are not given time. In management terms, architecture is often measured in documents, because these are tangible:
It's easy to see architecture as a documentation exercise if you are working with processes that demand specific types of deliverable, because the emphasis is on the resultant documents. If you write a description of a service, it makes sense to plan the service, consider risks, agree the service, and capture the reasons for it. You have to give architects time to plan and think, and that thinking time should be spent properly – preferably following some kind of methodology where the thought process is documented.
Even with a management commitment of 70% of an architect’s time allocated to be spent doing architecture, there can still be a fundamental gap if our stakeholders do not understand that architecture process is much more important than these documents.
If architects spend the majority of their time just filling in documents for management, technical writing, or building presentations, they will not have time to actually follow methodology and design. Every view an architect creates manages a concern from a specific stakeholder – therefore every view that is not completed represents a risk that a stakeholder hasn't been fully considered. Time must be given to do baseline architecture and analysis. It improves the quality of the resultant work and, more importantly, it ensures that not only architects provide value, but their designs do too.
A good architecture designed by a proficient architect meets the needs of its stakeholders and shows how it does that – it identifies and mitigates risk. On the flip side, a document that is put together without architecture design is likely to lack good quality. In these cases you will normally see failures in operations – either lower productivity where things are not efficient – or in complete failure of process leading to penalties.
Value In Architecture Modelling
I’ve heard it said a few times that architecture is about drawing boxes; even from some architects. It’s not.
First the obvious. The tools we use are modelling tools, not drawing tools. Most people see an ArchiMate diagram in a presentation and miss the true power behind it – because our views all contribute towards a fully relational model, we can easily traverse the model using our tools and, more importantly, use it to help management understand the impact of change. At the simplest level, architects can navigate the model – as shown here for PRTG Network Monitor:
We can expand out those nodes to see all the relationships to other elements, and then drill down through those. We can reuse the elements in the model in different ways and generate views. The more information we put into the models, the more value we get out. Each of the elements and relationships can have its own documentation. Seeing a picture sent in an email is nothing like having the interactive model implemented and available to traverse.
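To make the drawing-versus-model distinction concrete, here is a minimal sketch in Python – not any real modelling tool's API, and the element names are invented – of a repository where relationships are first-class and can be traversed hop by hop:

```python
# Minimal sketch: in a model (unlike a drawing), elements and relationships
# live in one repository, any view is just a selection over them, and we can
# traverse from any element to its neighbours.

from collections import defaultdict

class ModelRepository:
    def __init__(self):
        self.relations = defaultdict(set)   # element -> directly related elements
        self.documentation = {}             # per-relationship documentation

    def add_relation(self, source, target, doc=""):
        self.relations[source].add(target)
        self.documentation[(source, target)] = doc

    def neighbours(self, element):
        """Everything directly connected to an element, for drill-down."""
        return sorted(self.relations[element])

repo = ModelRepository()
repo.add_relation("Monitoring Service", "PRTG Network Monitor", "realised by")
repo.add_relation("PRTG Network Monitor", "Probe Server", "deployed on")

# Drill down from the service, one hop at a time.
print(repo.neighbours("Monitoring Service"))     # ['PRTG Network Monitor']
print(repo.neighbours("PRTG Network Monitor"))   # ['Probe Server']
```

A static picture in an email gives you none of this: the traversal, and the documentation attached to each relationship, only exist when the model itself is available.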
The act of modelling – with a proficient modeler building a standard view – nearly always raises questions about how things work and which things should be connected to others; these questions open up areas that can easily be forgotten, and it's better for the architect to think things through in the design phase with stakeholders than to blindly move on to executing a project. The costs can be very high if part way through a project you need to scrap everything and start again, and architecture can help prevent such things happening.
In addition to all of this we can apply metadata to an architecture model – making it possible to represent a myriad of different interests to our stakeholders, if we take time to model, and use the right information sources; and then use our models to inform and enable our business leaders.
Summing Up The Value Of Modelling
I will sum up some of the value of modelling architecture:
Consistent communication – everyone gets the same views in a repository and a common understanding; there's a reduction in communication overhead.
Enabling scaling – having consistent communication in a common place makes it easier for us to onboard new resources, and for many architects to work together in the enterprise.
Reduced time to find things – Navigating through a model from element to element enables us to easily find related information quickly.
Many views from one model – as you develop some views with relationships, you enable automatic generation of other views reusing the same information.
Reduction of work – If you rename an element in one view it renames it in all views – in the same way the element documentation is automatically reused.
Cost savings – having architecture modeled gives us opportunities to easily see and optimize architecture, as well as to identify risk.
Better, more reusable architecture – modelling forces us to break down complex tasks into reusable components.
Reduced complexity – in a model we can focus on only parts of it in different views, making it easier for different stakeholders to consume.
A model develops itself – as it starts to mature using algorithms we can find new relationships rather than have to explicitly state them.
Better understanding – at the same time as we establish new components, it normally raises new questions around how things fit together and forces us to think. We can also very precisely and easily model and understand the impact changes in the technology, organization or other architecture layers have on our architecture.
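Several of the points above – reduction of work, many views from one model – come down to views holding references to shared elements rather than copies. A hypothetical Python sketch of that idea (the element and view names are invented; real tools do this inside their repositories):

```python
# Hypothetical sketch: views reference shared element objects, so renaming an
# element once updates every view that shows it, and the element's
# documentation is automatically reused everywhere.

class Element:
    def __init__(self, name, documentation=""):
        self.name = name
        self.documentation = documentation

class View:
    def __init__(self, title, elements):
        self.title = title
        self.elements = elements   # references to shared elements, not copies

    def render(self):
        return f"{self.title}: " + ", ".join(e.name for e in self.elements)

crm = Element("CRM System", "Handles all customer interactions.")
business_view = View("Business View", [crm])
tech_view = View("Technology View", [crm])

crm.name = "Customer Relationship Management"   # rename once...
print(business_view.render())                   # ...both views reflect it
print(tech_view.render())
```

A drawing tool stores a copy of the label in each picture; a modelling tool stores one element that every view points at, which is where the "reduction of work" comes from.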
The value of ArchiMate
The full ArchiMate model shows the different layers that make up ArchiMate. Each layer contains different types of elements in the modelling language – for example, the Business Layer contains business services, business functions, and business processes.
The ArchiMate language specifies the different types of elements and the ways that they are allowed to connect. You can see that ArchiMate covers everything from the why (motivation layer), to the what (Strategy, Business, Application, Technology, Physical & Implementation). This enables us to represent how all of these things work in conjunction with each other.
Following the strict rules of the ArchiMate language forces us to think a certain way – to consider our internal working components and how we expose them in different layers. It also has the added benefit of enabling us to derive relationships. A simple example – Owen is related to Max, and Max is related to Christopher – we could represent this in a model with association relationships. Even if we do not explicitly say it, because we know these relationships exist, the modelling software knows that Owen is related to Christopher. More complex derivations exist – which means that as we mature a model, it starts to provide value beyond what we directly model.
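The derivation idea can be sketched as a transitive closure over stated relationships. The names are the ones from the example above; the code is an illustration of the principle, not how any particular tool implements ArchiMate's (richer) derivation rules:

```python
# Sketch of relationship derivation as transitive closure: chains of stated
# relationships imply derived ones the modeller never drew explicitly.

def derived_relationships(stated):
    """Return all pairs reachable via chains of stated relationships."""
    derived = set(stated)
    changed = True
    while changed:
        changed = False
        for a, b in list(derived):
            for c, d in list(derived):
                if b == c and (a, d) not in derived:
                    derived.add((a, d))   # a -> b -> d implies a -> d
                    changed = True
    return derived

stated = {("Owen", "Max"), ("Max", "Christopher")}
print(derived_relationships(stated) - stated)   # {('Owen', 'Christopher')}
```

This is why a maturing model "provides value beyond what we directly model": every stated chain silently contributes derived relationships that tooling can surface on demand.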
Summing Up The Value Of Archimate
To sum up the value of ArchiMate:
Internationally known standard – each element and relationship has a very specific meaning. Using this standard I can take a model from a completely different company and understand its meaning.
Multiple layers – breaking down into layers such as Business, Application, and Technology enables us to align to many standard methodologies such as TOGAF, and to easily differentiate these different concepts.
Better architecture structure – Because ArchiMate has strict rules on what can connect to what, and how elements are used it forces us to think in a more service oriented value driven way.
Better connected architecture – with all the layers together we can answer the questions of what, why, when, how, and who. The layered approach makes it very flexible.
Better intimacy with stakeholders – because we define viewpoints for each of our stakeholders, we can provide better value for them.
Aligns to ISO 42010 – ArchiMate as a language was designed to complement ISO 42010 and make it easier to conform to this standard. ISO 42010 introduces the concepts of elements and relationships – ArchiMate defines specific types of these elements and relationships, which reduces the amount of work we need to describe concepts.
The Value Of ISO 42010
I speak about ISO 42010 quite often because it lays down the things you need for an architecture to be completely documented.
Everything in ISO 42010 has a purpose and everything that we do not do in there could be represented as a risk.
It's great that we produce Visio diagrams of our infrastructure and cover some technology, but for these views to be valid we need to understand the decisions made around them. We need to understand the concerns of the different people who need the systems we design, which includes things like risk. We need to understand how the Visio diagrams produced meet the needs of different people. ISO 42010 lays down a structure that can be used for this.
When you run through an architecture and ask how each individual requirement is met by the architecture you provide, questions and concerns start to arise. It's better for a single architect or a team of architects to sit down, address them, and actually think through the design process than it is for a project team to start booking lots of meetings with different people. Understanding the needs (concerns) of our stakeholders forms the foundation for formal risk analysis – because a risk is basically the recognition that a need might not be met for some reason. Using a managed approach to architecture concerns provides significantly better coverage of your risks.
When we enter projects there are often a lot of people asking the same questions – who is doing what, how, and where? How will the requirements be managed or realized? It's not supposed to be a project manager's role to make those decisions – they are supposed to come through someone smart sitting down and creating an architecture design.
Applying Discipline to Architecture
Applying discipline to architecture raises many hard-to-answer questions that need to be managed. If they aren't captured and answered at design time, they will come up later during an escalation. If you don't apply a method to your architecture, and do so throughout a project, things will get missed, resulting in project overheads, delays, and costs in both penalties and incidents.
Applying discipline helps us effectively manage change, and it helps to ensure that issues that may cause problems come up sooner rather than later.
The alternative to following methodology is to be surprised – characterized, for example, by getting part way into a project and realizing the Active Directory you had built needs to be rebuilt, or discovering that one part of the service you provide will simply never work with another part, forcing you to look for some kind of badly designed compromise to meet deadlines.
ISO 42010 essentially gives us an international standard for what an architecture description should include – it enables us to build traceable architectures that meet the needs of our stakeholders, and it does so in a very scalable way. It enables consistency of understanding and expectation when transferring architecture from point to point.
Explicit Value In Motivation Modelling
It's worth noting that up until this point, most of the value I have spoken about relates in generic terms to the benefits given, but ArchiMate and some other modelling methodologies allow us to represent values within the architecture, going well beyond the generic. ArchiMate has the motivation layer, which is there to show the reason why we do things, and it has an actual value element – and of course we can easily derive value, normally by just looking at our goals and outcomes. Take a look at the example:
The example is a motivation view for an architecture concern. As part of our common practice we might connect values to our goals directly in the model. When you follow the flow of the motivation view above from top to bottom – the values become obvious at the point we get to the goal element – so we model them too.
All architecture should provide value, and in this case we explicitly define it for this requirement (Reduced Capital Expenditure, Reduced Maintenance Cost, Modern Future proof solution).
An implementation & migration viewpoint is focused on how we deliver and meet requirements, not on their value – which is one of the key reasons I state that, in addition to implementation and migration, motivational modelling is also a good idea. Because this is part of a model, we are presented with a powerful mechanism for prioritizing workloads – when our management wants to run initiatives to reduce costs, for example, we could easily auto-generate a viewpoint showing which of our work packages contribute to that, then take a structured discussion with those stakeholders about re-prioritizing workload and understand how our other values and goals are affected in doing so.
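That auto-generated viewpoint is, at its core, a query over modelled contribution links. A sketch in Python – the work packages are invented for illustration, and the goal names are borrowed from the earlier example:

```python
# Sketch of the prioritization query: given modelled "contributes to" links
# between work packages and goals, auto-select the packages behind one goal.
# Work package names here are invented; goals echo the earlier example.

contributes_to = {
    "WP1: Storage Consolidation": ["Reduced Capital Expenditure"],
    "WP2: New Branding Website":  ["Modern Future-proof Solution"],
    "WP3: Licence Renegotiation": ["Reduced Maintenance Cost",
                                   "Reduced Capital Expenditure"],
}

def viewpoint_for_goal(goal):
    """Generate the set of work packages contributing to a single goal."""
    return sorted(wp for wp, goals in contributes_to.items() if goal in goals)

print(viewpoint_for_goal("Reduced Capital Expenditure"))
# ['WP1: Storage Consolidation', 'WP3: Licence Renegotiation']
```

With the links in a model, this selection costs nothing to regenerate each time management asks which work supports a cost-reduction goal, instead of someone re-assembling the answer by hand.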
Summing it up…
Architecture and design before doing things is an essential mechanism for avoiding risk and cost. Architecture is a discipline, and unless you take time to do things before, during, and after a project, you never realize its value. Architects must think and be trained, and they must be given time to run through a design process that applies methodology in order to get the value. If we do this, we will literally save millions of euros in penalties, and will have better, more focused projects with a significantly higher success rate. Communication overheads will reduce and better communication will be enabled.
To those who think architecture is only drawing pictures – I would say you do not understand what an architect is or does, and would recommend you read ISO 42010. For each part of an architecture design, ask the question "does that provide value, and what is the risk if I do not do that?"
To those that leave architecture to the end of a project, I would say: you've already lost most of the value architecture could have given you, because you didn't fully see the risks coming, may have missed some requirements, and likely carried a communication overhead. There's still some value in doing it, so others may follow what you went through and why, and of course anything that adds to an architecture model is a good thing.
Architecture discipline brings architecture value. I would love to hear from you.
In this blog I talk about requirements, and the process of choosing anything as an architect. It could be a hardware solution, like a suitable laptop, or a software decision like choosing between Teams and Slack.
A really important thing to note here: the methodology behind what I present is much more important than the tools I use.
Choosing The Right Solution
As a rule of thumb, it's always a good idea to assess two or three different technologies before choosing one. It's good to know that if your primary option fails for whatever reason, there's a secondary solution. We want to avoid vendor lock-in; if there is only a single vendor we should risk assess them – which is a whole other subject unto itself.
Decisions should be made based upon the requirements of our different stakeholders to ensure that the solution is fit for purpose. It's tempting to look at software and think about the feature set it gives. Some people choose one piece of software over another because it has better features. This is normally a bad approach; you may end up paying for an expensive solution that will never be fully utilized. The same applies equally to hardware.
When considering replacement technologies or upgrades you should also revert back to the requirements.
An Example With Disk Capacity
For example – if we are looking to order new disk capacity, a vendor may offer us a new model of disk which is 5% faster. It may look like a good idea on the surface, but that's not necessarily the case. If we do not require faster disk capacity, then in fact there may be a cost overhead. Let's consider TELOS for a moment (Technology, Economy, Legal, Operational, Scheduling). We may realize that implementing new hardware means potential incompatibility and a risk to operational efficiency. It may also mean we need people to support or train on the new technology; it's one more technology type to manage.
In addition to this – we should of course be tracking decisions, in a work log or other system. We might also consider having release management on versions of our requirements and the approvals of them.
Modelling Device & Requirement Mapping
A practical example. I needed to choose a new laptop. Not knowing what to get I did an exercise in ArchiMate. I started by modelling the top choices I had narrowed down to (I created a Technology View). Of course, if we were deciding software items we could just as easily use application components in another view.
From there I needed to decide my requirements. I am using BiZZdesign's Enterprise Studio and multi-added some items (it took 2 minutes). I followed this by using a property table to assign priorities to my requirements. The resultant requirements ended up looking like shown below. It's a Motivation View, using a MoSCoW color filter:
You can see above the priorities I had on my requirements. Normally when doing requirements I am thinking about TELOS; we could also consider ensuring we capture requirements from all the different stakeholder types named in ISO 42010. Note, in my laptop decision I was doing quick and dirty modelling.
Realizing The Requirements
Once I had the requirements it was a matter of deciding which devices met the requirements. I could have put both the devices & requirements in a requirements realization view; Instead I used another cool Enterprise Studio feature – I created a cross reference table using these options:
From here it created a table and I could just click on the table cells to generate realization relationships between the requirements and the devices.
Visualizing this, it was easy to see the best option was the EliteBook. I could have easily generated an ArchiMate view from this point using the auto-generate functions in Enterprise Studio, but I just didn't need it. I could also have saved this table as an Enterprise Studio Viewpoint and reused it later so I didn't have to re-select the options again. Note – Viewpoints in this case refers to the Enterprise Studio functionality. In my agony to make the right choice I did in fact produce one last motivation view:
The whole exercise took 30 minutes. There are distinct advantages to modelling your requirements when it comes to making sure nothing is missed in requirements realization and tying requirements into other bits of architecture. We can of course document each requirement, its rationale, and its influence.
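The cross-reference exercise can be approximated as a small calculation: weight each requirement by its MoSCoW priority, disqualify any option that misses a Must Have, and score the rest. The device names, requirements, and weights below are invented for illustration – the clicked cells of the cross-reference table become the realization sets:

```python
# Sketch of requirement realization scoring: MoSCoW-weighted requirements
# against a compliance matrix. Devices, requirements, and weights are
# invented; only the method mirrors the exercise in the text.

WEIGHTS = {"Must": 100, "Should": 10, "Could": 1}

requirements = {          # requirement -> MoSCoW priority
    "16 GB RAM":    "Must",
    "Under 1.5 kg": "Should",
    "Touch screen": "Could",
}

realizes = {              # device -> requirements it meets (the clicked cells)
    "EliteBook":  {"16 GB RAM", "Under 1.5 kg", "Touch screen"},
    "Budget Pro": {"16 GB RAM"},
    "UltraLight": {"Under 1.5 kg", "Touch screen"},
}

def score(device):
    met = realizes[device]
    # Missing any Must Have disqualifies the option outright.
    if any(p == "Must" and r not in met for r, p in requirements.items()):
        return -1
    return sum(WEIGHTS[p] for r, p in requirements.items() if r in met)

best = max(realizes, key=score)
print(best, score(best))   # EliteBook 111
```

The point is not the arithmetic but the traceability: every cell in the table is a realization relationship you can later document, question, or revisit.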
Working With Larger Projects
When working as part of a larger project you might have to periodically sync requirements, or compromise on how you work with them – I could, for example, in a requirements realization view show a single element such as "Citrix Hardware Requirements" and then, in the documentation of the element, link to a Confluence page where a team of people not using my modelling tool can manage them.
We can also document the relationships; in relationship documentation you could express who had actually agreed or confirmed the requirement can be realized alongside any justification or documentation you have.
Capturing Requirements In A Collaborative Tool
We can capture requirements using any collaboration tool – be it something like OneNote or Confluence. Of course it's good if the tool you use allows versioning; regardless of the actual tool, you need to consider the following:
State who the requirements are for.
We clearly identify who is responsible, which product it pertains to, and the other people that have been involved in identifying these requirements in a header block.
Sometimes I break requirements down following TELOS – to ensure whoever fills in the template considers things around Technology, Economy, Legal, Operational, Scheduling.
Minimum Needs For Requirements
Normally the actual requirements table needs, as a minimum:
Who – is the source of the requirement
Service/area – gives an indication as to the general area/category of the requirement
The requirement – should be clearly defined and easy to validate (no fuzzy vague wording)
Rationale – should explain why the requirement exists
Priority – follows MoSCoW (Must Have, Should Have, Could Have, Won't Have)
We have compliance columns for each option we assess.
The compliance level normally has the status and the name of the person responsible for meeting our requirement – for example, if we have a requirement for the network team, someone in the network team needs to agree that they can fulfill it. The compliance statuses I normally include with the name:
Full – means that the device/service/software fully meets the requirement.
Partial – means the device partially meets the requirement. In this case we would also include words in the table cell to explain why it is only partially compliant.
Non Compliant – is obvious; again, the reason for non-compliance should be stated.
Undetermined – means we have asked but just don't have an answer yet.
Summing It Up…
It's important to capture requirements and then assess different technologies against those requirements, rather than looking at the feature set a tool or application gives us. If it looks like a solution has a feature we didn't realize we wanted, this is a change in scope for our solution and we should reassess our business case.
In a world where technology is ever changing, it's essential that we document the decisions we make, or we can lose that reasoning over time, or in bigger projects end up jumping from meeting to meeting essentially discussing the same thing. This is a cost overhead in time and a risk in terms of miscommunication or the possibility that things get lost. It's possible to discuss requirements in meetings and keep things together within meeting minutes, but architects should be looking to capture these things consistently and group information together, so that in a year's time, when we look at answering the question "why did we buy this, and can we replace it?", we have something we can go back to.