Scoring Documentation Completion

This blog demonstrates a quick and easy way to score documentation completeness and improve quality, one that is more accurate than guesstimating.

The Goal

To be able to estimate, with a reasonable degree of accuracy, how complete documentation is, without taking more than 30 minutes per document to do it. I need a balance between speed and detail.

It's not always the job of an EA to actually do these assessments, but as an EA it's good to have an understanding of documentation that's based on an assessment rather than a guess at how complete something is, so I often teach this approach.

Some of the benefits of this approach

In following a structured approach we get:

  • Better understanding of expectations. When document authors understand the things we want to see, the chances of getting them increase.
  • Easy to visualize status – because the assessment is quantified we can easily report status visually in a number of ways.
  • Better traceability of progress – again, being quantifiable means we can show improvement over time.
  • More realistic assessment of completion. Normally when I ask someone how complete something is they say 70 or 80%, but when you start to look it's often not so – people naturally want to please others and tend to give higher estimates, or sometimes do not think of the detail involved. This mechanism forces people to look at the work in more detail than if they were to guesstimate.
  • Raises other quality issues. In doing an assessment like the one below you are reading and aligning to a set of criteria, which are the things that are important to you. Normally when I run a quick assessment I will come up with half a dozen or so comments alongside the review.

The End Result

I am looking to get an overall percentage of completion – when performing an assessment I normally end up with a table that looks a little like this:

Figure 1 – A resultant score of several architectures.

Bear in mind the table is only an example. If I were creating scoring for a high level design I might have more detail. You can see across the top I have listed some different architecture documentation. In this example it's possible that a project may be composed of several teams working to deliver different things to a customer.

Going down the left is the breakdown of the things we want to see in the document (let's call them documentation areas). Each area is given a score of 0 to 5 based on some simple criteria which I will show in a while.

Another important thing to note is that this is an estimation of completeness as a percentage, and not necessarily of remaining work. Some of the things I give points for take longer than others. Reading through an assessment can help estimate the remaining work, though.

An Overview Of The Process

The process I use is normally fairly simple. I normally pick a few people that will be assessors and give them an hour of training time with me. We run through a live document together and I explain what each documentation area means – this description is normally supported with a template. So for each document we are assessing we normally have one template filled out, and we may go back several times over the course of a project to run the assessments again.

The assessment happens between the assessor and the document owner. They run through each criterion and agree scores together, and make notes to agree improvement areas. It can be done without the document author, but I find that the communication between an author and an assessor in a meeting is more personal than just sending a filled-in assessment. It also gets better commitment for improvement and enables the assessor to explain each criterion and answer questions. This can also be done as a peer-to-peer exercise.

I make sure that all people referred to in an assessment (approvers, technical validators) have actually reviewed the work and are aware of its state throughout the process.

Creating An Assessment Template

It's really easy to create a template that can be used for assessing the documentation. It can be done in any format – as a Confluence page, a Word document, or a Google or Microsoft form for even easier collation.

Minimum Header Information

We always need to know as a minimum:

  • What is assessed – the document name normally – with a version number. I normally keep a copy of the actual document assessed with an assessment, rather than a URL to the current version of a document.
  • Who the assessor was – Just the name of the person who is running the assessment.
  • The document / design author – The person that is actually doing documentation work.
  • When the review was performed – just a date
  • A link back to any previous versions of the same assessment – if you are automating this process in Google Forms, for example, the link back to a previous version might not be explicitly stated; you could calculate it if you know what was assessed and when. If we did three separate reviews of the same thing, they would appear as three different rows of data in your form responses. A small sketch of that linking logic follows this list.
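To make the idea concrete, here is a minimal sketch of how that previous-version lookup could work, assuming each form response is simply a row with the document name, the assessment date and the resulting score. The field names and data below are made up for illustration.

```python
# Hypothetical sketch: linking repeat assessments of the same document.
# Assumes each form response is a row with document name, date and score.
from datetime import date

responses = [
    {"document": "Integration Architecture", "date": date(2019, 3, 1), "percent": 40},
    {"document": "Integration Architecture", "date": date(2019, 5, 14), "percent": 65},
    {"document": "Security Architecture", "date": date(2019, 4, 2), "percent": 55},
]

def previous_assessment(document, assessed_on):
    """Return the most recent earlier assessment of the same document, if any."""
    earlier = [r for r in responses
               if r["document"] == document and r["date"] < assessed_on]
    return max(earlier, key=lambda r: r["date"]) if earlier else None

print(previous_assessment("Integration Architecture", date(2019, 5, 14)))
```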

The Criteria information

Normally I expect to see this information for each of the different criteria:

  • The criteria – The criteria we are assessing.
  • A short description of the criteria – normally a paragraph just to clarify things
  • The agreed score – using the scoring mechanism below
  • The scored mnemonics – I explain what each letter means in the scoring section.
  • The name of the person that approves the work – has to be a person with approval authority – not normally a project manager. Projects are transitory and end. The architecture normally gets delivered to someone in operations, and they should approve it.
  • The name of the person that technically validated things – normally a technical person who is not the author – it's good to get a technically competent person from another team if possible.
  • Notes – any helpful information captured during a review.

In a very large project with multiple architects involved in an assessment I might also add fields for the architect's name and the individual assessment date. You can see part of an example in figure 2:

Figure 2 – A partial example of a template

Scoring Documentation Areas

The Score

I apply a very simple scoring system; the order I show here is the order I assess in. I don't normally assess whether something is fully complete until it's partially complete. I don't assess language before something is fully complete, and so on.

You don’t allow people to be their own technical validator or approver.

Figure 3 – The scoring

"Short" here is a mnemonic letter I use to show easily in assessments what I gave points for. So if something is at least partially addressed a point is given – an extra point is given once it's fully addressed.

The Math

The math for calculating the percentages in figure 1 is simple, but I will just mention it:

  • There are 9 different things to assess and a maximum score of 5 points for each area – giving 45 points total as a possible score.
  • Row 14 is just a sum of the preceding rows.
  • Row 15 is the percentage calculation – so we are calculating what percentage of 45 is in row 14. For column B that would be =(100*B14)/45. A small sketch of this calculation follows the list.
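As a small illustration, here is the same calculation in Python, using made-up scores for one document (the equivalent of column B in figure 1).

```python
# Minimal sketch of the calculation behind figure 1.
# The nine area scores below are hypothetical placeholders.
MAX_PER_AREA = 5
AREAS = 9  # nine documentation areas in this example

area_scores = [4, 3, 2, 5, 1, 0, 3, 2, 4]      # scores for one document

total = sum(area_scores)                        # row 14: sum of the area rows
max_total = MAX_PER_AREA * AREAS                # 45 possible points
percent = 100 * total / max_total               # row 15: =(100*B14)/45
print(f"{total}/{max_total} points = {percent:.0f}% complete")
```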

Summing It Up

This has been a quick introduction to how I approach doing quick assessments of documentation status in large, complex projects. It's not perfect, but it is fairly flexible and quick and easy to implement. It's saved me many hours of having to do more in-depth reviews before teams are ready for them.

I hope this helps someone out there!

Competing With Small Organisations (Using DevOps & Agile Thinking)

The advent of public cloud services has enabled smaller companies to compete more effectively against larger enterprises. This article discusses this phenomenon and how DevOps and agile thinking can help large companies compete with small organizations.

The Agile Advantage

Smaller organizations tend to be more agile than their larger counterparts. When running a range of public and private cloud services, many companies tend to separate businesses in terms of technical function or capabilities in order to achieve scalability. Networking, capacity, operations, and application services are often managed by separate teams.

For large scale capabilities to work effectively it is important to have properly defined business interfaces that are measured. We need KPIs that are specific to our processes, and together those KPIs should show a picture of the organization, rather than only a picture of how standard frameworks are implemented.

An automatic advantage that small companies have is that they often work end to end, covering everything they need to provide a service. It's easier to do when you do not have several teams involved in providing a single service. Each team in a large organization often has its own business interests. If an organization measures each business unit only by business growth or financial metrics, it can be challenging to get people to work together. The KPIs need to help here, and to be agreed at a business level between teams.

In a larger organization you still need to be able to see things end to end. Someone needs to be held responsible for the synchronization of all of the working parts that are needed to provide a service, and whoever is tasked with achieving that needs visibility into the components that make up the full service. This could well be the job of an enterprise architect.


DevOps

People often lose sight of how to use KPIs, and of the meaning of DevOps. DevOps as a practice is capable of producing some impressive efficiency benefits – take a look at the Puppet DevOps Report and you will see clear examples.

DevOps is not only about application development – it can be applied to infrastructure services too. Since the advent of virtualization and of mechanisms such as infrastructure as code, the differences between working with software and hardware are less pronounced. In both cases we need proper change and release management.

Some important things that all large companies should consider are described below.

Have Properly Defined KPIs

Not just things such as KPIs around ITIL and incidents. If you read my article on Risk Analyzing BPMN you will realize that processes have a lot of potential risks. The way you mitigate some of those operational risks is to put KPIs in place to monitor potential hot spots. The KPIs that are recommended with specific frameworks such as ITIL will only get you so far because they are generic.

For example – if a process step is to deliver server hardware to a data centre, and the next step is to set it up, there is potentially a risk around hardware delivery – either it not happening or not happening on time. It would make a lot of sense to measure and monitor delivery time. This should be a KPI.

Automating KPIs

If we are measuring time to deliver hardware, the KPI should exist as part of a dashboard. It is essential to automate KPIs as far as is possible – with some kind of systems integration. In our example, if you have to manually track every delivery it's time consuming and there's a risk of human error. In the modern world there's not much excuse for not automating – tools exist to make it very easy. There may be some manual tasks that need to be performed by a person, but when that happens we can make it easy for them to receive the task and mark it complete as part of an automated system, rather than relying on one person talking to another. Doing business in an email inbox is an outdated practice, and it doesn't scale well. A sketch of the kind of calculation that could sit behind such a dashboard follows.
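As a hedged illustration, the sketch below turns raw delivery records into a delivery-time KPI that a dashboard could consume. The record format and the five-day target are assumptions for the example, not part of any specific tool.

```python
# Hypothetical sketch: computing a delivery-time KPI from delivery records.
from datetime import date
from statistics import mean

deliveries = [
    {"ordered": date(2019, 6, 3), "delivered": date(2019, 6, 7)},
    {"ordered": date(2019, 6, 10), "delivered": date(2019, 6, 18)},
    {"ordered": date(2019, 6, 12), "delivered": date(2019, 6, 14)},
]

TARGET_DAYS = 5  # hypothetical agreed delivery-time target

lead_times = [(d["delivered"] - d["ordered"]).days for d in deliveries]
kpi = {
    "average_lead_time_days": mean(lead_times),
    "on_time_percent": 100 * sum(t <= TARGET_DAYS for t in lead_times) / len(lead_times),
}
print(kpi)  # in practice this would be pushed to, or read by, the dashboard
```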

Automating Services

When we are designing services we should be thinking hard about how to automate them. For a services company such as Tieto, I think it's important to have a balance here – because although we need to be able to scale and automate infrastructure in much the same way as companies such as Microsoft and Google do, we also need to maintain a customer connection. In designing systems we need an automation strategy, and we need to ask ourselves some questions:

  • What do we automate? Deciding which tasks and services we need to automate is important to ensure our customers still have a personal touch, whilst at the same time ensuring that the right tasks – those which need no interaction – can be handled quickly. For example, a password change is a no-brainer for automation – as might be server creation – but what about a platform migration? It's complex, and a customer needs human interaction to help them feel more comfortable with the process.
  • What is our automation policy? Some things are too costly to automate. A password change, for example, is simple, repetitive work with a minimum of interaction, and usually provides great benefit when automated. A more complex system that is used less frequently may not even cover the costs of automating it. Deciding clearly where automation is a good idea saves time.
  • How do we approach the automation of legacy infrastructure? This is part of the automation policy but is worth a special mention.

Enabling Play

As a simple example of automation, you can use Microsoft Flow to manage approvals. You can tie this into a SharePoint list and then use Power BI to consume the information and create nice dashboards. This opens up a number of other opportunities for you around analytics.
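For those who prefer code to clicks, something similar can be sketched against the SharePoint REST API. The site URL, list name, field names and token below are placeholders – how you authenticate depends entirely on your tenant setup – but it shows how approval data held in a list could be pulled out for reporting.

```python
# Hedged sketch: reading approval records from a SharePoint list for reporting.
# Site URL, list name, token and field names are all placeholders.
import requests

SITE = "https://example.sharepoint.com/sites/architecture"   # placeholder
LIST_NAME = "Approvals"                                       # placeholder
TOKEN = "..."                                                 # placeholder access token

url = f"{SITE}/_api/web/lists/getbytitle('{LIST_NAME}')/items"
response = requests.get(
    url,
    headers={
        "Accept": "application/json;odata=nometadata",
        "Authorization": f"Bearer {TOKEN}",
    },
)
response.raise_for_status()

for item in response.json()["value"]:
    print(item.get("Title"), item.get("ApprovalStatus"))      # fields depend on the list
```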

Smaller companies tend to leverage such advances in technology far more easily than larger companies do. When I worked in a smaller company, playing was easy. In a larger company it takes time to get anything done, especially as internal work tends to get de-prioritized. Not all goals in a large company should be customer related.

Security has a part to play here too. Smaller companies trust their team. Whilst more people means more risk, as I have said before in my Information & Security Thinking blog, being too restrictive leads people to think of alternative solutions for things.

It also demotivates passionate people when they have to jump through many hoops to do simple things. It puts the business at a disadvantage. If an organization disables the use of Microsoft Flow because of its potential for abuse, it is also disabling the possibility to innovate, grow and create some fantastic things.

Security needs to be more about enabling people and making them aware than restricting.

Zero Click Deployment

To truly achieve scalability you should be asking the question “how do we achieve zero click deployment?”. By this I mean operations has to do nothing because everything is automated. Whilst it's true that in some cases this is not possible because of a need for manual steps, the closer you get to this goal, the more efficient and scalable your systems become.

I have seen many people thinking that single click deployment on the side of the service provider is enough. It is very different to zero click. I have seen teams script deployments of services very well – getting to the point where they only need to create an XML config file and run the script. It's very good to be able to get to that level – but it still requires a person to sit down and manually do work. Even if it only takes 15 minutes, this accumulates over time.

If we have a system like ServiceNow in the background that customers interact with directly, we should look at creating mechanisms that get rid of the manual configuration. In doing that we are also getting rid of an unnecessary communication overhead: an unnecessary point where things can be miscommunicated, and where resources are needed.
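As a rough sketch of what that could look like, the snippet below generates the deployment config file directly from a request record instead of having someone write it by hand. The field names and the XML structure are hypothetical; a real integration would fetch the record from ServiceNow's REST API rather than hard-coding it.

```python
# Hedged sketch: rendering the deployment config from a request record,
# removing the manual config-writing step. All field names are hypothetical.
import xml.etree.ElementTree as ET

request = {            # stand-in for a record pulled from the ticketing system
    "hostname": "app-server-01",
    "environment": "test",
    "cpu": "4",
    "memory_gb": "16",
}

def render_config(req):
    """Build the XML config that the existing deployment script consumes."""
    root = ET.Element("deployment")
    for key, value in req.items():
        ET.SubElement(root, key).text = value
    return ET.tostring(root, encoding="unicode")

print(render_config(request))  # this output replaces the hand-written file
```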

But what about the people?

The world changes, and roles need to change with it. If we take away the need for that one click, we free up a resource to focus on things that provide more value to our customers. It doesn't necessarily mean we need to reduce resources; we are enabling our existing people to focus on value. In implementing DevOps we are abstracting away from technical minutiae and looking more at the things that really matter. Of course automated environments can also go wrong. Roles transform over time.

Where does Enterprise Architecture Fit Into This?

Silos need to be broken down – this means more than just telling one team they need to talk to another – it means aligning goals, objectives and working practices.

We need to define proper interfaces between business units and we need to make this all traceable & measurable. We have to enable innovation. Our key systems should have standard interfaces that we can consume information from.

If a finance system uses a 30 year old interface as the only mechanism for getting information out, we should use some imagination – maybe we can do something with Robotic Process Automation (RPA).

All of this needs to be part of a thought-out strategy. We need to balance the advantages carefully against the risks.

Summing it up

I do not advocate a totally open approach where we allow everyone to do everything in larger companies, because risk does of course need to be considered: as our headcount goes up, so does the risk of mistakes. I do ask that security teams think carefully about the implications of denying things. A company needs a level of trust in its employees.

DevOps and agility in business are essential. This only happens if there is a level of transparency, and if by implementing automation we enable our employees to bring business value to the enterprise rather than being caught up in a mundane, security-restricted environment.