This blog demonstrates a quick and easy way to score documentation completeness and improve quality – one that is more accurate than guesstimating.
My goal is to estimate, with a reasonable degree of accuracy, how complete documentation is, without taking more than 30 minutes per document to do it. I need a balance between speed and detail.
It's not always the job of an EA to actually do these assessments, but as an EA it's good to have an understanding of documentation that's based on an assessment rather than a guess at how complete something is, so I often teach this approach.
Some of the benefits of this approach
In following a structured approach we get:
- Better understanding of expectations. When document authors understand the things we want to see, the chances of getting them increase.
- Easy to visualize status – because the results are quantifiable, we can easily report status visually in a number of ways.
- Better traceability of progress – again, being quantifiable means we can show improvement over time.
- More realistic assessment of completion. When I ask someone how complete something is, they usually say 70 or 80%, but when you start to look it's often not so – people naturally want to please others and tend to give higher estimates, or sometimes do not think of the detail involved. This mechanism forces people to look at the work in more detail than a "guesstimate" would.
- Raises other quality issues. In doing an assessment like the one below you are reading and aligning to a set of criteria – the things that are important to you. Normally when I run a quick assessment I will come up with half a dozen or so comments alongside the review.
The End Result
I am looking to get an overall percentage of completion. Performing an assessment of something, I normally end up with a table that looks a little like this:
Bear in mind the table is only an example. If I were creating scoring for a high-level design I might include more detail. Across the top I have listed some different architecture documents. In this example it's possible that a project is composed of several teams working to deliver different things to a customer.
Down the left is the breakdown of the things we want to see in the document (let's call them documentation areas). Each area is given a score of 0 to 5 based on some simple criteria, which I will show in a while.
Another important thing to note is that this is an estimation of completeness as a percentage, and not necessarily of work. Some of the things I give points for take longer than others. Reading through an assessment can help estimate the remaining work, though.
An Overview Of The Process
The process I use is normally fairly simple. I pick a few people to be assessors and give them an hour of training time with me. We run through a live document together and I explain what each documentation area means – this description is normally supported with a template. So for each document we are assessing we have one template filled out, and we may go back several times over the course of a project to run the assessments again.
The assessment happens between the assessor and the document owner. They run through each criterion, agree scores together, and make notes to agree improvement areas. It can be done without the document author, but I find that the communication between an author and an assessor in a meeting is more personal than just sending a filled-in assessment. It also gets better commitment for improvement and enables the assessor to explain each criterion and answer questions. This can also be done as a peer-to-peer exercise.
I make sure that all people referred to in an assessment (approvers, technical validators) have actually reviewed the work and are aware of its state throughout the process.
Creating An Assessment Template
It's really easy to create a template that can be used for assessing documentation. It can be done in any format – as a Confluence page, as a Word document, or as a Google form or Microsoft form for even easier collation.
Minimum Header Information
We always need to know as a minimum:
- What was assessed – normally the document name, with a version number. I keep a copy of the actual document assessed together with the assessment, rather than a URL to the current version of the document.
- Who the assessor was – just the name of the person running the assessment.
- The document / design author – the person actually doing the documentation work.
- When the review was performed – just a date.
- A link back to any previous versions of the same assessment – if you are automating this process in Google Forms, for example, the link back to a previous version might not be explicitly stated – you could calculate it if you know what was assessed and when. So if we did three separate reviews of the same thing, there would be three different rows of data in your form responses.
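The "calculate it if you know what was assessed and when" idea can be sketched in a few lines. This is a minimal illustration, not part of the original template – the field names are my assumptions based on the header list above:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AssessmentHeader:
    document_name: str   # what was assessed
    document_version: str
    assessor: str        # who ran the assessment
    author: str          # who is doing the documentation work
    performed_on: date   # when the review happened

def previous_assessments(history: list[AssessmentHeader],
                         doc: str) -> list[AssessmentHeader]:
    """Recover the chain of earlier reviews of the same document by
    filtering on document name and ordering by review date."""
    return sorted((h for h in history if h.document_name == doc),
                  key=lambda h: h.performed_on)
```

With this, three separate reviews of the same document become three rows that can be linked back together without storing explicit links.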
The Criteria Information
Normally I expect to see this information for each of the different criteria:
- The criterion – the thing we are assessing.
- A short description of the criterion – normally a paragraph, just to clarify things.
- The agreed score – using the scoring mechanism below.
- The scored mnemonics – I explain what each letter means in the scoring section.
- The name of the person that approves the work – this has to be a person with approval authority, and not normally a project manager. Projects are transitory and end; the architecture normally gets delivered to someone in operations, and they should approve it.
- The name of the person that technically validated things – normally a technical person who is not the author. It's good to get a technically competent person from another team if possible.
- Notes – any helpful information captured during the review.
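As a sketch, one criterion row could be captured as a small record, with a check for the independence rules mentioned above (the validator and approver should not be the author). Field names here are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass

@dataclass
class CriterionRow:
    name: str                 # the criterion being assessed
    description: str          # short clarifying paragraph
    score: int                # agreed score, 0 to 5
    mnemonics: str            # letters showing what earned points
    approver: str             # someone with approval authority
    technical_validator: str  # a technical person who is not the author
    notes: str = ""

def check_row(row: CriterionRow, author: str) -> list[str]:
    """Flag the independence problems called out in the text."""
    problems = []
    if row.technical_validator == author:
        problems.append("author cannot technically validate their own work")
    if row.approver == author:
        problems.append("author cannot approve their own work")
    return problems
```

Running a check like this before collating results catches self-validation early, rather than at reporting time.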
In a very large project with multiple architects involved in an assessment I might also add the architect's name and the individual date. You can see part of an example in figure 2.
Scoring Documentation Areas
I apply a very simple scoring system; the order I show here is the order I assess in. I don't normally assess whether something is fully complete until it is partially complete, I don't assess language before something is fully complete, and so on.
Don't allow people to be their own technical validator or approver.
"Short" here is a mnemonic letter I use to show easily in assessments what I gave points for. So if something is at least partially addressed, a point is given – an extra point is given once it's fully addressed.
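The ordered, one-point-at-a-time scheme can be sketched as follows. The five checks and their mnemonic letters here are my assumption of the scheme described above (the original scoring table isn't reproduced in this post), but the gating logic – later checks are only assessed once earlier ones pass – matches the text:

```python
# Assumed checks, in assessment order, each worth one point.
CHECKS = [
    ("P", "partially addressed"),
    ("F", "fully addressed"),
    ("L", "language is clear and correct"),
    ("T", "technically validated"),
    ("A", "approved"),
]

def score_area(answers: dict[str, bool]) -> tuple[int, str]:
    """Award points in order, stopping at the first failed check,
    since e.g. 'fully addressed' isn't assessed before 'partially'.
    Returns the points (0-5) and the mnemonic letters earned."""
    points, letters = 0, ""
    for letter, _description in CHECKS:
        if not answers.get(letter, False):
            break
        points += 1
        letters += letter
    return points, letters
```

So an area that is partially and fully addressed but not yet language-checked would score 2 with mnemonics "PF".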
The math for calculating the percentages in figure 1 is simple, but I will mention it briefly:
- There are 9 different things to assess and a maximum score of 5 points for each area – giving a possible total of 45 points.
- Row 14 is just a sum of the preceding rows.
- Row 15 is the percentage calculation – what percentage of 45 is in row 14. For column B that would be =(100*B14)/45
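The same math outside a spreadsheet, as a small sketch (the 9-area, 45-point maximum comes from the example above):

```python
def completion_percentage(area_scores: list[int]) -> float:
    """9 areas scored 0-5 gives a maximum of 45 points; completion is
    the total as a share of that maximum."""
    assert len(area_scores) == 9 and all(0 <= s <= 5 for s in area_scores)
    total = sum(area_scores)   # row 14: sum of the area rows
    return 100 * total / 45    # row 15: =(100*B14)/45
```

For example, area scores of [5, 4, 3, 3, 2, 5, 4, 1, 0] sum to 27 points, giving 60% completion.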
Summing It Up
This has been a quick introduction to how I approach quick assessments of documentation status in large, complex projects. It's not perfect, but it is fairly flexible, quick, and easy to implement. It has saved me many hours of having to do more in-depth reviews before teams are ready for them.
I hope this helps someone out there!