Getting The Most Out Of An EA Tool

At work we use Bizzdesign's Enterprise Studio (BES). I want to talk a little about getting the most out of an EA tool like this if you want to reach a more advanced level of architecture model maturity.

Yes, it's true, I like Enterprise Studio, but not because of any affiliation with the company; more because it's a powerful tool, and after several years of working with it I am still finding more. Other tools have similar capabilities, and their own strengths and weaknesses. Here I am talking about a tool I currently use, and a little about some of the things we need in order to reach a more advanced level of architecture maturity.

Basic Admin

It's worth mentioning that different tools have different access methods. With BES, at a basic level you maintain access to the repository by managing users and groups in HoriZZon (Bizzdesign's web access portal). If you have your own servers, you can also build an environment that integrates with your organization's Active Directory and control access via groups, eliminating much of the manual grunt work around administration; their licensing mechanisms are fairly well integrated too. This becomes important as tool usage grows.

Basic Modelling

As a tool, one of the reasons we decided to use BES in our organization was that it was easy to jump into, whilst also having a fair level of complexity underneath. Whilst I tend to use it most for modelling in ArchiMate, it also covers other standards I use, like BPMN. At a basic level, we have a managed repository with the ability to check changes out and in, maintaining a level of version control. This I would expect of any tool used by a number of EAs who need to collaborate. Basic functionality needs to cover managing elements and the relationships between them, with the ability to navigate the object model and create new views from it – take a look at ArchiMate – Looking Beyond Diagrams. The better tools out there offer smart connection mechanisms: you draw a relationship, then pick the correct relation from a list of possible relations between the two elements you selected. This is especially important when you have junior architects – it saves time if they don't have to choose from every relationship in the language.

Considering Stakeholders

A BES architecture repository can be exposed to stakeholders of the architecture via a web browser using HoriZZon. It's really important to try to get stakeholders engaged, and to do that we sometimes hide ArchiMate by changing graphics and simplifying things.

How simply we model or represent architecture depends on who our stakeholders are for a particular view. Getting stakeholders connected to what we model is an important first step in getting them to see the value of the work we do. Being able to traverse the model gives a level of power by itself. HoriZZon gives a lot of power and connects directly to the repository for up-to-date information; some tools allow HTML outputs, which is also handy.

Appending Basic Information to a Model

We can easily import information into a model in a whole bunch of different ways, starting with the basic updating and creation of elements and relations. Within BES you can multi-select and simply copy and paste things from Excel for quick addition or updating of elements and their metadata.

It's possible to apply a level of automation to that. We can create connections to Excel workbooks, SQL, and ServiceNow, automatically refresh or update parts of the model package, and map different data sources into the model. These are things that take a bit of knowledge and getting used to, but for the most part you don't need to be a developer. Basic information can be pulled into the model quite easily if you think a little about how you format it as part of the project you are working on.
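As a rough sketch of what such a mapping step can look like, the snippet below turns spreadsheet rows (a CSV export standing in for an Excel sheet here) into element records ready for import. The column names, element types, and record shape are all assumptions for illustration, not BES's actual import format.

```python
import csv
import io

# Allowed element types for this hypothetical import
# (names are illustrative; your repository's type set may differ).
ALLOWED_TYPES = {"ApplicationComponent", "BusinessProcess", "Node"}

def rows_to_elements(rows):
    """Map spreadsheet rows (dicts) to element records, skipping
    rows with missing names or unknown types so one bad row
    doesn't poison the whole import."""
    elements = []
    for row in rows:
        etype = row.get("Type", "").strip()
        name = row.get("Name", "").strip()
        if not name or etype not in ALLOWED_TYPES:
            continue  # skip incomplete or unrecognized rows
        elements.append({
            "type": etype,
            "name": name,
            # any extra non-empty columns become metadata properties
            "properties": {k: v for k, v in row.items()
                           if k not in ("Type", "Name") and v},
        })
    return elements

# Example: parse a CSV export of a sheet with one good and one bad row
sheet = io.StringIO(
    "Name,Type,Owner\n"
    "CRM,ApplicationComponent,Sales IT\n"
    "Broken Row,MadeUpType,\n"
)
elements = rows_to_elements(csv.DictReader(sheet))
```

The point of the validation step is exactly the "think a little about how you format it" advice above: a small amount of checking at import time saves a lot of cleanup in the repository later.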

Working With Metrics & Avoiding The Metamodel

I have a love/hate relationship with the meta-model editor, and as a policy where I work we don't edit it. Some tools make working with meta-models easier than others, and working with the meta-model editor BES provides has led to some fairly painful mistakes. It's a very flexible and powerful thing which, with a bit of experience, can be used well, but I don't often find I need to customize it. In older versions of the tool, every upgrade meant redoing the meta-model customizations; a lot of new features have since come along that make this significantly easier. As a rule, though, I tell people not to touch it, because when you don't know what you are doing you can break things in spectacular ways.

When I want to append information to an element these days, I mostly add metrics to the element that needs to carry it. From there it's easy to create colour displays or play with labelling to your heart's desire. This technique does have some restrictions.

Scripting

BES has a really powerful scripting language that sits behind it; you can use it to do all manner of things, including creating views and tables of information, modifying the model, or colouring the existing model. You can create custom scripts to keep metrics up to date.
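BES scripts are written in Bizzdesign's own scripting language, so I won't reproduce its syntax here; instead, this Python sketch only illustrates the shape of a typical metric-refresh script: walk the elements, read a metric, and derive a colour from it. The metric name and thresholds are made up for the example.

```python
def risk_colour(score):
    """Map a 0-10 risk metric to a traffic-light colour
    (thresholds are illustrative)."""
    if score >= 7:
        return "red"
    if score >= 4:
        return "amber"
    return "green"

def refresh_metrics(elements):
    """Update each element's colour from its current risk metric."""
    for element in elements:
        element["colour"] = risk_colour(element.get("risk", 0))
    return elements

# A toy model: two elements with a hypothetical 'risk' metric
model = [{"name": "CRM", "risk": 8}, {"name": "HR Portal", "risk": 2}]
refresh_metrics(model)
```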

The more information you get into the repository, the more value it brings. It's good to be able to traverse the portfolio in a number of different ways and represent different things on the same model.

Workflows

Something really cool is workflows, though I haven't used them as much as I would like. Being able to create a workflow in BPMN and then apply it to a model package, so we can have approval workflows for example, is very powerful. We can also use this to get stakeholders to update specific properties, so information can be updated from the browser without even opening a modelling tool.
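In BES these workflows are defined in BPMN rather than in code, but the underlying idea is a small state machine. Here is a minimal Python sketch of an approval flow; the states and actions are illustrative, not a real BES workflow definition.

```python
# Transition table for a hypothetical approval workflow:
# each state maps allowed actions to the resulting state.
APPROVAL_FLOW = {
    "draft":     {"submit": "in_review"},
    "in_review": {"approve": "approved", "reject": "draft"},
    "approved":  {},  # terminal state: no further actions
}

def advance(state, action):
    """Return the next workflow state, or raise if the action
    is not allowed from the current state."""
    transitions = APPROVAL_FLOW[state]
    if action not in transitions:
        raise ValueError(f"'{action}' not allowed from '{state}'")
    return transitions[action]

# Walk a change through the flow
state = "draft"
state = advance(state, "submit")   # draft -> in_review
state = advance(state, "approve")  # in_review -> approved
```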

API Integration

Here the possibilities start to get a bit insane. I was thinking the other day about building a C# app to pull architecture concerns out of Jira and automatically create/update views in BES. But you could do so many things. What about building your org charts by scanning your Active Directory? Or pulling a project roadmap directly from a tool like SharePoint? What if we could model our financials? Currently the BES API supports data enrichment; between the API, scripting, and the other methods I mention today, an imaginative developer can achieve a lot of cool things.
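To make the Jira idea a bit more concrete, here is a sketch in Python. The Jira search endpoint (`/rest/api/2/search`) is part of Jira's real REST API, but the host, the token handling, and especially the element record shape are assumptions for the example; the actual push into BES would go through its API or scripting, which I'm not reproducing here.

```python
import json
from urllib.parse import quote
from urllib.request import Request, urlopen

JIRA_URL = "https://jira.example.com"  # placeholder host

def fetch_architecture_concerns(jql, token):
    """Query Jira's search endpoint for issues matching a JQL filter."""
    req = Request(
        f"{JIRA_URL}/rest/api/2/search?jql={quote(jql)}",
        headers={"Authorization": f"Bearer {token}"},
    )
    with urlopen(req) as resp:
        return json.load(resp)["issues"]

def issues_to_elements(issues):
    """Map Jira issues to model element records.
    The record shape here is illustrative, not the BES API format."""
    return [
        {
            "name": issue["fields"]["summary"],
            "type": "Driver",  # e.g. treat each concern as a driver
            "properties": {
                "jiraKey": issue["key"],
                "status": issue["fields"]["status"]["name"],
            },
        }
        for issue in issues
    ]
```

Keeping the fetch and the mapping separate means the mapping can be tested without network access, which matters once these integrations run unattended.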

Creating these integrations has benefits in a number of directions. When you start to expose different information sources, you can build models that are correct and business relevant; doing this sort of work also teaches us a lot about the data consistency and hygiene of our various information sources.

Summing It Up

When you get to the point where all your organization's different information sources are being pulled into a repository, you can start to connect and traverse them in exciting and interesting ways.

If you are working in an organization that does everything in Excel, PowerPoint and Word, and doesn’t leverage the benefits of technologies like Power BI, I would stop and look at your Enterprise Architecture. Chances are you are spending a lot of time in meetings, and are spending a lot of time looking for the right people and information, rather than focusing on business. Automation doesn’t make people redundant – it enables them to stop doing trivial work so they can focus on things that are important.

It takes time, money, and some developers to truly get the best value out of a tool like BES. The problem I have always seen is the struggle to put together a business case for it. We can save a huge amount of man-hours and communication overhead, but calculating how much is hard before you have taken the time to make the connections and make your organization more traceable through its EA.

Tools like BES are pretty expensive to own, and we can spend a significant amount of time and money customizing them. I would suggest that the value of such a tool is unlocked by how you use it, so it's worth spending some time in training. The tool is nothing without a good EA behind it, with a bit of imagination and a little time and money from committed stakeholders. Do that, and you won't be thinking about your tool spend, because you will be using the tool to unlock your existing information sources and reuse them to create views in exciting new ways, rather than just manually modelling things from unmanaged, stale data sources like PowerPoint.

Information and Security Thinking

When I first started working with the Tieto Office 365 internal initiative, we hadn't made many decisions on how to move forward with implementing a collaboration platform. This blog is about the first thoughts I had back then, which still hold true now.

Information Management

At the core of any business, and any collaboration system, is information; the management and protection of that information is one of the keys to its success. Tieto, like pretty much all companies, has information policies, and it's essential that we adhere to them. Some core things to consider:

  • Information Classification – We have a standard set of classifications, and those classifications determine how we manage information. Anyone is allowed to see public information, whereas confidential information has a controlled access list, for example. Any information we store has a classification, and that has to be identified in our information model. Typically the classification of information is related to the risk of its exposure to various parties.
  • Information Ownership – Information is always owned by someone, and that someone is responsible for its classification, although there may be some mandatory rules an information owner needs to adhere to. It's also important to know there are differences between an information owner and an information author, although in many cases the same person assumes both responsibilities.
  • Information Traceability – Establishing ownership is part of this, but we also need to be able to effectively track or locate information.
  • Information Life-cycle – It's important to understand whether information is current or outdated, and to establish rules around things such as information retention.
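As a small illustration of how classification-driven access rules can look once they reach code, here is a sketch. The classification levels and the rules are made up for the example; they are not Tieto's actual policy.

```python
from enum import IntEnum

class Classification(IntEnum):
    """Illustrative classification levels, ordered by sensitivity."""
    PUBLIC = 0
    INTERNAL = 1
    CONFIDENTIAL = 2

def can_read(document_class, reader_clearance, acl=None, reader=None):
    """Public information is open to anyone with at least public
    clearance; confidential information additionally requires
    membership of a controlled access list."""
    if reader_clearance < document_class:
        return False
    if document_class is Classification.CONFIDENTIAL:
        return acl is not None and reader in acl
    return True
```

The useful property of encoding rules like this is that the policy becomes testable: you can assert what each classification allows instead of hoping every site owner remembers the policy document.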

What this means in real terms is that we need to ensure anyone using our systems can classify information, and that we have mechanisms in place to enforce policy where needed. Discussions started early on over the minimums our internal security team needs in place, but at its core, before we can do anything, we need to ensure our information needs are managed and then add the layers of security on top of that; for example, we need to consider things like multi-factor authentication. Requirements are drawn up by our security team in collaboration with the architects, and in some cases we need to consider modernizing. Our versioning policy is a prime example: on most modern systems we have a simple major/minor approach to version management, and many people are unaware of the formal policy we have at work, because our information systems don't support a version expressed as something like 1.0.1-2D.

Requirements Management

With any kind of architecture engagement, requirements management is important; one of the biggest problems I have had in my current role is keeping the focus at a business layer rather than a technology one.

Security is no exception to this; it's very easy for a security policy to be dictated by the functionality of a tool, and we should be very careful not to let that happen. This is why, with our security team, we have tried to lay out the requirements before even talking technology; even so, I sometimes get the feeling that some of these come directly from a Microsoft manual. It's important that we discuss and balance the requirements, and decide what is and is not in scope. Some things will be mandatory, and some things may not be in scope; that's OK, it can be managed as a risk, and sometimes the business can decide to accept a risk, because business drives security, not the other way round.

Balancing Security

There's a balance. Users in the modern age expect a certain amount of freedom in how they work with information, but at the same time we need controls in place to protect the organisation and its members.

Too much freedom, and you have risks related to information getting into the wrong hands, getting lost, or worse. Too little freedom invites users to find innovative ways to work around the systems you put in place. If I restrict who can access a site, for example, then people will work around it; they may start emailing files around, and suddenly you lose control of where the latest version of your file is, or who has access to it. If I cannot create my own team site, then maybe I will use something else. In such a case, by restricting access we have effectively lost control of access altogether.

We have been very mindful of this from the start of the Tieto project; Tieto has many ways of working, and no one way fits all. When we first started on-boarding users into Office 365, some policy decisions were made: people working with specific customers were not allowed to be on Office 365. When it comes to collaboration, not allowing people on creates a very real problem. Suddenly those customer teams are alienated from the wider Tieto community, which means we either lose our connection to them, or they find a way of working around the mechanisms we have in place. In Tieto, any restrictive policy we put in place is going to impact someone somewhere.

So how do we address this? We already know the information we must keep to have a minimal level of security, but more important is for us to understand our information policies.

Knowing Your Responsibilities

As information owners we all know pretty much what we should and shouldn't do, and to be successful we need a level of trust that our users know both our information policy and what their industry or customer does or doesn't allow. Rather than restrict, we need to educate.

For those users working with customers that do not allow information online, we need a system in place that makes it very easy for them to understand where they are, whether on O365 or on our private internal solution. At the top level we decided to colour code: the site theme for Office 365 should be different from the on-premise one, so that we can immediately see where we publish.

We need to make sure that as part of this project our communications team makes it fairly clear what our responsibilities are.

How This Is Realized In Technology Terms

We implement core content types that are mandatory, and a basic template from which all others are derived, using out-of-the-box SharePoint; we have also taken decisions on things like multi-factor authentication. We then continued a discussion on how and what we need to implement around EMS and other technologies.

These were the things we were considering at the beginning, and they still form important parts of the ongoing work, because outside the Office 365 conversation there is of course also a device management conversation going on.

I hope this gives a little insight into some of the information considerations we had when practically moving to Office 365 & its surrounding technologies.