
Integrations; what you need to ask/know (part one – business case)


In today’s guest post, integration and migration specialist Kevin Ryder shares his advice on what to know in order to have a successful PLM project. Today’s piece is the first instalment of a three-part series on this topic – each piece picking up where the last left off.

Having worked in the PLM sector for more than two decades, I’ve been involved in around 40-50 integrations and migrations of various sizes and scopes. In that time I’ve worked alongside stakeholders at all levels and experienced a wide range of challenges. Drawing on this experience, I want to examine and share all angles of what it takes for a successful PLM project to come to fruition.

Business Case

In today’s ever-expanding working world, there are more and more systems that help us do our jobs. These systems, naturally, have crossover. The typical immediate reaction to this is, “Wouldn’t it be good if these systems talked to each other?” However, on too many occasions suggestions are put forward to link all of these systems when, in fact, one or two subtle integrations are all that is required.

There are many stakeholders here: from the C-level, who have invested heavily in infrastructure and want to show a return on the systems they chose, to IT departments, who ultimately have to support it, to the end users – those whose daily working routines revolve around these systems.

The key questions to consider are: what do you actually want to integrate, why, and what benefits will it provide? The standard answers tend to centre on the flow of data, speeding up the process, and removing dual entry and the errors it can cause. Integration allows you to bring a product to market quicker, as the data flows through your business process (and systems).

The business process is the area that can benefit most from added attention, but it is often overlooked. When you talk about an integration project, people tend to focus on the development of the software. Too often I’ve heard, “This is what we want; let’s integrate A to B.” While that is the starting point, the next questions should be, “How should it flow? Does our process work? Can it be refined/optimised?” I’ve often found that combining the business process review with the development design stage leads to a better overall process and a more efficient integration.

Another area – and probably the defining factor in considering integration – is the budget; this will dictate how much can be accomplished and should also help you to prioritise the areas you require. “Big bang” is not the best approach, and not one I recommend; consider phasing integrations in instead. This allows for continuous enhancements to your business process and systems over time, and spreads the cost across budget windows.

Flow of Data – Review the Process

When reviewing the business process, I like to use the phrase “one source of the truth”: ideally data should flow from A to B to C, and so on. One-way flows are generally the least complicated and the easiest to map out; you move through your process in a logical manner. However, there are very good arguments for bi-directional flows, where you may, for example, send an item off to your ERP system from PLM and receive a code back.

Even simple flows can come with challenges. A prime example: colours are generated in system A and in system B, by different user groups following different processes. Occasionally both are created with the same code, but most of the time there are no matching reference codes whatsoever, even though the two records may in effect be the same colour. Once these are required in PLM you have a problem: how do you link a colour that has no matching reference? This is where refining your business flow helps. Your options are to standardise reference codes (not always possible), to customise the colour systems so each colour carries a unique reference code/attribute, or to enforce “one source of the truth” and only create colours in one system by refining the business process. The option that fits will depend on the systems involved and the restrictions they impose.
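To make the cross-referencing option concrete, here is a minimal sketch in Python – all codes, names and structures are invented purely for illustration – of a mapping table that links both systems’ colour codes to a single canonical PLM colour:

```python
# A minimal sketch of a colour cross-reference table, assuming two source
# systems ("A" and "B") whose codes never match. All codes are illustrative.

# Each canonical PLM colour is linked to the source-system codes it maps to.
CROSS_REFERENCE = {
    "PLM-RED-01": {"A": "RD100", "B": "COL-0042"},
    "PLM-NVY-01": {"A": "NV210", "B": "COL-0107"},
}

def resolve_plm_colour(source_system: str, source_code: str):
    """Return the canonical PLM colour for a source-system code, or None."""
    for plm_code, refs in CROSS_REFERENCE.items():
        if refs.get(source_system) == source_code:
            return plm_code
    return None  # unmatched codes need manual review or a new mapping entry

print(resolve_plm_colour("A", "RD100"))     # -> PLM-RED-01
print(resolve_plm_colour("B", "COL-9999"))  # -> None (no matching reference)
```

In practice a table like this would live in a database or in the PLM system itself, with unmatched codes routed to a review queue rather than silently dropped.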

Identifying ownership of the data should be the goal: ideally there should be only one owner/creator in terms of a system, with everything else flowing on through the process. A side note: I once faced an issue where the two colour systems were both old legacy systems, which meant there was no way to customise either and no way to remove one from the process. The only solution was to create separate palettes with an attribute identifying the source system – not ideal, but a working solution.

To preserve “one source of the truth”, a simple technique I’ve used is to lock down data. Once it reaches a certain level/status and has ‘moved’ on to the next system, the data can be locked down, avoiding erroneous updates that go uncaptured. There are ways around this, such as continuously synchronising the systems, but that can create a large amount of unwanted data flow, so it should be considered carefully. Having a lock-down with an administrator override forces you to question whether changes are really necessary, as the item has already moved on to the next phase of the process. For bi-directional data it is also worth thinking about read-only fields where they are for reference only.
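As a rough illustration of the lock-down idea, a status check along the following lines – the statuses and names are invented for the example – captures both the lock and the administrator override:

```python
# A minimal sketch of status-based lock-down, assuming a simple item record
# with a lifecycle status; statuses and names are illustrative only.

LOCKED_STATUSES = {"RELEASED", "SENT_TO_ERP"}  # data has moved downstream

def can_update(item_status: str, is_admin: bool = False) -> bool:
    """Block edits once an item has moved on, unless an administrator overrides."""
    if item_status in LOCKED_STATUSES:
        return is_admin  # the override forces the question: is this change necessary?
    return True

assert can_update("IN_DEVELOPMENT") is True
assert can_update("RELEASED") is False
assert can_update("RELEASED", is_admin=True) is True
```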

Technology & Architecture

Although technology and architecture are often seen as a separate task, I believe they are integral as you map the business process and flow of data. Building a one-off integration may well be the solution, but you should plan ahead and consider all your options; sometimes, though, budget or policy will override the best-laid plans.

When designing the integration the areas I tend to focus on are:

  1. Are there IT policies in place? I’ve worked with several companies that use message buses, for example, where all the data flows through these channels. That way the data can be picked up by another system – maybe not now, but in the future – and your output and design are already defined.
  2. Is this a bespoke system? Is it so heavily customised to your client that it can’t be used elsewhere? Or can you work with the client and the other software vendor to start to build a generic integration that will benefit all parties and allow greater collaboration?
  3. Clients may have read white papers and have an idea to use a certain technology that doesn’t fit with your software. Having experience and examples to back up your arguments helps here. I’ve been forced down a route that wasn’t right, and in the end it came back round to the original solution – but not without a lot of heartache and cost involved. It’s an experience I learned a lot from.
  4. How should the data flow and what restrictions do we have?
    • Is it real-time or batch processing? This helps decide whether to use an ETL (Extract, Transform and Load) tool for batch, or APIs/Web Services for real time.
    • Security can play a part: Web Services tend to be contained locally, application to application, and ETLs run locally too, whereas APIs can be exposed externally – so you have to consider your IT policy as well.
    • Formats – legacy systems can prove very inflexible and dictate the output, such as XML, staging tables, flat files, etc.
  5. Provide flexibility. Decide the parameters under which you want to extract data (this is geared more towards batch processing), but allow for a UI to change filters such as seasons/divisions, run times, and file locations where necessary, and allow for the integration to be switched on/off. Providing this configurability puts the user in control and limits regular development updates; I’ve seen so many custom extracts/reports hardcoded to a specific set of parameters, forcing the project to be revisited every season or year (see the configuration sketch after this list).
  6. Consider how to deal with downtime. If you’re using an ETL you can easily factor in downtime of either system: it works on extraction against a set of parameters, so if PLM is down it runs when PLM is back, and if the ERP system is down it can pick up the extracted data from the files/staging tables later. It gets more complex on real-time systems: when you save your record, where does your data go if services are down, or the ERP is down? Handshaking is one method I’ve used successfully – keeping the transaction from being processed until the receiving party has acknowledged receipt. Another option is to build a local stack (sketched after this list). There are many options, but this needs to be considered in the design/cost phase as it can have a serious impact further down the line.
  7. Error tracking should always be considered, and how it is achieved can vary dramatically.
    • Batch processing – I always prefer to keep error tables linked via the records’ unique codes; reports can be created against these and are invaluable in identifying where issues occur. Where flat files are used I also create duplicates in a “safe” location, because processed files are typically deleted, which makes errors harder to trace (see the error-logging sketch after this list). For good housekeeping, the safe area should be purged on a regular basis.
    • Real time – updating the user is the most efficient option: provide meaningful information so they can act on it straight away.
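On point 5, here is a minimal sketch of what such configurability might look like, assuming the parameters live in a simple JSON file that a user (or a small UI) can edit; the file name and keys are invented for illustration:

```python
# A minimal sketch of a configurable batch extract: parameters live in a
# JSON file rather than being hardcoded. Names and keys are illustrative.

import json

DEFAULTS = {
    "enabled": True,             # allow the integration to be switched on/off
    "seasons": ["SS25"],         # filters the user can change per run
    "divisions": ["Menswear"],
    "run_time": "02:00",         # scheduled run time
    "output_dir": "/data/extracts",
}

def load_config(path: str = "extract_config.json") -> dict:
    """Merge user settings over defaults so new options have safe fallbacks."""
    try:
        with open(path) as f:
            return {**DEFAULTS, **json.load(f)}
    except FileNotFoundError:
        return dict(DEFAULTS)

config = load_config()
if config["enabled"]:
    print(f"Extracting {config['seasons']} / {config['divisions']} "
          f"to {config['output_dir']}")
```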
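On point 6, the “local stack” option might look something like the following sketch, where transactions are queued locally and only removed once the receiving system acknowledges receipt (the handshake). In a real integration the queue would be persisted to disk so nothing is lost on restart; everything here is illustrative:

```python
# A minimal sketch of a local queue for real-time downtime: queue first,
# then drain while the receiving system acknowledges each transaction.

from collections import deque

pending: deque = deque()

def send_with_handshake(transaction: dict, erp_is_up: bool) -> bool:
    """Stand-in for a real call; True only if the ERP acknowledged receipt."""
    return erp_is_up

def save_record(transaction: dict, erp_is_up: bool) -> None:
    pending.append(transaction)            # queue first, so nothing is lost
    while pending:
        if send_with_handshake(pending[0], erp_is_up):
            pending.popleft()              # acknowledged: safe to remove
        else:
            break                          # ERP down: retry on the next run

save_record({"item": "STYLE-001"}, erp_is_up=False)
print(len(pending))  # 1 – held locally until the ERP is back
```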
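And on point 7, a rough sketch of batch error tracking, assuming errors are logged to a table keyed by each record’s unique code and processed flat files are duplicated to a safe area before deletion; the schema and paths are illustrative:

```python
# A minimal sketch of batch error tracking: an error table keyed by the
# record's unique code, plus a "safe" duplicate of each processed flat file.

import shutil
import sqlite3
from pathlib import Path

db = sqlite3.connect("integration.db")
db.execute("""CREATE TABLE IF NOT EXISTS errors
              (unique_code TEXT, stage TEXT, message TEXT,
               logged_at TEXT DEFAULT CURRENT_TIMESTAMP)""")

def log_error(unique_code: str, stage: str, message: str) -> None:
    """Record the failure against the unique code so reports can trace it."""
    db.execute("INSERT INTO errors (unique_code, stage, message) VALUES (?,?,?)",
               (unique_code, stage, message))
    db.commit()

def archive_file(flat_file: Path, safe_dir: Path) -> None:
    """Keep a duplicate in the safe area before the original is processed."""
    safe_dir.mkdir(parents=True, exist_ok=True)
    shutil.copy2(flat_file, safe_dir / flat_file.name)
    # remember: purge safe_dir on a regular schedule for good housekeeping

log_error("STYLE-001", "colour lookup", "no matching reference code")
```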

Now that we have made our business case, reviewed/created the business process and designed the architecture, part two of this series will delve into stakeholders, planning/timelines, and testing.
