Integration and migration specialist, Kevin Ryder, shares the second instalment in his exclusive series today. The first instalment centred on the PLM business case, technology and architecture; this second instalment delves into the importance of key stakeholders, timelines, and test environments.
Following on from the first article, we have now made the business case, reviewed or created the business processes, and designed the architecture.
Every software project has stakeholders. Too many can cause issues: everyone has differing opinions and too much gets crammed into the scope. Too few, and when the software is delivered not everyone buys into it, because it was not built to their requirements. This is nothing new to integration projects; they behave just like any other software project. However, I have found that working with a group of five key stakeholders is the optimum. How much involvement they have varies from project to project, but keeping these key members in the loop helps to deliver a successful project.
- C-Level – the person who will ultimately sign off on the project and who generally controls the budget.
- Team leader/User admin – a representative of the user base that the integration will affect directly.
- IT Department – responsible for supporting the integration locally. This group is so often overlooked, yet we need their "buy-in" for the project to succeed.
- Third party vendor – I’ve experienced the good and the bad here. Good vendors work side by side with you, testing output as soon as it is produced and allowing you to tweak and retry until it fully integrates; they support and guide you through the nuances of their system and the pitfalls you may encounter. On the flip side, I’ve experienced vendors who did not engage and, when approached, simply quoted consultancy rates despite already charging the client as well.
- PLM Vendor (PM) – the project manager on the PLM vendor’s side.
Integration software projects are, for the most part, like any other software project, but there are some subtle differences. When building your timeline it is not just your own software that is involved: commitment from clients, developers and third parties – for any work or testing required on their side – needs to be taken into account. This is why it is vital to get buy-in from the third party vendor; their timelines may well differ from yours as their scope and commitments differ.
I always recommend building an integration on a stable PLM system, meaning one that has been functionally tested and signed off. PLM by its nature can change: work can be ongoing, new functionality may be added, or it could be a brand new installation. I have worked with both stable and changing systems and can easily say the former is best practice.
Having PLM installed and configured, or having new functionality added (often as part of the integration requirements), is nothing new. Trying to design and build an integration on a moving platform, however, is not ideal. Configurations change, functionality gets updated once tested, and your integration scope shifts with it. Working to a timeline and project plan in these circumstances is certainly not best practice; you would need to allow for many risk factors, and the timeline would ultimately be little more than a rough guideline. I liken it to building a house on solid foundations: get the underlying architecture correct and stable before moving into the integration phase.
Test Environments & Boundaries
Every client I’ve ever worked with has had at least one test environment for PLM, which makes it easy to test new functionality and configuration deployments. With some ERP or third party systems, however, whether due to their nature or simply their size, a test environment is not always available.
As previously mentioned, there are several options for integrations: batch processing or APIs/Web Services. A batch extraction can be tested locally by checking its output; you are not directly reliant on the ERP or third party system to test the integration. The problem arises with Web Services or APIs: you now require a test system to talk to and upload the data into. If no test system is available there are ways around this. A dummy receiver can be created to accept the data and act as the connection piece; however, this requires additional work and still does not give you real-world testing of the actual end system receiving the data. This is even more complicated if data is flowing into PLM.
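To illustrate the dummy-receiver idea, here is a minimal sketch in Python: a stand-in HTTP endpoint that accepts uploads so the integration can be exercised when no vendor test system exists. The port, payload shape and response format are all illustrative assumptions, not from any real vendor API.

```python
# Minimal "dummy receiver": a stand-in endpoint that accepts JSON uploads
# so an integration can be tested without the real ERP/third-party system.
# All names (port, payload fields, response body) are hypothetical.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


class DummyReceiver(BaseHTTPRequestHandler):
    """Accepts POSTed integration payloads and records them for inspection."""

    received = []  # captured payloads, so a test run can verify what arrived

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        try:
            payload = json.loads(body)
        except json.JSONDecodeError:
            self.send_response(400)  # reject malformed uploads
            self.end_headers()
            return
        DummyReceiver.received.append(payload)
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps({"status": "accepted"}).encode())

    def log_message(self, *args):
        pass  # silence per-request logging


def serve(port=8080):
    """Run the dummy receiver until interrupted."""
    HTTPServer(("127.0.0.1", port), DummyReceiver).serve_forever()
```

As noted above, this only proves the integration can produce and transmit data; it cannot exercise the real system's validation rules, which is why vendor engagement remains essential.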
Uploading to the end system creates some interesting issues of its own. As a PLM vendor you should not be expected to know the receiving system and its back-end architecture; therefore, for any connection, be it from a batch extraction or an API/Web Service, the upload functionality has to be made available or provided by the third party vendor.
Imagine a scenario whereby data is pushed into another system without using a validated portal: who is then responsible for supporting that system? Even knowing basic information, such as which fields are mandatory, requires in-depth knowledge of the other system. This brings me back to the stakeholders; it is vital to have their support on board so that the correct technical functionality and knowledge are available. Most support contracts I have seen are invalidated if the back end is tampered with by anyone other than licensed sources, which generally means the supplier’s own team or certified partners.
I have experienced this first-hand, where a client developed their own integration into a PLM system. Without full system knowledge it created numerous issues, such as broken data integrity, missing codes and unpopulated application-mandatory fields. The downtime and cost incurred to get the system back to a running state can be considerable.
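One simple safeguard against the mandatory-field problem above is to validate each record against the target system's known requirements before uploading anything. A minimal sketch, assuming a hypothetical set of field names (no real PLM or ERP schema is implied):

```python
# Sketch of a pre-upload check for mandatory fields, showing why knowledge
# of the receiving system matters. The field names are hypothetical.
MANDATORY_FIELDS = {"part_number", "description", "uom", "lifecycle_state"}


def validate_record(record: dict) -> list:
    """Return a list of problems; an empty list means the record may be sent."""
    missing = sorted(MANDATORY_FIELDS - record.keys())
    problems = [f"missing mandatory field: {f}" for f in missing]
    # Also flag fields that are present but empty, another common upload failure.
    present = sorted(MANDATORY_FIELDS & record.keys())
    problems += [f"empty value for: {k}" for k in present if record[k] in ("", None)]
    return problems
```

Running such a check before every push is cheap insurance, but it still depends on the third party vendor telling you what the mandatory fields actually are; guessing at them is exactly how the data-integrity failures described above happen.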
Don’t miss the final instalment of this series, where I discuss the recent surge in moving to the cloud, and what this means for integration.