
The Technical Aspects of PLM – An Introduction to Hosting


In a market where capabilities, points of integration and marketing gloss dominate, the technical aspects (or “nuts and bolts”) can be relegated to the status of secondary concerns. Too often, complex principles are assumed to be common knowledge, when in fact that assumption can be the source of significant misunderstandings between customers and suppliers. In this – the first instalment of a series designed to expose and explain the nuts and bolts of PLM – John Jobson of the Product Development Partnership (PDP) introduces our readers to hosting and explains what the right (or wrong) choices in this area can mean for their businesses.

As a Consultant with the Product Development Partnership, I deal with the technical aspects of both core and extended PLM for retail, brands, footwear and apparel on a daily basis. I have worked with some of the world’s leading solution providers, for clients in a variety of sectors – from high fashion and careerwear all the way to footwear and accessories – and I have often found that basic principles are neglected in the rush to scope, implement and go live. When working with PLM (as with virtually any enterprise-level system) the most basic currency of all is “Master Data”: images, measurements, materials, tests, styles, product types, sizing and tech packs are the lifeblood of product development. And yet end users are often confused as to where their valuable Master Data resides, how they access it, and what, precisely, “hosting” means.

So, what is hosting?  In its broadest possible sense, hosting means the arrangements put in place for the storage and authorised retrieval of your “Master Data”.  The same definition holds true whether we are talking about a small website, a CRM system, a PDM system, or a full PLM solution.  No matter that their purposes are completely different, each of those systems relies on centralised storage and accessibility.  Master Data is only useful if it can be placed and then reliably found in the same location, and hosting in its essence is the place or machine where it resides.
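To make that definition concrete, here is a deliberately tiny sketch of the two operations every host performs for your Master Data: storing it in one agreed location, and returning it only to authorised callers. It is illustrative only; the folder path and token are hypothetical stand-ins for the far more robust storage and authentication infrastructure a real PLM host provides.

```python
# A toy illustration of hosting: centralised storage plus authorised
# retrieval. The path and token below are hypothetical placeholders.
from pathlib import Path

MASTER_DATA = Path("/srv/master_data")   # the one agreed location
API_TOKEN = "change-me"                  # stand-in for real authentication

def store(name: str, content: bytes) -> None:
    """Place a piece of Master Data where everyone can find it."""
    MASTER_DATA.mkdir(parents=True, exist_ok=True)
    (MASTER_DATA / name).write_bytes(content)

def retrieve(name: str, token: str) -> bytes:
    """Return Master Data only to callers presenting valid credentials."""
    if token != API_TOKEN:
        raise PermissionError("not an authorised user")
    return (MASTER_DATA / name).read_bytes()

if __name__ == "__main__":
    store("style-1234-techpack.txt", b"fabric: cotton twill, 240gsm")
    print(retrieve("style-1234-techpack.txt", token=API_TOKEN))
```

Everything that follows – local servers, colocation, dedicated, virtual and cloud hosting – is really just a different answer to where those two operations run, and who is responsible for keeping them running.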

As many would agree, hosting sits at the core of any PLM implementation – whether it’s for the smallest of teams in a single office, or on behalf of a large, multinational corporation seeking to implement global collaboration. Whatever the size or scope of the implementation, the safety and integrity of the company’s data is a primary concern. The most fundamental choice when it comes to hosting arrangements, therefore, is whether end users need to access that data from just a single location, or whether the company requires a multitude of users to have access to it from around the world. The two potential outcomes of this choice are: a company purchasing and installing its own physical server and hosting its data locally, or a company approaching one of the many worldwide hosting providers who compete to offer flexible off-site storage at a reasonable cost.

Neither option is objectively “better” than the other; each has its own advantages and disadvantages depending on the unique needs of the company in question.

Operating a local server (with ownership, administration and maintenance managed in-house) can be a daunting prospect. The range of customisation options and configuration parameters can be overwhelming, even for experienced network administrators. Originally this method of hosting was only financially viable for larger companies, particularly since their evolving needs required dedicated, skilled technicians to maintain and develop the hosting environment. This has changed in recent years. The growing availability of relatively cheap components, and the continued development of stable, open-source, Linux-based operating systems (of which Ubuntu is a prominent example), have brought down both the cost of operating a local server and the level of expertise it demands. Today, almost anybody with a little technical expertise and experience of network administration can set up a low-powered server at relatively minimal cost.

The freedom and flexibility of setting up a low-cost server in-house, however, is counterbalanced by the need to keep a server administrator on staff, as well as concerns about security and contingency planning. It is easy to imagine a local server, administered in-house, that lacks (through inexperience or lack of foresight) the kind of rigorous disaster planning that off-site hosting provides as standard. Unless proper mirrors and off-site backups are constructed and maintained, the loss of that single server could lead to the irretrievable loss of style, product and supplier information. Data loss on that scale would be damaging for even the largest of companies, and is something from which a smaller organisation may never recover. Choosing local hosting, then, requires more than just the server itself: it must be accompanied by a thorough analysis of data contingency requirements and backup strategies.
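To give a flavour of what even the simplest off-site backup strategy involves, the sketch below archives a server’s data directory and copies the archive to a second location. The paths are hypothetical (the off-site destination is assumed to be a mounted remote share), and a production setup would add encryption, rotation and regular restore testing on top of this bare minimum.

```python
# A minimal nightly backup sketch: archive the data directory, then
# copy the archive off the server. All paths here are hypothetical.
import shutil
import tarfile
from datetime import date
from pathlib import Path

DATA_DIR = Path("/srv/master_data")      # what we cannot afford to lose
OFFSITE = Path("/mnt/offsite_backups")   # e.g. a mounted remote share

def nightly_backup() -> Path:
    OFFSITE.mkdir(parents=True, exist_ok=True)
    archive = Path(f"/tmp/master_data_{date.today():%Y%m%d}.tar.gz")
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(DATA_DIR, arcname=DATA_DIR.name)
    # Copy the archive off the machine, so losing the server
    # does not also mean losing the data.
    return Path(shutil.copy2(archive, OFFSITE / archive.name))

if __name__ == "__main__":
    print(f"Backup written to {nightly_backup()}")
```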

Another viable option, if a company needs the freedom to choose its own hardware and software but is concerned about the responsibility of day-to-day hardware maintenance and disaster planning, is what is called “colocation”. Under a colocation arrangement, the company chooses its own hardware configuration and specifies the software installed on its server, but the server itself resides with a hosting provider. The company remains responsible for maintaining that off-site server, but associated services like uninterruptible power supplies, climate control and internet connectivity remain the responsibility of the provider. As with the local server option, colocation provides a level of freedom and customisation that is limited only by the capabilities of the company’s technical administration team.

The third choice is that of entirely off-site hosting, with all responsibility for hardware and ongoing administration delegated to a hosting provider.  This removes the responsibility (and also the freedom) inherent in buying or building a local server – handing it to a hosting provider who is then solely accountable for the server in terms of uptime, development and maintenance.  In this scenario, the choice of components will be limited to a small range of set configurations specified by the hosting company.  These generally come in low, midrange and high-end configurations, each with some limited choice of operating system and pre-installed software. These can then be customised to the same extent as the local and co-located options.

There is, however, some further sub-division where entirely off-site hosting is concerned.  Typically, a hosting provider will offer several different options: dedicated hosting, virtual private servers, and cloud hosting (sometimes referred to as “grid” hosting).

Dedicated hosting, as the name suggests, means a physical server dedicated solely to a single company and its tasks. Dedicated servers are the most expensive of the three options (servers occupy slots in what are called “racks” in the physical data centres operated by the hosting provider, and dedicating rack space to a single customer removes that server and that slot from use by anybody else), and in many cases they are unnecessary. Many businesses have concluded that they simply cannot afford, or do not want to risk, the cost of renting and maintaining a physical server dedicated entirely to them. This is where the more affordable options of virtual private servers and cloud-based hosting come in.

As the name suggests, a virtual private server is not a single physical server but rather a dynamic software entity hosted (alongside others) on a dedicated physical server. Virtual servers represent a fully functioning proxy of a system architecture, running in parallel with other such virtual servers on a single physical machine. They are self-contained in almost every sense; the only interaction between the separate entities is the fact that they operate on the same machine. One of the primary benefits of a virtual server is that it can be quickly and easily transferred to another physical machine and resume functioning just as it would have on its original host. Where hardware problems or maintenance can cripple single-server environments, a virtual server can continue operation with barely any noticeable interruption to service, as the virtual environment is temporarily or permanently migrated to a new physical host. This independence also brings with it the associated benefit of being able to upgrade the server environment almost on the fly. Whereas a physical machine will always be limited by its hardware, a virtual server is limited only by the physical server it is running on. This means that a virtual server can have its specification increased from a low-end configuration to a high-spec one (and back again), with changes made in response to the company’s requirements at any particular time.
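To illustrate what that kind of on-the-fly resizing looks like in practice, the sketch below submits a resize request to a hosting provider’s API. The endpoint, plan names and token are all invented for illustration; every real provider exposes its own, different interface, so treat this as a shape rather than a recipe.

```python
# Resizing a virtual private server through a provider's API.
# The endpoint, plans and token below are hypothetical: each real
# provider documents its own interface.
import json
import urllib.request

API_BASE = "https://api.example-host.com/v1"   # hypothetical provider
API_TOKEN = "change-me"                        # hypothetical credential

def resize_vps(server_id: str, plan: str) -> dict:
    """Ask the provider to move a virtual server to a new plan."""
    payload = json.dumps({"plan": plan}).encode()
    req = urllib.request.Request(
        f"{API_BASE}/servers/{server_id}/resize",
        data=payload,
        headers={
            "Authorization": f"Bearer {API_TOKEN}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Scale up ahead of a seasonal peak; scale back down afterwards.
    resize_vps("vps-1234", plan="high-spec")
```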

Cloud (or “grid”) hosting is similar in essence to a virtual server environment, but instead of running on just one physical server, the virtual server or application is distributed across a range of systems. This is commonly referred to as distributed computing, and because the workload draws on a portion of each machine’s processing power (and adapts as necessary if the link to one of those machines is broken), it offers exceptional reliability. Unlike dedicated hosting, the physical machines on which the cloud-based server runs can be of any specification, and each machine can contribute to the operation of several different environments. This combination creates a very reliable system at a very low cost. As with the scalability of virtual servers, a cloud-based environment can be adapted as needed: speeds can be increased simply by adding more machines to the distributed system, or by “pushing” the current set further – allocating them more processing time and increased use of resources such as RAM (Random Access Memory). As a result, cloud-based hosting can deliver very close to 100% uptime, albeit without the flexibility that comes with owning and managing a physical server.
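The redundancy principle behind that reliability can be sketched in miniature on a single machine. In the toy example below, a pool of worker threads stands in for the machines in a grid: if a “node” fails while running a task, the work is simply resubmitted to the survivors. A real cloud platform does the same thing transparently, at the level of whole virtual servers, which is what keeps interruptions invisible to end users.

```python
# A toy model of grid redundancy: if a "node" fails while running a
# task, the task is retried on the remaining nodes. Worker threads
# stand in for physical machines here.
import random
from concurrent.futures import ThreadPoolExecutor

def run_on_node(task: str) -> str:
    """Pretend to run a task on one machine; links sometimes break."""
    if random.random() < 0.3:                 # simulate a lost node/link
        raise ConnectionError(f"node lost while running {task}")
    return f"{task}: done"

def run_with_redundancy(task: str, pool: ThreadPoolExecutor,
                        retries: int = 5) -> str:
    for _ in range(retries):
        try:
            return pool.submit(run_on_node, task).result()
        except ConnectionError:
            continue                          # reassign to another node
    raise RuntimeError(f"{task} failed on every node")

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=4) as grid:
        for t in ("render-image", "run-test", "index-styles"):
            print(run_with_redundancy(t, grid))
```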

Ultimately, the right hosting solution is the one that best fits your unique business requirements – whether that means a server in your own office, in a colocation facility, or in the cloud.

 

– John Jobson is a Consultant for the Product Development Partnership.

 

John’s introduction to hosting represents the start of an ongoing series here at WhichPLM, where experts in a particular field set out to dispel misconceptions about certain aspects of PLM and ensure that our readers are able to make informed choices at every stage of the extended PLM selection, configuration and implementation process.

