
Closing the data ‘black hole’ in high volume visual content production



Eric Fulmer, VP Operations & Strategic Growth at Capture Integration, shares his second exclusive article with WhichPLM. Following his first guest article with us, this piece explores what Eric refers to as the “black hole” of content production: visual content production is an increasingly data-driven process, yet it routinely fails to link critical data with the assets that data describes.

Discovering the Black Hole

For all the recent changes technology has brought to processes, from industrial manufacturing to hailing a cab, there are some areas where it has made surprisingly little impact. When we look at photography and video production, the biggest change in the last century was (of course) the shift from film to digital. The digital revolution was certainly transformative in its time, but that was more than a decade ago.

It’s truly surprising how little has changed in high volume visual production since the death of film. In 2016, many brands and studios still:

  • Identify product samples with handwritten tags and track them with manually updated spreadsheets
  • Work from manually created spreadsheet “Shot Lists” that are distributed to Sets and marked off with colored markers
  • Communicate via unstructured emails, phone calls and “tribal knowledge” that often results in last minute “fire drills”
  • Manually enter complex filenames from that printed Shot List to identify each captured image or video clip
  • Capture content utilizing models who have contractual usage rights associated with their appearance, while failing to embed those usage-rights limits within the assets themselves (potentially exposing their organization to millions of dollars in liability)
  • Process images and videos with essentially zero “relevant” metadata attached to them, making the filename the only “link” identifying each asset (yet a filename can be changed by anyone who handles the file during the production process; see the tagging sketch after this list)
  • Manually rename and copy files from local systems to server and back again as retouching, approval and other workflow steps progress
  • Deliver content to downstream recipients that must be painstakingly “tagged” with data that already exists in corporate systems (such as PLM/PIM and ERP)
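To make those last points concrete, below is a minimal sketch of tagging a “smart asset” at the point of capture, using the widely available ExifTool utility to embed shot-list data directly into a file’s XMP, so the data travels with the asset no matter how the file is renamed. The shot-list record, the field values, and the file path are invented for illustration, not taken from any specific brand’s schema.

```python
# Minimal sketch: embed shot-list data into an asset's XMP metadata
# via ExifTool (must be installed). All values are hypothetical.
import subprocess

def tag_asset(path, record):
    """Write key shot-list fields into the file itself, so the data
    survives renaming, copying, and hand-offs between teams."""
    subprocess.run([
        "exiftool", "-overwrite_original",
        f"-XMP-dc:Title={record['style_number']}",
        f"-XMP-dc:Rights={record['usage_rights']}",
        f"-XMP-xmpRights:UsageTerms={record['usage_terms']}",
        f"-XMP-iptcExt:PersonInImage={record['model_name']}",
        path,
    ], check=True)

tag_asset("SKU12345_front.tif", {
    "style_number": "SKU12345",
    "usage_rights": "Licensed: North America web only",
    "usage_terms": "Expires 2017-06-30",
    "model_name": "Jane Doe",
})
```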

I spent more than 20 years selling the most advanced digital capture hardware and software to large production studios, but it took a long time for me to realize how little efficiency was being achieved in the overall production flow, despite continual technology advances in individual tools. The “gaps” had to do with how knowledge and information (data) became trapped in silos within the process. This led to a realization that there is a data “black hole” in the majority of high volume visual content production processes.

There is typically significant data aggregation and management “up front” in order to determine who, what, when and where production will happen, and there is significant data in other corporate systems about the lines/assortments, products, delivery requirements from channels/partners, creative input from various team members, talent information (such as contracted usage rights for models) and much more. But that data is typically “lost” within the visual content production process. So, “dumb” assets are dumped into the many receiving channels that are desperate for “smart” assets.

Not my DAM Problem

As my professional roles shifted from hardware-centric to software-centric, I initially thought the answer was a three-letter acronym that has been pitched as the Holy Grail of this problem for nearly 20 years: DAM. Among the seemingly endless buzzwords produced by technology and marketing gurus, “Digital Asset Management” may be the most overused and least understood of them all.

The story seems the same every time. When the significant problems associated with an overload of “dumb” visual assets are recognized, the first response is: “We need a DAM.” I don’t dispute the conclusion: every organization with large numbers of assets to manage and leverage does need a DAM. My issue is with the assumption that simply having a DAM solves the problem of “dumb” assets. It doesn’t. It just creates a common repository for all the “dumb” assets to live in. And the fantasy many brands tell themselves is that their already overworked staff will somehow find the time to go in and manually “tag” those dumb assets to make them “smart.” It just doesn’t happen. That is not only because of the painful nature of tagging four assets with one set of data, then finding the six assets that require another set of data; the limited knowledge of any given user also means they can only tag the data they know. So now we have five different users individually tagging small groups of assets with their own “slice” of data? This approach amounts to madness at any kind of scale.


Instead, I see two primary paths that clients take to address this problem via DAM:

  1. Blame the DAM search engine and go off to find a magical DAM that will make their dumb assets behave like smart ones. It’s a fool’s errand. If there’s no metadata, the DAM can’t magically create it. Dumb assets are not the DAM’s fault and many DAMs don’t offer an easy fix. Dumb assets are the fault of the production process that created them.
  2. Integrate the DAM with the PIM/PLM to attach product data automatically based on asset filename. This tends to be an expensive and complex process that rarely pans out the way the client hoped (the sketch following this list illustrates the approach and its fragility). Even in the best-case scenario, where the integration is done properly and product data is attached reliably, a lot of critical data exists outside the PIM/PLM. For example, from a corporate cost/benefit standpoint, it is far more important for many fashion brands who shoot with models to address the usage rights of assets (which, if mishandled, can cost the brand millions in agency fees), and the PIM/PLM is not going to answer the question of which model was used in the photo shoot.
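Here is a rough sketch of that filename-matching approach and its central weakness. The filename pattern, the inline PIM records, and the field names are all hypothetical stand-ins; a real integration would query a live PIM, but the fragility is the same.

```python
# Hypothetical sketch of filename-based DAM-to-PIM matching.
import re

# Stand-in for a PIM export. Note there is no model or usage-rights
# data here, because the PIM/PLM does not hold it.
PIM_INDEX = {
    "AB123456": {"style": "AB123456", "name": "Quilted Jacket"},
}

def product_for(filename):
    """Derive a product key from the filename and look it up.
    Anyone who renames the file silently breaks this link."""
    m = re.match(r"(?P<style>[A-Z]{2}\d{6})_", filename)
    return PIM_INDEX.get(m.group("style")) if m else None

print(product_for("AB123456_front.tif"))      # matched to product data
print(product_for("hero_shot_final_v3.tif"))  # renamed file: link lost, None
```

Even when the pattern matches, the asset still carries nothing about the model in the shot or the usage terms attached to her contract.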

In 2016, we should expect every asset that moves through a production process to bring along the critical descriptive data needed by downstream teams to leverage that asset. Anything less is placing an undue burden on those who need the assets, but who share little or none of the creative production team’s “tribal knowledge” of what that asset may represent.

Bridging the Gaps

The process of closing this “black hole” requires a deep understanding of the real-world pain points related to high volume visual content production, and how existing tools must be integrated into the workflow process.

One key is the ubiquitous nature of a single software tool for photo capture: Phase One Capture One (full disclosure: I worked for Phase One from 2011-2012). It has been known for years that Capture One has the best tethered shooting workflow (regardless of camera), and many believe it also has the best RAW image processing algorithms (regardless of camera). However, Capture One typically functions as just another “silo” in the photo production workflow, not connected to any other tools or data sources.


It took one of the world’s top Phase One Partners, working in conjunction with Phase One, to develop a specialized “plug-in” capability for Capture One to provide a critical piece of the puzzle.

However, just being able to control Capture One isn’t the end game. The key to transforming the production process is data, which has to come from somewhere. Unfortunately, each client has a patchwork of different systems where visual production data is housed, and many use primarily spreadsheets to aggregate the data from these disparate corporate systems and the “tribal knowledge” of multiple production teams.

So enabling a “single source of truth” for all that production, product, sample, grouping, shot, Set list, and Set data is critical to transforming the photo production process, but there is no such thing as a “one size fits all” structure for that data. And not only is each client data schema unique, but the Roles and Permissions, business rules (client A requires a different number of default shots for products in Department 2 vs. Department 7) and process steps vary as well.

The biggest mistake I see from workflow technology vendors is defining a single database and workflow structure to accommodate many clients; I believe that approach is doomed to fail. Rather, a highly configurable database toolset is essential for adapting the “front end” of the process to work for many different clients.
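As an illustration of what “highly configurable” means in practice, here is a small sketch that expresses per-client business rules as configuration data rather than hard-coded logic. The client names, departments, and shot counts are invented for illustration.

```python
# Sketch: per-client rules live in configuration, not in code.
# Clients, departments, and counts below are purely illustrative.
CLIENT_CONFIG = {
    "client_a": {
        "default_shots": {"dept_2": 4, "dept_7": 6},
        "required_fields": ["style", "colorway", "usage_rights"],
    },
    "client_b": {
        "default_shots": {"*": 5},  # one rule for every department
        "required_fields": ["sku", "channel"],
    },
}

def shots_for(client, department):
    """Resolve the default shot count, falling back to a wildcard."""
    rules = CLIENT_CONFIG[client]["default_shots"]
    return rules.get(department, rules.get("*", 1))

print(shots_for("client_a", "dept_7"))  # 6, per client A's rule
print(shots_for("client_b", "dept_3"))  # 5, from the wildcard default
```

Adding a new client then becomes a configuration exercise rather than a database redesign.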

In addition, visual content production is not always done inside the corporate firewall. Increasingly, it is a mobile process that happens on beaches, urban streetscapes, or in a contracted studio that may be one mile or a thousand miles from the corporate office. This means cloud is the way to go for the central database source. But the actual capture work is done locally, and often “offline” in remote locations.

So, a hybrid cloud/local architecture is key.

This hybrid architecture, linking a stable, secure, configurable, and agile cloud database platform as a “front end” with synchronization to local capture systems as a “back end,” forms the foundation for creating “smart assets” at the point of capture.
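A minimal sketch of the offline-tolerant half of that pattern follows: capture events are journaled locally on Set and replayed against the cloud database once a connection is available. The journal filename and the upload call are hypothetical stand-ins for whatever the actual platform provides.

```python
# Sketch of local journaling with deferred cloud synchronization.
# The file name and upload mechanism are assumptions for illustration.
import json
import pathlib

QUEUE = pathlib.Path("pending_sync.jsonl")

def record_capture(asset_id, metadata):
    """Append each capture event to a local journal, so nothing is
    lost while the Set is offline on a beach or in a remote studio."""
    with QUEUE.open("a") as f:
        f.write(json.dumps({"asset": asset_id, "meta": metadata}) + "\n")

def sync(upload):
    """Replay the journal against the cloud database once online;
    `upload` stands in for the real cloud client call."""
    if not QUEUE.exists():
        return
    for line in QUEUE.read_text().splitlines():
        upload(json.loads(line))
    QUEUE.unlink()  # clear the journal only after a successful replay

record_capture("AB123456_front", {"set": "Set 3", "model": "Jane Doe"})
sync(upload=print)  # stand-in for the real cloud API call
```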

Building on a Foundation

Once a platform is in place that can tag individual assets with rich metadata, a world of possibilities opens for automating asset-driven processes based on metadata “triggers” within smart assets.

Some scenarios we are addressing with our clients include:

  • Automated routing of assets through the appropriate review/approval workflow based on the channel(s) tagged in the asset (e.g. an asset tagged as “campaign” is automatically routed to the “campaign” approval workflow, which has different review/approval steps than “ecommerce” assets; sketched after this list)
  • Enabling of extensive search capabilities in a DAM/WCM platform based on automated extraction of metadata tags
  • Web Search Engine Optimization (SEO) based on the presence of rich metadata
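The first of these, channel-based routing, can be sketched in a few lines: the approval workflow is selected by the channel tag carried inside the asset itself, not by a human reading a filename. The workflow names and steps below are illustrative, not taken from any specific product.

```python
# Sketch: route an asset by the channel tag embedded in its metadata.
# Workflow names and steps are invented for illustration.
WORKFLOWS = {
    "campaign": ["art_director_review", "brand_legal", "final_signoff"],
    "ecommerce": ["retoucher_qc", "merchandiser_approval"],
}

def route(asset):
    """Pick the review/approval workflow from the asset's own tag."""
    channel = asset["metadata"].get("channel")
    steps = WORKFLOWS.get(channel)
    if steps is None:
        raise ValueError(f"Unrecognized or missing channel tag: {channel!r}")
    return steps

asset = {"id": "AB123456_front", "metadata": {"channel": "campaign"}}
print(route(asset))  # ['art_director_review', 'brand_legal', 'final_signoff']
```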

In my next article I will address how smart assets can help enable modern production and approval workflows, and in this series’ final article I will discuss how smart assets become a key component of a “DAM revolution” to serve modern fashion brands.

Eric Fulmer has been a pioneer in digital photography and digital asset workflow since the early 1990s, when he joined Fuji’s Digital Imaging Division and worked with major corporations and cultural institutions, including The Smithsonian Institution and The Metropolitan Museum of Art, as they adopted digital workflows. Later, Eric worked with leading photography studios at both Leaf and Phase One, pioneers in digital capture systems and image processing workflows. He has led DAM integrations at major government institutions and played multiple roles in a startup SaaS platform vendor providing end-to-end creative production solutions for the world’s largest fashion and retail brands. He now leads the software team at Capture Integration, developing the ShotFlow One visual content production platform.