
What We Talk About When We Talk About Intelligence


Originally published in our 7th Edition Report, this piece sees former Editor, reporter and contributor to WhichPLM, Ben Hanson, discuss the theme of ‘intelligence’. It accompanies a dedicated series from Ben on intelligence, A.I., and data-driven design and development in retail – all of which you can find in our 7th Edition.

[For any reference to timelines throughout this piece, it should be noted that the publication date for our 7th Edition Report was August 2017. Similarly, you will find references to other ‘features’, which denote the other editorial pieces in our 7th Edition Report.]

Ask someone to name a science fiction concept, and the odds are good that they will come back with either domestic and industrial robots that are indistinguishable from humans, or a benevolent (or malevolent) supercomputer that comes to rule the world. These themes have recurred in future-set tales for more than a hundred years. They are popular because, as with all good speculative storytelling, they ask us to consider what our place, as thinking, breathing humans, is likely to be in a world where either our physical or mental capacities have been outmatched by machines.

The upshot of having such a body of sci-fi dedicated to these subjects is that some of the smartest people in the world have spent a great deal of time considering their implications. Some putting pen to paper; others toiling away in laboratories and research centres to bring the ideas to life; each spurring the other on with fresh ideas.

Given how well the workings of our bodies are understood, compared to the almost complete opacity that shrouds the way we think, you might expect science to have made the most progress on the physical robots. In practice, the opposite is true.

There is a name for this subversion of expectations: Moravec’s Paradox. Outlined by roboticist and futurist Hans Moravec in the 1980s, the paradox is expressed like this: “it is comparatively easy to make computers exhibit adult-level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility.” In brief, what Moravec meant is that our mental processes – the results of a consciousness, individuality, and autonomy that generations of scientists and faith scholars have been unable to adequately define – have proved far easier to replicate than the coordinated motions of our bones, tendons and muscles, which entry-level biology long ago explained.

The mobility side of Moravec’s paradox is still a challenge. Even the best contemporary attempts at having robots mimic human face and body movements come off as unsettling, and companies have, by and large, paused – if not abandoned – their attempts to make robots that look and move the way we do. While people might make good models for thinking and problem solving, it transpires that there are far more practical examples to follow for picking things up and moving them, navigating through war zones, or conducting fine-grain assembly tasks.

We have, however, made considerable progress on having machines pass common (and even complex) intelligence tests, as well as on introducing techniques that allow computers to perceive and recognise the world around them. These advances all fall under the broader umbrella of Artificial Intelligence (A.I.). And that’s an umbrella you have more than likely seen pop up in the news a lot lately.

“Japanese company replaces office workers with artificial intelligence,” reads a typical A.I. headline from The Guardian at the beginning of 2017. In isolation, it may sound alarmist or unbelievable, but it is accurate: Fukoku Mutual Life Insurance in Japan made 30 experienced staff redundant because their jobs, calculating policy payouts, could be done cheaper and better by an A.I. Fukoku expects this to increase productivity by 30%, and to become a profitable investment in less than two years.

Maybe it’s because automation and the resulting mass layoffs are happening on a much grander and more pressing scale in manufacturing, but I find it remarkable how unremarkable these kinds of stories have become. They hit the headlines and fade quickly, inspiring fear in other insurance workers, perhaps, but not registering on the tectonic scale you might expect given that A.I. programs are now putting even white-collar workers out of work. To put this into context, Moravec’s Paradox talks about the likelihood of replicating the intelligence of a one-year-old; I have a baby in the house as I write this, and I’m not expecting him to be performing any insurance calculations or underwriting policies for a while yet. Seems as though we’ve moved rather rapidly from crawl to walk to run there, doesn’t it?

Perhaps that aforementioned wealth of sci-fi literature has made these kinds of things feel mundane. Or perhaps it’s because we all already interact with A.I. of one form or another almost every day – whether we realise it or not.

Self-driving cars are, of course, the most obvious example. Hugely complex arrays of hardware sensors and interpretation and analysis algorithms, autonomous vehicles are a sci-fi trope writ large, and will soon be hitting streets near you if they haven’t already. Less obvious but no less valid as applications of A.I. are chatbots (that initial gateway of interaction you sometimes encounter on a company’s support or sales channels before reaching a human advisor) and visual recognition platforms like Google Photos, which uses algorithms to tag your photographs with their locations, subjects, settings, and a host of other information so you can later search for a sunset or a smile.

Each of these examples, however, has something in common. All, to my mind and the minds of many actual experts, fall short of the definition of intelligence. In fact, these and other examples like them are only really categorised as A.I. because, for the 99% of us who are not intelligence programmers or philosophers of mind, a catchy acronym is needed to capture the potential of a raft of different technologies.

“When I’m talking to a business audience, I use A.I. as an umbrella term, and under that umbrella are specific applications of AI like cognitive services, machine learning, deep learning, neural networks and so on,” said ShiSh Shridhar, Director of Business Development, Data, Analytics and IoT for Microsoft, during our interview in the summer of 2017. “For a business audience, that’s a confusing set of terms, and to complicate things even further, a lot of people use them interchangeably. As a result, I stick to the top-level term rather than having a brand or retailer worrying about the small nuances that separate individual components like neural networks and deep learning. I realise it’s a big umbrella, but it’s an appropriate one.”

While ShiSh’s advice is sound, we are going to have to talk about these components individually, at least to begin with, since they are the foundations of the visible outcomes of A.I. It is only by understanding the essential building blocks of machine intelligence that we can assess where and why it exceeds, equals, or falls short of human intelligence – and why, in many cases, a machine not being truly smart does not necessarily matter in the open market.

“I think machine learning is the preferable term for what we’re talking about today,” said Julian McAuley, an Assistant Professor at the University of California, San Diego, who has spoken about A.I. in fashion previously, and whose research into behavioural models and visual recognition caught my attention as WhichPLM and I were gathering material for these features [within WhichPLM’s 7th Edition]. “The research I do would probably be called A.I. by the public, and even by our department here at the University, but I think that term is overused. We are not trying to build a general intelligence; we’re taking a data-driven approach to solving a specific problem – nothing more. When people shop online and are shown product recommendations or adverts that they feel are uncanny, they tend to assume that an A.I. recommendation system is truly intelligent and knows something intimate about them. The truth is less exciting: all the typical algorithm needs to know to do its job is that one person performed an action similar to another person’s – buying a particular product, say – and it can then recommend similar products to both of them.”
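
For readers who want to see quite how unexciting that truth is, here is a minimal sketch, in Python, of the kind of co-purchase logic McAuley describes. The shoppers, products, and purchase histories are all invented for illustration; real systems use far more sophisticated models, but the principle – recommend what similar actions point to, with no understanding of the person – is the same.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical purchase history: each shopper maps to the products they bought.
purchases = {
    "shopper_a": {"jeans_01", "belt_03", "tee_07"},
    "shopper_b": {"jeans_01", "belt_03"},
    "shopper_c": {"tee_07", "jacket_02"},
}

# Count how often each pair of products appears in the same basket.
co_counts = defaultdict(int)
for basket in purchases.values():
    for a, b in combinations(sorted(basket), 2):
        co_counts[(a, b)] += 1

def recommend(product, top_n=3):
    """Return the products most often co-purchased with `product`."""
    scores = defaultdict(int)
    for (a, b), count in co_counts.items():
        if a == product:
            scores[b] += count
        elif b == product:
            scores[a] += count
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend("jeans_01"))  # ['belt_03', 'tee_07']
```

There is no model of the shopper anywhere in that logic; the apparent intimacy is an artefact of counting.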

McAuley’s example, as well as the others I referred to before – photo categorisation, self-driving vehicles, and chatbots – are good examples of the state of commercial A.I. today. They seem intelligent at first glance, but their smarts are actually confined to very small, very specific areas. To choose the right pre-canned responses, a chatbot need only concern itself with the customer’s immediate concerns. To recognise a mountain range in a photograph, Google Photos will look for indicators that a mountain is present, but it cannot be said to actually know what a mountain is, and neither does it need this additional context to achieve its purpose. Generally intelligent these things are not.

Let us shed a little more light on why this is the case, and why the distinction matters. To recognise that mountain, Google Photos (or an equivalent, although Google leads the pack in this area) uses neural networks, which are algorithms arranged in an approximation of the way we believe our brains work, designed to allow machines to look at things, and then to remember and learn from them. These are arranged in a way that permits what is called deep learning, which means that the results the end user receives are arrived at from the separate or sequential input of several – or hundreds – of layers of hyper-focused neural networks. One such network layer might detect naturally-occurring edges and nothing else; one might look for the texture of a rock face; another might scan the top portion of every image for sky, and the portions below that for peaks and snow. At the top level, yet another neural network will take the composite output of the nets below it and conclude, with 90% certainty, that there is a mountain in the photograph at hand. If the algorithm has a library of other mountain images to compare this one to, perhaps it will also go away and try to match its peaks and valleys to other photographs, before inferring that this picture was taken in Yosemite.
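
To make that layered arrangement a little more concrete, here is a toy sketch using the PyTorch library. It is emphatically not a description of Google’s actual system: the layer sizes and their suggested roles are invented, and an untrained network like this one would need to see many thousands of labelled photographs before its confidence score meant anything.

```python
import torch
import torch.nn as nn

# A toy stack of layers in the spirit described above: early layers pick
# out low-level features such as edges, later layers combine them, and a
# final layer produces a single "is there a mountain?" confidence score.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # low-level edges
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # textures, e.g. a rock face
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 56 * 56, 1),                   # combine into one score
    nn.Sigmoid(),                                 # confidence between 0 and 1
)

image = torch.randn(1, 3, 224, 224)  # a dummy 224x224 RGB "photograph"
confidence = model(image).item()
print(f"mountain confidence: {confidence:.2f}")  # meaningless until trained
```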

Make no mistake, this is an exciting, road-tested application of a class of technologies with massive potential to change the way we live our lives – particularly when we realise that its use need not be confined to static photographs. Recently, Microsoft unveiled a research project it calls “Seeing AI,” which uses the same principles as other photo recognition software, but applies them to both pre-recorded and real-time video as well as still images. In use, a deep learning solution evaluates the real world through a smartphone camera, and turns this hugely complex data feed into synthesised voice narration designed to help blind and partially-sighted people navigate their environment, recognise people, read mail, and discover products.

Let me be unambiguous about this: I find the Seeing AI application incredibly impressive. I am only 35 years old, but this is one of several occasions where I have been reminded of just how much has been achieved in my lifetime. Using a handheld computer more powerful than the mainframe that used to occupy almost the entirety of my father’s business premises, and a camera sensor the size of a fingernail, a partially sighted person can now have an A.I. in the cloud on the other side of the planet describe the world to them in real-time.

But while machine learning systems like these are terrific at the tasks they are assigned – looking for recognisable things in pictures or moving video – a significant amount of work has gone into achieving this level of recognition. As the name suggests, machine learning systems are not created with intelligence and understanding built in, but rather acquire it through the same processes that human beings do.

“To understand how machine learning works, we only have to look at the way humans learn,” explains Ganesh Subramanian, Founder & CEO of real-time fashion analytics platform Stylumia. “Point to a chair and ask a young child to identify it, and unless they have previously been told what it is, they will not instinctively know its name. If a parent or guardian reinforces the label the next few times the child sees a chair, the child will very quickly pick up that this object remains a chair every time he or she sees it, and that when they see similar-looking objects, they are also likely to be chairs. Machine learning is very similar, because the machine does not inherently know what a dress, a top, or a pair of jeans looks like – we have to keep throwing example images at the computer vision portion to train it. We steadily reinforce that this is what a dress looks like, for instance, and then the program will later be able to infer that other styles and shapes like it are also probably dresses. The major difference is that machines find things easy that humans do not, and vice versa. Ask a person to add up twenty different, twelve-digit numbers, for instance, and you can expect to wait a while for an answer. A machine can perform that calculation in nanoseconds. But point to a red curtain and ask a person what that object is and what colour it is, and you will get a quick answer; a computer that hasn’t been properly trained will not be able to respond at all.”
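
Subramanian’s chair analogy maps directly onto what programmers call supervised learning: show the machine labelled examples until it can generalise to examples it has never seen. Here is a minimal sketch using the scikit-learn library; the feature values are invented (a real system would extract them from images rather than write them by hand), but the reinforcement loop he describes is exactly this fit-then-predict pattern.

```python
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical feature vectors "extracted" from garment images.
# In a real pipeline these would come from a computer vision model.
features = [
    [0.9, 0.1, 0.2],  # long silhouette, little structure: a dress
    [0.8, 0.2, 0.1],  # another dress
    [0.1, 0.9, 0.3],  # a pair of jeans
    [0.2, 0.8, 0.2],  # more jeans
]
labels = ["dress", "dress", "jeans", "jeans"]

# "Keep throwing example images at it": fitting on labelled examples
# is the repeated reinforcement Subramanian describes.
classifier = KNeighborsClassifier(n_neighbors=1)
classifier.fit(features, labels)

# A new, unseen garment whose features resemble the dresses above.
print(classifier.predict([[0.85, 0.15, 0.18]]))  # ['dress']
```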

While the rest of this feature series [found in our 7th Edition] is given over to the applications and impact of A.I. on the retail, footwear, and apparel industry, the differences in the inherent capabilities of humans and machines raise an issue that no real examination of artificial intelligence can overlook: is there something that makes us special? More specifically, for our purposes, is there a natural aptitude for context and creativity that makes humans capable of recognising and responding to art or style? And if so, can that, too, be taught to a machine?

To briefly address this thorny philosophical line of thinking, we’re going to revert to Google Photos as a case study. In the interests of building an image recognition engine that can spot a mountain in a photograph, Google’s engineers will have focused on only what counts: the visual elements of a mountain. Given access to Google’s vast repositories of data, the machine may also be able to call up the temperature at the mountain’s peak, or list the mineral composition of its rocks. What it will not – and some will say cannot – have access to are the feelings that a photograph of a mountain can conjure up in the mind of a human being: freedom, exploration, openness, vertigo. All of these contain an element of what philosophers call ‘qualia’ – discrete instances of subjective experience that, arguably, serve as evidence that humans, and only humans, are conscious and self-aware.

As an example, were I to look at a photograph of a mountain, I could imagine myself on its slopes, free from responsibility. I could, also, picture myself at its peak, gazing at the vista spread out below, and considering my life choices. I could pretend I was on the edge of that mountain, worried about my mortality if I should fall.

These thoughts and experiences all come naturally to you and me, and almost everyone would agree that they are what separates people from machines. And while not everyone will know the philosophical term for their stance, most people would equally argue that a machine could not have that kind of conscious experience and understanding, no matter how good it becomes at performing outwardly visible tasks like recognising shapes.

Entire careers have been dedicated to tackling this issue, so I cannot hope to address it here. I did, however, reach out to Susan Schneider, an author, TED speaker, and Associate Professor in the Department of Philosophy at the University of Connecticut, for her expert perspective.

“For me the mark of the mental is conscious experience, and I care an awful lot about whether an artificial system can be said to have an actual experience”, Schneider told me. “From that point of view, I think we can eventually have a super smart A.I. that outthinks us and outperforms us, but without it being what I’d define as conscious. There are reasons why I don’t think information processing and consciousness necessarily go hand in hand. For example, consciousness in humans is recognised through our ability to perform novel learning tasks, reasoning and so on – things that an artificial intelligence could theoretically replicate. So building a machine that can also reason would get us to the point where that A.I. would technically qualify as intelligent, without the need to have the same kind of conscious experience that we have. I think in the long run it’s likely to be an empirical matter: what kind of architecture a machine has will determine whether or not it’s conscious. Our biology has given rise to consciousness and information processing, but other substrates may not be able to host consciousness even though they’re as good as we are at information processing – or better.”

This does not mean, though, that there is a hard limit on the capabilities of machine learning, or that A.I. programs are forever constrained to following rules – with no chance to translate their unique way of learning into some kind of creativity – as Schneider explained:

“I think A.I. is already more than up to the tasks of visual recognition, logic, and even extremely complex games, where an A.I. has already exhibited the ability to think outside the box to some extent. The team behind AlphaGo, [a recent milestone in A.I. research, where a machine beat a grandmaster at the ancient, incredibly complex game of Go,] specifically chose the game because it’s extremely combinatorially difficult and is therefore a great proving ground for whether an A.I. can break its boundaries, so to speak. And if you look at the transcripts of the people who are analysing the A.I.’s moves, they do describe some of them as creative and intuitive. And that’s mind-blowing even to the people who developed the program. One move was even referred to as the “bathroom move” because the opponent, perplexed by it, disappeared into the bathroom for fifteen minutes. The same goes for IBM’s Watson A.I. playing Jeopardy; in a way you can say that the rules of that game are known, but what the A.I. did with natural language processing involved making what I would call novel judgments. And as A.I. gets more sophisticated and goes from being intelligent in only specific domains – like Go or Jeopardy – to being intelligent in a variety of ways, we’re going to potentially see genuine artificial creativity emerge, with or without consciousness to go along with it.”

The key word in Schneider’s second answer is “specific”. Today, there is an accepted delineation between two kinds of A.I. research: general A.I., which is an attempt to either replicate total human intelligence or create an entirely new kind of supreme machine intelligence, and specific or “narrow” A.I., which is also referred to as Artificial Narrow Intelligence, or ANI.

As you may have guessed from the autonomous vehicles, chatbots, and visual recognition examples I referred to earlier, we already live in a world of narrow A.I. Today, algorithmic traders deal in more than half of all equity shares traded in the US, while game-playing ANI programs hold the world champion titles in chess, checkers, Scrabble, Backgammon and more. Everything from email spam filters to machine translation services is yet another manifestation of ANI – and more and more products and services are being optimised or revitalised through the use of narrow A.I. by the day. As grandmaster Garry Kasparov, who famously lost his chess rematch against an A.I., puts it in his book Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins, “Artificial Intelligence is on a path to transforming every part of our lives in a way not seen since the creation of the Internet, perhaps even since we harnessed electricity.”

A key point to understand here is that narrow A.I. is not to be considered a poor version of general intelligence. It is, rather, a way of translating the fundamental principles of machine intelligence into manageable use cases. And after all, in an industry like fashion, where craftsmanship and experience count for a great deal, since when was specialisation considered a bad thing?

“All the current applications of A.I. in our industry are narrow,” I was told by Courtney Connell, Marketing Manager for lingerie brand and retailer Cosabella, which recently used an A.I. platform to completely redefine the way it markets and sells its products online. “I wrote a piece for Women’s Wear Daily about A.I. in general, and afterwards someone contacted me to say that I was wrong to be discussing limitations, because an A.I. had recently beaten a professional poker player by bluffing, so that meant the sky was the limit. The person who contacted me wasn’t technically incorrect, but he was repeating a widely-held misunderstanding about the value of specialisation. If an A.I. can read facial micro movements and tone of voice more accurately than any human being can, then of course it’s going to beat people at poker. The problem is that people look at these narrow applications and conclude that results from a single hyper-focused program mean that every A.I. everywhere is now able to achieve the same results. It’s important to remember that commercial A.I. is a tool, built for a particular purpose, and is not necessarily all that useful outside its particular focus.”

While specialisation is valuable in its own right, where extensive domain expertise counts the most is in specialist markets. In these industries – and fashion is unquestionably one – the ability to understand minor variations between thousands or millions of different products is essential.

“Take the automotive industry,” explained Eric Brassard, a former executive at Saks Fifth Avenue, and now CEO of Propulse Analytics, which is using deep learning to reinvent product recommendations. “When it first started, 120 years or so ago, any vehicle with four wheels and a steering wheel was a car. Today there’s a significant difference between an SUV, a Ferrari, and a regular sedan in most people’s eyes. In fact, there’s an equally significant difference between a sports sedan and a comfort sedan. As any industry evolves, specialism breeds complexity and subjectivity, and making sense of those, at scale, requires the appropriate, specialised solution. We started from the belief that A.I. is the right solution to the problems of complexity and subjectivity in highly specific industries, and our position has not changed since. Right now no A.I. is omnipotent, and the ones that are highly potent are only potent in their highly specialised fields.”

Specialisation in A.I. also has compounding benefits. As an algorithm becomes better able to spot micro-scale variations in fit, component placement, hem length, grading rules and so on, it gains the ability to plot these changes as trends over time, and to then refer back to its own findings in order to draw further conclusions. “True A.I. systems are self-learning, which goes a step beyond machine learning,” explained Andy Narayanan, VP of Intelligent Commerce at Silicon Valley A.I. company Sentient. “Looking at computer vision exclusively as a way of using an algorithm to conclude that this is a blouse, that’s a belt, and so on, is limiting its potential. Being self-learning means that the algorithms should be capable of adding another dimension, taking a view of how blouses and belts are changing over time, and determining what kinds of each are likely to be in fashion today – and why. This is where I think the true promise of A.I. is: understanding what features, facets, colours and textures a product category has now that it didn’t have before, and adapting its recommendations to that evolution in real-time, with no human intervention.”
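
Narayanan’s extra dimension – tracking how attributes shift over time rather than merely detecting them – can be sketched simply. The seasons, attributes, and counts below are invented for illustration; a production system would ingest detections from a vision model at vastly greater scale.

```python
from collections import Counter

# Hypothetical (season, attribute) detections emitted by a vision model
# run over successive seasons of product imagery.
detections = [
    ("SS16", "bell_sleeve"), ("SS16", "crew_neck"), ("SS16", "crew_neck"),
    ("SS17", "bell_sleeve"), ("SS17", "bell_sleeve"), ("SS17", "crew_neck"),
]

# Tally attribute frequency per season.
by_season = {}
for season, attribute in detections:
    by_season.setdefault(season, Counter())[attribute] += 1

def rising(previous, current):
    """Attributes whose share of detections grew between two seasons."""
    prev_total, curr_total = sum(previous.values()), sum(current.values())
    return {
        attribute: round(count / curr_total - previous[attribute] / prev_total, 2)
        for attribute, count in current.items()
        if count / curr_total > previous[attribute] / prev_total
    }

print(rising(by_season["SS16"], by_season["SS17"]))
# {'bell_sleeve': 0.33} - bell sleeves are trending upwards
```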

Another corollary benefit of specialisation, of course, is trustworthiness. Generally speaking, we tend to put our confidence in people who we believe know what they are talking about. This is particularly true in specialised industries, where a new entrant has little choice but to rely on the experience and expertise of long-serving specialists. And just as we have faith in human beings who exhibit that kind of specialisation, the same is already proving to be the case in other industries like financial technology (or “fintech”), where account holders in WhichPLM’s native UK are trusting consumer-facing A.I. platforms to analyse and advise on their monthly spending.

The most interesting of these platforms is Cleo (currently available in the UK only at www.meetcleo.com), which marries a friendly conversational interface with deep learning that identifies spending patterns, trend lines, savings opportunities and other insights that the account holder would be either unlikely to research on their own, or would perhaps not even be capable of analysing. I road-tested Cleo as part of the research that went into this publication, and that test revealed two insights that surprised even me: first, I am willing to accept a lot more intrusion from an A.I. than I would from a human; second, I trust what Cleo says implicitly. The application may not have the best user experience in the world, but when it shows me a trend line of my savings over the last quarter, my instinct is not to argue, like I might with a person providing the same information, but to accept that her (for Cleo is a she) specialist knowledge trumps my own arm’s-length acquaintance with my income and expenditure.

Drawing these kinds of parallels between A.I. and human experts does, however, raise the question of how – and how transparently and reliably – both acquire their experience and expertise. After all, conclusions reached on the basis of inaccurate data, however intelligent the analysis, are still incorrect conclusions. We are all familiar with the way this process works in humans – we work diligently to build a career, collecting experience as we progress, and eventually narrowing down our specialisms until our knowledge of a chosen domain is deeper than others’. For a machine, the spirit is similar, but the speed and the method differ dramatically. For some purposes, specialism to the requisite degree can be achieved by letting one or more algorithms loose on a large data pool, or presenting them with regularly-repeated images on a similar theme until they demonstrate the ability to distinguish their contents.

More often than not, though, the process of specialising an A.I. is actually a human-led one, whether the humans are aware they are participating in a learning activity, or whether it is tacked on as a by-product of another product or service. Google’s recent changes to its CAPTCHA system (the gatekeeper of comment forms and login screens everywhere) have seen users presented with a grid of image tiles and asked to click all those that contain taxis, or churches, or street signs. And while the search giant has not exactly admitted this, the results are all but guaranteed to be going towards the specialisation of algorithms designed to recognise these things.

But this wide-net approach is not viable in situations where the luxury of testing at scale on an audience that’s ignorant of your intentions does not exist, or where guesswork or incorrect answers on the part of the human training pool are unacceptable. “We provide training data as a service, helping companies developing A.I.s or implementing machine learning to get their data into the right shape to train those systems,” explains Daryn Nakhuda, Founder and CTO of Seattle-based Mighty AI, which prides itself on being able to educate A.I.s through both public, vetted, training exercises and domain-specific data collection and training for fashion, retail, and autonomous vehicles. “Everyone talks about big data, and how much of it they have, but not a lot of it actually ends up being used. To get around this, we identify particular attributes that a client wants to be able to search or perform recognition on, and figure out how it can be categorised and organised into some kind of taxonomy. In a lot of cases, the data that contains those attributes is unstructured, coming from social media or other similar sources, and isn’t provided in a form that an A.I. can easily use. To that end we also have a dedicated web application, Spare5, that allows real people to perform different micro-tasks that help train these A.I.s. For fashion that might be identifying the individual garments in a photo, characterising the fit of an outfit, or distinguishing between formal and informal wear – things that businesses have never been able to model at scale before. We’re using humans to make judgments that machines are not able to make, because, with the right people making the right kind of subjective decisions or identifications, the machines will eventually learn to make those decisions themselves, and will begin to identify patterns that no human would be able to pick out.”
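
To illustrate the kind of structure Nakhuda describes, here is a hypothetical sketch of how garment attributes might be organised into a taxonomy and broken down into human-sized micro-tasks. The taxonomy, field names, and workflow are invented for illustration and do not reflect Mighty AI’s actual implementation.

```python
from dataclasses import dataclass

# An invented attribute taxonomy of the kind a client might want to
# search or perform recognition on.
TAXONOMY = {
    "garment_type": ["dress", "blouse", "jeans", "jacket"],
    "fit": ["slim", "regular", "relaxed"],
    "formality": ["formal", "informal"],
}

@dataclass
class MicroTask:
    """One small, unambiguous question for a human annotator."""
    image_url: str
    attribute: str        # which branch of the taxonomy to label
    options: list[str]    # the valid answers for this attribute
    answer: str = ""      # filled in by the annotator

def tasks_for(image_url: str) -> list[MicroTask]:
    """Split one image into one micro-task per taxonomy attribute."""
    return [
        MicroTask(image_url, attribute, options)
        for attribute, options in TAXONOMY.items()
    ]

tasks = tasks_for("https://example.com/outfit_123.jpg")
tasks[0].answer = "blouse"  # a human judgment the machine will learn from
print(tasks[0])
```

Each answered task becomes a labelled training example; aggregate enough of them and the subjective judgments Nakhuda describes begin to transfer from humans to the machine.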

As Nakhuda’s answer suggests, experience on the part of the training pool is essential. In short, we cannot expect to create a specialist A.I. without specialist educators. While the general public, for example, may be able to successfully pick out certain elements of an outfit, their ability to distinguish between different fits, different weaves, different materials, and other metrics will be limited. And, equally importantly, their ability to correctly identify key criteria will be based on how closely they align with the retailer or brand’s customer demographic.

“We’re not necessarily looking for fashion expertise in our trainers, but rather an alignment with the retailer or designer’s target audience,” added Nakhuda. “We can certainly qualify people by asking them to recognise the difference between an Oxford collar and a spread collar, but it’s equally important that we qualify them on the basis of cultural fit. A retailer needs to know that if their A.I. is going to describe an outfit as appropriate for work, it’s appropriate for work in their target market. What we wear to the office here in Seattle, for example, is pretty different to the way people dress for work in Miami or Europe. And the same goes for women’s clothing, where styles vary dramatically depending on climate and culture. You don’t want your A.I. to be a one-size-fits-all system; you want one that will share perspectives and experience with the people who will interact with it”.

So where now for A.I., specialised, trained with appropriate human knowledge, and capable of bending the rules from time to time? A common thread among all the interviewees that I and WhichPLM solicited for this publication is that general intelligence remains a pipe dream – and that the quintessential human experience may never be transferred to a machine. This does not, however, preclude the development of significantly advanced A.I.s that do not require consciousness in order to exceed our every limitation, or put a limit on the potential of the best ANI solutions to achieve mass adoption in their specialist markets – whether they are in mass market fashion, media or the military.

Indeed, outside our industry, A.I. is already being pegged as the next international arms race. Just a week before going to print, Russian leader Vladimir Putin, asked about the future of A.I., went on record as saying that “the one who becomes the leader in this sphere will be the ruler of the world”. As endorsements go, this is perhaps an uncomfortable one, but the message is clear: after several generations of being confined to stories and research labs, A.I. can no longer be ignored.

It is unlikely, though, that you, the reader of WhichPLM’s 7th Edition Report, have world domination in mind. More likely, you are wondering just how the dawn of the intelligence era will impact your business. And while this primer has been far-ranging, let me assure you that everything contained in these pages should be considered grounding for understanding a technology that I guarantee will transform your day-to-day life at work and at home far sooner than you may expect.

“At this point in time general A.I. is very far out, and I honestly think it still resides in the world of fiction,” added Andy Narayanan of Sentient. “Mass extinction of jobs because of the creation of a super intelligence just isn’t likely to happen any time soon – and any general A.I. might prove to be too broad to actually solve immediate problems. A better application of A.I. is instead to solve a very thin slice of problems at a much deeper level – that’s where the applications are today. There are a variety of areas in the apparel business alone where human decision making is sub-optimal, which we are now handing off to A.I., because we know that A.I. can do them better and faster, and can do them at scale. Instead, humans are going to work on something more creative. Personally, I feel like this is the best way to make A.I. successful. Rather than having these high expectations of a general or super intelligence that we might not be able to meet in even the long term, we’re better off setting narrow goals that have a big business impact. The best thing we can do is use A.I. to augment human capacity in ways that deliver hundreds of millions of dollars of value for retailers and brands here and now.”

Ben Hanson is one of WhichPLM’s top contributors. Ben has worked for magazines, newspapers, local government agencies, multi-million pound conservation projects, museums and creative publications before his eventual migration to the Retail, Footwear and Apparel industry. Having previously served as WhichPLM’s Editor, Ben knows the WhichPLM style, and has been responsible for many of our on-the-ground reports and interviews over the last few years. With a background in literature, marketing and communications, Ben has more than a decade’s worth of experience, and is now viewed as one of the industry’s best-known writers.