
Artificial Intelligence, Machine Learning & Design – Combining Logic & Emotion


In this article, Thomas Teger, CPO and co-founder of swatchbook, gives an outlook on how Artificial Intelligence (AI) and Machine Learning (ML) could impact design and visualization tools to aid designers in their daily design process – not to eliminate their jobs, but rather to give them the ability to create better designs faster.

First off – and I have to reiterate this – I’m talking about “designers” and “design” in the creative and emotional context, not the engineering context (see my previous article, where I discuss this pet peeve of mine).

“Create better designs faster” – that certainly sounds trite and overused. “Artificial Intelligence” and “Machine Learning” are buzzwords that everyone likes to use these days. They are certainly talked about a lot when it comes to the automation of processes. In fashion, AI is heralded as the savior of retail. Some companies think AI will allow software products that are over 30 years old to deliver a better user experience. Yet often overlooked are the little things where AI and ML come into play. Everybody knows Siri. But have you tried typing “jeans” when searching photos in the photo library on your iPhone? Image-based search in the Google search engine has also been around for quite some time.

It is somewhat curious to talk about “design” and “intelligence” in the same context. “Design” is emotional, while “intelligence” is more or less about logic. We think that there are great opportunities in how these seemingly polar opposites can work together.

Create better designs faster

What do I mean by that? AI and ML can help designers explore a broader variety of new designs before committing to a final decision. Utilizing AI and ML allows designers to draw from past design decisions – good and bad, but more importantly accepted vs. rejected. It is important that the system understands which design iterations had a higher overall acceptance rate throughout the process, and which ones were rejected. These design decisions can be based on various factors: form, function, material, color, and manufacturing are the ones that jump out immediately.
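As a rough illustration of the accepted-vs.-rejected idea, here is a toy feedback tracker that scores a new design by the historical acceptance rate of its attributes. The attribute names and values (material, color) are purely illustrative stand-ins, and a real system would of course learn from far richer signals than simple counts:

```python
from collections import defaultdict


class DesignFeedback:
    """Toy tracker: scores design attributes by past accept/reject decisions.

    A minimal sketch, not a production recommender; attribute names and
    values are hypothetical.
    """

    def __init__(self):
        self.accepted = defaultdict(int)
        self.total = defaultdict(int)

    def record(self, design, accepted):
        # design is a dict of attribute -> value, e.g. {"material": "denim"}
        for attr, value in design.items():
            key = (attr, value)
            self.total[key] += 1
            if accepted:
                self.accepted[key] += 1

    def score(self, design):
        # Average per-attribute acceptance rate; unseen attributes score 0.5.
        rates = []
        for attr, value in design.items():
            key = (attr, value)
            if self.total[key]:
                rates.append(self.accepted[key] / self.total[key])
            else:
                rates.append(0.5)
        return sum(rates) / len(rates)


feedback = DesignFeedback()
feedback.record({"material": "denim", "color": "indigo"}, accepted=True)
feedback.record({"material": "denim", "color": "neon green"}, accepted=False)
feedback.record({"material": "denim", "color": "indigo"}, accepted=True)

# Indigo denim scores high: "indigo" was always accepted, "denim" 2 out of 3.
print(feedback.score({"material": "denim", "color": "indigo"}))
```

The point of the sketch: the system does not need to understand “design” to be useful – it only needs a consistent record of what was accepted and rejected, and along which dimensions.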

In addition to this, how could external influences affect the design? For example, a collection of images that capture certain trends. Beyond these visual and emotional influences, how about taking demographics and location into consideration? How could that data be used to influence your design? Of course, all of this information is available somewhere, somehow. But how do you get access to it? And how do you use it in your design? Having access to this kind of information in a more automated way can affect the designer and the design much more profoundly, allowing them to iterate more and to exhaust more possibilities and ideas.

You are still in charge

Even though this may sound like the machine will be taking over, in the end you as the designer are still in charge. Design is highly creative, and creativity is fueled by emotion. This is not something that can or will be easily replaced. Design is a skill, a talent. It is not something that can be taught to a machine. At least not yet. Engineers and bean counters may think differently. Just like they did when they saw a render button show up in their CAD system and thought it would create a pretty picture “automagically” just by pushing it.

Ultimately you will determine how much or how little you will let the machine assist you. And by doing so, guess what? You are actually teaching the machine.

Machines learn fast

The more information you feed the system, and the better that information is, the quicker you will have a system that can truly help you. But this also requires smart development by the company providing the solution. ML is based on rules and user input that help make the system smarter.

There are various ways to teach the system, depending on how it is used and what kind of system it is. For example, an application used by an individual will learn from that individual’s input, while a system used by many people – like a material marketplace – can be taught by many individuals. Even when the system is being taught by one individual, what is being taught may be the result of decisions made by others. And this is what makes the system so powerful: you are teaching it from various angles, feeding it data collected from many different sources, not just one individual. No matter what, the more input is provided, the better – and the faster the machine will learn. But keep in mind that the quality of the AI will largely depend on the information being fed to the system.
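To make the individual-vs.-many-users point concrete, here is a minimal sketch of pooled teaching: each user contributes their own tag counts, and a shared, marketplace-style model simply aggregates them. The user names, tags, and vote counts are all invented for illustration:

```python
from collections import Counter


def merge_feedback(sources):
    """Pool tag votes from many users into one shared model.

    Each source is a Counter of tag -> votes from one individual; the
    shared system simply aggregates them. Illustrative only.
    """
    pooled = Counter()
    for counts in sources:
        pooled.update(counts)
    return pooled


def top_tags(pooled, n=3):
    # Return the n most-voted tags, highest first.
    return [tag for tag, _ in pooled.most_common(n)]


# Hypothetical per-user feedback from two individuals.
alice = Counter({"denim": 5, "stretch": 2})
bob = Counter({"denim": 3, "organic": 4})

pooled = merge_feedback([alice, bob])
print(top_tags(pooled))  # "denim" leads, with votes from both users
```

Even in this trivial form, the pooled model knows something neither individual taught it alone: that “denim” is the strongest signal overall.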

Data is good, good data is better.

An example from the fashion industry – searching for the “golden” swatch

Digital material databases, whether curated by mills and tanneries or by fashion brands in-house, are the future of materials – in particular when the materials are “real materials” that properly represent the physical sample from a visual, simulation, and metadata standpoint. Searching these databases works well, particularly when materials are well tagged and described and the system provides sophisticated filtering. But imagine enhancing this database in the following aspects:

  • Computer vision – Based on the information contained in an image, the system can recognize certain elements and tag the image automatically. For example, an image with a dog will automatically have the tag “dog” attached to it.
  • Visual search – This is basically your traditional “image search”, yet it can be significantly enhanced when more information is available. For example, if the material is the result of an actual scan, then additional maps are available that can be used in the search.
  • Machine learning – Machine learning is part of what improves computer vision and visual search. This is where algorithms, rules, and user input become crucial. The more and the better the input, the smarter the database will get.
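The visual-search idea above can be sketched in a few lines, assuming each material has already been reduced to a numeric feature vector – in practice produced by a vision model or derived from scan maps; the three-number vectors and material names below are hand-made stand-ins. Ranking is then just cosine similarity against the query vector:

```python
import math


def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0


def visual_search(query, library, top_k=2):
    """Rank materials by similarity of their (hypothetical) feature vectors."""
    ranked = sorted(library.items(),
                    key=lambda item: cosine_similarity(query, item[1]),
                    reverse=True)
    return [name for name, _ in ranked[:top_k]]


# Toy library: material name -> invented feature vector.
library = {
    "indigo denim": [0.9, 0.1, 0.3],
    "black leather": [0.1, 0.9, 0.2],
    "stonewash denim": [0.8, 0.2, 0.4],
}

# A query vector close to the denim entries returns both denims first.
print(visual_search([0.85, 0.15, 0.35], library))
```

Real feature vectors have hundreds of dimensions and come from trained models, but the retrieval step – compare, rank, return the closest – is exactly this simple, which is why richer input data (extra scan maps, better tags) translates so directly into better search.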

When combining the “code” mentioned above with the available data – both in overall quantity and depth – and good training, you will have an incredibly powerful database that will continue to grow smarter very quickly, and help you find that “golden” swatch that you have been looking for.

Final thoughts

Of course, this can expand into so many aspects of design: modeling, texturing, rendering – you name it. But again, the human aspect shouldn’t be underestimated. You can buy the most expensive digital camera, but if you don’t know how to compose an image or arrange the lighting – in short, if you don’t know how to take a picture – it doesn’t matter how good the “machine” is that you just bought. The better the designer, the better the information being fed to the machine, and therefore the more “intelligent” the system or solution will be. Stay tuned.

Lydia Mageean has been part of the WhichPLM team for eight years now. She has a creative and media background, and is responsible for maintaining and updating our website content, liaising with advertisers, working on special projects like our PLM Project Pack or our Annual Publications, and more. Joining mid-2013 as our Online Editor, she has since become WhichPLM’s Editor. In addition to taking on writing and interviewing responsibilities, Lydia has also become the primary point of contact for news, events, features and other aspects of our ever-growing online content library and tools.