Earlier this year, Ted Mann, CEO of Slyce, was kind enough to have a chat with WhichPLM. Slyce works in visual search, image and product recognition, and operates multiple apps for retailers and brands to utilise. Ted spoke with us about the different facets of the business: the capabilities of Slyce’s technology, and his desire to see machine learning keep growing within apparel. He also opens up the floor today for our readers to reach out for potential brainstorming.
Name: Ted Mann
Occupation: CEO, Slyce
Likes: I love just about any technology that is pushing the limits of what you can do with the mobile camera. The iPhone 7 Plus dual camera and portrait mode is incredible. Thanks to this I have permanently ditched the SLR. Currently I’m using Google’s Street View app all the time to take 360-degree photos, which you can easily post to Facebook, or use to show your family what a far-off destination is like with Cardboard or one of the other VR headsets. I’m also testing out Snap’s Spectacles, which are nifty, though I wish they plugged into more platforms. Can’t wait to get HoloLens.
Dislikes: Sticking to my area of focus, visual search, I’m not a big fan of QR codes. They are functional, but pretty ugly and off-putting to most folks. Why not just use visual search to accomplish the same end, without the ugliness and the need to download a QR scanning app? Augmented reality is an area I’m super excited about, but I dislike any experience where the graphic or visualization adds no real information and doesn’t enhance the experience. There are a lot of novelty AR campaigns, which drive me nuts.
Words to live by: The biggest mistake you can make in mobile is not watching and learning from how people interact with your experience – especially with their hands. The UX and gestures that will bring your product or experience to life will show themselves in the way people touch your app.
WhichPLM: Ted, to begin, can you give us an overview of Slyce in your own words? We know it as a leading visual search provider for brands and retailers.
Ted Mann: Slyce is an image recognition company, specializing in an area called visual search – which is basically product search with your camera. The company was originally founded with a grand ambition: to build a consumer-facing app that did this – almost a ‘Shazam for stuff’, if you will. An app where you use your camera to take a picture of something, find it, and buy it; whenever you have that moment of inspiration or see something you really like, you can immediately find it and buy it.
What we realized pretty quickly was that building that consumer app at scale was going to take some time. There was a more immediate opportunity to take our technology and license it through retailers. In the retail space, there’s this one little company that has everyone’s attention – Amazon – and everyone wants to be able to keep up with that and run with the big boys. A lot of the retailers we were working with saw our technology as a way to essentially white label the kind of functionality that Amazon had already launched. This wasn’t our original plan, but we began taking our technology and licensing it as an SDK, which is a software development kit that’s integrated into retailer apps. We got a lot of retailers on board very quickly.
Now, about two years after we made that pivot, we have 25 major retailers that have integrated our SDK. They are some pretty big names: Home Depot, Nordstrom, JC Penney, Best Buy. And in all of those retailers, the way you find Slyce is by launching the app and looking for the camera icon, which is usually in the search bar or on the home screen of the app. When you tap on the camera icon it takes you right into our Slyce camera, where you can take a picture of anything; you can scan a retailer’s catalogue, or scan a barcode, but what’s probably sexiest and most exciting is that you can take a picture of a product. If you see some shoes or a purse that you really like, you just snap a photo and we find that match in the retailer’s catalogue.
WhichPLM: So do you work on an individual, consumer level as well as directly with retailers?
Ted Mann: That’s right. We have two divisions: a consumer app division and a business-to-business licensing division. In our consumer app group we have three apps that we own and operate, which collectively have about 6 million users. Those apps are Pounce, Craves and SnipSnap – the latter of which I founded. The B2B side is where we license the tech to all of those retailers I listed earlier.
WhichPLM: You’re one step ahead of us there, mentioning the various apps under the Slyce umbrella. What’s the difference, or the similarities, between the three?
Ted Mann: The common thread is that they all leverage image recognition. The difference is in the use cases. The SnipSnap app is a coupon app, so the use case there is all about scanning coupons you get in the mail or in a newspaper, and turning them into mobile coupons saved to your phone. Over the last four or so years we’ve digitized roughly 300 million coupons through SnipSnap. It’s one of the most popular coupon apps. What’s really neat about SnipSnap is that those coupons feed into a crowd-sourced coupon database, allowing other users to benefit from them. That’s what’s led us to have such tremendous usage on SnipSnap. By our estimates, we’ve saved our users about half a billion dollars in coupon discounts.
Of the other two apps, Pounce is a product discovery app, where you see a product that you love and add it to your wish list by taking a picture of it; and Craves is really a fashion discovery app. What’s interesting with Craves’ evolution is that we’ve found that our users love taking pictures of celebrities. We’ve seen that behavior shift, and we re-tooled the whole app experience around celebrity fashion discovery, so now you don’t even need to take a picture; you can go in and see the pictures of celebrities other people have taken, and what they’re wearing. There’s even an option to find visually similar versions of that same dress, or those same shoes.
WhichPLM: You mentioned a lot of big names earlier; what kind of reception have you had from the fashion and apparel space?
Ted Mann: In the apparel space we work with some dedicated fast-fashion brands like American Eagle, Express, Urban Outfitters; and then we also work with some big department stores like Nordstrom, Neiman Marcus and JC Penney. Usage has been great, and has been growing by about 20 percent month on month across retailers.
WhichPLM: Some great reception there, and a lot from your neck of the woods. Correct me if I’m wrong here but you guys are located in North America; do you have plans to open other global offices and expand your reach if not already?
Ted Mann: We’re working with a number of retailers in the EU right now, and we’ve started to do some work in Southeast Asia. We are based in North America as you said, with a large office in Philadelphia and another in Nova Scotia, so that’s where we’ve focused most of our energies to date. We’ve had the good fortune to land so many retailers in the US that now we almost need to expand internationally in order to continue to grow.
WhichPLM: Great, we’ll get onto growth a little more shortly, but first I just wanted to explore the technological side a little more. What attributes exactly do you measure? You promote yourselves as dealing in one-, two- and three-dimensional images.
Ted Mann: That’s our own funny way of describing it, really. 1D would be barcodes; we support scanning of all different barcodes and symbologies. 2D is the name that we’ve given to, in its most common incarnation, catalogue scanning; the underlying tech base means that we can scan any fixed reference image, and we match against that reference image. You can also use that same tech to identify billboards, for example, or match against the individual frames of a video. As long as we have that reference image in advance, we can match it. Most of our retailers use it for enabling catalogue scanning. The third type – what we call 3D – is basically product recognition; you can take a picture of any product in the physical world, be it a hat or a pair of shoes, and we can identify that and find it in a retailer’s catalogue. If the retailer doesn’t carry that exact item, we find the most visually similar product that they do carry.
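Slyce’s internals aren’t public, but the “match against a reference image, or fall back to the most visually similar product” idea Ted describes is commonly built on feature embeddings and nearest-neighbour search. Here is a minimal, hypothetical sketch: the `embed()` stub stands in for a real trained feature extractor (in production this would be a neural network, not pixel flattening).

```python
# Hypothetical sketch of visual matching: embed each image as a feature
# vector, then return the catalogue item whose vector is most similar to
# the query. embed() is a placeholder for a real feature extractor.
import numpy as np

def embed(image_pixels: np.ndarray) -> np.ndarray:
    # Placeholder: flatten and L2-normalise the pixels. A real system
    # would run a trained network and normalise its output instead.
    v = image_pixels.astype(float).ravel()
    return v / (np.linalg.norm(v) + 1e-12)

def best_match(query: np.ndarray, catalogue: dict[str, np.ndarray]) -> str:
    # With unit-length vectors, cosine similarity is just a dot product;
    # the highest-scoring catalogue entry is the closest visual match.
    q = embed(query)
    scores = {name: float(q @ embed(img)) for name, img in catalogue.items()}
    return max(scores, key=scores.get)
```

An exact reference-image hit (the “2D” case) scores near 1.0, while the “3D” product case naturally degrades to the most similar item the retailer does carry.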
WhichPLM: Is your software capable of working with other technology platforms? We specialize primarily in PLM of course, but we cover a lot of other related technologies, and we’re interested in whether there is any synergy there.
Ted Mann: We’ve done experiments and proof of concepts in a bunch of different areas. Believe it or not, we’ve found that the technology works for certain healthcare use cases: identifying wounds or other dermatologic conditions. We’ve started working with Home Depot, where there’s a need to understand parts (nuts and bolts and screws) and realized it’s actually really good at identifying that too. So, for the automotive businesses we’ve talked to, we can offer identification of parts and other difficult-to-describe objects.
The technology is really versatile. It’s all about figuring out the different use cases; what’s the use case that image recognition can help solve?
WhichPLM: We can understand brands cataloguing their images so that their teams can bring up all of the footwear in a certain colour or style for example. Does your software have the possibility to do the same thing with trend? As a designer, let’s say, I’ve been asked to build a product to match a certain brief – we’ll go with red floral dresses – and I want to know what the world is doing in ‘red floral dresses’ right now.
Ted Mann: There are two ways to look at trends. You can look at what the trends are on the retailer’s own website or app and see what the visual search usage has been – we can, and do, do that. The number of visual searches is growing, but the total volume isn’t yet enough to draw really interesting insights out of that data, in my view. I think the other angle you were proposing was looking across what people are doing, maybe across social media and across different channels. You can look at red floral dresses across all of Pinterest and Instagram to see the trend there – and we can do that as well, which is a little bit more interesting. We can even do identification of products to see what products are starting to trend on social media; you can see whether a specific jacket or pair of shoes is getting a lot of interest on Pinterest or Instagram.
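The social-trend angle Ted describes boils down to running product identification over posts and then counting what is rising. This is not Slyce’s actual analytics code, just an illustrative sketch of that aggregation step, where each list holds the product labels recognised in a week’s worth of posts:

```python
# Illustrative sketch (not Slyce's pipeline): rank products by their
# week-over-week increase in recognised mentions across social posts.
from collections import Counter

def trending(last_week: list[str], this_week: list[str],
             min_mentions: int = 2) -> list[str]:
    prev, curr = Counter(last_week), Counter(this_week)
    # Only consider products with enough current mentions to matter.
    risers = {p: curr[p] - prev.get(p, 0) for p in curr if curr[p] >= min_mentions}
    # Keep positive movers, biggest week-over-week jump first.
    return sorted((p for p, d in risers.items() if d > 0),
                  key=lambda p: -risers[p])
```

In a real system the inputs would come from an image recognition pass over Pinterest or Instagram imagery; the ranking logic itself stays this simple.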
Analytics, I wouldn’t say, are our core business. There are other specialized insight companies that do that better for retail of course, but we do offer that as a value add to our existing service.
WhichPLM: I think what our question was more aligned to was, “can I do my analytics graphically”?
Ted Mann: Absolutely. One of the things that we’re doing in the upcoming versions of our SDK is giving a lot more control over the filtering and sorting of the different features and attributes into the hands of the end user. What we’ve found is that people like the ability to have this system automatically take a look at a picture and turn it into a results set. But a lot of the time certain features it has identified aren’t exactly right, or aren’t what they were looking for, or just aren’t relevant to their thinking. And so the ability to ‘X’ those out and add in others is really helpful. In that sense the initial image recognition is just the jumping-off point to a more robust search experience. That is definitely something that we believe strongly in. In the upcoming version of our SDK you’ll be able to see that in action. It’s pretty exciting.
WhichPLM: We think that’s some really important tech for retailers and brands around the world. Speaking of growth and upcoming versions, what are your other plans for the future, be it the next year, five years, or ten years? Do you have any developments already underway, aside from those already mentioned, that you are able to share with us?
Ted Mann: Sure. I don’t have any new products to share with you at this time, but we have a pretty robust R&D roadmap, and it’s safe to say that we’re building classifiers (which are basically automated image recognition modules) to cover a whole host of other use cases. We aren’t just training classifiers to be able to identify apparel, but also furniture, food, and other specialized areas. As long as we can ingest enough imagery in an area, we can automate a system around it.
One of the unique things about Slyce, that differentiates us from other companies in our space and allows us to get hyper-specific in retail, is that we build our own classifiers and our own training data. So we can actually tag all of the images that flow through the Slyce system in a very specific way using our own internal resources. We have people who can build this training data on the fly, which has been a huge advantage to us and enabled us to outperform others in the space.
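The tag-then-train loop Ted outlines – humans label images flowing through the system, and a classifier is fitted on the labelled examples – can be sketched very simply. This nearest-centroid model is a hypothetical stand-in for whatever Slyce trains internally; `features` would come from an image feature extractor:

```python
# Hedged sketch of training a classifier on internally tagged images:
# one centroid per label, prediction by closest centroid.
import numpy as np

class NearestCentroidClassifier:
    def fit(self, features: np.ndarray, labels: list[str]):
        # Centroid = mean feature vector of the examples tagged with a label.
        self.centroids = {
            lab: features[[i for i, l in enumerate(labels) if l == lab]].mean(axis=0)
            for lab in set(labels)
        }
        return self

    def predict(self, feature: np.ndarray) -> str:
        # Assign the label whose centroid is nearest in Euclidean distance.
        return min(self.centroids,
                   key=lambda lab: np.linalg.norm(feature - self.centroids[lab]))
```

The point of the sketch is the workflow, not the model: the more tagged imagery a category has (apparel, furniture, food), the better its learned representation, which is why owning the tagging pipeline is an advantage.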
WhichPLM: So you can offer the whole package, in a way. Something obviously very close to your hearts at Slyce, what are your opinions around the huge rise in mobile applications?
Ted Mann: We made a very clear bet early on to go all in on mobile. Most of our solutions began as native integrations into Android and iOS apps. We now have a mobile web SDK as well, which came a little bit later. We’re big believers in native apps. In retail the only, somewhat controversial, opinion that we have is that we believe retailers are better served by building apps that have really compelling use cases and aren’t simply carbon copies of their e-commerce website. In our opinion, if you’re going to build a mobile app then build a truly mobile experience. Leverage all of the things that make mobile great; leverage the hardware on the devices, like the camera and the microphone; make sure it’s designed in such a way that people with a smaller screen size are able to do things that make sense in that environment. The retailers that have done that the best have ended up having the most useful apps that you use day in and day out.
WhichPLM: That’s a great point, and something that I think a lot of people would agree with. Moving away from Slyce itself, what would you like to see coming from applications and other technology in the retail and apparel industry, in the not-too-distant future? What would you like to see more of?
Ted Mann: Well, we’re starting to see how machine learning and deep learning have started to find their way into the retail space – retailers integrating machine learning into their e-commerce discovery experience, for example, with the site re-shaping itself to meet a potential customer’s needs and tastes. I think we’re going to start to see a lot more of that and I really love it; as a customer or visitor you can tell when a site has embraced machine learning and is more attuned to your tastes.
I also think augmented reality is starting to have a meaningful role in retail. For many years it had just been a gimmick, for want of a better word, but now it’s really establishing itself. I’ve worked with people who’ve built technology allowing us to virtually try on clothing, and I’ve worked with retailers in the cosmetics space that allow us to try on make-up. We all know what Snapchat’s done with the fun, silly masks and filters, but we’re starting to see how retailers, and in particular apparel retailers, can leverage that same technology to create a really cool augmented reality experience.
Those are really the areas I’m interested in.
WhichPLM: It’s great to hear you say that, as it’s very aligned with our views as well. We could talk for hours but, considering we’ll be exploring those very concepts in our 7th Edition Report, we won’t go into it now.
Visual search, in our opinion, is long overdue, and what you’re doing is something that really excites us. Do you have any lasting thoughts you want to share with our readers?
Ted Mann: Just one final point to say that we’re always open to new ideas and feedback from the industry. If anyone has questions around whether they can use image recognition for X, Y or Z we’re always eager to brainstorm new use cases.