Artificial Intelligence: Agencies' New Tricks

Google’s TensorFlow makes machine learning accessible, but is it viable for marketers?

By Sam Bradley, Journalist

September 12, 2023 | 7 min read

We hear from Wunderman Thompson’s Cole Peterson about how the agency has come to utilize Google’s open-source TensorFlow software – and about its limitations.

An image of a web of LEDs, representing a neural network

TensorFlow is an open-source program used to quickly develop machine learning models / Unsplash

You don’t always need a huge investment in tech to realize the benefits of AI. Sometimes, the key to efficiency can be found in using a smaller-scale, niche version of a required tool.

Take TensorFlow, for example. It's open-source software used to develop machine learning models, especially to train neural networks (software that mimics the way the human brain retains and processes new information), and it sits behind many of the AI tools being used right now. Because it's open source, it's free to use. Google also offers a suite of software tools called Google Vision AI, built using TensorFlow but more advanced than community-made applications – though these cost hard cash.

Developed as open source by the Google Brain team, TensorFlow is older than most AI tools. Its first version was released back in 2015, and WPP agency Wunderman Thompson has been using it for several years.

Cole Peterson is director of creative technology at Wunderman Thompson. He’s part of a six-person team at the company tasked with prototyping AI use cases, often to help with new business pitches. The Global Creative Data Practice, as it’s called, spends its time “making mistakes and failing and trying to come up with cool stuff,” says Peterson.

Because the team is small and its deadlines are many, Peterson was looking for a fast way to develop models that could power web browser applications.

A model is essentially a program built from a large dataset containing a huge number of assets – text, images or numbers. The content of that dataset is what the program uses as reference points, either to generate a response to a user's prompt or for identification purposes. ChatGPT, for instance, generates strings of text using large language models (LLMs), though models don't all need to be so large, depending on the intended use.

Typically, building a model takes ages. “It’s very time-consuming and arduous,” says Peterson. TensorFlow helps the team get to a working result faster. “You can feed a TensorFlow model 1,000 images of a cat and 1,000 of a dog. It’ll do some math behind the scenes that even I don’t know about and then, when you next begin to feed it new images, it’ll be able to sort them into ‘cat’ and ‘dog’ buckets.”
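The "math behind the scenes" that Peterson mentions is TensorFlow's job, but the train-then-sort workflow he describes can be illustrated with a toy stand-in. The sketch below is a tiny nearest-centroid classifier in plain Python – no TensorFlow involved, and the two-number "images" and labels are invented for illustration only:

```python
# Toy stand-in for the cat/dog example: each "image" is reduced to two
# made-up features, and "training" just averages the features per label.
# Real TensorFlow models learn far richer representations, but the workflow
# is the same: fit on labelled examples, then sort new ones into buckets.

def train(examples):
    """examples: list of (features, label) pairs. Returns label -> centroid."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [x / counts[label] for x in acc]
            for label, acc in sums.items()}

def classify(model, features):
    """Pick the label whose centroid is closest to the given features."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda label: dist(model[label], features))

# Invented training data: cats score high on feature 0, dogs on feature 1.
training_set = [([0.9, 0.1], "cat"), ([0.8, 0.2], "cat"),
                ([0.2, 0.9], "dog"), ([0.1, 0.8], "dog")]
model = train(training_set)
print(classify(model, [0.85, 0.15]))  # prints "cat"
print(classify(model, [0.15, 0.85]))  # prints "dog"
```

Feeding the model 1,000 real cat photos instead of four made-up feature pairs changes the scale, not the shape, of the loop: label examples, fit, then classify new inputs.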

Peterson’s team has been using TensorFlow.js (the initials stand for JavaScript) to build browser apps around that functionality. Though you can use it to build models and training datasets from scratch, its library of pre-made models is useful for speeding up production time, he says.

“If I was a data scientist and I didn’t care about web apps, didn’t have users to interact with my applications in real time, I would just do this offline with Python or something. But I don’t have time to train models.”

For KitKat, the team used TensorFlow to create software that would follow a user’s eye movements in a webcam stream. The end product was a browser game that pitted users against the computer in a ‘staring contest’, while attempting to distract them with pictures of animals to break their concentration (the idea being that the browser game was a chance to ‘have a break’).

“Once the user blinked, the game ended and you’d get a score and we’d put you on a leaderboard and KitKat gave out prizes.”
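The real game ran in the browser on top of a TensorFlow.js eye-tracking model, but the game logic itself is a simple loop. Here is a hedged sketch in Python with the model replaced by a stubbed blink detector – every name below is invented for illustration, not taken from the KitKat project:

```python
import random

def staring_contest(blink_detected, max_frames=1000):
    """Run one round: the score is the number of frames survived before a blink.

    blink_detected: a callable returning True when the (stubbed) detector
    thinks the player blinked. In the real project this signal came from a
    TensorFlow.js model watching the webcam stream.
    """
    for frame in range(max_frames):
        if blink_detected():
            return frame  # game over: frames survived becomes the score
    return max_frames  # player outlasted the round

# Stub detector: "blinks" with 2% probability per frame (seeded for repeatability).
rng = random.Random(42)
score = staring_contest(lambda: rng.random() < 0.02)
print(f"You lasted {score} frames!")
```

In the production version the score would then be posted to a leaderboard; here it is simply printed.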

There are limitations to TensorFlow’s usefulness, says Peterson. First, the load time of applications created with it is long, though he says this has improved over time.

Secondly, the process for discovering pre-made models, or training one’s own, is not “intuitive,” he says. “I wish there was an easier way for me to train my own models, or be able to grab people’s models and test them out and see if they work for my particular project. It all seems pretty arduous to me.”

Pre-made models, he notes, must also be checked thoroughly for bias they could introduce into finished products.

A program his team developed during the pandemic to recognize whether or not a pictured person was wearing a mask misidentified masks on women. Instead of a mask, it identified the object as duct tape covering their mouths – suggesting the image library originally used to create that model contained some pretty graphic imagery.

“When the men were testing the app, it always seemed to work. When the women were testing the app, it didn’t detect a mask; it detected other objects – a lot of the time, duct tape.”

The episode became a cautionary tale for the team, he says. “Since we weren’t building it for a client and it was an experiment, we turned it into an example of bias in AI models and wrote a white paper on it.”
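A basic safeguard against this kind of failure is to break a model’s accuracy out by demographic slice rather than reporting one overall number. The sketch below does that in plain Python; the predictions and groups are made up to mirror the mask-detector story and are not Wunderman Thompson’s actual audit data:

```python
def accuracy_by_group(records):
    """records: list of (group, predicted, actual). Returns group -> accuracy."""
    correct, total = {}, {}
    for group, predicted, actual in records:
        total[group] = total.get(group, 0) + 1
        correct[group] = correct.get(group, 0) + (predicted == actual)
    return {g: correct[g] / total[g] for g in total}

# Invented test results: the detector works for one group and misfires
# for the other, as in the mask/duct-tape episode.
results = [
    ("men", "mask", "mask"), ("men", "mask", "mask"),
    ("men", "mask", "mask"), ("men", "no mask", "no mask"),
    ("women", "duct tape", "mask"), ("women", "mask", "mask"),
    ("women", "duct tape", "mask"), ("women", "duct tape", "mask"),
]
per_group = accuracy_by_group(results)
print(per_group)  # {'men': 1.0, 'women': 0.25}
```

An overall accuracy of 62.5% would hide the problem; the per-group gap (100% vs 25%) is the signal that the model, or the dataset behind it, needs investigating before it ships.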
