This guy’s obscure PhD project is the only thing standing between humanity and AI image chaos

It’s rare that your esoteric, impossible-to-pronounce, decade-long research project becomes a technology so crucial to national security that the President of the United States calls it out from the White House.

But that’s exactly what happened to Dr. Eric Wengrowski, the CEO of Steg AI.

Wengrowski spent nearly a decade of his life advancing steganography, a deeply technical method for hiding information inside images so they can be tracked as they travel through the machinery of the modern Internet, as the focus of his PhD at Rutgers University.

After earning his degree, Wengrowski and a team of co-founders rolled his tech into a small startup. For several years, the company grew, but mostly toiled away in relative obscurity.

Then, AI image generators exploded into the public’s consciousness. And for Wengrowski and Steg’s team, everything blew up.

Durable Marks

I met Wengrowski during the pandemic, when we both volunteered to help a media industry trade group rapidly pivot its yearly in-person conference to a Zoom format.

For years, I only knew Wengrowski as a cheerful, highly intelligent floating head in my video chat window. I even interviewed him for my YouTube channel from the COVID-safe confines of our respective home offices.

When I finally met him in person in San Francisco in 2023, I discovered that he’s actually a towering 6 feet 3 inches tall.

It was one of those iconic pandemic professional meet-cute moments people joke about, where you find that someone you’ve virtually “known” for years looks totally different in person.

What wasn’t different about Wengrowski in real life was his intense interest and passion for his chosen field. Steganography (pronounced STEG-an-ography, like the “Steg” in “stegosaurus”) is a technique for embedding an invisible code into the pixels of an image.

Basically, a complex algorithm subtly changes selected pixels in a way that’s invisible to human perception. Images look no different after being marked with a steganographic watermark than they did before.

Yet, when special software looks at the marked image, the unique code embedded in its pixels comes through clearly to the software’s computerized eyes. 

The presence of that code lets companies like Steg track a marked image back to its source with extremely high accuracy.
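Steg’s production algorithm is proprietary, and far more sophisticated than anything shown here, but a classic least-significant-bit (LSB) scheme illustrates the basic idea of hiding a code in pixels. Everything in this sketch (the function names, the toy payload) is an illustrative assumption, not Steg’s method:

```python
# Toy least-significant-bit (LSB) watermark, for illustration only.
# Real steganographic watermarks like Steg's use far more robust
# encodings; naive LSB does not survive cropping, recompression,
# or screenshots.
import numpy as np

def embed_code(pixels: np.ndarray, code: int, bits: int = 32) -> np.ndarray:
    """Hide a `bits`-bit integer in the lowest bit of the first `bits` values."""
    marked = pixels.copy()
    flat = marked.reshape(-1)  # a view: writes here modify `marked`
    for i in range(bits):
        bit = (code >> i) & 1
        flat[i] = (flat[i] & 0xFE) | bit  # overwrite the least significant bit
    return marked

def extract_code(pixels: np.ndarray, bits: int = 32) -> int:
    """Read the hidden integer back out of a marked image."""
    flat = pixels.reshape(-1)
    return sum(int(flat[i] & 1) << i for i in range(bits))

image = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)
marked = embed_code(image, code=0xC0FFEE)
assert extract_code(marked) == 0xC0FFEE
```

Each marked channel value changes by at most one intensity level, which is why the code is invisible to a human but unambiguous to software.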

Crucially, because the code is embedded directly into the image’s pixels, it’s also nearly impossible to remove. 

Bad actors can easily crop a visible watermark out of an image, or use a tool like Photoshop to scrub data from the image’s IPTC or EXIF metadata fields.
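How easy? Simply re-saving an image with a standard library silently drops its metadata. Here’s a quick sketch using the Pillow library (the filenames are placeholders):

```python
# Re-saving a JPEG with Pillow strips its EXIF metadata, because
# Pillow does not carry metadata over to the new file unless you
# explicitly pass it along.
from PIL import Image

img = Image.open("original.jpg")
print(dict(img.getexif()))  # camera model, timestamps, copyright tags...
img.save("scrubbed.jpg")    # the saved copy carries no EXIF at all
```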

In contrast, because steganographic watermarks live directly in the visual part of the image itself, they travel with the image no matter where it goes. And they survive the most common tricks that nefarious people use to try to remove them.

Well-designed steganographic watermarks can survive the things that amateur image thieves might try, like aggressive cropping, or even the common practice of taking a screenshot of an image in order to stealthily steal it.

But Steg’s tech goes even further, Wengrowski told me in an interview. If, for example, you load an image watermarked by the company’s tech on your computer screen, take out your phone, and photograph the physical screen, the company’s watermarks will survive in the new image on your phone.

Your nefarious copy will remain traceable to the original with Steg’s tech.

AI Explodes Everything

When Wengrowski originally developed Steg’s technology, he knew it was cool. And he had a hunch that it was useful for something. But exactly what that “something” might involve wasn’t originally clear.

In the early days, Steg slowly grew by helping companies with legal compliance and image protection. Steg would embed its watermarks in copyrighted images, for example, and then trace where those images ended up.

If someone stole and used a copyrighted image without permission, Steg’s embedded watermarks could be used to prove the theft and could help lead to a legal settlement. 

The company also worked to safeguard things like pre-release images of a new product. If a company sent top-secret images of a new phone (marked with Steg’s tech) to a supplier, for example, and those photos suddenly ended up as a leak in TechCrunch, the company could trace the embedded watermark and know who to blame.

That was enough for Steg to grow slowly and steadily improve its tech. Then, in 2022, everything changed. 

All at once, OpenAI released its DALL·E 2 image generation model (remember the original DALL·E’s avocado chairs?), Midjourney rolled out its then world-beating image generation tech, and Google leaned into image generation within its Bard and later Gemini AI models.

Almost overnight, the world was awash in AI images. And very quickly, they became so realistic that everyday people had trouble knowing what was real and what was AI generated. 

This presented a huge problem for AI companies. They wanted to release their tech far and wide. But they fretted about the potential societal (and legal) consequences if their images were used for deepfakes to deceive people, or even to sway elections.

And more broadly, anyone with an interest in the veracity of images suddenly had a huge problem knowing what was real and what was AI-generated. 

Everything from news reporting to war crimes tribunals relies on imagery as evidence. What happens when that imagery can be quickly and cheaply spun up by an AI algorithm?

Yes, AI companies can visibly watermark their images (such as by adding a little Gemini star in the lower right), or embed “Generated by AI” markers in their images’ metadata. But again, removing those markers is child’s play for even the least sophisticated scammers.

With AI image generators storming the world, the origins and veracity of every image online were suddenly called into question.

Thank You, Mr. President

That led to a bizarre situation for a deeply technical person pursuing a random, highly specific passion in relative obscurity.

On October 30, 2023, Wengrowski woke to find that then-President Joe Biden had issued an executive order specifically calling out AI watermarking tech, highlighting it as a crucial factor in national security, and ordering all Federal agencies to use it.

Specifically, Biden’s order mandated “embedding information” that is “difficult to remove, into outputs created by AI — including into outputs such as photos, videos, audio clips, or text — for the purposes of verifying the authenticity of the output or the identity or characteristics of its provenance, modifications, or conveyance.”

The order also specifically called for the rapid development of “science-backed standards and techniques for…labeling synthetic content, such as using watermarking.”

Biden framed this as mission critical: the term “national security” appears 36 times in his executive order.

Basically, Biden was mandating the use of tech like steganography, and specifically calling it out from the White House. 

When that happened, Wengrowski told me, everything went crazy. Since the order (and the corresponding growth of AI imagery more broadly), Steg’s revenue has increased 500%.

Moreover, protecting the integrity of images appears to be bipartisan: Wengrowski told me that AI watermarking has been embraced by both the Biden and Trump administrations.

In an extremely tight AI job market where top researchers can command eight-figure salaries, Steg now employs five machine learning PhDs devoted to improving its technologies.

Although Wengrowski couldn’t share his customer list on the record, I can vouch for the fact that it’s wildly impressive.

While keeping its legal compliance and image tracing side alive, Steg has expanded aggressively into the world of cybersecurity and AI image watermarking.

For AI companies that want to ply their trade without ruining humanity’s trust in visual media, Steg’s tech is a lifeline. 

Companies can embed a steganographic watermark directly into AI images the moment they’re generated. For the life of an image, the code travels with it, even if it’s reposted, edited or altered.

If that image is used in a deepfake, or to manipulate or harass people, the company that created it can read the steganographic watermark embedded in its pixels, definitively label the image as a fake, and quickly dispel any damage it might cause.
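In outline, reusing the toy embed_code and extract_code functions from the LSB sketch above (again, a stand-in for Steg’s real, far more robust pipeline), that generation-time flow might look like this, with a hypothetical provenance registry:

```python
# Hypothetical end-to-end flow; the registry and model output are
# stand-ins, and embed_code/extract_code come from the LSB sketch above.
import numpy as np

REGISTRY = {0xA11CE: "our-image-model-v2"}  # hypothetical provenance registry

generated = np.random.randint(0, 256, (512, 512, 3), dtype=np.uint8)  # stand-in for a model's output
published = embed_code(generated, code=0xA11CE)  # marked the moment it's generated

# Later, when a suspicious copy surfaces online:
code = extract_code(published)
if code in REGISTRY:
    print(f"AI-generated by {REGISTRY[code]}; label it as synthetic.")
```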

If you’ve created an AI image in the last year, you’ve almost certainly used steganography without even knowing it. Most major AI image generation companies now use the tech. Many use Steg’s. 

And in a world where AI images are so good that they easily fool most detectors (and even trained forensic image analysts), many companies see steganography as the only bulwark against AI’s total destruction of any truth still left in the visual world.

A Wild Ride

For Steg and for Wengrowski personally, it’s been a wild ride. Right as Biden issued his order, Wengrowski became a father, and he now juggles the everyday struggles and joys of a young parent with the rigors of constant travel and testifying before state legislatures.

The rise of AI imagery has also revealed some counterintuitive challenges. When Steg first launched, Wengrowski told me, he expected that people would yearn for a technology that could prove whether an image was real or fake.

In reality, he was surprised by how little people care. Many people are fine with seeing AI-generated content, as long as it’s funny, informative, or otherwise engaging. Whether or not it’s properly labeled as AI matters very little to them.

More pointedly, it matters very little to the social media platforms that disseminate the content, too.

Again, though, for the companies who create that content, and who face legal and reputational risk if their tech goes awry, it matters an awful lot. Wengrowski tells me that Steg is continuing to improve its tech, making its watermarks even harder to beat.

The company is also entering the emerging field of “poisoning.” New software that Wengrowski showed me invisibly alters images in ways that trip up common deepfake algorithms.

If someone tries to turn a “poisoned” image into a deepfake, the result comes out garbled and unusable. The tech works both when images are used to train deepfake models, and when a bad actor tries to create a deepfake of a specific person.
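Steg hasn’t published how its poisoning works, but the general family of techniques is well known from adversarial machine learning research. As a loose illustration only (the face_encoder model here is a hypothetical placeholder, and real systems optimize far more carefully), a single FGSM-style perturbation step looks like this:

```python
# Illustrative adversarial "poisoning" in the FGSM style (Goodfellow
# et al., 2015). NOT Steg's algorithm: `face_encoder` stands in for a
# differentiable model that a deepfake pipeline depends on.
import torch

def poison(image: torch.Tensor, face_encoder: torch.nn.Module,
           epsilon: float = 4 / 255) -> torch.Tensor:
    """Perturb `image` (floats in [0, 1], shape (3, H, W)) imperceptibly."""
    image = image.clone().detach().requires_grad_(True)
    embedding = face_encoder(image.unsqueeze(0))
    # Crude stand-in objective: disrupt the features the encoder
    # extracts, so downstream face-swapping degrades into garbage.
    loss = embedding.norm()
    loss.backward()
    poisoned = image + epsilon * image.grad.sign()  # one signed-gradient step
    return poisoned.clamp(0, 1).detach()  # still looks unchanged to a human
```

The perturbation budget epsilon caps how far any pixel can move, which is what keeps the poisoned image visually identical to the original.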

The idea is that an influencer, for example, could upload “poisoned” images of themselves to their social media. The images would look normal to human users. But if someone tried to deepfake the influencer, the poisoned images would thwart them. 

Wengrowski told me he’s especially excited to use the tech to help protect young influencers and teens in general, who are often targeted in abhorrent cyberbullying attacks involving explicit deepfakes.

More broadly, though, Wengrowski’s story is an inspiring one for anyone grinding away on an as-yet unproven technology, convinced of its value but unsure whether the world will ever see their work.

Reflecting on Steg’s success, Wengrowski acknowledged that “It’s probably best to start a business with a clear plan and an understanding of product/market fit.”

But in his words, “There’s also something to be said for knowing a technology is cool, continually improving it even if you have no idea where that will lead, and just trusting that eventually it will have some value for the world.” 

In Steg’s case, that’s indeed been a winning formula.

Source: https://www.fastcompany.com/91458224/steg-ai-images-watermark-phd-project-wengrowski

