Nepomuk Seiler
highfivve / Head of Engineering
AI is not a silver bullet
Like a lot of other nerds I know, I like playing games – video games, card games, board games, and sports games. I’m lucky enough that part of my work is also playing a game: solving puzzles in engineering and just being in Ad Tech. Online Advertising is a fairly young industry, and so far, there are not a lot of rules on how it’s played. We develop and try out rules in this game, all while new players constantly join in and find ways to skew the balance. We have ongoing efforts to battle fraudulent traffic and MFA sites (note how this acronym evolved: Made for AdSense → Made for Arbitrage → Made for Advertising), and to uphold brand safety.
The adults, in the form of the DOJ and the EU [1, 2], have shown up as well to dictate some rules. Privacy laws and antitrust lawsuits try to set a safe and level playing field. And of course, players complain when they get nerfed and suddenly those apparent noobs get a chance (look at Google or Apple: they can only battle fraud and provide security if they can lock down their systems and hide everything).
Yet, like any good game, the fun only lasts when the rules are clear, the chances are fair, and everyone plays by them.
But now, the game is about to change entirely:
AI has entered the arena
Disclaimer: I hold the utmost respect for the scientists, engineers, and countless other professionals driving the development of generative AI systems. Generative AI is a remarkably powerful tool with immense potential to deliver value across a wide range of applications.
That said, I don’t believe AI will revolutionize everything, just as 3D printers didn’t, and neither did crypto. While AI, like crypto, has its share of dystopian potential, it also offers significant value in many areas. The key reason I think AI, as it exists today, won’t transform everything lies in the simple truth that not all social challenges can be solved through technology alone. In any game, balance isn’t achieved by endlessly adding items, skills, or abilities; sometimes, you simply need to establish better rules.
So you think that AI stuff is useless?
Quite the opposite. These tools help us write code faster, let Copilot clarify errors, and even turn those endless “FYI” email chains with 13 levels of indentation into concise summaries. While they haven’t revolutionized my work, they have certainly made me more efficient and productive.
Ok, so what exactly is your point?
AI can be harmful if it’s used without proper oversight, clear guidelines, or when its rules are dictated by arbitrary parties. In my opinion, one of the best arguments for why generative AI, in its current state, cannot replace humans is outlined in Why A.I. Isn’t Going to Make Art. AI lacks intention—it simply predicts the next most fitting word, pixel, or action. I’ve come across countless “AI will solve this for you” promises, and honestly, many of them strike me as more dystopian than revolutionary.
AI will automate everything
One of AI’s greatest strengths lies in its ability to optimize routine and repetitive tasks. Media planning, campaign optimization, audience targeting, and more can — and likely will — be fully handled by AI. This is incredibly appealing, as it frees us from time-consuming and unproductive manual work. And frankly, it’s the kind of work you should delegate to AI.
However, there are things you can’t and shouldn’t delegate to a system that is opaque even to its creators, a system that lacks intention and provides no accountability. This is the same reason why cryptocurrencies face significant challenges: blockchain was built for zero-trust systems, yet trust is the cornerstone of any functioning currency. I trust that the value of money will enable me to exchange it for something of equal worth, a principle that blockchain struggles to replicate. As long as car manufacturers refuse to take full accountability for their self-driving AI systems, those systems remain nothing more than marketing hype. Without accountability, trust cannot be established.
You should never rely on an AI tool that takes full control without the company offering accountability for potential failures, whether it’s unexplained budget losses (“Oops, the system is fully automated, and we’re not sure what happened”) or public disasters (“We have a best-in-class LLM, but no idea how those swastikas ended up in your sports event creative”). Accountability is non-negotiable. Getting this right is incredibly hard – a classic from the old machine learning days is the “Chihuahua or muffin” detection problem. Removing bias from the data is even harder now, which leads to things like Google’s AI generating very diverse Nazis or suggesting you put glue on your pizza.
Generative AI is fantastic for helping you get started or making sense of chaos — and there’s plenty of it. As entropy and disorder continue to grow, AI can help interpret the mess but doesn’t actually reduce it. In fact, it often adds more complexity, which is the opposite of efficiency. Like blockchain, AI systems are incredibly resource-intensive. True efficiency comes from rules in the form of standards, which limit possibilities and deliver predictable, deterministic outcomes. OpenRTB, in my view, is a landmark achievement.
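To make that last point concrete: the value of a standard like OpenRTB is that every party in the auction exchanges messages with an agreed-upon shape, so outcomes stay predictable. As a rough sketch (not a normative definition; the field selection and sample values here are mine), this is a small slice of the OpenRTB 2.x bid request object model in TypeScript:

```typescript
// Illustrative subset of the OpenRTB 2.x bid request object model.
// Field names follow the spec; the selection is deliberately minimal.
interface Banner {
  w?: number; // creative width in pixels
  h?: number; // creative height in pixels
}

interface Imp {
  id: string;        // unique ID of this impression within the request
  banner?: Banner;   // present for display placements
  bidfloor?: number; // minimum acceptable bid, as CPM
}

interface Site {
  domain?: string; // domain of the publisher's site
  page?: string;   // full URL of the page carrying the impression
}

interface BidRequest {
  id: string;     // unique ID of the bid request
  imp: Imp[];     // one or more impressions being offered
  site?: Site;
  at?: number;    // auction type: 1 = first price, 2 = second price
  cur?: string[]; // currencies accepted in bids, e.g. ["USD"]
}

// A minimal, hypothetical request: one 300x250 banner on a news article.
const request: BidRequest = {
  id: "req-1234",
  imp: [{ id: "1", banner: { w: 300, h: 250 }, bidfloor: 0.5 }],
  site: {
    domain: "example-news.com",
    page: "https://example-news.com/some-article",
  },
  at: 1,
  cur: ["USD"],
};
```

Everything a bidder needs to price the impression sits in well-known fields; there is no room for a model to “interpret” what the request probably meant.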
Nevertheless, we must try to find a balance. As a publisher, you should remain cautious and avoid placing blind trust in such systems. These tools often rely on content taken from publishers to build their models, only to serve ads generated from that same content to a small audience. Even Google recognizes that user-generated content holds far greater value than AI-generated or SEO-optimized junk. AI should be a tool we leverage—not a player in the game.